author | Jim Cownie <james.h.cownie@intel.com> | 2014-10-07 16:25:50 +0000 |
---|---|---|
committer | Jim Cownie <james.h.cownie@intel.com> | 2014-10-07 16:25:50 +0000 |
commit | 4cc4bb4c60c786a253176106065e59d639bbc1a9 (patch) | |
tree | dfe5a4f9e4591505dd8a7d333b5b9201c50b52e3 /openmp/runtime/src/z_Linux_asm.s | |
parent | f72fa67fc35622a8cd18a2fffd979fb225d82400 (diff) | |
download | bcm5719-llvm-4cc4bb4c60c786a253176106065e59d639bbc1a9.tar.gz bcm5719-llvm-4cc4bb4c60c786a253176106065e59d639bbc1a9.zip |
I apologise in advance for the size of this check-in. At Intel we do
understand that this is not friendly, and are working to change our
internal code-development to make it easier to make development
features available more frequently and in finer (more functional)
chunks. Unfortunately we haven't got that in place yet, and unpicking
this into multiple separate check-ins would be non-trivial, so please
bear with me on this one. We should be better in the future.
Apologies over, what do we have here?
GCC 4.9 compatibility
--------------------
* We have implemented the new entrypoints that code compiled by GCC
4.9 uses for functionality which already existed in gcc 4.8. Therefore
code compiled with gcc 4.9 that used to work will continue to do so.
However, there are some other new entrypoints (associated with task
cancellation) which are not implemented. Therefore user code compiled
by gcc 4.9 that uses these new features will not link against the LLVM
runtime. (It remains unclear how to handle those entrypoints, since
the GCC interface has potentially unpleasant performance implications
for join barriers even when cancellation is not used)
--- new parallel entry points ---
New entry points that aren't OpenMP 4.0 related.
These are implemented fully :-
GOMP_parallel_loop_dynamic()
GOMP_parallel_loop_guided()
GOMP_parallel_loop_runtime()
GOMP_parallel_loop_static()
GOMP_parallel_sections()
GOMP_parallel()
--- cancellation entry points ---
Currently, these only give a runtime error if OMP_CANCELLATION is true
because our plain barriers don't check for cancellation while waiting
GOMP_barrier_cancel()
GOMP_cancel()
GOMP_cancellation_point()
GOMP_loop_end_cancel()
GOMP_sections_end_cancel()
--- taskgroup entry points ---
These are implemented fully.
GOMP_taskgroup_start()
GOMP_taskgroup_end()
--- target entry points ---
These are empty (as they are in libgomp)
GOMP_target()
GOMP_target_data()
GOMP_target_end_data()
GOMP_target_update()
GOMP_teams()
Improvements in Barriers and Fork/Join
--------------------------------------
* Barrier and fork/join code is now in its own file (which makes it
easier to understand and modify).
* Wait/release code is now templated and in its own file; suspend/resume code is also templated.
* There's a new, hierarchical, barrier, which exploits the
cache-hierarchy of the Intel(r) Xeon Phi(tm) coprocessor to improve
fork/join and barrier performance.
***BEWARE*** the new source files have *not* been added to the legacy
CMake build system. If you want to use that, fixes will be required.
Statistics Collection Code
--------------------------
* New code has been added to collect application statistics (if this
is enabled at library compile time; by default it is not). The
statistics code itself is generally useful, but the lightweight timing
code uses the x86 rdtsc instruction, so it will require changes for
other architectures.
The intent of this code is not for users to tune their codes but
rather
1) For timing code-paths inside the runtime
2) For gathering general properties of OpenMP codes to focus attention
on which OpenMP features are most used.
Nested Hot Teams
----------------
* The runtime now maintains more state to reduce the overhead of
creating and destroying inner parallel teams. This improves the
performance of code that repeatedly uses nested parallelism with the
same resource allocation. Set the new KMP_HOT_TEAMS_MAX_LEVEL
environment variable to the desired nesting depth to enable this (and,
of course, OMP_NESTED=true to enable nested parallelism at all).
Improved Intel(r) VTune(Tm) Amplifier support
---------------------------------------------
* The runtime provides additional information to VTune via the
itt_notify interface to allow it to display better OpenMP-specific
analyses of load imbalance.
Support for OpenMP Composite Statements
---------------------------------------
* Implement new entrypoints required by some of the OpenMP 4.1
composite statements.
Improved ifdefs
---------------
* More separation of concepts ("Does this platform do X?") from
platforms ("Are we compiling for platform Y?"), which should simplify
future porting.
ScaleMP* contribution
---------------------
Stack padding to improve performance in their environment, where
cross-node coherency is managed at the page level.
Redesign of wait and release code
---------------------------------
The code is simplified and performance improved.
Bug Fixes
---------
* Fixes for Windows multiple processor groups.
* Fix Fortran module build on Linux: offload attribute added.
* Fix entry names for distribute-parallel-loop construct to be consistent with the compiler codegen.
* Fix an inconsistent error message for KMP_PLACE_THREADS environment variable.
llvm-svn: 219214
Diffstat (limited to 'openmp/runtime/src/z_Linux_asm.s')
-rw-r--r-- | openmp/runtime/src/z_Linux_asm.s | 220 |
1 files changed, 8 insertions, 212 deletions
diff --git a/openmp/runtime/src/z_Linux_asm.s b/openmp/runtime/src/z_Linux_asm.s
index 64c80522614..2b982234307 100644
--- a/openmp/runtime/src/z_Linux_asm.s
+++ b/openmp/runtime/src/z_Linux_asm.s
@@ -1,7 +1,7 @@
 // z_Linux_asm.s:  - microtasking routines specifically
 //                   written for Intel platforms running Linux* OS
-// $Revision: 42810 $
-// $Date: 2013-11-07 12:06:33 -0600 (Thu, 07 Nov 2013) $
+// $Revision: 43473 $
+// $Date: 2014-09-26 15:02:57 -0500 (Fri, 26 Sep 2014) $
 //
 ////===----------------------------------------------------------------------===//
@@ -489,118 +489,6 @@ __kmp_unnamed_critical_addr:
 //------------------------------------------------------------------------
 //
-// FUNCTION __kmp_test_then_add_real32
-//
-// kmp_real32
-// __kmp_test_then_add_real32( volatile kmp_real32 *addr, kmp_real32 data );
-//
-
-        PROC  __kmp_test_then_add_real32
-
-_addr = 8
-_data = 12
-_old_value = -4
-_new_value = -8
-
-        pushl %ebp
-        movl  %esp, %ebp
-        subl  $8, %esp
-        pushl %esi
-        pushl %ebx
-        movl  _addr(%ebp), %esi
-L22:
-        flds  (%esi)
-                        // load <addr>
-        fsts  _old_value(%ebp)
-                        // store into old_value
-        fadds _data(%ebp)
-        fstps _new_value(%ebp)
-                        // new_value = old_value + data
-
-        movl  _old_value(%ebp), %eax
-                        // load old_value
-        movl  _new_value(%ebp), %ebx
-                        // load new_value
-
-        lock
-        cmpxchgl %ebx,(%esi)
-                        // Compare %EAX with <addr>. If equal set
-                        // ZF and load %EBX into <addr>. Else, clear
-                        // ZF and load <addr> into %EAX.
-        jnz   L22
-
-
-        flds  _old_value(%ebp)
-                        // return old_value
-        popl  %ebx
-        popl  %esi
-        movl  %ebp, %esp
-        popl  %ebp
-        ret
-
-        DEBUG_INFO __kmp_test_then_add_real32
-
-//------------------------------------------------------------------------
-//
-// FUNCTION __kmp_test_then_add_real64
-//
-// kmp_real64
-// __kmp_test_then_add_real64( volatile kmp_real64 *addr, kmp_real64 data );
-//
-        PROC  __kmp_test_then_add_real64
-
-_addr = 8
-_data = 12
-_old_value = -8
-_new_value = -16
-
-        pushl %ebp
-        movl  %esp, %ebp
-        subl  $16, %esp
-        pushl %esi
-        pushl %ebx
-        pushl %ecx
-        pushl %edx
-        movl  _addr(%ebp), %esi
-L44:
-        fldl  (%esi)
-                        // load <addr>
-        fstl  _old_value(%ebp)
-                        // store into old_value
-        faddl _data(%ebp)
-        fstpl _new_value(%ebp)
-                        // new_value = old_value + data
-
-        movl  _old_value+4(%ebp), %edx
-        movl  _old_value(%ebp), %eax
-                        // load old_value
-        movl  _new_value+4(%ebp), %ecx
-        movl  _new_value(%ebp), %ebx
-                        // load new_value
-
-        lock
-        cmpxchg8b (%esi)
-                        // Compare %EDX:%EAX with <addr>. If equal set
-                        // ZF and load %ECX:%EBX into <addr>. Else, clear
-                        // ZF and load <addr> into %EDX:%EAX.
-        jnz   L44
-
-
-        fldl  _old_value(%ebp)
-                        // return old_value
-        popl  %edx
-        popl  %ecx
-        popl  %ebx
-        popl  %esi
-        movl  %ebp, %esp
-        popl  %ebp
-        ret
-
-        DEBUG_INFO __kmp_test_then_add_real64
-
-
-//------------------------------------------------------------------------
-//
 // FUNCTION __kmp_load_x87_fpu_control_word
 //
 // void
@@ -758,30 +646,7 @@ L44:
         .data
         ALIGN 4
-// AC: The following #if hiden the .text thus moving the rest of code into .data section on MIC.
-// To prevent this in future .text added to every routine definition for x86_64.
-# if __MIC__ || __MIC2__
-
-# else
-
-//------------------------------------------------------------------------
-//
-// FUNCTION __kmp_x86_pause
-//
-// void
-// __kmp_x86_pause( void );
-//
-
-        .text
-        PROC  __kmp_x86_pause
-
-        pause_op
-        ret
-
-        DEBUG_INFO __kmp_x86_pause
-
-# endif // __MIC__ || __MIC2__
-
+// To prevent getting our code into .data section .text added to every routine definition for x86_64.
 //------------------------------------------------------------------------
 //
 // FUNCTION __kmp_x86_cpuid
@@ -1176,79 +1041,6 @@ L44:
 # if ! (__MIC__ || __MIC2__)
 
-//------------------------------------------------------------------------
-//
-// FUNCTION __kmp_test_then_add_real32
-//
-// kmp_real32
-// __kmp_test_then_add_real32( volatile kmp_real32 *addr, kmp_real32 data );
-//
-// parameters:
-//      addr:   %rdi
-//      data:   %xmm0 (lower 4 bytes)
-//
-// return:      %xmm0 (lower 4 bytes)
-
-        .text
-        PROC  __kmp_test_then_add_real32
-1:
-        movss (%rdi), %xmm1      // load value of <addr>
-        movd  %xmm1, %eax        // save old value of <addr>
-
-        addss %xmm0, %xmm1       // new value = old value + <data>
-        movd  %xmm1, %ecx        // move new value to GP reg.
-
-        lock
-        cmpxchgl %ecx, (%rdi)    // Compare %EAX with <addr>. If equal set
-                                 // ZF and exchange %ECX with <addr>. Else,
-                                 // clear ZF and load <addr> into %EAX.
-        jz    2f
-        pause_op
-        jmp   1b
-2:
-        movd  %eax, %xmm0        // load old value into return register
-        ret
-
-        DEBUG_INFO __kmp_test_then_add_real32
-
-
-//------------------------------------------------------------------------
-//
-// FUNCTION __kmp_test_then_add_real64
-//
-// kmp_real64
-// __kmp_test_then_add_real64( volatile kmp_real64 *addr, kmp_real64 data );
-//
-// parameters:
-//      addr:   %rdi
-//      data:   %xmm0 (lower 8 bytes)
-// return:      %xmm0 (lower 8 bytes)
-//
-
-        .text
-        PROC  __kmp_test_then_add_real64
-1:
-        movlpd (%rdi), %xmm1     // load value of <addr>
-        movd   %xmm1, %rax       // save old value of <addr>
-
-        addsd  %xmm0, %xmm1      // new value = old value + <data>
-        movd   %xmm1, %rcx       // move new value to GP reg.
-
-        lock
-        cmpxchgq %rcx, (%rdi)    // Compare %RAX with <addr>. If equal set
-                                 // ZF and exchange %RCX with <addr>. Else,
-                                 // clear ZF and load <addr> into %RAX.
-        jz    2f
-        pause_op
-        jmp   1b
-
-2:
-        movd  %rax, %xmm0        // load old value into return register
-        ret
-
-        DEBUG_INFO __kmp_test_then_add_real64
-
-
 # if !KMP_ASM_INTRINS
 
 //------------------------------------------------------------------------
@@ -1382,7 +1174,7 @@ L44:
 // typedef void (*microtask_t)( int *gtid, int *tid, ... );
 //
 // int
-// __kmp_invoke_microtask( void (*pkfn) (int *gtid, int *tid, ...),
+// __kmp_invoke_microtask( void (*pkfn) (int gtid, int tid, ...),
 //                         int gtid, int tid,
 //                         int argc, void *p_argv[] ) {
 //   (*pkfn)( & gtid, & tid, argv[0], ... );
@@ -1597,5 +1389,9 @@ __kmp_unnamed_critical_addr:
 #endif /* KMP_ARCH_PPC64 */
 
 #if defined(__linux__)
+# if KMP_ARCH_ARM
+.section .note.GNU-stack,"",%progbits
+# else
 .section .note.GNU-stack,"",@progbits
+# endif
 #endif