| author | Jim Cownie <james.h.cownie@intel.com> | 2014-10-07 16:25:50 +0000 |
|---|---|---|
| committer | Jim Cownie <james.h.cownie@intel.com> | 2014-10-07 16:25:50 +0000 |
| commit | 4cc4bb4c60c786a253176106065e59d639bbc1a9 (patch) | |
| tree | dfe5a4f9e4591505dd8a7d333b5b9201c50b52e3 /openmp/runtime/src/kmp_global.c | |
| parent | f72fa67fc35622a8cd18a2fffd979fb225d82400 (diff) | |
I apologise in advance for the size of this check-in. At Intel we do
understand that this is not friendly, and are working to change our
internal code-development to make it easier to make development
features available more frequently and in finer (more functional)
chunks. Unfortunately we haven't got that in place yet, and unpicking
this into multiple separate check-ins would be non-trivial, so please
bear with me on this one. We should be better in the future.
Apologies over, what do we have here?
GCC 4.9 compatibility
--------------------
* We have implemented the new entrypoints that GCC 4.9 uses for
functionality that already existed in GCC 4.8. Therefore code compiled
with GCC 4.9 that used to work will continue to do so. However, there
are some other new entrypoints (associated with task cancellation)
which are not implemented, so user code compiled by GCC 4.9 that uses
these new features will not link against the LLVM runtime. (It remains
unclear how to handle those entrypoints, since the GCC interface has
potentially unpleasant performance implications for join barriers even
when cancellation is not used.)
--- new parallel entry points ---
New entry points that are not OpenMP 4.0 related.
These are implemented fully (a small example that exercises them follows the list):
GOMP_parallel_loop_dynamic()
GOMP_parallel_loop_guided()
GOMP_parallel_loop_runtime()
GOMP_parallel_loop_static()
GOMP_parallel_sections()
GOMP_parallel()
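
For illustration, here is a small OpenMP program (ordinary user code, not
part of the runtime) whose combined constructs GCC 4.9 with -fopenmp is
expected to lower to GOMP_parallel_loop_dynamic() and
GOMP_parallel_sections(); the comments about the lowering are assumptions
about the compiler, not something this runtime controls.

```c
#include <stdio.h>

#define N 1000

int main(void)
{
    double a[N];
    int i;

    /* Assumed lowering with GCC 4.9: one combined GOMP_parallel_loop_dynamic()
       call instead of GCC 4.8's separate start/end calls. */
    #pragma omp parallel for schedule(dynamic, 16)
    for (i = 0; i < N; i++)
        a[i] = i * 0.5;

    /* Assumed lowering with GCC 4.9: a single GOMP_parallel_sections() call. */
    #pragma omp parallel sections
    {
        #pragma omp section
        printf("section one\n");
        #pragma omp section
        printf("section two\n");
    }

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```

Running nm -u on the compiled object shows which GOMP_* symbols the
compiler actually emitted.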
--- cancellation entry points ---
Currently these only give a runtime error if OMP_CANCELLATION is true,
because our plain barriers don't check for cancellation while waiting
(an example that reaches these entry points follows the list).
GOMP_barrier_cancel()
GOMP_cancel()
GOMP_cancellation_point()
GOMP_loop_end_cancel()
GOMP_sections_end_cancel()
--- taskgroup entry points ---
These are implemented fully (see the example after the list).
GOMP_taskgroup_start()
GOMP_taskgroup_end()
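
A minimal taskgroup example (again ordinary user code; the GOMP_* names in
the comment are an assumption about GCC 4.9's lowering):

```c
#include <stdio.h>

static void do_work(int i) { printf("task %d\n", i); }

int main(void)
{
    int i;

    #pragma omp parallel
    #pragma omp single
    {
        /* Assumed lowering with GCC 4.9: GOMP_taskgroup_start() on entry to
           the taskgroup and GOMP_taskgroup_end() at its close. */
        #pragma omp taskgroup
        {
            for (i = 0; i < 8; i++) {
                #pragma omp task firstprivate(i)
                do_work(i);
            }
        } /* all tasks created in the group (and their descendants) finish here */
        printf("all tasks finished\n");
    }
    return 0;
}
```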
--- target entry points ---
These are empty (as they are in libgomp)
GOMP_target()
GOMP_target_data()
GOMP_target_end_data()
GOMP_target_update()
GOMP_teams()
Improvements in Barriers and Fork/Join
--------------------------------------
* Barrier and fork/join code is now in its own file (which makes it
easier to understand and modify).
* Wait/release code is now templated and in its own file; suspend/resume code is also templated.
* There's a new, hierarchical barrier, which exploits the
cache-hierarchy of the Intel(r) Xeon Phi(tm) coprocessor to improve
fork/join and barrier performance. A simplified sketch of the two-level
idea appears after the warning below.
***BEWARE*** the new source files have *not* been added to the legacy
CMake build system. If you want to use that build system, fixes will be
required.
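
For readers unfamiliar with the idea, here is a deliberately simplified
two-level barrier: threads synchronize within their locality group first,
group leaders then synchronize globally, and finally each leader releases
its local waiters. GROUPS, GROUP_SIZE and all names here are invented for
illustration; this is *not* the runtime's hierarchical barrier.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <omp.h>

#define GROUPS     2   /* assumed number of locality domains (e.g. tiles) */
#define GROUP_SIZE 4   /* assumed threads per locality domain             */

typedef struct {
    atomic_int  arrived;      /* group members that have reached the barrier */
    atomic_bool release_flag; /* flipped by the group leader to release them */
} group_barrier_t;

static group_barrier_t groups[GROUPS];
static atomic_int      leaders_arrived;
static atomic_bool     global_release;

/* Every one of the GROUPS * GROUP_SIZE threads must call this. */
static void hier_barrier(int tid)
{
    int  g     = tid / GROUP_SIZE;
    bool sense = !atomic_load(&groups[g].release_flag);

    if (atomic_fetch_add(&groups[g].arrived, 1) == GROUP_SIZE - 1) {
        /* The last arriver in the group acts as its leader for this barrier. */
        atomic_store(&groups[g].arrived, 0);
        bool gsense = !atomic_load(&global_release);
        if (atomic_fetch_add(&leaders_arrived, 1) == GROUPS - 1) {
            atomic_store(&leaders_arrived, 0);
            atomic_store(&global_release, gsense);       /* release leaders   */
        } else {
            while (atomic_load(&global_release) != gsense)
                ;                                        /* spin at top level */
        }
        atomic_store(&groups[g].release_flag, sense);    /* release followers */
    } else {
        while (atomic_load(&groups[g].release_flag) != sense)
            ;                                            /* spin locally      */
    }
}

int main(void)
{
    for (int g = 0; g < GROUPS; g++) {
        atomic_init(&groups[g].arrived, 0);
        atomic_init(&groups[g].release_flag, false);
    }
    atomic_init(&leaders_arrived, 0);
    atomic_init(&global_release, false);

    /* The sketch only works if exactly GROUPS * GROUP_SIZE threads join. */
    #pragma omp parallel num_threads(GROUPS * GROUP_SIZE)
    {
        int tid = omp_get_thread_num();
        for (int rep = 0; rep < 3; rep++)
            hier_barrier(tid);
    }
    printf("done\n");
    return 0;
}
```

Compile with something like cc -std=c11 -fopenmp. The real barrier
additionally handles arbitrary team sizes, nesting and the fork/join path,
which this sketch ignores.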
Statistics Collection Code
--------------------------
* New code has been added to collect application statistics (if this
is enabled at library compile time; by default it is not). The
statistics code itself is generally useful, but the lightweight timing
code uses the x86 rdtsc instruction, so it will require changes for
other architectures (a small sketch of the rdtsc-based approach appears
after the list below).
The intent of this code is not for users to tune their codes but rather:
1) For timing code-paths inside the runtime
2) For gathering general properties of OpenMP codes to focus attention
on which OpenMP features are most used.
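
The lightweight timing idea is roughly the following (an invented sketch,
not the kmp_stats code; the statistics counters and the per-thread lists
that kmp_stats keeps are omitted):

```c
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(); x86-only, which is exactly the caveat above */

/* An invented accumulator type; kmp_stats additionally keeps its nodes in
   per-thread lists headed by a sentinel, which is left out here. */
typedef struct {
    uint64_t total_ticks;
    uint64_t samples;
} tick_timer_t;

static void timer_add(tick_timer_t *t, uint64_t start, uint64_t end)
{
    t->total_ticks += end - start;
    t->samples += 1;
}

int main(void)
{
    tick_timer_t loop_timer = { 0, 0 };
    int rep, i;

    for (rep = 0; rep < 10; rep++) {
        uint64_t t0 = __rdtsc();          /* reference tick for this sample */
        volatile double x = 0.0;
        for (i = 0; i < 100000; i++)
            x += i * 0.5;
        timer_add(&loop_timer, t0, __rdtsc());
    }

    printf("%llu ticks over %llu samples\n",
           (unsigned long long)loop_timer.total_ticks,
           (unsigned long long)loop_timer.samples);
    return 0;
}
```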
Nested Hot Teams
----------------
* The runtime now maintains more state to reduce the overhead of
creating and destroying inner parallel teams. This improves the
performance of code that repeatedly uses nested parallelism with the
same resource allocation. Set the new KMP_HOT_TEAMS_MAX_LEVEL
environment variable to a depth to enable this (and, of course,
OMP_NESTED=true to enable nested parallelism at all).
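
A shape of code that benefits looks roughly like this (illustrative user
code, not from the runtime); run it with, e.g., OMP_NESTED=true and
KMP_HOT_TEAMS_MAX_LEVEL=2 so that both the outer and the inner teams are
kept hot across iterations:

```c
#include <stdio.h>
#include <omp.h>

static void inner_work(int outer_id)
{
    /* Inner parallel region: without hot teams, the threads backing this
       team may be released and recreated on every call. */
    #pragma omp parallel num_threads(4)
    {
        volatile double x = outer_id + omp_get_thread_num();
        (void)x;
    }
}

int main(void)
{
    int iter;

    /* Repeatedly entering the same nested structure with the same resource
       allocation is the case the nested hot-team support is aimed at. */
    for (iter = 0; iter < 1000; iter++) {
        #pragma omp parallel num_threads(4)
        inner_work(omp_get_thread_num());
    }
    printf("done\n");
    return 0;
}
```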
Improved Intel(r) VTune(Tm) Amplifier support
---------------------------------------------
* The runtime provides additional information to VTune via the
itt_notify interface to allow it to display better OpenMP-specific
analyses of load imbalance.
Support for OpenMP Composite Statements
---------------------------------------
* Implement new entrypoints required by some of the OpenMP 4.1
composite statements.
Improved ifdefs
---------------
* More separation of concepts ("Does this platform do X?") from
platforms ("Are we compiling for platform Y?"), which should simplify
future porting.
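
The pattern is roughly the following; the EXAMPLE_* macro names are
invented for illustration and are not the runtime's real macros:

```c
/* --- Platform layer: "are we compiling for platform Y?" ----------------- */
#if defined(__x86_64__) || defined(_M_X64) || defined(__i386__)
# define EXAMPLE_ARCH_X86 1
#else
# define EXAMPLE_ARCH_X86 0
#endif

/* --- Capability layer: "does this platform do X?" -----------------------
 * Defined once, in terms of the platform macros above.                     */
#define EXAMPLE_HAVE_RDTSC EXAMPLE_ARCH_X86

/* --- The rest of the code asks only the capability question. ------------ */
#include <stdio.h>

int main(void)
{
#if EXAMPLE_HAVE_RDTSC
    printf("rdtsc-based timing available\n");
#else
    printf("fall back to a portable clock\n");
#endif
    return 0;
}
```

Adding a new platform then means touching only the platform layer, not
every capability test scattered through the sources.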
ScaleMP* contribution
---------------------
Stack padding to improve performance in their environment, where
cross-node coherency is managed at the page level.
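
A toy illustration of the idea using plain pthreads (the constants and the
work loop are made up; the runtime drives its padding from the
__kmp_stkoffset/__kmp_stkpadding globals touched in this change rather
than from user code):

```c
#include <alloca.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define PAGE     4096          /* assumed coherency granularity */

static void worker_body(int id)
{
    /* The hot, frequently written data near the top of each thread's stack. */
    volatile double local[8] = { 0 };
    int i;
    for (i = 0; i < 1000000; i++)
        local[i & 7] += i;
    printf("thread %d done (%f)\n", id, local[0]);
}

static void *worker(void *arg)
{
    int id = (int)(long)arg;

    /* Burn a thread-dependent amount of stack before doing real work, so the
       hot top-of-stack data of different threads lands on different pages. */
    char *pad = alloca((size_t)id * PAGE + 1);
    pad[0] = 0;                 /* touch the padding */

    worker_body(id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    long i;
    int j;
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (j = 0; j < NTHREADS; j++)
        pthread_join(t[j], NULL);
    return 0;
}
```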
Redesign of wait and release code
---------------------------------
The code is simplified and performance improved.
Bug Fixes
---------
* Fixes for Windows multiple processor groups.
* Fix Fortran module build on Linux: offload attribute added.
* Fix entry names for distribute-parallel-loop construct to be consistent with the compiler codegen.
* Fix an inconsistent error message for KMP_PLACE_THREADS environment variable.
llvm-svn: 219214
Diffstat (limited to 'openmp/runtime/src/kmp_global.c')
| -rw-r--r-- | openmp/runtime/src/kmp_global.c | 57 |
1 files changed, 47 insertions, 10 deletions
diff --git a/openmp/runtime/src/kmp_global.c b/openmp/runtime/src/kmp_global.c
index d3c31952d0f..5f188d03c5d 100644
--- a/openmp/runtime/src/kmp_global.c
+++ b/openmp/runtime/src/kmp_global.c
@@ -1,7 +1,7 @@
 /*
  * kmp_global.c -- KPTS global variables for runtime support library
- * $Revision: 42816 $
- * $Date: 2013-11-11 15:33:37 -0600 (Mon, 11 Nov 2013) $
+ * $Revision: 43473 $
+ * $Date: 2014-09-26 15:02:57 -0500 (Fri, 26 Sep 2014) $
  */
@@ -25,6 +25,20 @@ kmp_key_t __kmp_gtid_threadprivate_key;
 kmp_cpuinfo_t __kmp_cpuinfo = { 0 }; // Not initialized
 
+#if KMP_STATS_ENABLED
+#include "kmp_stats.h"
+// lock for modifying the global __kmp_stats_list
+kmp_tas_lock_t __kmp_stats_lock = KMP_TAS_LOCK_INITIALIZER(__kmp_stats_lock);
+
+// global list of per thread stats, the head is a sentinel node which accumulates all stats produced before __kmp_create_worker is called.
+kmp_stats_list __kmp_stats_list;
+
+// thread local pointer to stats node within list
+__thread kmp_stats_list* __kmp_stats_thread_ptr = &__kmp_stats_list;
+
+// gives reference tick for all events (considered the 0 tick)
+tsc_tick_count __kmp_stats_start_time;
+#endif
 
 /* ----------------------------------------------------- */
 /* INITIALIZATION VARIABLES */
@@ -53,6 +67,7 @@ unsigned int __kmp_next_wait = KMP_DEFAULT_NEXT_WAIT; /* susequent number of s
 size_t __kmp_stksize = KMP_DEFAULT_STKSIZE;
 size_t __kmp_monitor_stksize = 0; // auto adjust
 size_t __kmp_stkoffset = KMP_DEFAULT_STKOFFSET;
+int __kmp_stkpadding = KMP_MIN_STKPADDING;
 
 size_t __kmp_malloc_pool_incr = KMP_DEFAULT_MALLOC_POOL_INCR;
@@ -94,7 +109,7 @@ char const *__kmp_barrier_type_name [ bs_last_barrier ] =
 , "reduction"
 #endif // KMP_FAST_REDUCTION_BARRIER
 };
-char const *__kmp_barrier_pattern_name [ bp_last_bar ] = { "linear", "tree", "hyper" };
+char const *__kmp_barrier_pattern_name [ bp_last_bar ] = { "linear", "tree", "hyper", "hierarchical" };
 
 int __kmp_allThreadsSpecified = 0;
@@ -114,16 +129,17 @@ int __kmp_dflt_team_nth_ub = 0;
 int __kmp_tp_capacity = 0;
 int __kmp_tp_cached = 0;
 int __kmp_dflt_nested = FALSE;
-#if OMP_30_ENABLED
 int __kmp_dflt_max_active_levels = KMP_MAX_ACTIVE_LEVELS_LIMIT; /* max_active_levels limit */
-#endif // OMP_30_ENABLED
+#if KMP_NESTED_HOT_TEAMS
+int __kmp_hot_teams_mode = 0; /* 0 - free extra threads when reduced */
+                              /* 1 - keep extra threads when reduced */
+int __kmp_hot_teams_max_level = 1; /* nesting level of hot teams */
+#endif
 enum library_type __kmp_library = library_none;
 enum sched_type __kmp_sched = kmp_sch_default; /* scheduling method for runtime scheduling */
 enum sched_type __kmp_static = kmp_sch_static_greedy; /* default static scheduling method */
 enum sched_type __kmp_guided = kmp_sch_guided_iterative_chunked; /* default guided scheduling method */
-#if OMP_30_ENABLED
 enum sched_type __kmp_auto = kmp_sch_guided_analytical_chunked; /* default auto scheduling method */
-#endif // OMP_30_ENABLED
 int __kmp_dflt_blocktime = KMP_DEFAULT_BLOCKTIME;
 int __kmp_monitor_wakeups = KMP_MIN_MONITOR_WAKEUPS;
 int __kmp_bt_intervals = KMP_INTERVALS_FROM_BLOCKTIME( KMP_DEFAULT_BLOCKTIME, KMP_MIN_MONITOR_WAKEUPS );
@@ -242,7 +258,6 @@ unsigned int __kmp_place_num_threads_per_core = 0;
 unsigned int __kmp_place_core_offset = 0;
 #endif
 
-#if OMP_30_ENABLED
 kmp_tasking_mode_t __kmp_tasking_mode = tskm_task_teams;
 
 /* This check ensures that the compiler is passing the correct data type
@@ -255,8 +270,6 @@ KMP_BUILD_ASSERT( sizeof(kmp_tasking_flags_t) == 4 );
 kmp_int32 __kmp_task_stealing_constraint = 1; /* Constrain task stealing by default */
-#endif /* OMP_30_ENABLED */
-
 #ifdef DEBUG_SUSPEND
 int __kmp_suspend_count = 0;
 #endif
@@ -364,6 +377,29 @@ kmp_global_t __kmp_global = {{ 0 }};
 /* ----------------------------------------------- */
 /* GLOBAL SYNCHRONIZATION LOCKS */
 /* TODO verify the need for these locks and if they need to be global */
+
+#if KMP_USE_INTERNODE_ALIGNMENT
+/* Multinode systems have larger cache line granularity which can cause
+ * false sharing if the alignment is not large enough for these locks */
+KMP_ALIGN_CACHE_INTERNODE
+
+kmp_bootstrap_lock_t __kmp_initz_lock = KMP_BOOTSTRAP_LOCK_INITIALIZER( __kmp_initz_lock ); /* Control initializations */
+KMP_ALIGN_CACHE_INTERNODE
+kmp_bootstrap_lock_t __kmp_forkjoin_lock; /* control fork/join access */
+KMP_ALIGN_CACHE_INTERNODE
+kmp_bootstrap_lock_t __kmp_exit_lock; /* exit() is not always thread-safe */
+KMP_ALIGN_CACHE_INTERNODE
+kmp_bootstrap_lock_t __kmp_monitor_lock; /* control monitor thread creation */
+KMP_ALIGN_CACHE_INTERNODE
+kmp_bootstrap_lock_t __kmp_tp_cached_lock; /* used for the hack to allow threadprivate cache and __kmp_threads expansion to co-exist */
+
+KMP_ALIGN_CACHE_INTERNODE
+kmp_lock_t __kmp_global_lock; /* Control OS/global access */
+KMP_ALIGN_CACHE_INTERNODE
+kmp_queuing_lock_t __kmp_dispatch_lock; /* Control dispatch access */
+KMP_ALIGN_CACHE_INTERNODE
+kmp_lock_t __kmp_debug_lock; /* Control I/O access for KMP_DEBUG */
+#else
 KMP_ALIGN_CACHE
 kmp_bootstrap_lock_t __kmp_initz_lock = KMP_BOOTSTRAP_LOCK_INITIALIZER( __kmp_initz_lock ); /* Control initializations */
@@ -378,6 +414,7 @@ KMP_ALIGN(128)
 kmp_queuing_lock_t __kmp_dispatch_lock; /* Control dispatch access */
 KMP_ALIGN(128)
 kmp_lock_t __kmp_debug_lock; /* Control I/O access for KMP_DEBUG */
+#endif
 
 /* ----------------------------------------------- */

