author     Lang Hames <lhames@gmail.com>    2019-08-02 15:21:37 +0000
committer  Lang Hames <lhames@gmail.com>    2019-08-02 15:21:37 +0000
commit     809e9d1efa2f58b6333b1a0445e8574beedffc22 (patch)
tree       2e9b52fc726ab240ba28823ba6d5f3ccacaac248 /llvm/docs
parent     7eacefedab6fcdaab760ce28665fff57458a4e2f (diff)
[ORC] Change the locking scheme for ThreadSafeModule.
ThreadSafeModule/ThreadSafeContext are used to manage lifetimes and locking
for LLVMContexts in ORCv2. Prior to this patch, contexts were locked as soon
as an associated Module was emitted (to be compiled and linked), and were not
unlocked until the emit call returned. This could lead to deadlocks if
interdependent modules that shared contexts were compiled on different
threads: when, during emission of the first module, the dependence was
discovered, the second module (which would provide the required symbol) could
not be emitted, as the thread emitting the first module still held the lock.

This patch eliminates this possibility by moving to a finer-grained locking
scheme in which each client holds the module lock only while actively
operating on the module. To make this finer-grained locking simpler and safer
to implement, this patch removes the explicit lock method, 'getContextLock',
from ThreadSafeModule and replaces it with a new method, 'withModuleDo', that
implicitly locks the context, calls a user-supplied function object to operate
on the Module, then implicitly unlocks the context before returning the
result.

  ThreadSafeModule TSM = getModule(...);

  size_t NumFunctions = TSM.withModuleDo(
      [](Module &M) { // <- context locked before entry to lambda.
        return M.size();
      });

Existing ORCv2 layers that operate on ThreadSafeModules are updated to use the
new method, which introduces Module locking into each of the existing layers.

llvm-svn: 367686
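For illustration only (not part of the patch), here is a minimal sketch of the
same pattern applied to another per-module operation. It assumes a populated
ThreadSafeModule and uses LLVM's IR verifier; the helper name verifyTSM is
made up for the example. The context is locked for exactly the duration of the
withModuleDo call:

  #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
  #include "llvm/IR/Module.h"
  #include "llvm/IR/Verifier.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;
  using namespace llvm::orc;

  // Run the IR verifier on a ThreadSafeModule, holding the context lock only
  // while the lambda executes.
  static bool verifyTSM(ThreadSafeModule &TSM) {
    return TSM.withModuleDo([](Module &M) {
      // verifyModule returns true when the module is broken, so invert it.
      return !verifyModule(M, &errs());
    });
  }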
Diffstat (limited to 'llvm/docs')
-rw-r--r--  llvm/docs/ORCv2.rst | 66
1 file changed, 38 insertions(+), 28 deletions(-)
diff --git a/llvm/docs/ORCv2.rst b/llvm/docs/ORCv2.rst
index 4f9e08b9a15..d39fcd17407 100644
--- a/llvm/docs/ORCv2.rst
+++ b/llvm/docs/ORCv2.rst
@@ -153,7 +153,7 @@ Design Overview
ORC's JIT'd program model aims to emulate the linking and symbol resolution
rules used by the static and dynamic linkers. This allows ORC to JIT
arbitrary LLVM IR, including IR produced by an ordinary static compiler (e.g.
-clang) that uses constructs like symbol linkage and visibility, and weak [4]_
+clang) that uses constructs like symbol linkage and visibility, and weak [3]_
and common symbol definitions.
To see how this works, imagine a program ``foo`` which links against a pair
@@ -441,7 +441,7 @@ ThreadSafeModule and ThreadSafeContext are wrappers around Modules and
LLVMContexts respectively. A ThreadSafeModule is a pair of a
std::unique_ptr<Module> and a (possibly shared) ThreadSafeContext value. A
ThreadSafeContext is a pair of a std::unique_ptr<LLVMContext> and a lock.
-This design serves two purposes: providing both a locking scheme and lifetime
+This design serves two purposes: providing a locking scheme and lifetime
management for LLVMContexts. The ThreadSafeContext may be locked to prevent
accidental concurrent access by two Modules that use the same LLVMContext.
The underlying LLVMContext is freed once all ThreadSafeContext values pointing
@@ -471,33 +471,49 @@ Before using a ThreadSafeContext, clients should ensure that either the context
is only accessible on the current thread, or that the context is locked. In the
example above (where the context is never locked) we rely on the fact that both
``TSM1`` and ``TSM2``, and TSCtx are all created on one thread. If a context is
-going to be shared between threads then it must be locked before the context,
-or any Modules attached to it, are accessed. When code is added to in-tree IR
-layers this locking is is done automatically by the
-``BasicIRLayerMaterializationUnit::materialize`` method. In all other
-situations, for example when writing a custom IR materialization unit, or
-constructing a new ThreadSafeModule from higher-level program representations,
-locking must be done explicitly:
+going to be shared between threads then it must be locked before accessing
+or creating any Modules attached to it. E.g.
.. code-block:: c++
- void HighLevelRepresentationLayer::emit(MaterializationResponsibility R,
- HighLevelProgramRepresentation H) {
- // Get or create a context value that may be shared between threads.
- ThreadSafeContext TSCtx = getContext();
- // Lock the context to prevent concurrent access.
- auto Lock = TSCtx.getLock();
+ ThreadSafeContext TSCtx(llvm::make_unique<LLVMContext>());
- // IRGen a module onto the locked Context.
- ThreadSafeModule TSM(IRGen(H, *TSCtx.getContext()), TSCtx);
+ ThreadPool TP(NumThreads);
+ JITStack J;
- // Emit the module to the base layer with the context still locked.
- BaseIRLayer.emit(std::move(R), std::move(TSM));
- }
+ for (auto &ModulePath : ModulePaths) {
+ TP.async(
+ [&]() {
+ auto Lock = TSCtx.getLock();
+
+ auto M = loadModuleOnContext(ModulePath, TSCtx.getContext());
+
+ J.addModule(ThreadSafeModule(std::move(M), TSCtx));
+ });
+ }
+
+ TP.wait();
+
+To make exclusive access to Modules easier to manage, the ThreadSafeModule class
+provides a convenience function, ``withModuleDo``, that implicitly (1) locks the
+associated context, (2) runs a given function object, (3) unlocks the context,
+and (4) returns the result generated by the function object. E.g.
+
+ .. code-block:: c++
+
+ ThreadSafeModule TSM = getModule(...);
+
+ // Count the functions in the module:
+ size_t NumFunctionsInModule =
+ TSM.withModuleDo(
+ [](Module &M) { // <- Context locked before entering lambda.
+ return M.size();
+ } // <- Context unlocked after leaving.
+ );
Clients wishing to maximize possibilities for concurrent compilation will want
-to create every new ThreadSafeModule on a new ThreadSafeContext [3]_. For this
+to create every new ThreadSafeModule on a new ThreadSafeContext. For this
reason a convenience constructor for ThreadSafeModule is provided that implicitly
constructs a new ThreadSafeContext value from a std::unique_ptr<LLVMContext>:
@@ -620,13 +636,7 @@ TBD: Speculative compilation. Object Caches.
across processes, however this functionality appears not to have been
used.
-.. [3] Sharing ThreadSafeModules in a concurrent compilation can be dangerous:
- if interdependent modules are loaded on the same context, but compiled
- on different threads a deadlock may occur, with each compile waiting for
- the other to complete, and the other unable to proceed because the
- context is locked.
-
-.. [4] Weak definitions are currently handled correctly within dylibs, but if
+.. [3] Weak definitions are currently handled correctly within dylibs, but if
multiple dylibs provide a weak definition of a symbol then each will end
up with its own definition (similar to how weak definitions are handled
in Windows DLLs). This will be fixed in the future.
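Addendum (illustrative, not part of the patch): the convenience constructor
mentioned in the updated docs above can be used as in the following minimal
sketch, which creates each new Module on its own fresh LLVMContext. The helper
name makeFreshModule and the module name "m" are made up for the example:

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"

  using namespace llvm;
  using namespace llvm::orc;

  // Build an empty Module on a brand-new LLVMContext. The two-argument
  // ThreadSafeModule constructor wraps the context in a new ThreadSafeContext,
  // so unrelated modules never contend for the same context lock.
  static ThreadSafeModule makeFreshModule() {
    auto Ctx = llvm::make_unique<LLVMContext>();
    auto M = llvm::make_unique<Module>("m", *Ctx);
    return ThreadSafeModule(std::move(M), std::move(Ctx));
  }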