author    Andrew Lenharth <andrewl@lenharth.org>  2008-02-16 01:24:58 +0000
committer Andrew Lenharth <andrewl@lenharth.org>  2008-02-16 01:24:58 +0000
commit    9b254eed320283371ee43f900e76794c3991957e (patch)
tree      d5b65d5cff5240a980e09ee4d6cd995ee077bb79 /llvm/docs
parent    27055194b769d52c89fead4a4dc3614266fa59ba (diff)
download  bcm5719-llvm-9b254eed320283371ee43f900e76794c3991957e.tar.gz
          bcm5719-llvm-9b254eed320283371ee43f900e76794c3991957e.zip
llvm.memory.barrier, and impl for x86 and alpha
llvm-svn: 47204
Diffstat (limited to 'llvm/docs')
-rw-r--r--  llvm/docs/LangRef.html | 106
1 file changed, 106 insertions(+), 0 deletions(-)
diff --git a/llvm/docs/LangRef.html b/llvm/docs/LangRef.html
index d9d5ca83735..6267cf806cc 100644
--- a/llvm/docs/LangRef.html
+++ b/llvm/docs/LangRef.html
@@ -204,6 +204,11 @@
<li><a href="#int_it">'<tt>llvm.init.trampoline</tt>' Intrinsic</a></li>
</ol>
</li>
+ <li><a href="#int_atomics">Atomic intrinsics</a>
+ <ol>
+ <li><a href="#int_memory_barrier"><tt>llvm.memory_barrier</tt></li>
+ </ol>
+ </li>
<li><a href="#int_general">General intrinsics</a>
<ol>
<li><a href="#int_var_annotation">
@@ -5234,6 +5239,107 @@ declare i8* @llvm.init.trampoline(i8* &lt;tramp&gt;, i8* &lt;func&gt;, i8* &lt;n
<!-- ======================================================================= -->
<div class="doc_subsection">
+ <a name="int_atomics">Atomic Operations and Synchronization Intrinsics</a>
+</div>
+
+<div class="doc_text">
+<p>
+  These intrinsic functions expand the "universal IR" of LLVM to represent
+  hardware constructs for atomic operations and memory synchronization.  They
+  provide an interface to the hardware, not an interface for the programmer,
+  and are aimed at a low enough level that any programming model or API which
+  needs atomic behavior can map cleanly onto them.  They are also modeled
+  primarily on hardware behavior: just as hardware provides a "universal IR"
+  for source languages, it also provides a starting point for developing a
+  "universal" atomic operation and synchronization IR.
+</p>
+<p>
+  These intrinsics do <em>not</em> form an API such as the high-level
+  threading libraries, software transactional memory systems, atomic
+  primitives, and intrinsic functions found in BSD, GNU libc, atomic_ops,
+  APR, and other system and application libraries.  The hardware interface
+  provided by LLVM should allow a clean implementation of all of these APIs
+  and parallel programming models.  No one model or paradigm should be
+  selected above others unless the hardware itself ubiquitously does so.
+</p>
+</div>
+
+<!-- _______________________________________________________________________ -->
+<div class="doc_subsubsection">
+ <a name="int_memory_barrier">'<tt>llvm.memory.barrier</tt>' Intrinsic</a>
+</div>
+<div class="doc_text">
+<h5>Syntax:</h5>
+<pre>
+declare void @llvm.memory.barrier( i1 &lt;ll&gt;, i1 &lt;ls&gt;, i1 &lt;sl&gt;, i1 &lt;ss&gt;,
+                                   i1 &lt;device&gt; )
+</pre>
+<h5>Overview:</h5>
+<p>
+ The <tt>llvm.memory.barrier</tt> intrinsic guarantees ordering between
+ specific pairs of memory access types.
+</p>
+<h5>Arguments:</h5>
+<p>
+  The <tt>llvm.memory.barrier</tt> intrinsic requires five boolean arguments.
+  The first four arguments each enable a specific barrier as listed below.
+  The fifth argument specifies whether the barrier also applies to I/O,
+  device, or uncached memory.
+</p>
+ <ul>
+   <li><tt>ll</tt>: load-load barrier</li>
+   <li><tt>ls</tt>: load-store barrier</li>
+   <li><tt>sl</tt>: store-load barrier</li>
+   <li><tt>ss</tt>: store-store barrier</li>
+   <li><tt>device</tt>: the barrier also applies to device and uncached
+   memory.</li>
+ </ul>
+<h5>Semantics:</h5>
+<p>
+  This intrinsic causes the system to enforce some ordering constraints upon
+  the loads and stores of the program.  This barrier does not indicate
+  <em>when</em> any events will occur; it only enforces an <em>order</em> in
+  which they occur.  For any of the specified pairs of load and store
+  operations (e.g. load-load or store-load), all of the first operations
+  preceding the barrier will complete before any of the second operations
+  following the barrier begin.  Specifically, the semantics for each pairing
+  are as follows:
+</p>
+ <ul>
+   <li><tt>ll</tt>: All loads before the barrier must complete before any
+   load after the barrier begins.</li>
+   <li><tt>ls</tt>: All loads before the barrier must complete before any
+   store after the barrier begins.</li>
+   <li><tt>sl</tt>: All stores before the barrier must complete before any
+   load after the barrier begins.</li>
+   <li><tt>ss</tt>: All stores before the barrier must complete before any
+   store after the barrier begins.</li>
+ </ul>
+<p>
+  These semantics combine with a logical "and": when more than one ordering
+  is enabled in a single memory barrier intrinsic, each of the enabled
+  orderings is enforced.
+</p>
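+<p>
+  For example, a full barrier ordering every combination of loads and stores
+  (but not applying to device memory) can be requested by enabling all four
+  ordering arguments in a single call (a minimal sketch using the declaration
+  shown above):
+</p>
+<pre>
+call void @llvm.memory.barrier( i1 true, i1 true, i1 true, i1 true, i1 false )
+</pre>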
+<p>
+  Backends may implement a stronger barrier than the one requested when they
+  do not support a barrier as fine-grained as the one requested.  Some
+  architectures do not need all types of barriers; on such architectures,
+  the unneeded barriers become no-ops.
+</p>
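+<p>
+  As an illustration only (not a specification of any particular backend's
+  code generation), a barrier requesting store-load ordering might be lowered
+  to a full fence instruction such as <tt>mfence</tt> on x86, while a barrier
+  requesting only load-load ordering may become a no-op on an architecture
+  whose ordinary loads are already ordered with respect to each other:
+</p>
+<pre>
+  <i>; store-load barrier: might lower to a full fence (e.g. mfence on x86)</i>
+call void @llvm.memory.barrier( i1 false, i1 false, i1 true, i1 false, i1 false )
+
+  <i>; load-load barrier: may be a no-op where loads are already ordered</i>
+call void @llvm.memory.barrier( i1 true, i1 false, i1 false, i1 false, i1 false )
+</pre>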
+<h5>Example:</h5>
+<pre>
+%ptr      = malloc i32
+            store i32 4, i32* %ptr
+
+%result1  = load i32* %ptr          <i>; yields {i32}:result1 = 4</i>
+            call void @llvm.memory.barrier( i1 false, i1 true, i1 false, i1 false, i1 false )
+                                    <i>; guarantee the above finishes</i>
+            store i32 8, i32* %ptr  <i>; before this begins</i>
+</pre>
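+<p>
+  As a further illustration (a minimal sketch using hypothetical globals
+  <tt>@data</tt> and <tt>@ready</tt>), a store-store barrier can be used to
+  ensure a value is written before the flag that publishes it:
+</p>
+<pre>
+@data  = global i32 0
+@ready = global i1 false
+  ...
+  store i32 42, i32* @data
+  call void @llvm.memory.barrier( i1 false, i1 false, i1 false, i1 true, i1 false )
+                                    <i>; the store to @data completes before</i>
+  store i1 true, i1* @ready         <i>; the store to @ready begins</i>
+</pre>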
+</div>
+
+
+<!-- ======================================================================= -->
+<div class="doc_subsection">
<a name="int_general">General Intrinsics</a>
</div>