path: root/llvm/test/Transforms/AtomicExpand
Commit message (Author, date, files changed, -deleted/+added)
* AMDGPU: Fix copy-pasted test name error (Matt Arsenault, 2019-12-11, 1 file, -2/+2)
* AMDGPU: Select global atomicrmw fadd (Matt Arsenault, 2019-11-06, 1 file, -0/+145)
  This only works if there is no use of the return value.
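  Example (an illustrative sketch, not the committed test; the function name and the choice of addrspace(1) for an AMDGPU global pointer are assumptions):

      define void @global_atomic_fadd_noret(float addrspace(1)* %ptr, float %val) {
        ; The result is never read, matching the "no use of the return value" note above.
        %unused = atomicrmw fadd float addrspace(1)* %ptr, float %val seq_cst
        ret void
      }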
* [lit] Delete empty lines at the end of lit.local.cfg NFC (Fangrui Song, 2019-06-17, 2 files, -2/+0)
  llvm-svn: 363538
* AtomicExpand: Don't crash on non-0 alloca (Matt Arsenault, 2019-06-11, 1 file, -0/+37)
  This now produces garbage on AMDGPU, with a call to a nonexistent, anonymous libcall, but it no longer asserts.
  llvm-svn: 363022
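  Example (a hypothetical input of the kind that previously asserted; the use of AMDGPU's addrspace(5) private memory and the orderings are assumptions):

      define void @rmw_on_private_alloca() {
        ; Atomic operation on an alloca in a non-zero address space.
        %stack = alloca i32, addrspace(5)
        %old = atomicrmw add i32 addrspace(5)* %stack, i32 1 seq_cst
        ret void
      }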
* AMDGPU: Expand < 32-bit atomics (Matt Arsenault, 2019-06-11, 3 files, -45/+422)
  Also fix AtomicExpand asserting on atomicrmw fadd/fsub.
  llvm-svn: 363021
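  Example (an illustrative sub-32-bit atomic, not taken from the added tests; the address space and ordering are assumptions):

      define i16 @rmw_add_i16(i16 addrspace(1)* %ptr, i16 %val) {
        ; A 32-bit-only target must widen or expand this i16 atomicrmw.
        %old = atomicrmw add i16 addrspace(1)* %ptr, i16 %val seq_cst
        ret i16 %old
      }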
* Revert "Temporarily Revert "Add basic loop fusion pass.""Eric Christopher2019-04-1727-0/+2509
| | | | | | | | The reversion apparently deleted the test/Transforms directory. Will be re-reverting again. llvm-svn: 358552
* Temporarily Revert "Add basic loop fusion pass." (Eric Christopher, 2019-04-17, 27 files, -2509/+0)
  Reverting as it's causing some bot failures (and per request from kbarton).
  This reverts commit r358543/ab70da07286e618016e78247e4a24fcb84077fda.
  llvm-svn: 358546
* [AtomicExpand] Allow libcall expansion for non-zero address spaces (try 2) (Philip Reames, 2019-03-06, 1 file, -0/+34)
  Restore a reverted commit, with the silly mistake fixed. Sorry for the previous breakage.
  Be consistent about how we treat atomics in non-zero address spaces. If we get to the backend, we tend to lower them as if in address space 0. Do the same if we need to insert a libcall instead.
  Differential Revision: https://reviews.llvm.org/D58760
  llvm-svn: 355540
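  Example (a minimal sketch of the situation being made consistent; the function name and address space are assumptions, and whether a libcall or a native instruction results depends on the target):

      ; On a target without native support, AtomicExpand now emits an __atomic_*
      ; libcall for this, treating the pointer as if it were in address space 0.
      define i32 @load_in_as1(i32 addrspace(1)* %p) {
        %v = load atomic i32, i32 addrspace(1)* %p seq_cst, align 4
        ret i32 %v
      }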
* Revert "[AtomicExpand] Allow libcall expansion for non-zero address spaces" ↵Mitch Phillips2019-03-061-36/+0
| | | | | | for buildbot failures. llvm-svn: 355461
* [AtomicExpand] Allow libcall expansion for non-zero address spaces (Philip Reames, 2019-03-05, 1 file, -0/+36)
  Be consistent about how we treat atomics in non-zero address spaces. If we get to the backend, we tend to lower them as if in address space 0. Do the same if we need to insert a libcall instead.
  Differential Revision: https://reviews.llvm.org/D58760
  llvm-svn: 355453
* Codegen support for atomicrmw fadd/fsub (Matt Arsenault, 2019-01-22, 3 files, -0/+577)
  llvm-svn: 351851
* Reapply "IR: Add fp operations to atomicrmw"Matt Arsenault2019-01-228-0/+264
| | | | | | | This reapplies commits r351778 and r351782 with RISCV test fixes. llvm-svn: 351850
* Revert r351778: IR: Add fp operations to atomicrmw (Chandler Carruth, 2019-01-22, 8 files, -252/+0)
  This broke the RISCV build, and even with that fixed, one of the RISCV tests behaves surprisingly differently with asserts than without, leaving no clear test pattern to use. Generally it seems bad for the IR to differ substantially due to asserts (as in, an alloca is used with asserts that isn't needed without them!), and nothing I did simply fixed it, so I'm reverting back to green.
  This also required reverting the RISCV build fix in r351782.
  llvm-svn: 351796
* IR: Add fp operations to atomicrmw (Matt Arsenault, 2019-01-22, 8 files, -0/+252)
  Add just fadd/fsub for now.
  llvm-svn: 351778
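  Example (a minimal sketch of the new IR forms; the names and orderings are chosen for illustration):

      define float @fp_atomicrmw(float* %ptr, float %val) {
        ; The fadd/fsub variants of atomicrmw added by this change.
        %old.add = atomicrmw fadd float* %ptr, float %val seq_cst
        %old.sub = atomicrmw fsub float* %ptr, float %val monotonic
        ret float %old.add
      }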
* Allow FP types for atomicrmw xchg (Matt Arsenault, 2019-01-17, 3 files, -0/+102)
  llvm-svn: 351427
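  Example (an illustrative sketch of the now-accepted form; the function name is an assumption):

      define float @xchg_f32(float* %ptr, float %val) {
        ; atomicrmw xchg directly on a floating-point value, without a bitcast to i32.
        %old = atomicrmw xchg float* %ptr, float %val seq_cst
        ret float %old
      }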
* AMDGPU: Expand atomicrmw nand in IR (Matt Arsenault, 2018-10-02, 2 files, -0/+62)
  llvm-svn: 343559
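  Example (a hypothetical input, not the committed test; nand stores ~(old & val) back to memory):

      define i32 @rmw_nand(i32* %ptr, i32 %val) {
        ; Targets without a native nand instruction (AMDGPU here) get an IR-level
        ; compare-exchange loop from AtomicExpand for this.
        %old = atomicrmw nand i32* %ptr, i32 %val seq_cst
        ret i32 %old
      }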
* [AtomicExpandPass] Widen partword atomicrmw or/xor/and before tryExpandAtomicRMW (Alex Bradbury, 2018-08-17, 1 file, -0/+25)
  This patch performs a widening transformation of bitwise atomicrmw {or,xor,and} and applies it prior to tryExpandAtomicRMW. This operates similarly to convertCmpXchgToIntegerType. For these operations, the i8/i16 atomicrmw can be implemented in terms of the 32-bit atomicrmw by appropriately manipulating the operands. There is no functional change for the handling of partword or/xor, but the transformation for partword 'and' is new.
  The advantage of performing this transformation early is that the same code-path can be used regardless of the approach used to expand the atomicrmw (AtomicExpansionKind). i.e. the same logic is used for AtomicExpansionKind::CmpXchg and can also be used by the intrinsic-based expansion in D47882.
  Differential Revision: https://reviews.llvm.org/D48129
  llvm-svn: 340027
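  Example (a sketch of the input being widened, not the widened IR the pass emits; one way the operand manipulation can work is to shift the value into the aligned containing word, padding with zeros for or/xor and all-ones for and so neighbouring bytes are unchanged):

      define i8 @or_i8(i8* %ptr, i8 %val) {
        ; A partword bitwise atomicrmw that can be rewritten as a 32-bit atomicrmw.
        %old = atomicrmw or i8* %ptr, i8 %val monotonic
        ret i8 %old
      }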
* Add address space mangling to lifetime intrinsics (Matt Arsenault, 2017-04-10, 1 file, -22/+22)
  In preparation for allowing allocas to have non-0 addrspace.
  llvm-svn: 299876
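  Example (a declaration-only sketch of the mangling; the suffix shown assumes an i8* in address space 0):

      ; After this change the intrinsic name carries the pointer type, e.g.
      ; (previously the plain, unmangled @llvm.lifetime.start was used):
      declare void @llvm.lifetime.start.p0i8(i64, i8* nocapture)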
* Support expanding partial-word cmpxchg to full-word cmpxchg in AtomicExpandPass. (James Y Knight, 2016-06-17, 1 file, -0/+166)
  Many CPUs only have the ability to do a 4-byte cmpxchg (or ll/sc), not 1 or 2-byte. For those, you need to mask and shift the 1 or 2 byte values appropriately to use the 4-byte instruction.
  This change adds support for cmpxchg-based instruction sets (only SPARC, in LLVM). The support can be extended for LL/SC-based PPC and MIPS in the future, supplanting the ISel expansions those architectures currently use.
  Tests added for the IR transform and SPARCv9.
  Differential Revision: http://reviews.llvm.org/D21029
  llvm-svn: 273025
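  Example (a minimal sketch of a partial-word cmpxchg that needs this masking and shifting on a 4-byte-only target; the names and orderings are assumptions):

      define i8 @cmpxchg_i8(i8* %ptr, i8 %cmp, i8 %new) {
        ; On targets with only 32-bit cmpxchg or ll/sc, the byte is masked and
        ; shifted within its aligned 32-bit containing word.
        %pair = cmpxchg i8* %ptr, i8 %cmp, i8 %new seq_cst seq_cst
        %old = extractvalue { i8, i1 } %pair, 0
        ret i8 %old
      }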
* ARM: use a pseudo-instruction for cmpxchg at -O0. (Tim Northover, 2016-04-18, 3 files, -3/+3)
  The fast register-allocator cannot cope with inter-block dependencies without spilling. This is fine for ldrex/strex loops coming from atomicrmw instructions where any value produced within a block is dead by the end, but not for cmpxchg. So we lower a cmpxchg at -O0 via a pseudo-inst that gets expanded after regalloc.
  Fortunately this is at -O0 so we don't have to care about performance. This simplifies the various axes of expansion considerably: we assume a strong seq_cst operation and ensure ordering via the always-present DMB instructions rather than v8 acquire/release instructions.
  Should fix the 32-bit part of PR25526.
  llvm-svn: 266679
* Add __atomic_* lowering to AtomicExpandPass. (James Y Knight, 2016-04-12, 2 files, -0/+259)
  (Recommit of r266002, with r266011, r266016, and not accidentally including an extra unused/uninitialized element in LibcallRoutineNames.)
  AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and cmpxchg instructions to __atomic_* library calls, when the target doesn't support atomics of a given size. This is the first step towards moving all atomic lowering from clang into llvm. When all is done, the behavior of __sync_* builtins, __atomic_* builtins, and C11 atomics will be unified.
  Previously LLVM would pass everything through to the ISelLowering code. There, unsupported atomic instructions would turn into __sync_* library calls. Because of that behavior, Clang currently avoids emitting llvm IR atomic instructions when this would happen, and emits __atomic_* library functions itself, in the frontend. This change makes LLVM able to emit __atomic_* libcalls, and thus will eventually allow clang to depend on LLVM to do the right thing.
  It is advantageous to do the new lowering to atomic libcalls in AtomicExpandPass, before ISel time, because it's important that all atomic operations for a given size either lower to __atomic_* libcalls (which may use locks), or native instructions which won't. No mixing and matching.
  At the moment, this code is enabled only for SPARC, as a demonstration. The next commit will expand support to all of the other targets.
  Differential Revision: http://reviews.llvm.org/D18200
  llvm-svn: 266115
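  Example (a sketch of the kind of input that takes the new path; the width and function name are assumptions, and the exact __atomic_* routine emitted depends on the size and target):

      define i128 @load_i128(i128* %ptr) {
        ; An atomic load wider than the target supports natively; AtomicExpand now
        ; lowers it to an __atomic_* library call instead of leaving it for ISel.
        %val = load atomic i128, i128* %ptr seq_cst, align 16
        ret i128 %val
      }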
* This reverts commit r266002, r266011 and r266016. (Rafael Espindola, 2016-04-12, 2 files, -259/+0)
  They broke the msan bot.
  Original message:
  Add __atomic_* lowering to AtomicExpandPass.
  AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and cmpxchg instructions to __atomic_* library calls, when the target doesn't support atomics of a given size. This is the first step towards moving all atomic lowering from clang into llvm. When all is done, the behavior of __sync_* builtins, __atomic_* builtins, and C11 atomics will be unified.
  Previously LLVM would pass everything through to the ISelLowering code. There, unsupported atomic instructions would turn into __sync_* library calls. Because of that behavior, Clang currently avoids emitting llvm IR atomic instructions when this would happen, and emits __atomic_* library functions itself, in the frontend. This change makes LLVM able to emit __atomic_* libcalls, and thus will eventually allow clang to depend on LLVM to do the right thing.
  It is advantageous to do the new lowering to atomic libcalls in AtomicExpandPass, before ISel time, because it's important that all atomic operations for a given size either lower to __atomic_* libcalls (which may use locks), or native instructions which won't. No mixing and matching.
  At the moment, this code is enabled only for SPARC, as a demonstration. The next commit will expand support to all of the other targets.
  Differential Revision: http://reviews.llvm.org/D18200
  llvm-svn: 266062
* Add __atomic_* lowering to AtomicExpandPass. (James Y Knight, 2016-04-11, 2 files, -0/+259)
  AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and cmpxchg instructions to __atomic_* library calls, when the target doesn't support atomics of a given size. This is the first step towards moving all atomic lowering from clang into llvm. When all is done, the behavior of __sync_* builtins, __atomic_* builtins, and C11 atomics will be unified.
  Previously LLVM would pass everything through to the ISelLowering code. There, unsupported atomic instructions would turn into __sync_* library calls. Because of that behavior, Clang currently avoids emitting llvm IR atomic instructions when this would happen, and emits __atomic_* library functions itself, in the frontend. This change makes LLVM able to emit __atomic_* libcalls, and thus will eventually allow clang to depend on LLVM to do the right thing.
  It is advantageous to do the new lowering to atomic libcalls in AtomicExpandPass, before ISel time, because it's important that all atomic operations for a given size either lower to __atomic_* libcalls (which may use locks), or native instructions which won't. No mixing and matching.
  At the moment, this code is enabled only for SPARC, as a demonstration. The next commit will expand support to all of the other targets.
  Differential Revision: http://reviews.llvm.org/D18200
  llvm-svn: 266002
* ARM: sink atomic release barrier as far as possible into cmpxchg. (Tim Northover, 2016-02-22, 2 files, -17/+122)
  DMB instructions can be expensive, so it's best to avoid them if possible. In atomicrmw operations there will always be an attempted store so a release barrier is always needed, but in the cmpxchg case we can delay the DMB until we know we'll definitely try to perform a store (and so need release semantics).
  In the strong cmpxchg case this isn't quite free: we must duplicate the LDREX instructions to skip the barrier on subsequent iterations. The basic outline becomes:

          ldrex rOld, [rAddr]
          cmp rOld, rDesired
          bne Ldone
          dmb
      Lloop:
          strex rRes, rNew, [rAddr]
          cbz rRes, Ldone
          ldrex rOld, [rAddr]
          cmp rOld, rDesired
          beq Lloop
      Ldone:

  So we'll skip this version for strong operations in "minsize" functions.
  llvm-svn: 261568
* [IR] Extend cmpxchg to allow pointer type operands (Philip Reames, 2016-02-19, 1 file, -0/+85)
  Today, we do not allow cmpxchg operations with pointer arguments. We require the frontend to insert ptrtoint casts and do the cmpxchg in integers. While correct, this is problematic from a couple of perspectives:
  1) It makes the IR harder to analyse (for instance, it makes capture tracking overly conservative).
  2) It pushes work onto the frontend authors for no real gain.
  This patch implements the simplest form of IR support. As we did with floating point loads and stores, we teach AtomicExpand to convert back to the old representation. This prevents us needing to change all backends in a single lock step change. Over time, we can migrate each backend to natively selecting the pointer type. In the meantime, we get the advantages of a cleaner IR representation without waiting for the backend changes.
  Differential Revision: http://reviews.llvm.org/D17413
  llvm-svn: 261281
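  Example (a minimal sketch of the newly-allowed form; the names are illustrative, and the commit notes AtomicExpand still converts this back to the integer representation for backends that have not been updated):

      define i8* @cmpxchg_ptr(i8** %addr, i8* %expected, i8* %desired) {
        ; Pointer operands used directly; previously the frontend had to ptrtoint
        ; both values and perform the cmpxchg on an integer type.
        %pair = cmpxchg i8** %addr, i8* %expected, i8* %desired seq_cst seq_cst
        %old = extractvalue { i8*, i1 } %pair, 0
        ret i8* %old
      }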
* [IR] Add support for floating point atomic loads and stores (Philip Reames, 2015-12-16, 1 file, -0/+82)
  This patch allows atomic loads and stores of floating point to be specified in the IR and adds an adapter to allow them to be lowered via existing backend support for the bitcast-to-equivalent-integer idiom.
  Previously, the only way to specify an atomic float operation was to bitcast the pointer to an i32, load the value as an i32, then bitcast to a float. At its most basic, this patch simply moves this expansion step to the point we start lowering to the backend.
  This patch does not add canonicalization rules to convert the bitcast idioms to the appropriate atomic loads. I plan to do that in the future, but for now, let's simply add the support. I'd like to get instruction selection working through at least one backend (x86-64) without the bitcast conversion before canonicalizing into this form.
  Similarly, I haven't yet added the target hooks to opt out of the lowering step I added to AtomicExpand. I figured it would make more sense to add those once at least one backend (x86) was ready to actually opt out.
  As you can see from the included tests, the generated code quality is not great. I plan on submitting some patches to fix this, but help from others along that line would be very welcome. I'm not super familiar with the backend and my ramp-up time may be material.
  Differential Revision: http://reviews.llvm.org/D15471
  llvm-svn: 255737
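  Example (a minimal sketch of the newly-allowed IR; the alignment and orderings are chosen for illustration):

      define float @fp_atomic_load_store(float* %ptr, float %val) {
        ; Atomic load and store of float, previously only expressible by
        ; bitcasting the pointer and going through i32.
        %old = load atomic float, float* %ptr seq_cst, align 4
        store atomic float %val, float* %ptr seq_cst, align 4
        ret float %old
      }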
* [ARM] Emit clrex in the expanded cmpxchg fail block. (Ahmed Bougacha, 2015-09-22, 3 files, -12/+56)
  ARM counterpart to r248291:
  In the comparison failure block of a cmpxchg expansion, the initial ldrex/ldxr will not be followed by a matching strex/stxr. On ARM/AArch64, this unnecessarily ties up the execution monitor, which might have a negative performance impact on some uarchs.
  Instead, release the monitor in the failure block. The clrex instruction was designed for this: use it.
  Also see ARMARM v8-A B2.10.2: "Exclusive access instructions and Shareable memory locations".
  Differential Revision: http://reviews.llvm.org/D13033
  llvm-svn: 248294
* Fix an alignment error in `llvm::expandAtomicRMWToCmpXchg` without breaking the build where X86 isn't enabled. (Richard Diamond, 2015-08-06, 2 files, -0/+13)
  Summary: Divide the primitive size in bits by eight so the initial load's alignment is in bytes as expected. Tested with the included unit test.
  Reviewers: rengolin, jfb
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11804
  llvm-svn: 244229
* Revert "Divide the primitive size in bits by eight so the initial load's ↵Renato Golin2015-08-061-8/+0
| | | | | | | | | alignment is in bytes as expected. Tested with the included unit test." This reverts commit r244155, as it was breaking the buildbots for too long. Should be reapplied with proper fix. llvm-svn: 244205
* Divide the primitive size in bits by eight so the initial load's alignment is in bytes as expected. (Richard Diamond, 2015-08-05, 1 file, -0/+8)
  Tested with the included unit test.
  llvm-svn: 244155
* Use target-dependent emitLeading/TrailingFence instead of the target-independent insertLeading/TrailingFence (in AtomicExpandPass) (Robin Morisset, 2014-09-03, 2 files, -43/+44)
  Fixes two latent bugs:
  - There was no fence inserted before expanded seq_cst load (unsound on Power).
  - There was only a fence release before seq_cst stores (again unsound, in particular on Power).
  It is not even clear if this is correct on ARM swift processors (where release fences are DMB ishst instead of DMB ish). This behaviour is currently preserved on ARM Swift as it is not clear whether it is incorrect. I would love to get documentation stating whether it is correct or not.
  These two bugs were not triggered because Power is not (yet) using this pass, and these behaviours happen to be (mostly?) working on ARM (although they completely butchered the semantics of the llvm IR). See http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-August/075821.html for an example of the problems that can be caused by the second of these bugs.
  I couldn't see a way of fixing these in a completely target-independent way without adding lots of unnecessary fences on ARM, hence the target-dependent parts of this patch. This patch implements the new target-dependent parts only for ARM (the default of not doing anything is enough for AArch64), other architectures will use this infrastructure in later patches.
  llvm-svn: 217076
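  Example (purely an illustration of the shape of a fence-based expansion; the specific fences are chosen by the target's emitLeadingFence/emitTrailingFence hooks, and the orderings shown here are assumptions, not what any particular backend emits):

      define i32 @expanded_seq_cst_load(i32* %ptr) {
        ; A leading fence before the weakened access and a trailing fence after it.
        fence seq_cst
        %val = load atomic i32, i32* %ptr monotonic, align 4
        fence acquire
        ret i32 %val
      }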
* Rename AtomicExpandLoadLinked into AtomicExpand (Robin Morisset, 2014-08-21, 4 files, -0/+690)
  AtomicExpandLoadLinked is currently rather ARM-specific. This patch is the first of a group that aims at making it more target-independent. See http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-August/075873.html for details.
  The command line option is "atomic-expand".
  llvm-svn: 216231
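  Example (given the "atomic-expand" option name, a test in this directory can be driven along these lines; the triple and the CHECK line are assumptions for the sketch, not taken from the commit):

      ; RUN: opt -S -mtriple=armv7-linux-gnueabihf -atomic-expand %s | FileCheck %s

      define i32 @test_expansion(i32* %ptr, i32 %val) {
      ; CHECK-LABEL: @test_expansion(
        %old = atomicrmw add i32* %ptr, i32 %val seq_cst
        ret i32 %old
      }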