path: root/llvm/lib/Target/ARM/ARMAtomicExpandPass.cpp
Commit message | Author | Date | Files | Lines
* Atomics: promote ARM's IR-based atomics pass to CodeGen. | Tim Northover | 2014-04-17 | 1 | -406/+0
  Still only 32-bit ARM using it at this stage, but the promotion allows
  direct testing via opt and is a reasonably self-contained patch on the way
  to switching ARM64. At this point, other targets should be able to make use
  of it without too much difficulty if they want. (See ARM64 commit coming
  soon for an example).

  llvm-svn: 206485
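Since the commit message points out that the promotion allows direct testing via opt, here is a minimal regression-test-style sketch of how the pass can be driven from the command line. The pass flag has changed over the years (it is exposed as -atomic-expand in later trees), so treat the RUN line, the triple, and the CHECK pattern as illustrative assumptions rather than the exact test added by this commit.

    ; Hypothetical test sketch: run the IR-level atomic expansion through opt
    ; and check that the atomicrmw has been rewritten away. The flag is
    ; assumed to be -atomic-expand (a later name for this pass).
    ; RUN: opt -S -mtriple=armv7-linux-gnueabihf -atomic-expand %s | FileCheck %s

    ; CHECK-LABEL: @fetch_add
    ; CHECK-NOT: atomicrmw
    define i32 @fetch_add(i32* %p, i32 %v) {
      %old = atomicrmw add i32* %p, i32 %v seq_cst
      ret i32 %old
    }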
* ARM: skip cmpxchg failure barrier if ordering is monotonic. | Tim Northover | 2014-04-03 | 1 | -12/+21
  The terminal barrier of a cmpxchg expansion will be either Acquire or
  SequentiallyConsistent. In either case it can be skipped if the operation
  has Monotonic requirements on failure.

  rdar://problem/15996804
  llvm-svn: 205535
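To illustrate the shape of the change, here is a hedged sketch of a cmpxchg expansion with a SequentiallyConsistent success ordering and a Monotonic failure ordering: the terminal barrier sits only on the success path, and the failure path branches straight to the exit. The function name, block layout, and exact fence choices are illustrative assumptions, not the pass's literal output; the typed-pointer intrinsic names @llvm.arm.ldrex.p0i32 and @llvm.arm.strex.p0i32 match the era of this commit.

    ; Sketch only: cmpxchg with seq_cst success / monotonic failure ordering.
    ; IR of this era returned just the loaded value (no { i32, i1 } struct).
    define i32 @cas(i32* %p, i32 %cmp, i32 %new) {
    entry:
      fence release                        ; assumed leading barrier
      br label %loop

    loop:
      %old = call i32 @llvm.arm.ldrex.p0i32(i32* %p)
      %should_store = icmp eq i32 %old, %cmp
      br i1 %should_store, label %try_store, label %failure

    try_store:
      %rejected = call i32 @llvm.arm.strex.p0i32(i32 %new, i32* %p)
      %retry = icmp ne i32 %rejected, 0
      br i1 %retry, label %loop, label %success

    success:
      fence seq_cst                        ; terminal barrier kept on success
      br label %done

    failure:
      br label %done                       ; monotonic failure: barrier skipped

    done:
      ret i32 %old
    }

    declare i32 @llvm.arm.ldrex.p0i32(i32*)
    declare i32 @llvm.arm.strex.p0i32(i32, i32*)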
* ARM: expand atomic ldrex/strex loops in IR | Tim Northover | 2014-04-03 | 1 | -0/+397
  The previous situation where ATOMIC_LOAD_WHATEVER nodes were expanded at
  MachineInstr emission time had grown to be extremely large and involved, to
  account for the subtly different code needed for the various flavours
  (8/16/32/64 bit, cmpxchg/add/minmax).

  Moving this transformation into the IR clears up the code substantially,
  and makes future optimisations much easier:

  1. an atomicrmw followed by using the *new* value can be more efficient.
     As an IR pass, simple CSE could handle this efficiently.
  2. Making use of cmpxchg success/failure orderings only has to be done in
     one (simpler) place.
  3. The common "cmpxchg; did we store?" idiom can be exposed to
     optimisation.

  I intend to gradually improve this situation within the ARM backend and
  make sure there are no hidden issues before moving the code out into
  CodeGen to be shared with (at least ARM64/AArch64, though I think PPC &
  Mips could benefit too).

  llvm-svn: 205525
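As a concrete illustration of the before/after shape this commit describes, the sketch below shows a seq_cst atomicrmw add and, underneath, roughly the ldrex/strex loop the IR pass expands it into. The function name, fence placement, and block names are assumptions for illustration; the real output depends on the requested ordering and the subtarget.

    ; Sketch only. Before expansion:
    ;   %old = atomicrmw add i32* %p, i32 %v seq_cst
    ; After expansion, approximately:
    define i32 @fetch_add_expanded(i32* %p, i32 %v) {
    entry:
      fence release                        ; assumed leading barrier
      br label %loop

    loop:
      %old = call i32 @llvm.arm.ldrex.p0i32(i32* %p)
      %new = add i32 %old, %v              ; the "add" flavour; sub/min/max differ only here
      %rejected = call i32 @llvm.arm.strex.p0i32(i32 %new, i32* %p)
      %retry = icmp ne i32 %rejected, 0
      br i1 %retry, label %loop, label %done

    done:
      fence seq_cst                        ; assumed trailing barrier
      ret i32 %old
    }

    declare i32 @llvm.arm.ldrex.p0i32(i32*)
    declare i32 @llvm.arm.strex.p0i32(i32, i32*)

Because the whole loop is now ordinary IR, a later use of the updated value (point 1 in the commit message) or a "did we store?" check (point 3) is visible to standard IR optimisations such as CSE, which is the benefit the commit message describes.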