author     Alex Bradbury <asb@lowrisc.org>  2018-08-17 14:03:37 +0000
committer  Alex Bradbury <asb@lowrisc.org>  2018-08-17 14:03:37 +0000
commit     3291f9aa8168408ce5ef4757012c06c196a72c41 (patch)
tree       250f4e4c3d2990d0ecfb1c1bb086e0e9febd3ddc /llvm/test/Transforms
parent     1962621a7e3350ed3677645dba6d0f20a765db4f (diff)
[AtomicExpandPass] Widen partword atomicrmw or/xor/and before tryExpandAtomicRMW
This patch performs a widening transformation of bitwise atomicrmw
{or,xor,and} and applies it prior to tryExpandAtomicRMW. This operates
similarly to convertCmpXchgToIntegerType. For these operations, the i8/i16
atomicrmw can be implemented in terms of the 32-bit atomicrmw by appropriately
manipulating the operands. There is no functional change for the handling of
partword or/xor, but the transformation for partword 'and' is new.
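The operand manipulation the commit message describes can be sketched as follows. This is a minimal Python model of the arithmetic, not LLVM's actual code; the names `shift` and `inv_mask` are illustrative, loosely mirroring the `%ShiftAmt` and `%Inv_Mask` values that appear in the test below. The key idea: for `or`/`xor`, bits outside the subword are zero (the identity for those operations), while for `and` they must be one, so the inverted subword mask is ORed into the operand.

```python
MASK16 = 0xFFFF
WORD_MASK = 0xFFFFFFFF

def widen_operand(op, val16, shift):
    """Build the i32 operand that makes a full-word atomic `op` act as an
    i16 `op` on the subword at bit offset `shift`, leaving the rest of the
    word unchanged."""
    shifted = (val16 & MASK16) << shift  # roughly %ValOperand_Shifted
    if op in ("or", "xor"):
        # Zero bits outside the subword: identity for or/xor.
        return shifted
    if op == "and":
        # Bits outside the subword must be 1 (identity for and), so OR in
        # the inverted mask -- cf. %AndOperand = or i32 %Inv_Mask, ...
        inv_mask = ~(MASK16 << shift) & WORD_MASK
        return shifted | inv_mask
    raise ValueError(op)

def apply_word(op, word, operand):
    """The 32-bit atomic body: one full-word bitwise op."""
    fns = {"or": lambda a, b: a | b,
           "xor": lambda a, b: a ^ b,
           "and": lambda a, b: a & b}
    return fns[op](word, operand) & WORD_MASK

# Example: i16 'and' with 0x00FF on the low half of 0xAAAA1234.
word = 0xAAAA1234
new_word = apply_word("and", word, widen_operand("and", 0x00FF, 0))
assert new_word == 0xAAAA0034  # low i16 ANDed; high half untouched
```

Since `or`/`xor` need no mask adjustment at all, only the `and` case required the new transformation; that is why the patch changes behavior only for partword `and`.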
The advantage of performing this transformation early is that the same code path can be used regardless of the approach used to expand the atomicrmw (AtomicExpansionKind); i.e., the same logic is used for AtomicExpansionKind::CmpXchg and can also be used by the intrinsic-based expansion in D47882.
Differential Revision: https://reviews.llvm.org/D48129
llvm-svn: 340027
Diffstat (limited to 'llvm/test/Transforms')
-rw-r--r--  llvm/test/Transforms/AtomicExpand/SPARC/partword.ll | 25 ++++++++++
1 file changed, 25 insertions(+), 0 deletions(-)
diff --git a/llvm/test/Transforms/AtomicExpand/SPARC/partword.ll b/llvm/test/Transforms/AtomicExpand/SPARC/partword.ll
index 9963d17c242..74c05615d0b 100644
--- a/llvm/test/Transforms/AtomicExpand/SPARC/partword.ll
+++ b/llvm/test/Transforms/AtomicExpand/SPARC/partword.ll
@@ -147,6 +147,31 @@ entry:
   ret i16 %ret
 }
 
+; CHECK-LABEL: @test_or_i16(
+; (I'm going to just assert on the bits that differ from add, above.)
+; CHECK:atomicrmw.start:
+; CHECK: %new = or i32 %loaded, %ValOperand_Shifted
+; CHECK: %6 = cmpxchg i32* %AlignedAddr, i32 %loaded, i32 %new monotonic monotonic
+; CHECK:atomicrmw.end:
+define i16 @test_or_i16(i16* %arg, i16 %val) {
+entry:
+  %ret = atomicrmw or i16* %arg, i16 %val seq_cst
+  ret i16 %ret
+}
+
+; CHECK-LABEL: @test_and_i16(
+; (I'm going to just assert on the bits that differ from add, above.)
+; CHECK: %AndOperand = or i32 %Inv_Mask, %ValOperand_Shifted
+; CHECK:atomicrmw.start:
+; CHECK: %new = and i32 %loaded, %AndOperand
+; CHECK: %6 = cmpxchg i32* %AlignedAddr, i32 %loaded, i32 %new monotonic monotonic
+; CHECK:atomicrmw.end:
+define i16 @test_and_i16(i16* %arg, i16 %val) {
+entry:
+  %ret = atomicrmw and i16* %arg, i16 %val seq_cst
+  ret i16 %ret
+}
+
 ; CHECK-LABEL: @test_min_i16(
 ; CHECK:atomicrmw.start:
 ; CHECK: %6 = lshr i32 %loaded, %ShiftAmt