| author | Tim Northover <tnorthover@apple.com> | 2014-07-14 15:31:13 +0000 |
|---|---|---|
| committer | Tim Northover <tnorthover@apple.com> | 2014-07-14 15:31:13 +0000 |
| commit | 6c647eae8b6c334eed00323413d63a16e44413ae | |
| tree | b7dbb69697cec80f756eb041eebfd4f1fd662d47 | |
| parent | 4d5e23f53a68937e3108293a5b408f2790b309e8 | |
X86: remove temporary atomicrmw used during lowering.
We construct a temporary "atomicrmw xchg" instruction when lowering atomic
stores for widths that aren't supported natively. This temporary isn't on the
pass's top-level worklist, though, so it won't be removed automatically; we
have to erase it ourselves once it has been lowered.
Thanks Saleem for pointing this out!
llvm-svn: 212948
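
For illustration, here is a rough sketch of the lowering flow being fixed. Only the if-block matches the patch verbatim; the construction of the temporary exchange is reconstructed from the hunk context below and the LLVM API of that era, so treat everything outside the if-block as an assumption rather than the pass's exact code.

```cpp
// Rough sketch only: shouldExpandAtomicRMW/expandAtomicRMW stand in for the
// pass members named in the diff, with assumed trivial stubs here.
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

static bool shouldExpandAtomicRMW(AtomicRMWInst *) { return true; } // stub
static bool expandAtomicRMW(AtomicRMWInst *) { return true; } // stub: emits the cmpxchg loop

static bool expandAtomicStore(StoreInst *SI) {
  // Re-express the unsupported atomic store as an "atomicrmw xchg" whose
  // result is simply ignored, then delete the original store. (The real
  // pass also promotes 'unordered' to 'monotonic' first, elided here.)
  IRBuilder<> Builder(SI);
  AtomicRMWInst *AI = Builder.CreateAtomicRMW(
      AtomicRMWInst::Xchg, SI->getPointerOperand(), SI->getValueOperand(),
      SI->getOrdering());
  SI->eraseFromParent();

  // The temporary xchg was created after the pass built its worklist, so
  // nothing else will clean it up: once it has been lowered, erase it here
  // by hand. This if-block is the actual change in this commit.
  if (shouldExpandAtomicRMW(AI)) {
    expandAtomicRMW(AI);
    AI->eraseFromParent();
    return true;
  }

  return AI; // non-null pointer, i.e. true: the store was still rewritten
}
```

Without the eraseFromParent() call, the dead atomicrmw would survive into instruction selection, where an i128 exchange gets lowered to a ___sync_lock_test_and_set_16 libcall on this target, which is presumably what the new CHECK-NOT line in atomic128.ll guards against.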
Diffstat (limited to 'llvm')
| -rw-r--r-- | llvm/lib/Target/X86/X86AtomicExpandPass.cpp | 7 |
| -rw-r--r-- | llvm/test/CodeGen/X86/atomic128.ll | 1 |
2 files changed, 6 insertions, 2 deletions
diff --git a/llvm/lib/Target/X86/X86AtomicExpandPass.cpp b/llvm/lib/Target/X86/X86AtomicExpandPass.cpp
index 1637b55b6d3..61eefbbf75b 100644
--- a/llvm/lib/Target/X86/X86AtomicExpandPass.cpp
+++ b/llvm/lib/Target/X86/X86AtomicExpandPass.cpp
@@ -277,8 +277,11 @@ bool X86AtomicExpandPass::expandAtomicStore(StoreInst *SI) {
                                      SI->getValueOperand(), Order);
 
   // Now we have an appropriate swap instruction, lower it as usual.
-  if (shouldExpandAtomicRMW(AI))
-    return expandAtomicRMW(AI);
+  if (shouldExpandAtomicRMW(AI)) {
+    expandAtomicRMW(AI);
+    AI->eraseFromParent();
+    return true;
+  }
 
   return AI;
 }
diff --git a/llvm/test/CodeGen/X86/atomic128.ll b/llvm/test/CodeGen/X86/atomic128.ll
index ddc53a53202..741d2904229 100644
--- a/llvm/test/CodeGen/X86/atomic128.ll
+++ b/llvm/test/CodeGen/X86/atomic128.ll
@@ -277,6 +277,7 @@ define void @atomic_store_seq_cst(i128* %p, i128 %in) {
 ; CHECK: lock
 ; CHECK: cmpxchg16b (%rdi)
 ; CHECK: jne [[LOOP]]
+; CHECK-NOT: callq ___sync_lock_test_and_set_16
   store atomic i128 %in, i128* %p seq_cst, align 16
   ret void
 }

