path: root/llvm/lib/CodeGen/MachineInstr.cpp
author    Philip Reames <listmail@philipreames.com>  2019-03-14 17:20:59 +0000
committer Philip Reames <listmail@philipreames.com>  2019-03-14 17:20:59 +0000
commit    70d156991ca425b2c472935e144e98ce205c666c (patch)
tree      9f22561d367508f82225bced0c46ae8a9a0f880e /llvm/lib/CodeGen/MachineInstr.cpp
parent    6f8dddf169336449aac339e6e107d604e8adf6a4 (diff)
Allow code motion (and thus folding) for atomic (but unordered) memory operands
Building on the work done in D57601, now that we can distinguish between atomic and volatile memory accesses, go ahead and allow code motion of unordered atomics. As seen in the diffs, this allows much better folding of memory operations into using instructions. (Mostly done by the PeepholeOpt pass.)

Note: I have not reviewed all callers of hasOrderedMemoryRef since one of them - isSafeToMove - is very widely used. I'm relying on the documented semantics of each method to judge correctness.

Differential Revision: https://reviews.llvm.org/D59345

llvm-svn: 356170
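For context, a minimal sketch of the predicate change (paraphrasing MachineMemOperand's accessors; this is an illustration, not code from the patch): isUnordered() is true only for accesses that are neither volatile nor atomic with an ordering of monotonic or stronger, so unordered atomics stop counting as ordered memory references while volatile and ordered-atomic accesses still do.

#include "llvm/CodeGen/MachineMemOperand.h"

// Old predicate: any atomic access, even an unordered one, made
// hasOrderedMemoryRef() return true and blocked code motion.
static bool wasOrderedRef(const llvm::MachineMemOperand *MMO) {
  return MMO->isVolatile() || MMO->isAtomic();
}

// New predicate: only volatile accesses and atomics ordered monotonic or
// stronger count. isUnordered() covers non-volatile NotAtomic/Unordered
// accesses, which are now free to move.
static bool isOrderedRef(const llvm::MachineMemOperand *MMO) {
  return !MMO->isUnordered();
}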
Diffstat (limited to 'llvm/lib/CodeGen/MachineInstr.cpp')
-rw-r--r--  llvm/lib/CodeGen/MachineInstr.cpp | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/llvm/lib/CodeGen/MachineInstr.cpp b/llvm/lib/CodeGen/MachineInstr.cpp
index 95f5eb91ee1..17bd0f38964 100644
--- a/llvm/lib/CodeGen/MachineInstr.cpp
+++ b/llvm/lib/CodeGen/MachineInstr.cpp
@@ -1291,10 +1291,8 @@ bool MachineInstr::hasOrderedMemoryRef() const {
     return true;
 
   // Check if any of our memory operands are ordered.
-  // TODO: This should probably be be isUnordered (see D57601), but the callers
-  // need audited and test cases written to be sure.
   return llvm::any_of(memoperands(), [](const MachineMemOperand *MMO) {
-    return MMO->isVolatile() || MMO->isAtomic();
+    return !MMO->isUnordered();
   });
 }
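As a usage illustration (a hypothetical caller in the spirit of the isSafeToMove path mentioned above, not part of this commit): passes that gate code motion on hasOrderedMemoryRef() now treat an unordered atomic load like a plain load.

#include "llvm/CodeGen/MachineInstr.h"

// Hypothetical helper sketching the kind of check callers perform: a load
// with no ordered memory reference is a candidate for hoisting or folding.
// With this patch, an unordered atomic load passes this test.
static bool mayMoveLoad(const llvm::MachineInstr &MI) {
  return MI.mayLoad() && !MI.hasOrderedMemoryRef();
}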