| author | Philip Reames <listmail@philipreames.com> | 2019-03-01 18:00:07 +0000 |
|---|---|---|
| committer | Philip Reames <listmail@philipreames.com> | 2019-03-01 18:00:07 +0000 |
| commit | 77982868c533ad9993eff4954eb7f8a7f687ad9e (patch) | |
| tree | de8c4edc402a65594a1e2ff237d0151ce117cf73 /llvm/lib | |
| parent | 39f6d7e6160b27aaffbd03f1ecc1560b77b7d1df (diff) | |
[InstCombine] Extend "idempotent" atomicrmw optimizations to floating point
An idempotent atomicrmw is one that does not change memory in the process of execution. We have already added handling for the various integer operations; this patch extends the same handling to floating point operations which were recently added to IR.
Note: At the moment, we canonicalize idempotent fsub to fadd when ordering requirements prevent us from using a load. As discussed in the review, I will be replacing this with canonicalizing both floating point ops to integer ops in the near future.
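In IR terms, the canonicalization described above looks roughly like this (a hand-written sketch using the typed-pointer syntax of the time, not output copied from a test):

```llvm
; Before: an idempotent fsub (subtracting +0.0 leaves memory unchanged).
%old = atomicrmw fsub float* %ptr, float 0.0 seq_cst

; After: the same operation rewritten as fadd w/-0.0, the single
; canonical form the rest of the optimizer has to match.
%old = atomicrmw fadd float* %ptr, float -0.0 seq_cst
```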
Differential Revision: https://reviews.llvm.org/D58251
llvm-svn: 355210
Diffstat (limited to 'llvm/lib')
| -rw-r--r-- | llvm/lib/Transforms/InstCombine/InstCombineAtomicRMW.cpp | 19 |
1 file changed, 17 insertions(+), 2 deletions(-)
```diff
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAtomicRMW.cpp b/llvm/lib/Transforms/InstCombine/InstCombineAtomicRMW.cpp
index b857741e840..d3a7d32ec75 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineAtomicRMW.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineAtomicRMW.cpp
@@ -21,9 +21,18 @@ namespace {
 /// TODO: Common w/ the version in AtomicExpandPass, and change the term used.
 /// Idemptotent is confusing in this context.
 bool isIdempotentRMW(AtomicRMWInst& RMWI) {
+  if (auto CF = dyn_cast<ConstantFP>(RMWI.getValOperand()))
+    switch(RMWI.getOperation()) {
+    case AtomicRMWInst::FAdd: // -0.0
+      return CF->isZero() && CF->isNegative();
+    case AtomicRMWInst::FSub: // +0.0
+      return CF->isZero() && !CF->isNegative();
+    default:
+      return false;
+    };
+
   auto C = dyn_cast<ConstantInt>(RMWI.getValOperand());
   if(!C)
-    // TODO: Handle fadd, fsub?
     return false;
 
   switch(RMWI.getOperation()) {
@@ -116,12 +125,18 @@ Instruction *InstCombiner::visitAtomicRMWInst(AtomicRMWInst &RMWI) {
 
   // We chose to canonicalize all idempotent operations to an single
   // operation code and constant.  This makes it easier for the rest of the
-  // optimizer to match easily.  The choice of or w/zero is arbitrary.
+  // optimizer to match easily.  The choices of or w/0 and fadd w/-0.0 are
+  // arbitrary.
   if (RMWI.getType()->isIntegerTy() &&
       RMWI.getOperation() != AtomicRMWInst::Or) {
     RMWI.setOperation(AtomicRMWInst::Or);
     RMWI.setOperand(1, ConstantInt::get(RMWI.getType(), 0));
     return &RMWI;
+  } else if (RMWI.getType()->isFloatingPointTy() &&
+             RMWI.getOperation() != AtomicRMWInst::FAdd) {
+    RMWI.setOperation(AtomicRMWInst::FAdd);
+    RMWI.setOperand(1, ConstantFP::getNegativeZero(RMWI.getType()));
+    return &RMWI;
   }
 
   // Check if the required ordering is compatible with an atomic load.
```

