author     Roman Lebedev <lebedev.ri@gmail.com>   2018-06-06 19:38:27 +0000
committer  Roman Lebedev <lebedev.ri@gmail.com>   2018-06-06 19:38:27 +0000
commit     cbf8446359a226466efb12ad0e1dbc66a3afe592
tree       e11cae33512a97012de523ef3c0f3f9b064a6acc /llvm/lib/Transforms
parent     4771bc6c3581e5988e827ca3722753483d715f85
[InstCombine] PR37603: low bit mask canonicalization
Summary:
This is [[ https://bugs.llvm.org/show_bug.cgi?id=37603 | PR37603 ]].
https://godbolt.org/g/VCMNpS
https://rise4fun.com/Alive/idM
When doing bit manipulation, it is quite common to compute some bit mask
and then apply it to some value via `and`.
The typical C code looks like:
```
int mask_signed_add(int nbits) {
return (1 << nbits) - 1;
}
```
which is translated (with `-O3`) into:
```
define dso_local i32 @mask_signed_add(i32) local_unnamed_addr #0 {
%2 = shl i32 1, %0
%3 = add nsw i32 %2, -1
ret i32 %3
}
```
But there is a second, less readable variant:
```
int mask_signed_xor(int nbits) {
return ~(-(1 << nbits));
}
```
which is translated (with `-O3`) into:
```
define dso_local i32 @mask_signed_xor(i32) local_unnamed_addr #0 {
%2 = shl i32 -1, %0
%3 = xor i32 %2, -1
ret i32 %3
}
```
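Both variants compute the same mask. As a quick standalone sanity check (not part of the commit), the following C program verifies the equivalence for every `nbits` from 0 to 31; it uses unsigned 32-bit arithmetic so the shift and negation wrap the same way the `i32` operations above do:
```
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  for (unsigned nbits = 0; nbits < 32; ++nbits) {
    /* add-based form: (1 << nbits) - 1 */
    uint32_t add_form = (UINT32_C(1) << nbits) - 1;
    /* xor/not-based form: ~(-(1 << nbits)) */
    uint32_t not_form = ~(-(UINT32_C(1) << nbits));
    assert(add_form == not_form);
  }
  printf("both low-bit mask forms agree for nbits in [0, 31]\n");
  return 0;
}
```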
Since we created such a mask, it is quite likely that we will use it in an `and` next,
and we may then be able to get rid of the `not` op by folding it into an `andn`
(see the illustrative sketch after this paragraph).
But now that I have actually looked:
https://godbolt.org/g/VTUDmU
_some_ backend changes will be needed too.
We clearly lose `bzhi` recognition.
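To make that mask-consumption scenario concrete, here is a small illustrative C sketch (not part of the patch; the function names are made up). It keeps the low `nbits` bits of a value using each mask form; on targets with BMI1 the not-based form is a candidate for a single `andn`, whereas, per the note above, recognizing `bzhi` from it would still need backend work:
```
#include <stdint.h>

/* Hypothetical helpers, not from the patch: two ways of keeping the low
   `nbits` bits of `value`. Unsigned arithmetic keeps every shift and
   negation well defined for 0 <= nbits < 32. */

/* Mask built with the add-based form: (1 << nbits) - 1. */
uint32_t keep_low_bits_add(uint32_t value, unsigned nbits) {
  uint32_t mask = (UINT32_C(1) << nbits) - 1;
  return value & mask;
}

/* Mask built with the not-based form: ~(-(1 << nbits)).
   The resulting `value & ~x` shape is what can fold into `andn`. */
uint32_t keep_low_bits_not(uint32_t value, unsigned nbits) {
  uint32_t not_mask = -(UINT32_C(1) << nbits); /* same as -1 << nbits */
  return value & ~not_mask;
}
```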
Reviewers: spatel, craig.topper, RKSimon
Reviewed By: spatel
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D47428
llvm-svn: 334127
Diffstat (limited to 'llvm/lib/Transforms')
-rw-r--r--  llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp  27
1 file changed, 27 insertions, 0 deletions
```
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp b/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
index 6fd4afffbb5..0ad077108cd 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
@@ -1096,6 +1096,30 @@ Value *InstCombiner::SimplifyAddWithRemainder(BinaryOperator &I) {
   return nullptr;
 }
 
+/// Fold
+///   (1 << NBits) - 1
+/// Into:
+///   ~(-(1 << NBits))
+/// Because a 'not' is better for bit-tracking analysis and other transforms
+/// than an 'add'. The new shl is always nsw, and is nuw if old `add` was.
+static Instruction *canonicalizeLowbitMask(BinaryOperator &I,
+                                           InstCombiner::BuilderTy &Builder) {
+  Value *NBits;
+  if (!match(&I, m_Add(m_OneUse(m_Shl(m_One(), m_Value(NBits))), m_AllOnes())))
+    return nullptr;
+
+  Constant *MinusOne = Constant::getAllOnesValue(NBits->getType());
+  Value *NotMask = Builder.CreateShl(MinusOne, NBits, "notmask");
+  // Be wary of constant folding.
+  if (auto *BOp = dyn_cast<BinaryOperator>(NotMask)) {
+    // Always NSW. But NUW propagates from `add`.
+    BOp->setHasNoSignedWrap();
+    BOp->setHasNoUnsignedWrap(I.hasNoUnsignedWrap());
+  }
+
+  return BinaryOperator::CreateNot(NotMask, I.getName());
+}
+
 Instruction *InstCombiner::visitAdd(BinaryOperator &I) {
   bool Changed = SimplifyAssociativeOrCommutative(I);
   Value *LHS = I.getOperand(0), *RHS = I.getOperand(1);
@@ -1347,6 +1371,9 @@ Instruction *InstCombiner::visitAdd(BinaryOperator &I) {
     I.setHasNoUnsignedWrap(true);
   }
 
+  if (Instruction *V = canonicalizeLowbitMask(I, Builder))
+    return V;
+
   return Changed ? &I : nullptr;
 }
```