author     Sanjay Patel <spatel@rotateright.com>  2017-06-22 15:46:54 +0000
committer  Sanjay Patel <spatel@rotateright.com>  2017-06-22 15:46:54 +0000
commit     d1e811979c35061bf09c3eef52660e8b4c667b3a (patch)
tree       1962a3e8bb457fdba54a82584bc397c2161ce3d8 /llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
parent     cebe8241cac063c43784104114abe06b17c76673 (diff)
[InstCombine] reverse bitcast + bitwise-logic canonicalization (PR33138)

There are 2 parts to this patch made simultaneously to avoid a regression.

We're reversing the canonicalization that moves bitwise vector ops before bitcasts. We're moving bitwise vector ops *after* bitcasts instead. That's the 1st and 3rd hunks of the patch. The motivation is that there's only one fold that currently depends on the existing canonicalization (see next), but there are many folds that would automatically benefit from the new canonicalization. PR33138 ( https://bugs.llvm.org/show_bug.cgi?id=33138 ) shows why/how we have these patterns in IR.

There's an or(and,andn) pattern that requires an adjustment in order to continue matching to 'select' because the bitcast changes position. This match is unfortunately complicated because it requires 4 logic ops with optional bitcast and sext ops.

Test diffs:

1. The bitcast.ll and bitcast-bigendian.ll changes show the most basic difference - bitcast comes before logic.
2. There are also tests with no diffs in bitcast.ll that verify that we're still doing folds that were enabled by the previous canonicalization.
3. icmp-xor-signbit.ll shows the payoff. We don't need to adjust existing icmp patterns to look through bitcasts.
4. logical-select.ll contains several tests for the or(and,andn) --> select fold to verify that we are still handling those cases. The lone diff shows the movement of the bitcast from the new canonicalization rule.

Differential Revision: https://reviews.llvm.org/D33517

llvm-svn: 306011
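
To make the reversed canonicalization concrete, here is a hand-written LLVM IR sketch of the two shapes described above; the function names and types are illustrative assumptions, not lines copied from bitcast.ll. Per the notes above, the second shape (bitcast before logic) is the preferred form after this patch.

; Old canonical form: the bitwise op is hoisted ahead of the bitcast and
; performed in the vector type.
define i64 @old_form(<2 x i32> %a, <2 x i32> %b) {
  %v = xor <2 x i32> %a, %b
  %r = bitcast <2 x i32> %v to i64
  ret i64 %r
}

; New canonical form after this patch: the bitcasts come first and the
; bitwise op is performed in the destination type.
define i64 @new_form(<2 x i32> %a, <2 x i32> %b) {
  %a.bc = bitcast <2 x i32> %a to i64
  %b.bc = bitcast <2 x i32> %b to i64
  %r = xor i64 %a.bc, %b.bc
  ret i64 %r
}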
Diffstat (limited to 'llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp')
-rw-r--r--  llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp  |  26
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
index 2de83a01062..98e3fde95b3 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineAndOrXor.cpp
@@ -1097,20 +1097,11 @@ static Instruction *foldLogicCastConstant(BinaryOperator &Logic, CastInst *Cast,
   Type *DestTy = Logic.getType();
   Type *SrcTy = Cast->getSrcTy();
 
-  // If the first operand is bitcast, move the logic operation ahead of the
-  // bitcast (do the logic operation in the original type). This can eliminate
-  // bitcasts and allow combines that would otherwise be impeded by the bitcast.
+  // Move the logic operation ahead of a zext if the constant is unchanged in
+  // the smaller source type. Performing the logic in a smaller type may provide
+  // more information to later folds, and the smaller logic instruction may be
+  // cheaper (particularly in the case of vectors).
   Value *X;
-  if (match(Cast, m_BitCast(m_Value(X)))) {
-    Value *NewConstant = ConstantExpr::getBitCast(C, SrcTy);
-    Value *NewOp = Builder->CreateBinOp(LogicOpc, X, NewConstant);
-    return CastInst::CreateBitOrPointerCast(NewOp, DestTy);
-  }
-
-  // Similarly, move the logic operation ahead of a zext if the constant is
-  // unchanged in the smaller source type. Performing the logic in a smaller
-  // type may provide more information to later folds, and the smaller logic
-  // instruction may be cheaper (particularly in the case of vectors).
   if (match(Cast, m_OneUse(m_ZExt(m_Value(X))))) {
     Constant *TruncC = ConstantExpr::getTrunc(C, SrcTy);
     Constant *ZextTruncC = ConstantExpr::getZExt(TruncC, DestTy);
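
As an aside, the zext case kept by the hunk above can be pictured with a small hand-written IR sketch; the function names and the mask value are illustrative assumptions, not taken from the regression tests.

; Input: the mask 15 survives the round trip i32 -> i8 -> i32 unchanged,
; and the zext has a single use, so the fold applies.
define i32 @and_of_zext(i8 %x) {
  %z = zext i8 %x to i32
  %r = and i32 %z, 15
  ret i32 %r
}

; Expected shape after the fold (hand-written, not a generated CHECK line):
define i32 @and_of_zext_folded(i8 %x) {
  %n = and i8 %x, 15
  %r = zext i8 %n to i32
  ret i32 %r
}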
@@ -1579,11 +1570,14 @@ static Value *getSelectCondition(Value *A, Value *B,
 
   // If A and B are sign-extended, look through the sexts to find the booleans.
   Value *Cond;
+  Value *NotB;
   if (match(A, m_SExt(m_Value(Cond))) &&
       Cond->getType()->getScalarType()->isIntegerTy(1) &&
-      match(B, m_CombineOr(m_Not(m_SExt(m_Specific(Cond))),
-                           m_SExt(m_Not(m_Specific(Cond))))))
-    return Cond;
+      match(B, m_OneUse(m_Not(m_Value(NotB))))) {
+    NotB = peekThroughBitcast(NotB, true);
+    if (match(NotB, m_SExt(m_Specific(Cond))))
+      return Cond;
+  }
 
   // All scalar (and most vector) possibilities should be handled now.
   // Try more matches that only apply to non-splat constant vectors.
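
The adjustment in the hunk above is easier to see against a hand-written sketch of the or(and,andn) shape it is meant to keep matching. The function name, types, and exact placement of the bitcast are assumptions for illustration, not lines from logical-select.ll.

; After the new canonicalization, a bitcast can separate the 'xor ..., -1'
; (the not) from the sign-extended condition, so the matcher peeks through
; the bitcast before re-checking for a sext of the same i1 condition.
define <2 x i64> @masked_select(<4 x i1> %cond, <2 x i64> %a, <2 x i64> %b) {
  %sext = sext <4 x i1> %cond to <4 x i32>
  %mask = bitcast <4 x i32> %sext to <2 x i64>
  %not.mask = xor <2 x i64> %mask, <i64 -1, i64 -1>
  %t1 = and <2 x i64> %mask, %a
  %t2 = and <2 x i64> %not.mask, %b
  %or = or <2 x i64> %t1, %t2
  ret <2 x i64> %or
}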