author | Weiming Zhao <weimingz@codeaurora.org> | 2014-04-30 21:07:24 +0000 |
---|---|---|
committer | Weiming Zhao <weimingz@codeaurora.org> | 2014-04-30 21:07:24 +0000 |
commit | 7f6daf1799a0dc93e37edc4d83ba4ef7c184e3a6 (patch) | |
tree | db7cc9066440bf1a9e4106da6c979e0ecdae8272 /llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp | |
parent | dd2647edcf99a9e7b1af9b407ff1489b4ee45739 (diff) | |
[ARM64] Prevent bit extraction from being adjusted by a following shift
For a pattern like ((x >> C1) & Mask) << C2, the DAG combiner may convert it
into (x >> (C1-C2)) & (Mask << C2), which makes it harder to pattern-match
ubfx.
For example:
Given
%shr = lshr i64 %x, 4
%and = and i64 %shr, 15
%arrayidx = getelementptr inbounds [8 x [64 x i64]]* @arr, i64 0, i64 2, i64 %and
%0 = load i64* %arrayidx
With the current shift folding, it takes 3 instructions to compute the base address:
lsr x8, x0, #1
and x8, x8, #0x78
add x8, x9, x8
Using ubfx, it needs only 2 instructions:
ubfx x8, x0, #4, #4
add x8, x9, x8, lsl #3
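As a quick standalone check (not part of the commit), the two forms are indeed
equivalent for these constants: C1 = 4, Mask = 15, and C2 = 3, where C2 comes
from the 8-byte element size of the array. The folded constants below are
exactly the #1 and #0x78 seen in the lsr/and sequence above.

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // Exhaustively check all inputs whose relevant bits fit in [0, 12).
  for (uint64_t x = 0; x < (1u << 12); ++x) {
    // ubfx-friendly form: ((x >> C1) & Mask) << C2 with C1=4, Mask=15, C2=3.
    uint64_t extractThenScale = ((x >> 4) & 15) << 3;
    // Shift-folded form: (x >> (C1-C2)) & (Mask << C2) = (x >> 1) & 0x78.
    uint64_t foldedForm = (x >> 1) & 0x78;
    assert(extractThenScale == foldedForm);
  }
  return 0;
}
```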
This fixes bug 19589
llvm-svn: 207702
Diffstat (limited to 'llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp')
-rw-r--r-- | llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp | 3 |
1 file changed, 3 insertions, 0 deletions
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index 014e97d7b2a..290f2a1ea27 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -3867,6 +3867,9 @@ SDValue DAGCombiner::visitShiftByConstant(SDNode *N, ConstantSDNode *Amt) {
       return SDValue();
   }
 
+  if (!TLI.isDesirableToCommuteWithShift(LHS))
+    return SDValue();
+
   // Fold the constants, shifting the binop RHS by the shift amount.
   SDValue NewRHS = DAG.getNode(N->getOpcode(), SDLoc(LHS->getOperand(1)),
                                N->getValueType(0),
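The guard added above defers to a target hook. The ARM64 override itself is
not part of this diff (which is limited to DAGCombiner.cpp), so the following
is only a hypothetical sketch of what such a predicate could look like; the
hook's exact signature and the precise pattern test are assumptions, not the
actual backend code. Note that, per the combiner change above, the node passed
to the hook is the binop under the shift, i.e. the ((x >> C1) & Mask) node.

```cpp
#include "llvm/CodeGen/SelectionDAGNodes.h"

using namespace llvm;

// Hypothetical predicate for a target's isDesirableToCommuteWithShift
// override (a sketch, not the actual ARM64 code): report the commute as
// undesirable when the binop is the AND-of-a-right-shift-by-constant core of
// a bit-field extract, so instruction selection can still form ubfx/sbfx.
static bool isDesirableToCommuteWithShiftSketch(const SDNode *N) {
  if (N->getOpcode() != ISD::AND)
    return true;

  SDValue AndLHS = N->getOperand(0);
  bool IsShiftRightByConst =
      (AndLHS.getOpcode() == ISD::SRL || AndLHS.getOpcode() == ISD::SRA) &&
      isa<ConstantSDNode>(AndLHS.getOperand(1));
  bool HasConstMask = isa<ConstantSDNode>(N->getOperand(1));

  // Keep ((x >> C1) & Mask) intact rather than letting the combiner rewrite
  // it as (x >> (C1-C2)) & (Mask << C2).
  return !(IsShiftRightByConst && HasConstMask);
}
```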