llvm-svn: 306205

This is a follow-up to https://reviews.llvm.org/D33879 / https://reviews.llvm.org/rL304939,
and was discussed in https://reviews.llvm.org/D33338.
We prefer this form because a narrower shift may be cheaper, and we can more easily fold a
zext than a sext.
http://rise4fun.com/Alive/slVe
Name: shz
%s = sext i8 %x to i12
%r = lshr i12 %s, 4
=>
%a = ashr i8 %x, 4
%r = zext i8 %a to i12
llvm-svn: 305190

InstSimplify
Summary: This matches the behavior we already had for compares and makes us consistent everywhere.
Reviewers: dberlin, hfinkel, spatel
Reviewed By: dberlin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D33604
llvm-svn: 305049

This was discussed in D33338. We have larger pattern-matching, ending in a truncate, that
we can reduce or remove by handling these smaller patterns first. Further motivation is
that narrower shift ops are easier for value tracking and that zext is better than sext.
http://rise4fun.com/Alive/rhh
Name: boolshift
%sext = sext i1 %x to i8
%r = lshr i8 %sext, 7
=>
%r = zext i1 %x to i8
Name: noboolshift
%sext = sext i3 %x to i8
%r = lshr i8 %sext, 7
=>
%sh = lshr i3 %x, 2
%r = zext i3 %sh to i8
Differential Revision: https://reviews.llvm.org/D33879
llvm-svn: 304939

instruction to a few calls to isKnownPositive, isKnownNegative, and isKnownNonZero
Every other place in InstCombine that uses these methods in ValueTracking already passes this information. This makes the remaining sites consistent.
Differential Revision: https://reviews.llvm.org/D33567
llvm-svn: 304018
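
For reference, a minimal sketch of the call-site change described above; the helper and its names (Op0, I, AC, DT, DL) are hypothetical stand-ins, and the trailing optional parameters reflect the ValueTracking signatures of this era only approximately:

// Hypothetical InstCombine-style helper: returns true if both operands
// of I are known non-zero at I itself.
static bool bothOperandsKnownNonZero(BinaryOperator &I, const DataLayout &DL,
                                     AssumptionCache &AC, DominatorTree &DT) {
  Value *Op0 = I.getOperand(0), *Op1 = I.getOperand(1);
  // Passing &I as the context instruction lets the query use
  // dominating conditions and llvm.assume facts that hold at I.
  return isKnownNonZero(Op0, DL, /*Depth=*/0, &AC, &I, &DT) &&
         isKnownNonZero(Op1, DL, /*Depth=*/0, &AC, &I, &DT);
}
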
AssumptionCache, DominatorTree, TargetLibraryInfo everywhere.
llvm-svn: 301464

getSignBit is a static function that creates an APInt with only the sign bit set. getSignMask seems like a better name to convey its functionality. In fact, several places use it and then store the result in an APInt named SignMask.
Differential Revision: https://reviews.llvm.org/D32108
llvm-svn: 300856
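
A one-line illustration of the renamed helper (the 32-bit width is arbitrary):

// An APInt with only the sign bit (bit 31) set, i.e. 0x80000000.
APInt SignMask = APInt::getSignMask(32);
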
This patch uses lshrInPlace to replace code where the object lshr is called on is overwritten with the result.
It also adds an lshrInPlace(const APInt &) overload.
Differential Revision: https://reviews.llvm.org/D32155
llvm-svn: 300566
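
A minimal before/after sketch of the pattern this replaces (variable names hypothetical):

// Before: lshr returns a new APInt that is then copied over the original.
KnownZero = KnownZero.lshr(ShiftAmt);

// After: shift the existing object in place, avoiding the temporary.
KnownZero.lshrInPlace(ShiftAmt);
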
This fold already existed for vectors but only when 'C1' was a splat
constant (but 'C2' could be any constant).
There were no tests for any vector constants, so I'm adding a test
that shows non-splat constants for both operands.
llvm-svn: 294650

Although this is 'no-functional-change-intended', I'm adding tests
for shl-shl and lshr-lshr pairs because there is no existing test
coverage for those folds.
It seems like we should be able to remove some code from foldShiftedShift()
at this point because we're handling those patterns on the general path.
llvm-svn: 293814
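
For context, the shl-shl and lshr-lshr pairs mentioned above collapse to a single shift when the combined amount stays within the bit width; a sketch in the same Alive notation used elsewhere in this log (the i8 width is arbitrary):
Name: shlshl
Pre: C1 + C2 u< 8
%a = shl i8 %x, C1
%r = shl i8 %a, C2
=>
%r = shl i8 %x, C1 + C2
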
with splat constants
llvm-svn: 293570

constants
llvm-svn: 293562

vectors with splat constants
llvm-svn: 293524

llvm-svn: 293508

with splat constants
llvm-svn: 293507

The original shift is bigger, so this may qualify as 'obvious',
but here's an attempt at an Alive-based proof:
Name: exact
Pre: (C1 u< C2)
%a = shl i8 %x, C1
%b = lshr exact i8 %a, C2
=>
%c = lshr exact i8 %x, C2 - C1
%b = and i8 %c, ((1 << width(C1)) - 1) u>> C2
Optimization is correct!
llvm-svn: 293498

llvm-svn: 293489

with splats
llvm-svn: 293435

We already have this fold when the lshr has one use, but it doesn't need that
restriction. We may be able to remove some code from foldShiftedShift().
Also, move the similar:
(X << C) >>u C --> X & (-1 >>u C)
...directly into visitLShr to help clean up foldShiftByConstOfShiftByConst().
That whole function seems questionable since it is called by commonShiftTransforms(),
but there's really not much in common if we're checking the shift opcodes for every
fold.
llvm-svn: 293215
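
A concrete instance of the quoted fold, with the width and shift amount chosen arbitrarily (the mask 31 is -1 u>> 3 in i8):
Name: shl-then-lshr
%a = shl i8 %x, 3
%r = lshr i8 %a, 3
=>
%r = and i8 %x, 31
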
splat vectors
llvm-svn: 293208

We may be able to assert that no shl-shl or lshr-lshr pairs ever get here
because we should have already handled those in foldShiftedShift().
llvm-svn: 292726

llvm-svn: 292230

llvm-svn: 292164

It's not clear what 'First' and 'Second' mean, so use 'Inner' and 'Outer'
to match foldShiftedShift() and add comments with formulas, so it's easier
to see what's going on.
llvm-svn: 292153

constants
Some existing 'FIXME' tests are still not folded because of splat holes in value tracking.
llvm-svn: 292151

Reduces code duplication and makes it easier to extend these folds for vectors.
llvm-svn: 292145

llvm-svn: 292073

llvm-svn: 292064

llvm-svn: 292036

llvm-svn: 291972

llvm-svn: 291937

llvm-svn: 291934

llvm-svn: 291923

Some of the callers are artificially limiting this transform to integer types;
this should make it easier to incrementally remove that restriction.
llvm-svn: 291620

It is possible to perform a left shift before zero extending if the
shift would only shift out zeros.
llvm-svn: 290928
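
A concrete instance of the transform as described, where the 'and' makes the bits shifted out by the narrow shift provably zero (all constants arbitrary):
Name: shl-before-zext
%m = and i8 %x, 31
%z = zext i8 %m to i16
%r = shl i16 %z, 3
=>
%m = and i8 %x, 31
%s = shl i8 %m, 3
%r = zext i8 %s to i16
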
This creates non-linear behavior in the inliner (see more details in
r289755's commit thread).
llvm-svn: 290086

After r289755, the AssumptionCache is no longer needed. Variables affected by
assumptions are now found by using the new operand-bundle-based scheme. This
new scheme is more computationally efficient, and also we need much less
code...
llvm-svn: 289756

These are currently limited to integer types, but we should
be able to extend to splat vectors and possibly general vectors.
llvm-svn: 289343

Note that the non-splat lshr+lshr test folded, but that does not
work in general. Something is missing or wrong in computeKnownBits
as the non-splat shl+shl test still shows.
llvm-svn: 288005

This patch introduces the combine:
(C1 shift (A add C2)) -> ((C1 shift C2) shift A)
iff A and C2 are both positive
If both A and C2 are known to be positive then we can safely split into 2 shifts, permitting the folding of the Inner shift.
Fix for the spec benchmark case mentioned by @nadav on PR15141 (assuming we can prove that the inputs are positive).
Differential Revision: https://reviews.llvm.org/D26000
llvm-svn: 285696
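
A concrete instance of the combine with C1 = 1 and C2 = 3, assuming %a is known to be non-negative per the precondition above:
%s = add i32 %a, 3
%r = shl i32 1, %s
=>
%r = shl i32 8, %a
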
Follow up to r278902. I had missed "fall through", with a space.
llvm-svn: 278970

llvm-svn: 277792

A ConstantVector can have ConstantExpr operands and vice versa.
However, the folder had no ability to fold ConstantVectors, which, in
some cases, was an optimization barrier.
Instead, rephrase the folder in terms of Constants instead of
ConstantExprs and teach callers how to deal with failure.
llvm-svn: 277099
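
A sketch of the "teach callers how to deal with failure" pattern described above; tryFoldBinOp is a hypothetical stand-in for the rephrased folder entry point, which now returns null rather than always producing a ConstantExpr:

// Try the folder first; it may fail now that it reasons about
// arbitrary Constants rather than only ConstantExprs.
if (Constant *C = tryFoldBinOp(Opcode, LHS, RHS))
  return C;
// On failure, the caller materializes the unfolded expression itself.
return ConstantExpr::get(Opcode, LHS, RHS);
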
llvm-svn: 265970

llvm-svn: 265969

llvm-svn: 265968

We need just a couple of logic tweaks to consolidate the shl and lshr cases.
This is step 5 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760
llvm-svn: 265965

This is the straightforward fix for PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760
But we still need to make some changes to generalize this helper function
and then send the lshr case into here.
llvm-svn: 265960

This is step 3 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760
llvm-svn: 265954

This is step 2 of refactoring to solve PR26760:
https://llvm.org/bugs/show_bug.cgi?id=26760
llvm-svn: 265951