x86_sse42_crc32_32_8 and was not mapped to a clang builtin. I'm not sure why this form of the instruction is even called out explicitly in the docs. Also add AutoUpgrade support to convert it into the other intrinsic with appropriate trunc and zext.
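For illustration (a hand-written sketch, not from the commit; it assumes the
removed intrinsic is the i64 variant, @llvm.x86.sse42.crc32.64.8):

  ; before AutoUpgrade:
  %r = call i64 @llvm.x86.sse42.crc32.64.8(i64 %crc, i8 %b)
  ; after, using the remaining intrinsic plus trunc/zext:
  %t = trunc i64 %crc to i32
  %c = call i32 @llvm.x86.sse42.crc32.32.8(i32 %t, i8 %b)
  %r.upgraded = zext i32 %c to i64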
llvm-svn: 192672
integer overflow.
PR17026. Also avoid undefined shifts and shift amounts larger than 64 bits
(those are always undef because we can't represent integer types that large).
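For illustration (hand-written, not from the commit), shifts of this shape
have no defined result, so the analysis must not report known bits for them:

  %a = shl i64 %x, 64      ; shift amount equals the bit width: undef
  %b = lshr i64 %x, 9999   ; any amount >= 64 likewise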
llvm-svn: 189672
That's obviously wrong. Conservatively restrict it to the sign bit, which
matches the original intention of this analysis. Fixes PR15940.
llvm-svn: 181518
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.
There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.
The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.
I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).
I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.
llvm-svn: 171366
llvm-svn: 170990
- Propagate "exact" bit of [l|a]shr instruction.
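For illustration (a hand-written sketch): "exact" promises that no nonzero
bits are shifted out, so two exact right shifts can fold to one exact shift:

  %a = lshr exact i32 %x, 2
  %b = lshr exact i32 %a, 3
  ; folds to:
  %b.folded = lshr exact i32 %x, 5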
llvm-svn: 169942
This change attempts to simplify (X^Y) -> X or Y in the user's context if we know that
only bits from X or Y are demanded.
A minimized case is provided below. This change will simplify "t >> 16" into "val1 >> 16".
=============================================================
unsigned foo (unsigned val1, unsigned val2) {
  unsigned t = val1 ^ 1234;
  return (t >> 16) | t; // NOTE: t is used more than once.
}
=============================================================
Note that if "t" were used only once, the expression would eventually be optimized as well.
However, with this change, the optimization takes place earlier.
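A hand-written IR rendering of the example (names and widths illustrative):
since 1234 fits in the low 11 bits, the xor cannot change bits 16-31, so the
shift only demands bits that come straight from %val1:

  %t = xor i32 %val1, 1234
  %shr = lshr i32 %t, 16      ; demands only bits 16-31 of %t
  %or = or i32 %shr, %t
  ; with this change %shr becomes:
  ;   %shr = lshr i32 %val1, 16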
Reviewed by Nadav, Thanks a lot!
llvm-svn: 169317
The type of shift-right (logical or arithmetic) should remain unchanged
when transforming "X << C1 >> C2" into "X << (C1-C2)".
llvm-svn: 169209
This change tries to simplify E1 = "X >> C1 << C2" into:
- E2 = "X << (C2 - C1)" if C2 > C1, or
- E2 = "X >> (C1 - C2)" if C1 > C2, or
- E2 = X if C1 == C2.
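A hand-written sketch of the C2 > C1 case (valid only when the bits where
the two forms disagree are not demanded):

  %e1 = lshr i32 %x, 3            ; X >> C1, with C1 = 3
  %e2 = shl i32 %e1, 5            ; (X >> 3) << 5, with C2 = 5
  ; when allowed, becomes X << (C2 - C1):
  %e2.simplified = shl i32 %x, 2
  ; the two forms differ only in bits 2-4, which must be undemanded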
Reviewed by Nadav. Thanks!
llvm-svn: 169182
llvm-svn: 165402
See: http://en.wikipedia.org/wiki/If_and_only_if
Commit 164767
llvm-svn: 164768
llvm-svn: 164767
vector
llvm-svn: 160835
their operand
llvm-svn: 160823
instcombine transformation.
llvm-svn: 160387
%shr = lshr i64 %key, 3
%0 = load i64* %val, align 8
%sub = add i64 %0, -1
%and = and i64 %sub, %shr
ret i64 %and
to:
%shr = lshr i64 %key, 3
%0 = load i64* %val, align 8
%sub = add i64 %0, 2305843009213693951
%and = and i64 %sub, %shr
ret i64 %and
The demanded bit optimization is actually a pessimization because add -1 would
be codegen'ed as a sub 1. Teach the demanded constant shrinking optimization
to check for negated constant to make sure it is actually reducing the width
of the constant.
rdar://11793464
llvm-svn: 160101
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.
llvm-svn: 154011
we should (theoretically) optimize and codegen ConstantDataVector as well
as ConstantVector.
llvm-svn: 149116
llvm-svn: 148934
llvm-svn: 148929
llvm-svn: 148806
nsw bits on them.
llvm-svn: 147528
smaller than 2^n.
This has the obvious advantage of being commutable and is always a win on x86 because
const - x wastes a register there. On less weird architectures this may lead to
a regression because other arithmetic doesn't fuse with it anymore. I'll address that
problem in a followup.
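The start of the message is cut off, but the text suggests a sub-to-xor
canonicalization of "(2^n)-1 - x" when x is known smaller than 2^n; a minimal
sketch under that assumption:

  %x = and i32 %in, 15     ; %x is provably < 2^4
  %s = sub i32 15, %x      ; (2^4)-1 - %x: no borrow leaves the low 4 bits
  ; commutable form:
  %s.xor = xor i32 %x, 15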
llvm-svn: 147254
to be uniqued, without any benefit.
If someone prefers %tmp42 to %42, run instnamer.
llvm-svn: 140634
Spotted by inspection.
llvm-svn: 139768
llvm-svn: 135375
crc32.[8|16|32] have been renamed to .crc32.32.[8|16|32] and
crc64.[8|64] have been renamed to .crc32.64.[8|64].
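For illustration (operands assumed, and assuming the dotted names above map
to intrinsic spellings in the usual way), a call under the old and new names:

  ; old: %r = call i32 @llvm.x86.sse42.crc32.8(i32 %crc, i8 %b)
  ; new: %r = call i32 @llvm.x86.sse42.crc32.32.8(i32 %crc, i8 %b)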
llvm-svn: 132163
llvm-svn: 131708
I'm not sure this is quite ideal, but I can't really think of any better way to do it.
llvm-svn: 131616
rdar://problem/6945110
llvm-svn: 131493
INT_MIN % -1.
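The edge case referred to, for illustration: this is the one signed
remainder that overflows.

  ; for any i32 %x other than INT_MIN, "srem %x, -1" is 0; but
  %r = srem i32 -2147483648, -1   ; overflows, and is undefined in LLVM IR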
llvm-svn: 127306
then the result could go either way. If it's provably positive then so is the
srem. Fixes PR9343 #7!
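A hand-written illustration of the provably-positive case:

  %a = lshr i32 %x, 1     ; sign bit of %a is known zero
  %r = srem i32 %a, %b
  ; srem takes the sign of its dividend (or is zero), so the sign bit
  ; of %r is known zero as well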
llvm-svn: 127146
are shifting out since they do require them to be zeros. Similarly
for the NUW/NSW bits of shl.
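For illustration (hand-written): each flag is a promise about the
shifted-out bits, so a rewrite that cannot re-prove the promise must drop it.

  %a = lshr exact i32 %x, 2   ; promises the two shifted-out bits are zero
  %b = shl nuw i32 %x, 2      ; promises no unsigned overflow (top bits zero)
  %c = shl nsw i32 %x, 2      ; promises no signed overflow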
llvm-svn: 125263
zextOrTrunc(), and APSInt methods extend(), extOrTrunc() and new method
trunc(), to be const and to return a new value instead of modifying the
object in place.
llvm-svn: 121120
setAllBits(), setBit(unsigned), etc.
llvm-svn: 120564
llvm-svn: 107016
llvm-svn: 106737
Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.
llvm-svn: 101579
with a fix for self-hosting

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101465
llvm-svn: 101434
with a fix

rotate CallInst operands, i.e. move callee to the back
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101397
llvm-svn: 101368
of the operand array

the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101364
and T->isPointerTy(). Convert most instances of the first form to the second form.
Requested by Chris.
llvm-svn: 96344
isInteger, we now have isFloatTy and isIntegerTy. Requested by Chris!
llvm-svn: 96223
llvm-svn: 95616
KnownOne
(via APInt &RHSKnownZero = KnownZero, etc) seems dangerous and confusing to me: it
is easy not to notice this, and then wonder why KnownZero/RHSKnownZero changed
underneath you when you modified RHSKnownZero/KnownZero etc. So get rid of this.
No intended functionality change (tested with "make check" + llvm-gcc bootstrap).
llvm-svn: 94802
when it should have been and'd with LowBits. Fix that and, while there,
beef up the logic in the case of a negative LHS.
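A hand-written illustration of the power-of-two case (LowBits = 7 here):

  %r = srem i32 %x, 8
  ; %r is in (-8, 8).  If %x is known non-negative, %r is in [0, 7] and
  ; every bit above bit 2 is known zero.  If %x is negative, %r is in
  ; (-8, 0], i.e. the high bits are all ones unless the result is zero.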
llvm-svn: 94745
lines out of instcombine.cpp
llvm-svn: 92465