Commit message | Author | Age | Files | Lines
llvm-svn: 26102
... branches in their entry block that control whether or not the loop is a noop.
llvm-svn: 26101
llvm-svn: 26098
llvm-svn: 26093
... uses of loop values outside the loop. We need loop-closed SSA form to do
this right, or to use SSA rewriting if we really care.
llvm-svn: 26089
llvm-svn: 26088
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
llvm-svn: 26087
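The known-bits record described in points 1-3 above can be sketched roughly as follows. The names here are illustrative, not LLVM's actual API: each value carries a set of bits known to be 0 and a set known to be 1, an AND combines them, and (AND X, C) folds away once the known bits show the mask changes nothing.

```cpp
#include <cassert>
#include <cstdint>

// Rough sketch of a known-bits record (illustrative names, not the real
// LLVM API): for each bit we may know it is 0, know it is 1, or know
// nothing. Zero and One never overlap.
struct KnownBits {
  uint32_t Zero; // bits known to be 0
  uint32_t One;  // bits known to be 1
};

KnownBits makeKnown(uint32_t Zero, uint32_t One) { return {Zero, One}; }

// AND: a result bit is known-0 if it is known-0 in either operand,
// and known-1 only if it is known-1 in both.
KnownBits knownAnd(KnownBits L, KnownBits R) {
  return {L.Zero | R.Zero, L.One & R.One};
}

// (AND X, C) is a no-op when every bit the mask would clear is already
// known to be 0 in X; the AND can then be dropped entirely.
bool andIsNoop(KnownBits X, uint32_t C) {
  return (~C & ~X.Zero) == 0;
}
```

For example, if the top 24 bits of X are known zero, masking with 0xFF is a no-op and the AND disappears.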
... optimization where we reduce the number of bits in AND masks when possible.
llvm-svn: 26056
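The mask-narrowing optimization mentioned above rests on a simple invariant, sketched here with hypothetical helper names (not the instcombine code itself): if consumers of the AND only demand certain result bits, the constant mask can be intersected with the demanded set without changing any bit anyone observes.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the mask-narrowing idea (illustrative only): if later uses
// of the AND only look at the bits in Demanded, shrinking the constant
// mask to C & Demanded is invisible to them.
uint32_t shrinkAndMask(uint32_t C, uint32_t Demanded) {
  return C & Demanded;
}

// The invariant being relied on, spelled out as a check.
bool shrinkIsSafe(uint32_t X, uint32_t C, uint32_t Demanded) {
  uint32_t Before = (X & C) & Demanded;
  uint32_t After = (X & shrinkAndMask(C, Demanded)) & Demanded;
  return Before == After;
}
```

A smaller mask is often cheaper to materialize as an immediate, which is why shrinking it "when possible" is a win even though the AND itself remains.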
... instruction onto the worklist (in case they are now dead).
Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:
struct S {
unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
S();
};
S::S() : a(0), b(0), c(1), d(0), e(6) {}
to this:
void %_ZN1SC1Ev(%struct.S* %this) {
entry:
%tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
store ubyte 38, ubyte* %tmp.1
ret void
}
much earlier (in gccas instead of only in gccld after DSE runs).
llvm-svn: 26050
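A "really trivial local DSE" of the kind described can be sketched like this. This is a standalone illustration under assumed names, not the actual pass: scan a block in order, remember the most recent store to each address, and delete the earlier store when a later store overwrites the same address with no intervening load of it.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Toy basic-block instruction: a load or a store to a named address.
struct Inst {
  enum { Load, Store } Kind;
  std::string Addr;
};

// Trivial local DSE sketch: walk the block forward, tracking the most
// recent store to each address. A store whose address is stored to
// again, with no load of it in between, is dead.
std::vector<Inst> localDSE(const std::vector<Inst> &Block) {
  std::map<std::string, size_t> LastStore; // addr -> index of last store
  std::vector<bool> Dead(Block.size(), false);
  for (size_t I = 0; I != Block.size(); ++I) {
    const Inst &In = Block[I];
    auto It = LastStore.find(In.Addr);
    if (In.Kind == Inst::Store) {
      if (It != LastStore.end())
        Dead[It->second] = true; // earlier store is fully overwritten
      LastStore[In.Addr] = I;
    } else if (It != LastStore.end()) {
      LastStore.erase(It); // a load keeps the earlier store alive
    }
  }
  std::vector<Inst> Out;
  for (size_t I = 0; I != Block.size(); ++I)
    if (!Dead[I])
      Out.push_back(Block[I]);
  return Out;
}

// Test helper: one char per op; uppercase = store, lowercase = load,
// and the letter (case-folded) names the address. Returns the number
// of operations that survive DSE.
size_t survivingOps(const std::string &Ops) {
  std::vector<Inst> Block;
  for (char C : Ops) {
    bool IsStore = (C >= 'A' && C <= 'Z');
    char A = IsStore ? char(C - 'A' + 'a') : C;
    Block.push_back({IsStore ? Inst::Store : Inst::Load, std::string(1, A)});
  }
  return localDSE(Block).size();
}
```

This is exactly the pattern bitfield initializers produce: a run of read-modify-write stores to the same byte, where only the last store matters.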
... test/Regression/Transforms/SCCP/select.ll
llvm-svn: 26049
llvm-svn: 26045
llvm-svn: 26040
... is just as efficient as MVIZ and is also more general.
Fix a few minor bugs introduced in recent patches.
llvm-svn: 26036
... mask. This allows the code to be simpler and more efficient.
Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.
llvm-svn: 26035
llvm-svn: 26034
... 'demanded bits', inspired by Nate's work in the dag combiner. This isn't
complete, but needs two unrelated instcombiner changes to continue.
llvm-svn: 26033
Turn A / (C1 << N), where C1 is "1<<C2" into A >> (N+C2) [udiv only].
Tested with: rem.ll:test5, div.ll:test10
llvm-svn: 26003
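The transform above is easy to check: for unsigned A, dividing by (1 << C2) << N is dividing by the power of two 1 << (C2 + N), and unsigned division by a power of two is a logical shift right. A quick sketch with hypothetical helper names:

```cpp
#include <cassert>
#include <cstdint>

// A / (C1 << N) with C1 == 1 << C2 is A / (1 << (C2 + N)), and unsigned
// division by a power of two is a logical shift right by that amount.
uint32_t divByShiftedPow2(uint32_t A, uint32_t C2, uint32_t N) {
  return A >> (N + C2);
}

// Check the identity against an actual unsigned division.
bool matchesUdiv(uint32_t A, uint32_t C2, uint32_t N) {
  uint32_t Divisor = (uint32_t(1) << C2) << N;
  return A / Divisor == divByShiftedPow2(A, C2, N);
}
```

The [udiv only] restriction matters: a signed shift right rounds toward negative infinity, while signed division rounds toward zero, so the identity does not hold for sdiv of negative values.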
... #LLVM LOC, and auto-cse's cast instructions.
llvm-svn: 25974
1. When rewriting code in outer loops, we would sometimes insert code into
inner loops that is invariant in that loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.
This is a performance-neutral change.
llvm-svn: 25964
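Point 2 is plain distributivity, c*(k + x) = c*k + c*x, which lets the rewriter fold a constant addend through a multiply-by-stride. A tiny check, with illustrative names only:

```cpp
#include <cassert>
#include <cstdint>

// c*(k + x) == c*k + c*x: the constant offset k can be pre-multiplied
// and absorbed into the base, leaving a plain c*x recurrence.
int64_t distributed(int64_t C, int64_t K, int64_t X) {
  return C * K + C * X;
}
```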
llvm-svn: 25661
llvm-svn: 25633
llvm-svn: 25587
llvm-svn: 25572
llvm-svn: 25559
llvm-svn: 25530
llvm-svn: 25525
llvm-svn: 25514
1. Do not statically construct a map when the program starts up; this
is expensive and cannot be optimized. Instead, create a list.
2. Do not insert entries for every function in the module into a hashmap
that lives for the entire lifetime of the compiler.
llvm-svn: 25512
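The startup-cost point in item 1 can be illustrated with a sketch (assumed names, not the actual LLVM code): a namespace-scope std::map runs a constructor and performs every insertion before main(), while a constant array of pairs is plain data baked into the binary and costs nothing until it is first searched.

```cpp
#include <cassert>
#include <cstring>

// A static table of plain data needs no static constructor at all; it
// is searched on demand. (Entries here are made up for illustration.)
struct NameFlagPair {
  const char *Name;
  bool Flag;
};

static const NameFlagPair Table[] = {
    {"printf", true},
    {"memcpy", true},
    {"main", false},
};

// Returns 1/0 for the entry's flag, or -1 if the name is not listed.
int lookupFlag(const char *Name) {
  for (const NameFlagPair &P : Table)
    if (std::strcmp(P.Name, Name) == 0)
      return P.Flag ? 1 : 0;
  return -1;
}
```

For a table this small, a linear scan over constant data beats paying map construction at every program startup, whether or not the table is ever consulted.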
llvm-svn: 25509
1. Use the varargs version of getOrInsertFunction to simplify code.
2. Remove an unneeded #include.
3. Reduce the number of #ifdef's.
4. Remove extraneous vertical whitespace.
llvm-svn: 25508
... packed types correctly.
llvm-svn: 25470
Don't do floor->floorf conversion if floorf is not available. This checks
the compiler's host, not its target, which is incorrect for cross-compilers.
Not sure that's important, as we don't build many cross-compilers.
llvm-svn: 25456
... need the float->double part.
llvm-svn: 25452
... hypothetical future bug.
llvm-svn: 25430
... unbreaks front-ends that don't use __main (like the new CFE).
llvm-svn: 25429
... library list as well. This should help bugpoint.
llvm-svn: 25424
llvm-svn: 25407
llvm-svn: 25406
... unsigned llvm.cttz.* intrinsic, fixing the 2005-05-11-Popcount-ffs-fls
regression from last night.
llvm-svn: 25398
This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatically upgrading from the old overloaded name to
the new non-overloaded name. Specifically:
llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.
llvm-svn: 25366
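The auto-upgrade step described above can be pictured like this. The helper is hypothetical, not LLVM's actual upgrade code: recognize one of the old overloaded names and append the suffix derived from the operand type; names that already carry a suffix pass through untouched.

```cpp
#include <cassert>
#include <string>

// The old overloaded intrinsic names listed in the change above.
static const char *const OverloadedNames[] = {
    "llvm.isunordered", "llvm.sqrt", "llvm.ctpop", "llvm.ctlz", "llvm.cttz",
};

// Sketch (illustrative only): upgrade an old overloaded intrinsic name
// to the new type-specific one, e.g. "llvm.ctpop" + "i32" ->
// "llvm.ctpop.i32". Anything else is returned unchanged.
std::string upgradeIntrinsicName(const std::string &Name,
                                 const std::string &TypeSuffix) {
  for (const char *Old : OverloadedNames)
    if (Name == Old)
      return Name + "." + TypeSuffix; // old overloaded form: add suffix
  return Name; // already non-overloaded (or not an intrinsic we upgrade)
}
```

Keying the new name on the operand type is what makes a flat symbol table possible: each distinct signature gets its own unique, non-overloaded entry.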
llvm-svn: 25363
llvm-svn: 25349
... of doing it ourselves. This fixes Transforms/Inline/2006-01-14-CallGraphUpdate.ll.
llvm-svn: 25321
... llvm.stacksave/restore when it inserts calls to them.
llvm-svn: 25320
llvm-svn: 25315
llvm-svn: 25309
llvm-svn: 25299
llvm-svn: 25295
llvm-svn: 25294
llvm-svn: 25292