llvm-svn: 25633

llvm-svn: 25587

llvm-svn: 25572

llvm-svn: 25559

llvm-svn: 25530

llvm-svn: 25525

llvm-svn: 25514

1. Do not statically construct a map when the program starts up; this
   is expensive and cannot be optimized. Instead, create a list.
2. Do not insert entries for all functions in the module into a hashmap
   that lives for the full lifetime of the compiler.
llvm-svn: 25512
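
For illustration, the data-structure change this entry describes looks roughly
like the following C++ sketch. The names (OptEntry, OptTable, lookupOpt) and
the callback type are stand-ins, not the actual LLVM code:

#include <cstring>

// A constant table built at compile time replaces a std::map that was
// being populated by a static constructor at program startup. A linear
// scan over a short table is cheap, and the table itself can live in
// read-only data.
struct OptEntry {
  const char *Name;       // library call this entry can simplify
  bool (*Optimize)();     // illustrative callback slot
};

static const OptEntry OptTable[] = {
  {"floor", nullptr},     // hypothetical entries; the real pass registers
  {"strcpy", nullptr},    // one optimization per library call it knows
};

static const OptEntry *lookupOpt(const char *Name) {
  for (const OptEntry &E : OptTable)
    if (std::strcmp(E.Name, Name) == 0)
      return &E;
  return nullptr;         // not a call this pass simplifies
}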

llvm-svn: 25509

1. Use the varargs version of getOrInsertFunction to simplify the code.
2. Remove an unneeded #include.
3. Reduce the number of #ifdefs.
4. Remove extraneous vertical whitespace.
llvm-svn: 25508
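
A minimal stand-alone sketch of the calling convention a varargs
getOrInsertFunction enables: the caller passes a name, a return type, and a
null-terminated list of parameter types. Type, Function, and SymbolTable here
are stand-ins, not LLVM's classes:

#include <cstdarg>
#include <map>
#include <string>
#include <vector>

struct Type {};
struct Function {
  std::string Name;
  std::vector<Type *> Params;
};

static std::map<std::string, Function> SymbolTable;

Function *getOrInsertFunction(const std::string &Name, Type *RetTy, ...) {
  auto It = SymbolTable.find(Name);
  if (It != SymbolTable.end())
    return &It->second;                   // already declared: reuse it
  Function &F = SymbolTable[Name];
  F.Name = Name;
  (void)RetTy;                            // a real version records this too
  va_list Args;
  va_start(Args, RetTy);
  while (Type *T = va_arg(Args, Type *))  // read until the null terminator
    F.Params.push_back(T);
  va_end(Args);
  return &F;
}

// A caller writes, e.g.:
//   getOrInsertFunction("puts", &IntTy, &CharPtrTy, (Type *)nullptr);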

packed types correctly.
llvm-svn: 25470

Don't do the floor->floorf conversion if floorf is not available. This checks
the compiler's host, not its target, which is incorrect for cross-compilers.
Not sure that's important, as we don't build many cross-compilers.
llvm-svn: 25456
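
The guard amounts to a configure-time feature test. HAVE_FLOORF below is the
usual autoconf-style macro name; whether the tree used exactly this symbol is
an assumption:

// Decided when the compiler itself is built, which is why the entry above
// notes that it reflects the host, not the target.
static bool floorfAvailable() {
#ifdef HAVE_FLOORF
  return true;   // host libm has floorf; the floor -> floorf rewrite may run
#else
  return false;  // host lacks floorf; skip the rewrite entirely
#endif
}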

need the float->double part.
llvm-svn: 25452

hypothetical future bug.
llvm-svn: 25430

unbreaks front-ends that don't use __main (like the new CFE).
llvm-svn: 25429

library list as well. This should help bugpoint.
llvm-svn: 25424

llvm-svn: 25407

llvm-svn: 25406

unsigned llvm.cttz.* intrinsic, fixing the 2005-05-11-Popcount-ffs-fls
regression from last night.
llvm-svn: 25398
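
For context, ffs() is the classic case where the signedness of cttz matters.
A self-contained C++ sketch of the identity being lowered (ffsViaCttz is an
illustrative name, not pass code):

#include <cstdint>

// ffs(x) is 1 + the index of the least-significant set bit, or 0 when
// x == 0. Lowering it through a count-trailing-zeros primitive is only
// correct if the count is treated as unsigned, which is what the fixed
// llvm.cttz.* signature guarantees.
static int ffsViaCttz(uint32_t X) {
  if (X == 0)
    return 0;
  uint32_t Count = 0;        // unsigned, matching llvm.cttz.i32's result
  while ((X & 1u) == 0) {
    X >>= 1;
    ++Count;
  }
  return static_cast<int>(Count + 1);  // e.g. ffsViaCttz(8) == 4
}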

This patch is an incremental step towards supporting a flat symbol table.
It de-overloads the intrinsic functions by providing type-specific intrinsics
and arranging for automatic upgrading from the old overloaded name to the new
non-overloaded name. Specifically:
  llvm.isunordered -> llvm.isunordered.f32, llvm.isunordered.f64
  llvm.sqrt -> llvm.sqrt.f32, llvm.sqrt.f64
  llvm.ctpop -> llvm.ctpop.i8, llvm.ctpop.i16, llvm.ctpop.i32, llvm.ctpop.i64
  llvm.ctlz -> llvm.ctlz.i8, llvm.ctlz.i16, llvm.ctlz.i32, llvm.ctlz.i64
  llvm.cttz -> llvm.cttz.i8, llvm.cttz.i16, llvm.cttz.i32, llvm.cttz.i64
New code should not use the overloaded intrinsic names. Warnings will be
emitted if they are used.
llvm-svn: 25366
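
The upgrade itself reduces to a name rewrite keyed on the operand type. A
small illustrative helper (not the actual auto-upgrade code):

#include <string>

// Append a type suffix derived from the operand, e.g.
//   upgradeIntrinsicName("llvm.ctpop", 32)       == "llvm.ctpop.i32"
//   upgradeIntrinsicName("llvm.sqrt", 64, true)  == "llvm.sqrt.f64"
static std::string upgradeIntrinsicName(const std::string &Old, unsigned Bits,
                                        bool IsFloat = false) {
  return Old + (IsFloat ? ".f" : ".i") + std::to_string(Bits);
}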

llvm-svn: 25363

llvm-svn: 25349

of doing it ourselves. This fixes Transforms/Inline/2006-01-14-CallGraphUpdate.ll
llvm-svn: 25321

llvm.stacksave/restore when it inserts calls to them.
llvm-svn: 25320

llvm-svn: 25315

llvm-svn: 25309

llvm-svn: 25299

llvm-svn: 25295

llvm-svn: 25294

llvm-svn: 25292

InlineFunction handles this case safely. This implements
Transforms/Inline/dynamic_alloca_test.ll.
llvm-svn: 25288

resultant code with llvm.stacksave/llvm.stackrestore intrinsics.
llvm-svn: 25286
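
A toy model of the stack discipline this entry and the two entries above are
about. 'Stack' stands in for the machine stack, and the push models a dynamic
alloca in an inlined body; without the save/restore bracket, a caller that
runs the inlined code in a loop would grow its frame on every iteration:

#include <cstddef>
#include <vector>

static std::vector<char> Stack;                           // models the machine stack

static size_t stackSave() { return Stack.size(); }        // models llvm.stacksave
static void stackRestore(size_t SP) { Stack.resize(SP); } // models llvm.stackrestore

static void inlinedBody(size_t N) {
  Stack.insert(Stack.end(), N, 0);  // models a dynamic alloca of N bytes
}

int main() {
  for (int I = 0; I != 1000; ++I) {
    size_t SP = stackSave();        // inserted before the inlined code
    inlinedBody(64);
    stackRestore(SP);               // inserted on every exit of that code
  }
  return 0;  // the stack ends where it began: no per-iteration growth
}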

time in common C cases.
llvm-svn: 25285

it doesn't contain any calls. This is a fairly common case for C++ code,
so it will probably speed up the inliner marginally in these cases.
llvm-svn: 25284

"HandleInlinedInvoke". No functionality change.
llvm-svn: 25283

code being cloned if the client wants.
llvm-svn: 25281

function was not an alloca, we wouldn't check the entry block for any allocas,
leading to increased stack space in some cases. In practice, allocas are almost
always at the top of the block, so this was never noticed.
llvm-svn: 25280
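
The bug pattern described above, in a self-contained C++ sketch with stand-in
types (Instruction and BasicBlock here are not the real LLVM classes):

#include <vector>

struct Instruction { bool IsAlloca; };
using BasicBlock = std::vector<Instruction>;

// Buggy: gives up at the first non-alloca, missing any allocas that
// appear later in the entry block.
static int countEntryAllocasBuggy(const BasicBlock &Entry) {
  int N = 0;
  for (const Instruction &I : Entry) {
    if (!I.IsAlloca)
      break;
    ++N;
  }
  return N;
}

// Fixed: keeps scanning past non-alloca instructions, as the patched
// code effectively does.
static int countEntryAllocasFixed(const BasicBlock &Entry) {
  int N = 0;
  for (const Instruction &I : Entry)
    if (I.IsAlloca)
      ++N;
  return N;
}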

llvm-svn: 25279

llvm-svn: 25203

Patch written by Daniel Berlin!
llvm-svn: 25202

Patch written by Daniel Berlin!
llvm-svn: 25201

llvm-svn: 25181

llvm-svn: 25180

llvm-svn: 25153

llvm-svn: 25137

llvm-svn: 25130

the shifts.
This allows us to fold this (which is the 'integer add a constant' sequence
from cozmic's Scheme compiler):
int %x(uint %anf-temporary776) {
  %anf-temporary777 = shr uint %anf-temporary776, ubyte 1
  %anf-temporary800 = cast uint %anf-temporary777 to int
  %anf-temporary804 = shl int %anf-temporary800, ubyte 1
  %anf-temporary805 = add int %anf-temporary804, -2
  %anf-temporary806 = or int %anf-temporary805, 1
  ret int %anf-temporary806
}
into this:
int %x(uint %anf-temporary776) {
  %anf-temporary776 = cast uint %anf-temporary776 to int
  %anf-temporary776.mask1 = add int %anf-temporary776, -2
  %anf-temporary805 = or int %anf-temporary776.mask1, 1
  ret int %anf-temporary805
}
Note that instcombine already knew how to eliminate the AND that the two
shifts fold into. This is tested by InstCombine/shift.ll:test26.
-Chris
llvm-svn: 25128

llvm-svn: 25126

functionality changes.
llvm-svn: 25125

read the code.
Do not internalize debugger anchors.
llvm-svn: 25067
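
A hedged sketch of the exemption this entry describes. The "llvm.dbg." prefix
is an assumption about how that era's debugger anchor globals were named, and
both helper names are illustrative:

#include <string>

static bool isDebuggerAnchor(const std::string &Name) {
  return Name.compare(0, 9, "llvm.dbg.") == 0;  // assumed anchor prefix
}

static bool mayInternalize(const std::string &GlobalName) {
  if (isDebuggerAnchor(GlobalName))
    return false;  // the debugger must still be able to find these globals
  return true;     // ...the pass's other internalize criteria apply here
}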