Commit log

work even if CallInst::op_* are private
llvm-svn: 107390

in terms of Op<> and ArgOffset. This works for
values of {0, 1} for ArgOffset.
Please note that ArgOffset will become 0 soon and
will go away eventually.
llvm-svn: 107129
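
A minimal sketch of the idea behind ArgOffset, with hypothetical names (the
real accessors are CallInst members built on the Op<> template):

    // Hypothetical sketch, not the actual patch: an argument accessor
    // written against a compile-time ArgOffset. With the callee at the
    // front of the operand array the offset is 1; once the callee moves
    // to the back it becomes 0 and the offset can disappear entirely.
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    template <unsigned ArgOffset>
    static Value *getArgOperandImpl(CallInst &CI, unsigned i) {
      return CI.getOperand(i + ArgOffset);
    }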

SmallVector, and other SmallVector simplifications.
llvm-svn: 106452

as is done with most other cast opcode predicates.
llvm-svn: 105008

This will help reduce the amount of casting required on 64-bit targets.
llvm-svn: 104911

to fadd, fsub, and fmul, when used with a floating-point type. LLVM
has supported the new instructions since 2.6, so it's time to get
on board.
llvm-svn: 102971
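
For illustration (not part of the commit), the floating-point forms as built
through today's IRBuilder C++ API; L and R stand for any float-typed values:

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    // Since LLVM 2.6, FP arithmetic uses the dedicated fadd/fsub/fmul
    // opcodes instead of add/sub/mul applied to a floating-point type.
    Value *buildFPOps(IRBuilder<> &B, Value *L, Value *R) {
      Value *Sum  = B.CreateFAdd(L, R, "sum");  // formerly: add
      Value *Diff = B.CreateFSub(L, R, "diff"); // formerly: sub
      return B.CreateFMul(Sum, Diff, "prod");   // formerly: mul
    }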

Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.
llvm-svn: 101579

with a fix for self-hosting
rotate CallInst operands, i.e. move the callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary
llvm-svn: 101465
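
What the rotation means for clients, as a hedged sketch (hypothetical
helpers; current LLVM exposes the same layout via getCalledOperand()):

    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // After the rotation, arguments occupy slots [0, NumOperands-1) and
    // the callee is the last operand rather than the first.
    static Value *calleeAfterRotation(const CallInst &CI) {
      return CI.getOperand(CI.getNumOperands() - 1);
    }
    static Value *argAfterRotation(const CallInst &CI, unsigned i) {
      return CI.getOperand(i);
    }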

llvm-svn: 101434

with a fix
rotate CallInst operands, i.e. move the callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary
llvm-svn: 101397

llvm-svn: 101368

of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary
llvm-svn: 101364

be used in ImmutableCallSite too.
llvm-svn: 101292

llvm-svn: 100720

is necessary. Inherits from the new templated base class CallSiteBase<>,
which is highly customizable. Base CallSite on it too, in a configuration
that allows full mutation.
Adapt some call sites in analyses to employ ImmutableCallSite.
llvm-svn: 100100
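
The shape of that design, reduced to a self-contained sketch (the real
CallSiteBase<> takes more template parameters and handles InvokeInst too):

    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    // One templated body serves both views; the template arguments decide
    // the constness of everything the class hands out.
    template <typename InstrTy, typename ValTy>
    class CallSiteBaseSketch {
    protected:
      InstrTy *I;
    public:
      explicit CallSiteBaseSketch(InstrTy *Inst) : I(Inst) {}
      InstrTy *getInstruction() const { return I; }
      ValTy *getArgument(unsigned N) const { return I->getOperand(N); }
    };

    // Full-mutation configuration (CallSite-like):
    using MutableCS = CallSiteBaseSketch<Instruction, Value>;
    // Read-only configuration (ImmutableCallSite-like):
    using ImmutableCS = CallSiteBaseSketch<const Instruction, const Value>;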

instead of InlineFunction.
llvm-svn: 99483

llvm-svn: 99451

I have audited all getOperandNo calls now, fixing
hidden assumptions. CallSite-related ugliness will
be eliminated successively.
Note this patch has a long and grievous history;
for all the back-and-forth, have a look at
CallSite.h's log.
llvm-svn: 99399

for the noinline attribute, and make the inliner refuse to
inline a call site when the call site is marked noinline even
if the callee isn't. This fixes PR6682.
llvm-svn: 99341

llvm-svn: 99275

llvm-svn: 99171

we can reapply the InvokeInst operand reordering patch (see r98957).
llvm-svn: 99170

http://smooshlab.apple.com:8010/builders/clang-x86_64-darwin10-fnt/builds/703 in the nightly test suite
llvm-svn: 98958

This time I did a self-hosted bootstrap on Linux x86-64,
with no problems. Let's see how darwin 64-bit self-hosting
goes. At the first sign of failure I'll back this out.
Maybe the valgrind bots will give me a hint of what may be wrong
(if at all).
llvm-svn: 98957

and T->isPointerTy(). Convert most instances of the first form to the second form.
Requested by Chris.
llvm-svn: 96344
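
The two spellings side by side; both are real APIs, and the second is the
form the tree converged on:

    #include "llvm/IR/DerivedTypes.h"
    #include "llvm/Support/Casting.h"
    using namespace llvm;

    static bool isPtr(const Type *T) {
      bool viaIsa       = isa<PointerType>(T); // older, generic RTTI form
      bool viaPredicate = T->isPointerTy();    // dedicated predicate
      return viaIsa && viaPredicate;           // the two always agree
    }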

isInteger, we now have isFloatTy and isIntegerTy. Requested by Chris!
llvm-svn: 96223
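
The renamed predicates in use:

    #include "llvm/IR/Type.h"
    using namespace llvm;

    static bool isScalarFloatOrInt(const Type *T) {
      return T->isFloatTy()     // the new Ty-suffixed spelling
          || T->isIntegerTy();  // formerly T->isInteger()
    }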

llvm-svn: 95086

llvm-as: t.ll:1:25: error: invalid cast opcode for cast from '[4 x i8]' to '[1 x i32]'
@x = constant [1 x i32] bitcast ([4 x i8] c"abcd" to [1 x i32])
                        ^
llvm-svn: 94595

in the case of empty and full ranges.
llvm-svn: 94548

llvm-svn: 94281

to a sext/zext
llvm-svn: 94280

llvm-svn: 94161

integer vectors as well as just integers.
llvm-svn: 93126

llvm-svn: 92771

llvm-svn: 92760

dereference the type pointer.
llvm-svn: 92726

llvm-svn: 92240

a convention (shadowing the setter with a private forwarding function) to
prevent subclasses from accidentally using it.
This exposed some bogosity in ConstantExprs, which was propagating the
opcode of the constant expr into the NUW/NSW/Exact field in the
getWithOperands/getWithOperandReplaced methods.
llvm-svn: 92239
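
The convention, reduced to a self-contained sketch with hypothetical names:

    // A base class exposes a setter; a subclass that must not use it
    // shadows the name with a private forwarding function, so accidental
    // calls through the subclass fail to compile while the base-class
    // interface stays intact.
    class FlagHolder {
    public:
      void setFlags(unsigned F) { Flags = F; }
    private:
      unsigned Flags = 0;
    };

    class NoFlagMutation : public FlagHolder {
    private:
      void setFlags(unsigned F) { FlagHolder::setFlags(F); } // forwarder
    };

    // NoFlagMutation N; N.setFlags(1); // error: setFlags is private here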

Integer negation only overflows with INT_MIN, but that's an important case.
llvm-svn: 91662
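
Why negation needs the check at all, as a self-contained illustration:

    #include <cstdint>

    // In two's complement, 0 - x overflows for exactly one input:
    // INT_MIN, whose positive counterpart (2^31 for a 32-bit int) is
    // not representable. Every other value negates safely.
    static bool negationOverflows(int32_t x) {
      return x == INT32_MIN;
    }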

correctly
llvm-svn: 86712

llvm-svn: 86525

llvm-svn: 86365

llvm-svn: 86316

Here is the original commit message:
This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
Update CreateMalloc so that its callers specify the size to allocate:
MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimization users use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
llvm-svn: 86311
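
A sketch of the size computation the commit asks CreateMalloc callers to
perform (written against today's DataLayout, the successor of TargetData):

    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Type.h"
    #include <cstdint>
    using namespace llvm;

    // Explicit allocation size for NumElts elements of Ty, computed from
    // the target's layout instead of being left implicit in CreateMalloc.
    static uint64_t allocationBytes(const DataLayout &DL, Type *Ty,
                                    uint64_t NumElts) {
      uint64_t ElemSize = DL.getTypeAllocSize(Ty); // bytes per element
      return ElemSize * NumElts;
    }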

with correct calling convention
llvm-svn: 86290

llvm-svn: 86213

MallocInst-autoupgrade users use non-TargetData-computed allocation sizes.
Optimization users use TargetData to compute the allocation size.
Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
Extend getMallocType() to support malloc calls that have non-bitcast uses.
Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
Update all globalopt malloc tests to not rely on autoupgraded MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
llvm-svn: 86077

llvm-svn: 85496

llvm-svn: 85351

llvm-svn: 85327