Commit messages
- ...start using them in a trivial way when -enable-jump-threading-lvi is passed. enable-jump-threading-lvi will be my playground for a while. (llvm-svn: 86789)
- ...vend value constraint information to the optimizer. (llvm-svn: 86767)
- ...Analysis/Passes.h (llvm-svn: 86765)
- ... (llvm-svn: 86748)
- ...into libanalysis and transformutils. (llvm-svn: 86735)
- ...Update InsertDeclare to return the newly inserted llvm.dbg.declare intrinsic. (llvm-svn: 86727)
- ...size associated with a malloc; also extend PerformHeapAllocSRoA() to check whether the optimized malloc's argument had its highest bit set, so that it is safe for ComputeMultiple() to look through sext instructions while determining the optimized malloc's array size. (llvm-svn: 86676)
- ...Value V is a multiple of unsigned Base. (llvm-svn: 86675)
- ... (llvm-svn: 86648)
- ... (llvm-svn: 86645)
- ...except that the result may not be a constant. Switch jump threading to use it so that it catches things like (X & 0) -> 0, which occur when phi predecessors are deleted and the remaining phi predecessor was a zero. (llvm-svn: 86637)
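The identity mentioned in that message ((X & 0) -> 0, where the simplified result need not be a constant) can be sketched as a tiny operand-level simplifier. This is a hypothetical illustration of the idea, not LLVM's actual simplification API; the function name `simplifyAnd` is invented for this sketch.

```cpp
#include <cstdint>
#include <optional>

// Hypothetical sketch: simplify X & C when an identity applies.
// Note the result may be the *operand* X itself, not a constant,
// which is the point the commit message makes.
std::optional<uint64_t> simplifyAnd(uint64_t x, uint64_t c) {
    if (c == 0)
        return 0;                // X & 0 -> 0
    if (c == ~uint64_t{0})
        return x;                // X & ~0 -> X (non-constant result)
    return std::nullopt;         // no identity applies
}
```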
- ...This patch forbids implicit conversion of DenseMap::const_iterator to DenseMap::iterator, which was possible because DenseMapIterator inherited (publicly) from DenseMapConstIterator. Conversion in the other direction is now allowed, as one would expect.
  The DenseMapConstIterator template is removed, and a template parameter IsConst, which specifies whether the iterator is constant, is added to DenseMapIterator.
  The IsConst parameter is not strictly necessary, since constness can be determined from KeyT, but that is not relevant to this fix and can be addressed later.
  Patch by Victor Zverovich!
  (llvm-svn: 86636)
- ... (llvm-svn: 86635)
- ...simplification; this handles the foldable fcmp x,x cases, among many others. (llvm-svn: 86627)
- ...and ConstantFoldCompareInstOperands. (llvm-svn: 86626)
- ...Simplify[IF]Cmp pieces. Add some predicates to CmpInst to determine whether a predicate is fp or int. (llvm-svn: 86624)
- ...individual operands instead of taking a temporary array. (llvm-svn: 86619)
- ...takes decimated instructions and applies identities to them. This is pretty minimal at this point, but I plan to pull some instcombine logic out into these and similar routines. (llvm-svn: 86613)
- ...GVN to be more aggressive. Patch by Hans Wennborg! (with a comment added by me) (llvm-svn: 86582)
- ... (llvm-svn: 86565)
- ...Here is the original commit message:
  This commit updates malloc optimizations to operate on malloc calls that have constant int size arguments.
  Update CreateMalloc so that its callers specify the size to allocate: MallocInst-autoupgrade users use non-TargetData-computed allocation sizes, while optimization users use TargetData to compute the allocation size.
  Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
  Extend getMallocType() to support malloc calls that have non-bitcast uses.
  Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here, because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
  Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here, because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
  Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
  Update all globalopt malloc tests to not rely on autoupgraded MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
  (llvm-svn: 86311)
- ... (llvm-svn: 86269)
- ... (llvm-svn: 86259)
- ...from various APIs, addressing PR5325. (llvm-svn: 86231)
- ... (llvm-svn: 86213)
- ... (llvm-svn: 86161)
- ...a separate helper function. (llvm-svn: 86159)
- ...MallocInst-autoupgrade users use non-TargetData-computed allocation sizes, while optimization users use TargetData to compute the allocation size.
  Now that malloc calls can have constant sizes, update isArrayMallocHelper() to use TargetData to determine the size of the malloced type and the size of malloced arrays.
  Extend getMallocType() to support malloc calls that have non-bitcast uses.
  Update OptimizeGlobalAddressOfMalloc() to optimize malloc calls that have non-bitcast uses. The bitcast use of a malloc call has to be treated specially here, because the uses of the bitcast need to be replaced and the bitcast needs to be erased (just like the malloc call) for OptimizeGlobalAddressOfMalloc() to work correctly.
  Update PerformHeapAllocSRoA() to optimize malloc calls that have non-bitcast uses. The bitcast use of the malloc is not handled specially here, because ReplaceUsesOfMallocWithGlobal replaces through the bitcast use.
  Update OptimizeOnceStoredGlobal() to not care about the malloc calls' bitcast use.
  Update all globalopt malloc tests to not rely on autoupgraded MallocInsts, but instead use explicit malloc calls with correct allocation sizes.
  (llvm-svn: 86077)
- ...variants encoded as DIDerivedType appropriately. This improves bitfield support. (llvm-svn: 86073)
- ...and avoid redundant isFreeCall cases) in feedback to r85176. (llvm-svn: 85936)
- ...APInt::getLimitedValue) based on feedback to r85814. (llvm-svn: 85933)
- ... (llvm-svn: 85866)
- ...the generic call code works fine. (llvm-svn: 85865)
- ... (llvm-svn: 85814)
- ...instead of pow(). (llvm-svn: 85781)
- ... (llvm-svn: 85779)
- ...columns. (llvm-svn: 85732)
- ... (llvm-svn: 85724)
- ... (llvm-svn: 85717)
- ...of the ScalarEvolution pass without needing to #include ScalarEvolution.h. (llvm-svn: 85716)
- ... (llvm-svn: 85715)
- ...an indirectbr. (llvm-svn: 85702)
- ...clears out more information than just the stored backedge-taken count. (llvm-svn: 85664)
- ...underlying alias call even for non-identified-object values. (llvm-svn: 85656)
- ... (llvm-svn: 85630)
- ... (llvm-svn: 85619)
- ...until one finds the dbg.declare intrinsic. Patch by Sunae Seo. (llvm-svn: 85518)
- ... (llvm-svn: 85480)
- ...pow is ambiguous call to overloaded function. (llvm-svn: 85478)
- ...argument is:
    ArraySize * ElementSize
    ElementSize * ArraySize
    ArraySize << log2(ElementSize)
    ElementSize << log2(ArraySize)
  Refactor isArrayMallocHelper and delete isSafeToGetMallocArraySize, so that there is only one copy of the malloc array size determining logic.
  Update users of getMallocArraySize() to not bother calling isArrayMalloc() as well.
  (llvm-svn: 85421)
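The four size-argument shapes listed in that message can be matched with a small helper once the operands and opcode of the size expression are known. This is a hypothetical sketch of the pattern matching, not LLVM's actual getMallocArraySize implementation; the names SizeOp and mallocArraySize are invented for illustration.

```cpp
#include <cstdint>
#include <optional>

enum class SizeOp { Mul, Shl };

// Given a malloc byte-size expression "LHS op RHS" and the known element
// size, recover the array size if the expression matches one of the four
// commutative forms above. A shift X << Amt is treated as X * (1 << Amt).
std::optional<uint64_t> mallocArraySize(SizeOp Op, uint64_t LHS, uint64_t RHS,
                                        uint64_t ElementSize) {
    if (Op == SizeOp::Mul) {
        if (LHS == ElementSize) return RHS;   // ElementSize * ArraySize
        if (RHS == ElementSize) return LHS;   // ArraySize * ElementSize
    } else {
        uint64_t Shifted = uint64_t{1} << RHS;
        if (Shifted == ElementSize) return LHS;   // ArraySize << log2(ElementSize)
        if (LHS == ElementSize) return Shifted;   // ElementSize << log2(ArraySize)
    }
    return std::nullopt;  // not a recognizable array-malloc size expression
}
```

For example, a malloc of 10 * 4 bytes with a 4-byte element type yields an array size of 10 regardless of operand order or whether the multiply was strength-reduced to a shift.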