llvm-svn: 108522

that was actually useful here.
Chris, please double check that this is the correct interpretation. I was
pretty sure, and ran it by Nick as well.
llvm-svn: 108129

llvm-svn: 108113

llvm-svn: 107767

llvm-svn: 106279

Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating.
llvm-svn: 101579

with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary.
llvm-svn: 101465
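
A minimal sketch of what the rotated layout means for clients, assuming the
post-rotation 2.7-era C++ API (the helper names below are hypothetical):

  #include "llvm/Instructions.h"
  using namespace llvm;

  // Old layout: [callee, arg0, arg1, ...], so argument i was operand i+1.
  // New layout: [arg0, arg1, ..., callee], so argument i is operand i.
  Value *getCallArg(CallInst *CI, unsigned i) {
    return CI->getOperand(i);
  }

  // The callee now sits at the back of the operand array.
  Value *getCallCallee(CallInst *CI) {
    return CI->getOperand(CI->getNumOperands() - 1);
  }

In this layout argument i is simply getOperand(i), with no +1 adjustment,
which is where the cheaper operand and callee access claimed above comes from.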

llvm-svn: 101434

with a fix
rotate CallInst operands, i.e. move callee to the back of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary.
llvm-svn: 101397

llvm-svn: 101368

of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph construction,
smaller compiler binary.
llvm-svn: 101364

llvm-svn: 101009

llvm-svn: 98911

llvm-svn: 98853

the inner GEP is not a ConstantInt.
llvm-svn: 98359

llvm-svn: 98178

getelementptr. Despite only doing so in the case where x is a known
array object and c can be converted to an index within range, this
could still be invalid if c is actually the address of an object
allocated outside of LLVM. Also, SCEVExpander, the original motivation
for this code, has since been improved to avoid inttoptr+ptrtoint in
more cases.
llvm-svn: 96950

operators.
The test difference is just due to the multiplication operands
being commuted (and thus requiring a more elaborate match). In
optimized code, that expression would be folded.
llvm-svn: 96816

llvm-svn: 96808

llvm-svn: 96432

and T->isPointerTy(). Convert most instances of the first form to the second form.
Requested by Chris.
llvm-svn: 96344
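
For context, the two forms in question, as a minimal sketch (the wrapper
names are hypothetical); the same contrast applies to the isIntegerTy and
isFloatTy renaming in the next entry:

  #include "llvm/DerivedTypes.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // First form: an explicit isa<> query against the Type subclass.
  bool isPtrOld(const Type *T) { return isa<PointerType>(T); }

  // Second form: the predicate member on Type itself.
  bool isPtrNew(const Type *T) { return T->isPointerTy(); }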

isInteger, we now have isFloatTy and isIntegerTy. Requested by Chris!
llvm-svn: 96223

llvm-svn: 95582

cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.
Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.
Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.
And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.
llvm-svn: 94987
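
For reference, the analogous ConstantExpr routines mentioned above, in a
minimal sketch assuming the API of that era (the wrapper names are
hypothetical); each builds the null-pointer GEP idiom that these folding
rules can then simplify to a plain integer:

  #include "llvm/Constants.h"
  #include "llvm/DerivedTypes.h"
  using namespace llvm;

  // sizeof(Ty): a GEP one element past a null Ty*, cast to an integer.
  Constant *sizeOfTy(const Type *Ty) { return ConstantExpr::getSizeOf(Ty); }

  // alignof(Ty) and offsetof(STy, FieldNo), built the same way from
  // GEPs off a null pointer.
  Constant *alignOfTy(const Type *Ty) { return ConstantExpr::getAlignOf(Ty); }
  Constant *offsetOfField(const StructType *STy, unsigned FieldNo) {
    return ConstantExpr::getOffsetOf(STy, FieldNo);
  }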

result int by 8 for the first byte. While normally harmless,
if the result is smaller than a byte, this shift is invalid.
llvm-svn: 93018
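
The hazard, as an illustrative standalone sketch in plain integers (not the
actual LLVM code, which works on arbitrary-width values where a shift by 8
can meet or exceed the bit width): the left shift has to be skipped for the
first byte.

  #include <cstdint>

  // Assemble consecutive bytes into an integer, most significant first.
  uint64_t assembleBytes(const uint8_t *Bytes, unsigned NumBytes) {
    uint64_t Result = 0;
    for (unsigned i = 0; i != NumBytes; ++i) {
      if (i != 0)
        Result <<= 8; // guarded: no shift before the first byte
      Result |= Bytes[i];
    }
    return Result;
  }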

wrapping up PR3351.
llvm-svn: 92410

folding a load from constant.
llvm-svn: 90545

llvm-svn: 90369

This permits the devirtualization of llvm.org/PR3100#c9 when compiled by clang.
llvm-svn: 90099

ConstantExpr, not just the top-level operator. This allows it to
fold many more constants.
Also, make GlobalOpt call ConstantFoldConstantExpression on
GlobalVariable initializers.
llvm-svn: 89659
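
The idea, as a minimal sketch rather than the actual implementation (both
helper declarations are hypothetical stand-ins): walk the expression tree
bottom-up so nested operators are folded before their parents.

  #include "llvm/Constants.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  // Stand-ins: rebuild an expression with new operands, and fold a
  // single top-level operator, respectively.
  Constant *rebuildWithOperands(ConstantExpr *CE,
                                SmallVectorImpl<Constant*> &Ops);
  Constant *foldOneOperator(Constant *C);

  Constant *foldDeep(ConstantExpr *CE) {
    SmallVector<Constant*, 8> Ops;
    for (unsigned i = 0, e = CE->getNumOperands(); i != e; ++i) {
      Constant *Op = cast<Constant>(CE->getOperand(i));
      if (ConstantExpr *OpCE = dyn_cast<ConstantExpr>(Op))
        Op = foldDeep(OpCE); // fold nested expressions first
      Ops.push_back(Op);
    }
    return foldOneOperator(rebuildWithOperands(CE, Ops));
  }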

individual operands instead of taking a temporary array
llvm-svn: 86619

from various APIs, addressing PR5325.
llvm-svn: 86231

This allows us to simplify this:
  union vec2d {
    double e[2];
    double v __attribute__((vector_size(16)));
  };
  typedef union vec2d vec2d;
  static vec2d a={{1,2}}, b={{3,4}};
  vec2d foo () {
    return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v };
  }
down to:
  define %0 @foo() nounwind ssp {
  entry:
    %mrv5 = insertvalue %0 undef, double 1.600000e+01, 0   ; <%0> [#uses=1]
    %mrv6 = insertvalue %0 %mrv5, double 2.200000e+01, 1   ; <%0> [#uses=1]
    ret %0 %mrv6
  }
instead of:
  define %0 @foo() nounwind ssp {
  entry:
    %mrv5 = insertvalue %0 undef, double extractelement (<2 x double> fadd (<2 x double> fmul (<2 x double> bitcast (<1 x i128> <i128 85174437667405312423031577302488055808> to <2 x double>), <2 x double> <double 3.000000e+00, double 4.000000e+00>), <2 x double> <double 1.000000e+00, double 2.000000e+00>), i32 0), 0   ; <%0> [#uses=1]
    %mrv6 = insertvalue %0 %mrv5, double extractelement (<2 x double> fadd (<2 x double> fmul (<2 x double> bitcast (<1 x i128> <i128 85174437667405312423031577302488055808> to <2 x double>), <2 x double> <double 3.000000e+00, double 4.000000e+00>), <2 x double> <double 1.000000e+00, double 2.000000e+00>), i32 1), 1   ; <%0> [#uses=1]
    ret %0 %mrv6
  }
llvm-svn: 85040

ConstantExpr::getBitCast in various places.
llvm-svn: 85039

instead of returning null on failure. No functionality change.
llvm-svn: 85038

llvm-svn: 84993

Duncan for the nice tiny testcase.
llvm-svn: 84992

implements something out of Target/README.txt producing:
  _foo:                 ## @foo
    movl   4(%esp), %eax
    movapd LCPI1_0, %xmm0
    movapd %xmm0, (%eax)
    ret    $4
instead of:
  _foo:                 ## @foo
    movl   4(%esp), %eax
    movapd _b, %xmm0
    mulpd  LCPI1_0, %xmm0
    addpd  _a, %xmm0
    movapd %xmm0, (%eax)
    ret    $4
llvm-svn: 84942

bytes (i256).
llvm-svn: 84941

non-type-safe constant initializers. This sort of thing happens
quite a bit for 4-byte loads out of string constants, unions,
bitfields, and an interesting endianness check from sqlite, which
is something like this:
  const int sqlite3one = 1;
  # define SQLITE_BIGENDIAN    (*(char *)(&sqlite3one)==0)
  # define SQLITE_LITTLEENDIAN (*(char *)(&sqlite3one)==1)
  # define SQLITE_UTF16NATIVE  (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE)
All of these macros now constant fold away.
This implements PR3152 and is based on a patch started by Eli, but heavily
modified and extended.
llvm-svn: 84936

llvm-svn: 84841

to libanalysis. Instcombine shrinking... does this even
make sense???
llvm-svn: 84840

Analysis/ConstantFolding.cpp. This doesn't change the behavior of
instcombine but makes other clients of ConstantFoldInstruction
able to handle loads. This was partially extracted from Eli's patch
in PR3152.
llvm-svn: 84836

llvm-svn: 83338

ConstantFoldLoadThroughGEPConstantExpr.
llvm-svn: 83311

llvm-svn: 83295

llvm-svn: 83294

llvm-svn: 83292

llvm-svn: 81961

how to fold notionally-out-of-bounds array getelementptr indices instead
of just doing these in lib/Analysis/ConstantFolding.cpp, because it can
be done in a fairly general way without TargetData, and because not all
constants are visited by lib/Analysis/ConstantFolding.cpp. This enables
more constant folding.
Also, set the "inbounds" flag when the getelementptr indices are
one-past-the-end.
llvm-svn: 81483
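
A small self-contained sketch of the index arithmetic involved (hypothetical
helper, not the ConstantExpr code): a notionally-out-of-bounds index I into
[N x T] splits into a carry into the enclosing index plus an in-range
remainder, and I == N exactly is the one-past-the-end case that may still
carry the inbounds keyword.

  #include <cstdint>

  struct SplitIndex {
    uint64_t Carry; // added to the enclosing GEP index
    uint64_t Index; // remaining in-range index into [N x T]
  };

  // e.g. index 3 into [2 x i32] becomes carry 1, index 1.
  SplitIndex normalizeArrayIndex(uint64_t I, uint64_t N) {
    SplitIndex S = { I / N, I % N };
    return S;
  }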