Commit log (newest first)

obviously is coupled to the IR.
llvm-svn: 202818

I am really sorry for the noise, but the current state where some parts of the
code use TD (from the old name: TargetData) and other parts use DL makes it
hard to write a patch that changes where those variables come from and how
they are passed along.
llvm-svn: 201827

transformation is illegal.
llvm-svn: 174156

llvm-svn: 174152

into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.
There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.
The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.
I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).
I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.
llvm-svn: 171366

No functionality change.
llvm-svn: 169703

llvm-svn: 169701

Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.
Many forward declarations and missing includes were added to header files
to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]
llvm-svn: 169131

llvm-svn: 165402

See: http://en.wikipedia.org/wiki/If_and_only_if (re: commit 164767)
llvm-svn: 164768

llvm-svn: 164767

type's size.
llvm-svn: 157548

Original commit message:
Defer some shl transforms to DAGCombine.
The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.
Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.
These transformations are deferred:
(X >>? C) << C --> X & (-1 << C) (When X >> C has multiple uses)
(X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2) (When C2 > C1)
(X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2) (When C1 > C2)
The corresponding exact transformations are preserved, just like
div-exact + mul:
(X >>?,exact C) << C --> X
(X >>?,exact C1) << C2 --> X << (C2-C1)
(X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)
The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.
llvm-svn: 155362
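For concreteness, the three deferred (non-exact) forms above are plain bit
arithmetic. A minimal standalone C++ check of the identities (unsigned, i.e.
the lshr flavor of ">>?"; an illustration, not code from the tree):

    #include <cassert>
    #include <cstdint>

    int main() {
      const uint32_t M = ~0u;                // the -1 bit pattern
      const uint32_t C = 4, C1 = 4, C2 = 8;  // sample constants, C2 > C1
      for (uint32_t X : {0u, 1u, 0xFFu, 0xDEADBEEFu, ~0u}) {
        // (X >> C) << C  ==  X & (-1 << C)
        assert(((X >> C) << C) == (X & (M << C)));
        // (X >> C1) << C2  ==  (X << (C2-C1)) & (-1 << C2)   when C2 > C1
        assert(((X >> C1) << C2) == ((X << (C2 - C1)) & (M << C2)));
        // (X >> C2) << C1  ==  (X >> (C2-C1)) & (-1 << C1)   (the C1 > C2 case, roles swapped)
        assert(((X >> C2) << C1) == ((X >> (C2 - C1)) & (M << C1)));
      }
      return 0;
    }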
While the patch was perfect and defect free, it exposed a really nasty
bug in X86 SelectionDAG that caused an llc crash when compiling lencod.
I'll put the patch back in after fixing the SelectionDAG problem.
llvm-svn: 155181

The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.
Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.
These transformations are deferred:
(X >>? C) << C --> X & (-1 << C) (When X >> C has multiple uses)
(X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2) (When C2 > C1)
(X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2) (When C1 > C2)
The corresponding exact transformations are preserved, just like
div-exact + mul:
(X >>?,exact C) << C --> X
(X >>?,exact C1) << C2 --> X << (C2-C1)
(X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)
The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.
llvm-svn: 155136
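The exact variants preserved above rely only on the shifted-out bits being
zero. A small hedged C++ check of the three preserved forms (unsigned case,
with the "exact" promise modeled as a precondition; illustrative only):

    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t X : {0u, 0x100u, 0xAB00u, 0xFFFFFF00u}) {
        assert((X & 0xFF) == 0);              // "exact" precondition: low bits are zero
        assert(((X >> 8) << 8) == X);         // (X >>exact C) << C    --> X
        assert(((X >> 4) << 8) == (X << 4));  // C1=4 < C2=8:          --> X << (C2-C1)
        assert(((X >> 8) << 4) == (X >> 4));  // C1=8 > C2=4:          --> X >>exact (C1-C2)
      }
      return 0;
    }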
llvm-svn: 149967

llvm-svn: 147529

nsw bits on them.
llvm-svn: 147528

'and' that would zero out the trailing bits, and to produce an exact shift
ourselves.
llvm-svn: 147391

Add FIXMEs to places that are non-trivial to fix.
llvm-svn: 145661

are combined together. <rdar://problem/9859829>
llvm-svn: 136435

it's used and not included where it isn't.
llvm-svn: 135628

llvm-svn: 135375

llvm-svn: 130489

Fixes PR9809.
llvm-svn: 130485

exact/nsw/nuw shifts and have instcombine infer them when it can prove
that the relevant properties are true for a given shift without them.
Also, a variety of refactoring to use the new patternmatch logic thrown
in for good luck. I believe that this takes care of a bunch of related
code quality issues attached to PR8862.
llvm-svn: 125267
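Roughly, the inference conditions look like this (a hedged standalone C++
illustration of when the flags are provable, not the InstCombine code itself):

    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t y = 0; y < 1000; ++y) {
        uint32_t x = y * 16;                  // low 4 bits of x are provably zero,
        assert(((x >> 4) << 4) == x);         // so "lshr x, 4" loses nothing: exact
        uint32_t z = y & 0x0FFFFFFF;          // top 4 bits of z are provably zero,
        assert(uint64_t(z << 4) == (uint64_t(z) << 4)); // so "shl z, 4" cannot wrap: nuw
      }
      return 0;
    }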
improve interfaces to instsimplify to take this info.
llvm-svn: 125196

clang's -Wuninitialized-experimental warning.
While these don't look like real bugs, clang's
-Wuninitialized-experimental analysis is stricter
than GCC's, and these fixes have the benefit
of being generally nice cleanups.
llvm-svn: 124073

While there, I noticed that the transform "undef >>a X -> undef" was wrong.
For example if X is 2 then the top two bits must be equal, so the result
cannot be arbitrary. I fixed this in the constant folder as well. Also, I made
the transform for "X << undef" stronger: it now folds to undef always, even
though X might be zero. This is in accordance with the LangRef, but I must
admit that it is fairly aggressive. Also, I added "i32 X << 32 -> undef"
following the LangRef and the constant folder, likewise fairly aggressive.
llvm-svn: 123417

verify are zero are not the low bits of x, but the bits that WILL be the low
bits after the operation completes.
llvm-svn: 122529
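A hedged numeric illustration of the distinction (not the original patch): to
fold (x << 2) >> 2 back to x, the bits that must be known zero are the top two
bits of x, i.e. the ones that end up shifted out, not the currently-low bits:

    #include <cassert>
    #include <cstdint>

    int main() {
      uint32_t good = 0x20000001u;            // top two bits clear: fold is valid
      assert(((good << 2) >> 2) == good);
      uint32_t bad = 0xC0000001u;             // top two bits set: fold would miscompile
      assert(((bad << 2) >> 2) == 1u);        // the set bits are lost by the shl
      return 0;
    }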
before eliminating the operation that zeros them. This fixes rdar://8739316.
llvm-svn: 121353

two.
E.g. -5 % 5 is 0 with srem and 1 with urem.
Also addresses Frits van Bommel's comments.
llvm-svn: 120049
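The srem/urem difference quoted above is easy to reproduce in C++ (a hedged
standalone check, using 32-bit wraparound to model the unsigned remainder):

    #include <cassert>
    #include <cstdint>

    int main() {
      assert(-5 % 5 == 0);                    // srem: -5 == (-1)*5 + 0
      assert(uint32_t(-5) % 5u == 1u);        // urem: 4294967291 == 858993458*5 + 1
      return 0;
    }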
positive.
This allows to transform the rem in "1 << ((int)x % 8);" to an and.
llvm-svn: 120028
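For illustration (a hedged sketch, not the patch itself): when the left-hand
side is known non-negative, x % 8 equals x & 7, which is what makes the
rem-to-and rewrite safe:

    #include <cassert>

    int main() {
      for (int x = 0; x < 1024; ++x) {
        assert(x % 8 == (x & 7));             // holds because x is non-negative
        assert((1 << (x % 8)) == (1 << (x & 7)));
      }
      int y = -3;                             // why the positivity check matters:
      assert(y % 8 == -3);                    // C++ signed remainder is -3,
      assert((y & 7) == 5);                   // but the mask would give 5
      return 0;
    }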
order to reduce ((x<<30)>>24) to x<<6, check the correct bits. PR8547.
llvm-svn: 118665
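The incorrect fold is easy to see with concrete numbers (a hedged C++
illustration of the PR8547 situation, unsigned 32-bit): ((x<<30)>>24) only
equals x<<6 when everything above x's low two bits is zero:

    #include <cassert>
    #include <cstdint>

    int main() {
      uint32_t ok = 3;                        // only bits 0-1 set: << 30 loses nothing
      assert(((ok << 30) >> 24) == (ok << 6));  // both sides are 192
      uint32_t bad = 4;                       // bit 2 is destroyed by << 30
      assert(((bad << 30) >> 24) == 0);       // folding to bad << 6 == 256 would be wrong
      return 0;
    }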
will BECOME the low bits are zero, not that the current low bits are zero.
Fixes <rdar://problem/8606771>.
llvm-svn: 117953

element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 abi. We now compile something like
this:
struct S { float A, B, C, D; };
struct S g;
struct S bar() {
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}
into all nice vector operations:
_bar:                                   ## @bar
## BB#0:                                ## %entry
        movq    _g@GOTPCREL(%rip), %rax
        movss   LCPI1_0(%rip), %xmm1
        movss   (%rax), %xmm0
        addss   %xmm1, %xmm0
        pshufd  $16, %xmm0, %xmm0
        movss   4(%rax), %xmm2
        movss   12(%rax), %xmm3
        pshufd  $16, %xmm2, %xmm2
        unpcklps %xmm2, %xmm0
        addss   8(%rax), %xmm1
        pshufd  $16, %xmm1, %xmm1
        pshufd  $16, %xmm3, %xmm2
        unpcklps %xmm2, %xmm1
        ret
instead of icky integer operations:
_bar:                                   ## @bar
        movq    _g@GOTPCREL(%rip), %rax
        movss   LCPI1_0(%rip), %xmm1
        movss   (%rax), %xmm0
        addss   %xmm1, %xmm0
        movd    %xmm0, %ecx
        movl    4(%rax), %edx
        movl    12(%rax), %esi
        shlq    $32, %rdx
        addq    %rcx, %rdx
        movd    %rdx, %xmm0
        addss   8(%rax), %xmm1
        movd    %xmm1, %eax
        shlq    $32, %rsi
        addq    %rax, %rsi
        movd    %rsi, %xmm1
        ret
This resolves rdar://8360454
llvm-svn: 112343

A = shl x, 42
...
B = lshr ..., 38
which can be transformed into:
A = shl x, 4
...
iff we can prove that the would-be-shifted-in bits
are already zero. This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.
llvm-svn: 112314
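A hedged 64-bit illustration of the same pattern (not from the testcase):
(x << 42) >> 38 equals x << 4 exactly when the bits that << 42 destroys,
i.e. x's bits above bit 21, are already zero:

    #include <cassert>
    #include <cstdint>

    int main() {
      uint64_t ok = 0x3FFFFFull;              // only bits 0-21 set: shl 42 is lossless
      assert(((ok << 42) >> 38) == (ok << 4));
      uint64_t bad = 1ull << 22;              // bit 22 is shifted out by shl 42
      assert(((bad << 42) >> 38) == 0);       // so rewriting to bad << 4 would be wrong
      return 0;
    }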
framework, which is good at ripping through bitfield
operations. This generalizes a bunch of the existing
xforms that instcombine does, such as
(x << c) >> c -> and
to handle intermediate logical nodes. This is useful for
ripping up the "promote to large integer" code produced by
SRoA.
llvm-svn: 112304
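As a hedged standalone sketch of the xform family (unsigned 32-bit, not the
SRoA output itself): a shift pair that only masks bits is just an 'and', and
the demanded-bits view lets the same reasoning pass through an intermediate
logical node:

    #include <cassert>
    #include <cstdint>

    int main() {
      for (uint32_t x : {0u, 0xABCDu, 0xDEADBEEFu, ~0u}) {
        assert(((x << 8) >> 8) == (x & 0x00FFFFFFu));           // (x << c) >> c -> and
        assert((((x << 8) | 0xFFu) >> 8) == (x & 0x00FFFFFFu)); // survives an interposed 'or'
      }
      return 0;
    }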
more general simplify demanded bits logic.
llvm-svn: 112291

llvm-svn: 106707

Probably the best way to know that all getOperand() calls have been handled
is to replace that API instead of updating it.
llvm-svn: 101579

with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101465

llvm-svn: 101434

with a fix
rotate CallInst operands, i.e. move callee to the back
of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101397

llvm-svn: 101368

of the operand array
the motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101364

llvm-svn: 94336

readme forever.
llvm-svn: 94318

aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x)),
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.
llvm-svn: 93787

lshr+ashr instead of trunc+sext. We want to avoid type
conversions whenever possible, since it is easier to codegen expressions
without truncates and extensions.
llvm-svn: 93107
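As a hedged illustration of the general idea (the helper name is made up, and
it uses the shl+ashr pair; assumes an arithmetic right shift on int32_t, as on
common targets): sign-extending a bitfield can be written as a shift pair with
no trunc or sext at all:

    #include <cassert>
    #include <cstdint>

    // Sign-extend the low 8 bits of x using a shift pair instead of trunc+sext.
    static int32_t signExtendByte(uint32_t x) {
      return int32_t(x << 24) >> 24;          // assumes arithmetic >> on int32_t
    }

    int main() {
      assert(signExtendByte(0x7F) == 127);
      assert(signExtendByte(0x80) == -128);   // bit 7 propagates into the high bits
      assert(signExtendByte(0x1FF) == -1);    // only the low 8 bits matter
      return 0;
    }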