Approved by Chris Lattner.
llvm-svn: 167984
llvm-svn: 165019
I don't have a win32 system to test on, so hopefully I got them all fixed here.
llvm-svn: 161519
This patch is mostly just refactoring a bunch of copy-and-pasted code, but
it also adds a check that the call instructions are readnone or readonly.
That check was already present for sin, cos, sqrt, log2, and exp2 calls, but
it was missing for the rest of the builtins being handled in this code.
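For illustration, a minimal IR sketch of the kind of call the check accepts (the function name and body are made up; the point is the readnone/readonly attribute):

  declare float @sqrtf(float) readnone

  define float @f(float %x) {
    ; readnone: the call has no side effects, so it is safe to simplify
    %r = call float @sqrtf(float %x) readnone
    ret float %r
  }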
llvm-svn: 161282
llvm-svn: 160354
This was done with the aid of a terrible Perl creation. I will not
paste any of the horrors here. Suffice it to say, it required multiple
staged rounds of replacements, with state carried between them, and a few
nested-construct-parsing hacks that I'm not proud of. It happens, by
luck, to be able to deal with all the TCL-quoting patterns in evidence
in the LLVM test suite.
If anyone is maintaining large out-of-tree test trees, feel free to poke
me and I'll send you the steps I used to convert things, as well as
answer any painful questions, etc. I find IRC works best for this type
of thing.
Once converted, switch the LLVM lit config to use ShTests the same as
Clang. In addition to being able to delete large amounts of Python code
from 'lit', this will also simplify the entire test suite and some of
lit's architecture.
Finally, the test suite runs 33% faster on Linux now. ;]
For my 16-hardware-thread machine (2x 4-core Xeon E5520): 36s -> 24s
llvm-svn: 159525
use FileCheck.
Aside from removing a dependence on TCL-style quoting, this also makes
the tests ... significantly more robust. =] It would be really, *really*
great if the maintainer(s) of the CellSPU backend went through and
systematically rewrote these tests to use FileCheck. There are a lot
more with abuses nearly this bad.
Another step along the path to a TclTest-free testsuite.
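As a sketch of the style being suggested (the RUN line and checked strings here are assumptions, not lifted from the actual tests):

  ; RUN: llc < %s -march=cellspu | FileCheck %s

  define i32 @shl8(i32 %x) {
  ; CHECK: shl8:
  ; CHECK-NOT: or $3, $3, $3
    %r = shl i32 %x, 8
    ret i32 %r
  }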
llvm-svn: 159523
This is mostly to test the waters. I'd like to get results from FNT
build bots and other bots running on non-x86 platforms.
This feature has been pretty heavily tested by me over the last few
months, and it fixes several of the execution time regressions caused by the
inlining work by preventing inlining decisions from radically impacting
block layout.
I've seen very large improvements in yacr2 and ackermann benchmarks,
along with the expected noise across all of the benchmark suite whenever
code layout changes. I've analyzed all of the regressions and fixed
them, or found them to be impossible to fix. See my email to llvmdev for
more details.
I'd like for this to be in 3.1 as it complements the inliner changes,
but if any failures are showing up or anyone has concerns, it is just
a flag flip and so can be easily turned off.
I'm switching it on tonight to try and get at least one run through
various folks' performance suites in case SPEC or something else has
serious issues with it. I'll watch bots and revert if anything shows up.
llvm-svn: 154816
shuffle node because it could introduce new shuffle nodes that were not
supported efficiently by the target.
2. Add a more restrictive shuffle-of-shuffle optimization for cases where the
second shuffle reverses the transformation of the first shuffle.
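In IR terms, the restricted case looks roughly like this (reversal masks chosen for illustration; the combine itself runs on DAG shuffle nodes):

  %t = shufflevector <4 x i32> %x, <4 x i32> undef,
                     <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  %r = shufflevector <4 x i32> %t, <4 x i32> undef,
                     <4 x i32> <i32 3, i32 2, i32 1, i32 0>
  ; the second shuffle undoes the first, so %r simplifies to %x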
llvm-svn: 154266
1. Simplify xor/and/or (bitcast(A), bitcast(B)) -> bitcast(op (A,B))
(and also scalar_to_vector).
2. Xor/and/or are indifferent to the swizzle operation (shuffle of one src).
Simplify xor/and/or (shuff(A), shuff(B)) -> shuff(op (A, B))
3. Optimize swizzles of shuffles: shuff(shuff(x, y), undef) -> shuff(x, y).
4. Fix an X86ISelLowering optimization which was very bitcast-sensitive.
Code which was previously compiled to this:
movd (%rsi), %xmm0
movdqa .LCPI0_0(%rip), %xmm2
pshufb %xmm2, %xmm0
movd (%rdi), %xmm1
pshufb %xmm2, %xmm1
pxor %xmm0, %xmm1
pshufb .LCPI0_1(%rip), %xmm1
movd %xmm1, (%rdi)
ret
Now compiles to this:
movl (%rsi), %eax
xorl %eax, (%rdi)
ret
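Rule 1 as a rough IR-level sketch (types arbitrary; the actual combine runs on DAG nodes):

  ; before
  %a = bitcast <4 x i32> %x to <2 x i64>
  %b = bitcast <4 x i32> %y to <2 x i64>
  %r = xor <2 x i64> %a, %b
  ; after
  %op = xor <4 x i32> %x, %y
  %r = bitcast <4 x i32> %op to <2 x i64>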
llvm-svn: 153848
* Removed test/lib/llvm.exp - it is no longer needed
* Deleted the dg.exp reading code from test/lit.cfg. There are no dg.exp files
left in the test suite so this code is no longer required. test/lit.cfg is
now much shorter and clearer
* Removed a lot of duplicate code in lit.local.cfg files that need access to
the root configuration, by adding a "root" attribute to the TestingConfig
object. This attribute is dynamically computed to provide the same
information as was previously provided by the custom getRoot functions.
* Documented the config.root attribute in docs/CommandGuide/lit.pod
llvm-svn: 153408
run with LIT now and not Dejagnu. dg.exp is no longer needed.
Patch reviewed by Daniel Dunbar. It will be followed by additional cleanup patches.
llvm-svn: 150664
v8i8 -> v8i32 on AVX machines. The codegen often scalarizes ANY_EXTEND nodes.
The DAGCombiner has two optimizations that can mitigate the problem. First,
if all of the operands of a BUILD_VECTOR node are extracted from ZEXT/ANYEXT
nodes, then it is possible to create a new simplified BUILD_VECTOR which uses
UNDEF/ZERO values to eliminate the scalar ZEXT/ANYEXT nodes.
Second, another dag combine optimization lowers BUILD_VECTOR into a shuffle
vector instruction.
In the case of zext v8i8->v8i32 on AVX, a value in an XMM register is to be
shuffled into a wide YMM register.
This patch modifies the second optimization and allows the creation of
shuffle vectors even when the newly generated vector and the original vector
from which we extract the values are of different types.
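A minimal sketch of the kind of input affected (function name illustrative):

  define <8 x i32> @ext(<8 x i8> %v) {
    ; previously often scalarized on AVX; now lowered via a shuffle
    ; even though <8 x i8> and <8 x i32> are different types
    %r = zext <8 x i8> %v to <8 x i32>
    ret <8 x i32> %r
  }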
llvm-svn: 150340
llvm-svn: 148337
Counting the number of occurrences of each opcode is not a useful test.
llvm-svn: 144474
across calls, and only check for nested dependences on the special
call-sequence-resource register.
llvm-svn: 143660
llvm-svn: 143262
fixes: Use a separate register, instead of SP, as the
calling-convention resource, to avoid spurious conflicts with
actual uses of SP. Also, fix unscheduling of calling sequences,
which can be triggered by pseudo-two-address dependencies.
llvm-svn: 143206
it fixes the dragonegg self-host (it looks like gcc is miscompiled).
Original commit messages:
Eliminate LegalizeOps' LegalizedNodes map and have it just call RAUW
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.
Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.
Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.
Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.
This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.
Delete #if 0 code accidentally left in.
llvm-svn: 143188
on every node as it legalizes them. This makes it easier to use
hasOneUse() heuristics, since unneeded nodes can be removed from the
DAG earlier.
Make LegalizeOps visit the DAG in an operands-last order. It previously
used operands-first, because LegalizeTypes has to go operands-first, and
LegalizeTypes used to be part of LegalizeOps, but they're now split.
The operands-last order is more natural for several legalization tasks.
For example, it allows lowering code for nodes with floating-point or
vector constants to see those constants directly instead of seeing the
lowered form (often constant-pool loads). This makes some things
somewhat more complicated today, though it ought to allow things to be
simpler in the future. It also fixes some bugs exposed by Legalizing
using RAUW aggressively.
Remove the part of LegalizeOps that attempted to patch up invalid chain
operands on libcalls generated by LegalizeTypes, since it doesn't work
with the new LegalizeOps traversal order. Instead, define what
LegalizeTypes is doing to be correct, and transfer the responsibility
of keeping calls from having overlapping calling sequences into the
scheduler.
Teach the scheduler to model callseq_begin/end pairs as having a
physical register definition/use to prevent calls from having
overlapping calling sequences. This is also somewhat complicated, though
there are ways it might be simplified in the future.
This addresses rdar://9816668, rdar://10043614, rdar://8434668, and others.
Please direct high-level questions about this patch to management.
llvm-svn: 143177
Changed tests which assumed that vectors are legalized by widening them.
llvm-svn: 142152
no pattern.
llvm-svn: 142130
Not having it confused the assembly printing of jump tables.
llvm-svn: 141862
llvm-svn: 139004
been needed since llvm-gcc 3.4 days.
llvm-svn: 133248
are either unreduced or only test old syntax.
llvm-svn: 133228
llvm-svn: 129184
Optimize trivial branches in CodeGenPrepare, which often get created from the
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.
This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.
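A rough sketch of the pattern involved (names and the size check are illustrative; the intrinsic signature is the two-argument form of that era):

  declare i64 @llvm.objectsize.i64(i8*, i1)

  define void @f(i8* %p) {
  entry:
    %sz = call i64 @llvm.objectsize.i64(i8* %p, i1 false)
    %ok = icmp uge i64 %sz, 16
    ; once %sz folds to a constant, this branch is trivial and foldable
    br i1 %ok, label %yes, label %no
  yes:
    ret void
  no:
    ret void
  }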
llvm-svn: 127498
created from the", it broke some GCC test suite tests.
llvm-svn: 127477
lowering of objectsize intrinsics. Unfortunately, a number of tests were relying
on llc not optimizing trivial branches, so I had to add an option to allow them
to continue to test what they originally tested.
This fixes <rdar://problem/8785296> and <rdar://problem/9112893>.
llvm-svn: 127459
llvm-svn: 127366
that contain only letters, digits and the characters "_" and ".".
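For example (hypothetical globals):

  @foo.bar_1 = global i32 0   ; can be emitted without quotes
  @"foo bar" = global i32 0   ; still requires quotes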
llvm-svn: 127028
There was a previous implementation with patterns that would
have matched e.g.
shl <v4i32> <i32>,
but this is not valid LLVM IR so they were never selected.
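For contrast, the valid form takes a vector shift amount, e.g. a splat:

  ; invalid: scalar amount on a vector value
  ;   %r = shl <4 x i32> %v, i32 2
  ; valid: per-element amount
  %r = shl <4 x i32> %v, <i32 2, i32 2, i32 2, i32 2>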
llvm-svn: 126998
A 'load <4 x i32>* null' crashed llc before this fix.
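A minimal reproducer presumably looks something like:

  define <4 x i32> @f() {
    %v = load <4 x i32>* null   ; crashed llc before this fix
    ret <4 x i32> %v
  }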
llvm-svn: 126995
llvm-svn: 126963
is narrower than the shift register. Doing an anyext provides undefined bits in
the top part of the register.
llvm-svn: 125457
llvm-svn: 123912
llvm-svn: 123620
Patch (slightly modified) by Visa Putkinen.
llvm-svn: 122052
message instead of creating a DBG_VALUE for an undefined value in reg0.
llvm-svn: 121059
shift amount > 7.
llvm-svn: 120288
This speeds up selected test cases by up to 5%; no slowdowns were observed.
llvm-svn: 120286
Fix by Visa Putkinen!
llvm-svn: 120090
shifts.
llvm-svn: 120022
In the attached testcase, the element was
never extracted (missing rotate).
llvm-svn: 119973
support for the case where alignment < value size.
These cases were silently miscompiled before this patch.
Now they are overly verbose (especially stores), and
any front end should still avoid misaligned memory
accesses as much as possible. The bit-juggling algorithm
added here probably still has some room for improvement.
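A sketch of the kind of access this covers (any alignment below the value size triggers the expansion):

  define void @f(i32* %p, i32 %v) {
    ; align 1 < 4-byte value size: formerly miscompiled silently,
    ; now expanded into the (verbose) bit-juggling sequence
    store i32 %v, i32* %p, align 1
    ret void
  }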
llvm-svn: 118889
The SPU ABI does not mention v64, and since all C examples
suggest v128 values are treated similarly to arrays, we use
array alignment for v64 too. This makes the alignment of
e.g. [2 x <2 x i32>] behave intuitively, as if the elements
were e.g. i32s.
It also makes an "unaligned store" test aligned, with
different (but functionally equivalent) code generated.
llvm-svn: 117360
The old algorithm inserted a 'rotqmbyi' instruction which was
both redundant and wrong - it made shufb select bytes from the
wrong end of the input quad.
llvm-svn: 116701
Also remove some code that died in the process.
One now non-existent ori is checked for.
llvm-svn: 115306
This cleans up after the mess r108567 left in the CellSPU backend.
ORCvt instructions were used to reinterpret registers, and the ORs were then
removed by isMoveInstr(). This patch now removes 350 instructions of the form:
or $3, $3, $3
(from the 52 testcases in CodeGen/CellSPU). One case of a nonexistent or is
checked for.
Some moves of the form 'ori $., $., 0' and 'ai $., $., 0' still remain.
llvm-svn: 114074