is narrower than the shift register. Doing an anyext provides undefined bits in
the top part of the register.
llvm-svn: 125457
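The truncated entry above concerns an any-extend that leaves the upper bits of a register undefined. As a generic, assumed illustration (my own example, not the committed testcase) of why the choice of extension matters when a narrow value feeds a shift, compare zero-extending with any-extending:

    define i32 @shl_by_i8(i32 %x, i8 %amt) {
      ; The i8 shift amount is narrower than the i32 value being shifted.
      ; Widening it with zext gives well-defined high bits; an any-extend
      ; would leave the top part of the register undefined, which is the
      ; problem described above.
      %wide = zext i8 %amt to i32
      %r = shl i32 %x, %wide
      ret i32 %r
    }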
llvm-svn: 123912
llvm-svn: 123620
Patch (slightly modified) by Visa Putkinen.
llvm-svn: 122052
message instead of creating DBG_VALUE for undefined value in reg0.
llvm-svn: 121059
shift amount > 7.
llvm-svn: 120288
This speeds up selected test cases by up to
5%; no slowdowns were observed.
llvm-svn: 120286
Fix by Visa Putkinen!
llvm-svn: 120090
shifts.
llvm-svn: 120022
In the attached testcase, the element was
never extracted (missing rotate).
llvm-svn: 119973
support for the case where alignment < value size.
These cases were silently miscompiled before this patch.
The code generated for them is now overly verbose (especially for stores),
so any front-end should still avoid misaligned memory
accesses as much as possible. The bit-juggling algorithm
added here probably still has some room for improvement.
llvm-svn: 118889
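As a rough, assumed illustration (not the committed testcase) of what this entry covers: IR like the following, where the stated alignment is smaller than the value's size, now gets emulated with aligned quadword accesses plus bit juggling (typed-pointer IR syntax of that era):

    ; An i32 access with only 1-byte alignment; SPU can only load and store
    ; aligned 16-byte quadwords, so these are synthesized from aligned
    ; accesses plus shifts and masking.
    define i32 @load_unaligned(i32* %p) {
      %v = load i32* %p, align 1
      ret i32 %v
    }

    define void @store_unaligned(i32* %p, i32 %v) {
      store i32 %v, i32* %p, align 1
      ret void
    }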
The SPU ABI does not mention v64, and since all C examples
suggest v128 values are treated similarly to arrays,
we use array alignment for v64 too. This makes the
alignment of e.g. [2 x <2 x i32>] behave "intuitively",
as if the elements were e.g. i32s.
This also makes an "unaligned store" test come out
aligned, with different (but functionally equivalent)
code generated.
llvm-svn: 117360
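A small assumed example (not from the test suite) of the layout consequence: with array-style alignment for v64 types, the aggregate below is laid out like [4 x i32], its second <2 x i32> sitting at offset 8 instead of being padded out to a 16-byte boundary (old typed-pointer syntax):

    @pair = global [2 x <2 x i32>] zeroinitializer, align 8

    define <2 x i32> @second_element() {
      ; Element 1 sits at byte offset 8, just as two plain i32s would.
      %p = getelementptr [2 x <2 x i32>]* @pair, i32 0, i32 1
      %v = load <2 x i32>* %p, align 8
      ret <2 x i32> %v
    }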
The old algorithm inserted a 'rotqmbyi' instruction which was
both redundant and wrong - it made shufb select bytes from the
wrong end of the input quad.
llvm-svn: 116701
Also remove some code that died in the process.
One now-nonexistent ori is checked for.
llvm-svn: 115306
This cleans up after the mess r108567 left in the CellSPU backend.
ORCvt instructions were used to reinterpret registers, and the ORs were then
removed by isMoveInstr(). This patch now removes 350 instructions of the form:
or $3, $3, $3
(from the 52 testcases in CodeGen/CellSPU). One case of a nonexistent or is
checked for.
Some moves of the form 'ori $., $., 0' and 'ai $., $., 0' still remain.
llvm-svn: 114074
Some cases of lowering to rotate were miscompiled.
llvm-svn: 113355
The IDX was treated as a byte index, not an element index.
llvm-svn: 112422
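Not from the patch, but a reminder of the semantics the fix restores: the index operand of a vector element access counts elements, not bytes.

    define i32 @second_lane(<4 x i32> %v) {
      ; Index 1 selects the second i32 lane (byte offset 4), not byte 1.
      %e = extractelement <4 x i32> %v, i32 1
      ret i32 %e
    }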
llc used to assert on the added testcase.
llvm-svn: 111911
The previous algorithm in LowerVECTOR_SHUFFLE
didn't check all requirements for "monotonic" shuffles.
llvm-svn: 111361
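An assumed illustration of the kind of shuffle in question (the example is mine, not the patch's): a mask that picks consecutive elements across the two concatenated inputs, which the CellSPU lowering tries to turn into a single rotate.

    define <4 x i32> @monotonic_shuffle(<4 x i32> %a, <4 x i32> %b) {
      ; Mask <1,2,3,4> takes a[1..3] followed by b[0]: a contiguous window of
      ; the concatenated inputs, i.e. a rotate of the combined quadwords.
      %s = shufflevector <4 x i32> %a, <4 x i32> %b,
                         <4 x i32> <i32 1, i32 2, i32 3, i32 4>
      ret <4 x i32> %s
    }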
The "half vectors" are now widened to full size by the legalizer.
The only exception is in parameter passing, where half vectors are
expanded. This causes changes to some dejagnu tests.
llvm-svn: 111360
"SPU Application Binary Interface Specification, v1.9" by
IBM.
Specifically: use r3-r74 to pass parameters and the return value.
llvm-svn: 111358
llvm-svn: 110576
store for "half vectors"
llvm-svn: 110198
llvm-svn: 110038
duplicate the instructions and operate on half vectors.
Also reorder code in SPUInstrInfo.td for better coherency.
llvm-svn: 110037
such registers in SPU, this support boils down to "emulating"
them by duplicating instructions on the general purpose registers.
This adds the most basic operations on v2i32: passing parameters,
addition, subtraction, multiplication and a few others.
llvm-svn: 110035
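The sort of v2i32 code that becomes selectable with this change (an assumed example, not one of the committed tests):

    define <2 x i32> @v2i32_ops(<2 x i32> %a, <2 x i32> %b) {
      ; Parameter passing, add, sub and mul on v2i32 are handled by
      ; "emulating" v2i32 on the general purpose registers, as described
      ; above.
      %s = add <2 x i32> %a, %b
      %d = sub <2 x i32> %s, %b
      %m = mul <2 x i32> %d, %a
      ret <2 x i32> %m
    }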
TII::isMoveInstr is going to be completely removed.
llvm-svn: 108507
llvm-svn: 106954
llvm-svn: 106421
This allows the fast register allocator to remove redundant
register moves.
Update a set of tests that depend on the register allocator
being linear scan.
llvm-svn: 106420
llvm-svn: 106419
used to choke llc with the attached test.
llvm-svn: 106411
We default to inserting to lane 0.
llvm-svn: 105722
random load/store, rather than crashing llc.
llvm-svn: 105710
llvm-svn: 105269
llvm-svn: 103466
llvm-svn: 103399
emit an add instruction of the form 'a reg, reg, imm'."
Patch by Kalle Raiskila!
llvm-svn: 103021
patch by Kalle Raiskila!
llvm-svn: 101875
llvm-svn: 100879
directive are not aligned on 16-byte boundaries. This causes misaligned loads,
as the generated assembly assumes this "default" alignment.
This patch disables .lcomm in favour of '.local .comm'.
Patch by Kalle Raiskila!
llvm-svn: 100875
those who don't build all targets.
llvm-svn: 100688
"the bigstack patch for SPU, with testcase. It is essentially the patch committed as 97091, and reverted as 97099, but with the following additions:
-in vararg handling, registers are marked to be live, to not confuse the register scavenger
-function prologue and epilogue are not emitted, if the stack size is 16. 16 means it is empty - there is only the register scavenger emergency spill slot, which is not used as there is no stack."
llvm-svn: 99819
llvm-svn: 97814
llvm-svn: 93869
(OP (trunc x), (trunc y)) -> (trunc (OP x, y))
Unfortunately this simple change causes dag combine to loop infinitely. The
problem is that the shrink-demanded-ops optimization tends to canonicalize
expressions in the opposite manner. That is badness. This patch disables those
optimizations in dag combine; instead they are done as a late pass in sdisel.
This also exposes some deficiencies in dag combine and in x86 setcc / brcond
lowering. Teach them to look past ISD::TRUNCATE in various places.
llvm-svn: 92849
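Written out at the IR level, the fold above rewrites the first function below into the second (an illustrative sketch with add standing in for OP, not the committed testcase):

    define i8 @before(i32 %x, i32 %y) {
      %tx = trunc i32 %x to i8
      %ty = trunc i32 %y to i8
      %r = add i8 %tx, %ty          ; OP applied to the truncated values
      ret i8 %r
    }

    define i8 @after(i32 %x, i32 %y) {
      %s = add i32 %x, %y           ; OP applied at the wider type...
      %r = trunc i32 %s to i8       ; ...followed by a single truncate
      ret i8 %r
    }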
llvm-svn: 92740
Fold (zext (and x, cst)) -> (and (zext x), cst).
The DAG combiner likes to optimize the expression in the other direction, so
this would end up causing an infinite loop.
llvm-svn: 91574
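The fold above, spelled out as IR (an illustrative sketch, not the committed testcase):

    define i32 @before(i8 %x) {
      %m = and i8 %x, 15            ; mask first...
      %z = zext i8 %m to i32        ; ...then zero-extend
      ret i32 %z
    }

    define i32 @after(i8 %x) {
      %e = zext i8 %x to i32        ; zero-extend first...
      %z = and i32 %e, 15           ; ...then apply the widened mask
      ret i32 %z
    }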
llvm-svn: 91380
isel lowering code.
llvm-svn: 90925