Commit message
llvm-svn: 60384
llvm-svn: 60383
- Add support for seto, setno, setc, and setnc instructions.
llvm-svn: 60382
types.
llvm-svn: 60381
llvm-svn: 60380
llvm-svn: 60377
llvm-svn: 60376
figuring out the base of the IV. This produces better
code in the example. (Addresses use (IV) instead of
(BASE,IV) - a significant improvement on low-register
machines like x86).
llvm-svn: 60374
llvm-svn: 60373
llvm-svn: 60372
and big endian systems.
llvm-svn: 60371
llvm-svn: 60370
llvm-svn: 60369
- Start adding support for rewriting @synthesize.
llvm-svn: 60368
llvm-svn: 60367
integer is "minint".
llvm-svn: 60366
llvm-svn: 60365
llvm-svn: 60364
llvm-svn: 60363
__ASSEMBLER__ properly. Patch from Roman Divacky (with minor
formatting changes). Thanks!
llvm-svn: 60362
llvm-svn: 60361
llvm-svn: 60360
llvm-svn: 60359
- Fix v2[if]64 vector insertion code before IBM files a bug report.
- Ensure that zero (0) offsets relative to $sp don't trip an assert
(add $sp, 0 gets legalized to $sp alone, tripping an assert)
- Shuffle masks passed to SPUISD::SHUFB are now v16i8 or v4i32
llvm-svn: 60358
llvm-svn: 60357
as a pointer
llvm-svn: 60355
llvm-svn: 60354
llvm-svn: 60353
damaged approximation. This should fix it on big endian platforms
and on 64-bit.
llvm-svn: 60352
MERGE_VALUES node with only one operand, so get
rid of special code that only existed to handle
that possibility.
llvm-svn: 60349
ReplaceNodeResults: rather than returning a node which
must have the same number of results as the original
node (which means mucking around with MERGE_VALUES,
and which is also easy to get wrong since SelectionDAG
folding may mean you don't get the node you expect),
return the results in a vector.
llvm-svn: 60348
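For readers unfamiliar with the hook r60348 refers to: ReplaceNodeResults is the TargetLowering callback that custom-lowers nodes whose results need special handling during legalization. A rough sketch of the vector-returning shape described above, with a hypothetical target class and lowering helper (MyTargetLowering, LowerFPTOSINT) standing in for real code; exact qualifiers vary by LLVM version:

  // Push however many replacement values the custom lowering produced;
  // the legalizer pairs them up with the original node's results, so there
  // is no need to build a MERGE_VALUES node by hand.
  void MyTargetLowering::ReplaceNodeResults(SDNode *N,
                                            SmallVectorImpl<SDValue> &Results,
                                            SelectionDAG &DAG) {
    if (N->getOpcode() == ISD::FP_TO_SINT) {
      // Hypothetical helper that returns the lowered value.
      Results.push_back(LowerFPTOSINT(SDValue(N, 0), DAG));
      return;
    }
    // Other opcodes are handled elsewhere.
  }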
don't have overlapping bits.
llvm-svn: 60344
llvm-svn: 60343
llvm-svn: 60341
fiddling with constants unless we have to.
llvm-svn: 60340
instead of throughout it.
llvm-svn: 60339
that it isn't reallocated all the time. This is a tiny speedup for
GVN: 3.90s -> 3.88s.
llvm-svn: 60338
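The change r60338 describes is the classic hoist-the-scratch-buffer pattern: construct the SmallVector once and clear() it per iteration instead of constructing a new one each time. A minimal sketch with placeholder names (Blocks and processBlock are illustrative, not the actual GVN code):

  #include "llvm/ADT/SmallVector.h"

  // Before: a fresh SmallVector every iteration, so any spill to the heap
  // is re-allocated each time around the loop.
  //   for (BasicBlock *BB : Blocks) {
  //     SmallVector<Instruction *, 16> Scratch;
  //     processBlock(BB, Scratch);
  //   }

  // After: one vector whose storage is reused; clear() keeps the capacity.
  SmallVector<Instruction *, 16> Scratch;
  for (BasicBlock *BB : Blocks) {
    Scratch.clear();
    processBlock(BB, Scratch);
  }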
llvm-svn: 60337
llvm-svn: 60336
instead of std::sort. This shrinks the release-asserts LSR.o file
by 1100 bytes of code on my system.
We should start using array_pod_sort where possible.
llvm-svn: 60335
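For context on r60335: llvm::array_pod_sort (llvm/ADT/STLExtras.h) sorts trivially-copyable elements by delegating to qsort, so the sort routine is not re-instantiated and re-inlined at every call site the way std::sort is; that is where the code-size saving comes from. A minimal usage sketch (the element type and function name here are just examples):

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/ADT/SmallVector.h"

  void sortOffsets(llvm::SmallVectorImpl<unsigned> &Offsets) {
    // Same result as std::sort(Offsets.begin(), Offsets.end()) for POD
    // elements, but shares one qsort-backed implementation.
    llvm::array_pod_sort(Offsets.begin(), Offsets.end());
  }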
llvm-svn: 60334
don't want that :)
llvm-svn: 60333
This is a lot cheaper and conceptually simpler.
llvm-svn: 60332
llvm-svn: 60331
DeadInsts ivar, just use it directly.
llvm-svn: 60330
buggy rewrite, this notifies ScalarEvolution of a pending instruction
about to be removed and then erases it, instead of erasing it then
notifying.
llvm-svn: 60329
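A minimal sketch of the ordering fix r60329 describes. The invalidation call is shown with ScalarEvolution's present-day forgetValue hook, which may not be the exact spelling used at the time, so treat the names as illustrative:

  // Buggy order: erase first, then tell ScalarEvolution about a pointer
  // that is already gone.
  //   DeadInst->eraseFromParent();
  //   SE->forgetValue(DeadInst);

  // Fixed order: invalidate cached SCEV state while the instruction is
  // still valid, then erase it.
  SE->forgetValue(DeadInst);
  DeadInst->eraseFromParent();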
xor in testcase ("or" is a substring of "xor").
llvm-svn: 60328
new instructions it simplifies. Because we're threading jumps on edges
with constants coming in from PHIs, we are inherently exposing a lot more
constants to the new block. Folding them and deleting dead conditions
allows the cost model in jump threading to be more accurate as it iterates.
llvm-svn: 60327
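A sketch of the folding step r60327 describes: once PHI inputs along the threaded edge are constants, each newly created instruction is handed to the generic constant folder and deleted if it folds away. ConstantFoldInstruction is the helper in llvm/Analysis/ConstantFolding.h; the loop structure and names (NewBB, DL) are illustrative, not the actual jump-threading code:

  #include "llvm/Analysis/ConstantFolding.h"
  #include "llvm/ADT/STLExtras.h"

  // NewBB holds the instructions cloned while threading the jump.
  for (Instruction &I : llvm::make_early_inc_range(*NewBB)) {
    if (Constant *C = ConstantFoldInstruction(&I, DL)) {
      I.replaceAllUsesWith(C);  // e.g. a branch condition becomes a constant
      I.eraseFromParent();      // dead clones no longer skew the cost model
    }
  }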
prevents the passmgr from adding yet another domtree invocation
for the Verifier if there is already one live.
llvm-svn: 60326
instead of using FoldPHIArgBinOpIntoPHI. In addition to being more
obvious, this also fixes a problem where instcombine wouldn't merge two
phis that had different variable indices. This prevented instcombine
from factoring big chunks of code in 403.gcc. For example:
insn_cuid.exit:
- %tmp336 = load i32** @uid_cuid, align 4
- %tmp337 = getelementptr %struct.rtx_def* %insn_addr.0.ph.i, i32 0, i32 3
- %tmp338 = bitcast [1 x %struct.rtunion]* %tmp337 to i32*
- %tmp339 = load i32* %tmp338, align 4
- %tmp340 = getelementptr i32* %tmp336, i32 %tmp339
br label %bb62
bb61:
- %tmp341 = load i32** @uid_cuid, align 4
- %tmp342 = getelementptr %struct.rtx_def* %insn, i32 0, i32 3
- %tmp343 = bitcast [1 x %struct.rtunion]* %tmp342 to i32*
- %tmp344 = load i32* %tmp343, align 4
- %tmp345 = getelementptr i32* %tmp341, i32 %tmp344
br label %bb62
bb62:
- %iftmp.62.0.in = phi i32* [ %tmp345, %bb61 ], [ %tmp340, %insn_cuid.exit ]
+ %insn.pn2 = phi %struct.rtx_def* [ %insn, %bb61 ], [ %insn_addr.0.ph.i, %insn_cuid.exit ]
+ %tmp344.pn.in.in = getelementptr %struct.rtx_def* %insn.pn2, i32 0, i32 3
+ %tmp344.pn.in = bitcast [1 x %struct.rtunion]* %tmp344.pn.in.in to i32*
+ %tmp341.pn = load i32** @uid_cuid
+ %tmp344.pn = load i32* %tmp344.pn.in
+ %iftmp.62.0.in = getelementptr i32* %tmp341.pn, i32 %tmp344.pn
%iftmp.62.0 = load i32* %iftmp.62.0.in
llvm-svn: 60325