path: root/llvm
Commit message | Author | Age | Files | Lines
...
* New rlwimi implementation, which is superior to the old one. There are
  still a couple missed optimizations, but we now generate all the possible
  rlwimis for multiple inserts into the same bitfield. More regression tests
  to come. (Nate Begeman, 2006-05-07; 1 file, -86/+45)
  llvm-svn: 28156
* Use ComputeMaskedBits to determine # sign bits as a fallback. This allows us
  to handle all kinds of stuff, including silly things like:
  sextinreg(setcc,i16) -> setcc. (Chris Lattner, 2006-05-06; 1 file, -2/+23)
  llvm-svn: 28155
* Add some more sign propagation cases. (Chris Lattner, 2006-05-06; 1 file, -10/+77)
  llvm-svn: 28154
* Apply bug fix supplied by Greg Pettyjohn for a bug he found: '<invalid>' is
  not a legal path on Windows. (Jeff Cohen, 2006-05-06; 1 file, -1/+1)
  llvm-svn: 28153
* Simplify some code, add a couple minor missed folds. (Chris Lattner, 2006-05-06; 1 file, -21/+16)
  llvm-svn: 28152
* Constant fold sign_extend_inreg. (Chris Lattner, 2006-05-06; 1 file, -1/+9)
  llvm-svn: 28151
* Remove cases handled elsewhere. (Chris Lattner, 2006-05-06; 1 file, -16/+2)
  llvm-svn: 28150
* Add some more simple sign bit propagation cases. (Chris Lattner, 2006-05-06; 1 file, -27/+67)
  llvm-svn: 28149
* Fix some loose ends in MASM support. (Jeff Cohen, 2006-05-06; 3 files, -65/+77)
  llvm-svn: 28148
* New testcase we handle right now. (Chris Lattner, 2006-05-06; 1 file, -6/+20)
  llvm-svn: 28147
* Use the new TargetLowering::ComputeNumSignBits method to eliminate
  sign_extend_inreg operations. Though ComputeNumSignBits is still
  rudimentary, this is enough to compile this:

    short test(short X, short x) {
      int Y = X+x;
      return (Y >> 1);
    }
    short test2(short X, short x) {
      int Y = (short)(X+x);
      return Y >> 1;
    }

  into:

    _test:
        add r2, r3, r4
        srawi r3, r2, 1
        blr
    _test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r3, r2, 1
        blr

  instead of:

    _test:
        add r2, r3, r4
        srawi r2, r2, 1
        extsh r3, r2
        blr
    _test2:
        add r2, r3, r4
        extsh r2, r2
        srawi r2, r2, 1
        extsh r3, r2
        blr

  (Chris Lattner, 2006-05-06; 1 file, -5/+5)
  llvm-svn: 28146
* Add some really really simple code for computing sign-bit propagation. This
  will certainly be enhanced in the future. (Chris Lattner, 2006-05-06; 1 file, -0/+95)
  llvm-svn: 28145
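The idea behind this run of sign-bit commits can be illustrated with a toy model (a hypothetical Python sketch I am adding for illustration, not LLVM's actual ComputeNumSignBits code): count how many of a value's top bits are copies of its sign bit. Once an operand is known to carry more sign bits than an operation consumes, a sign_extend_inreg on it is redundant, which is exactly what enables the _test codegen improvement above.

```python
def num_sign_bits(value, width):
    """Count how many of the top bits of a `width`-bit value are copies of
    the sign bit (toy model of the ComputeNumSignBits idea)."""
    value &= (1 << width) - 1
    sign = (value >> (width - 1)) & 1
    n = 0
    for i in range(width - 1, -1, -1):
        if (value >> i) & 1 == sign:
            n += 1
        else:
            break
    return n

# A 16-bit value sign-extended into a 32-bit register has at least 17
# known sign bits, so an arithmetic shift right by 1 still leaves a
# correctly sign-extended result and a following sign_extend_inreg
# can be dropped.
print(num_sign_bits(0xFFFF8000, 32))  # 17
```

The real implementation works symbolically on SelectionDAG nodes (propagating counts through add, shift, setcc, etc.) rather than on concrete values, but the invariant it tracks is the same.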
* Add some new methods for computing sign bit information. (Chris Lattner, 2006-05-06; 1 file, -0/+13)
  llvm-svn: 28144
* When inserting casts, be careful of where we put them. We cannot insert
  a cast immediately before a PHI node. This fixes
  Regression/CodeGen/Generic/2006-05-06-GEP-Cast-Sink-Crash.ll
  (Chris Lattner, 2006-05-06; 1 file, -9/+12)
  llvm-svn: 28143
* New testcase. (Chris Lattner, 2006-05-06; 1 file, -0/+33)
  llvm-svn: 28142
* Move some code around. Make the "fold (and (cast A), (cast B)) ->
  (cast (and A, B))" transformation only apply when both casts really will
  cause code to be generated. If one or both doesn't, then this xform doesn't
  remove a cast. This fixes Transforms/InstCombine/2006-05-06-Infloop.ll
  (Chris Lattner, 2006-05-06; 1 file, -124/+140)
  llvm-svn: 28141
* New testcase from ghostscript that infinitely looped instcombine.
  (Chris Lattner, 2006-05-06; 1 file, -0/+522)
  llvm-svn: 28140
* Teach the X86 backend about non-i32 inline asm register classes.
  (Chris Lattner, 2006-05-06; 1 file, -5/+25)
  llvm-svn: 28139
* Fold (trunc (srl x, c)) -> (srl (trunc x), c). (Chris Lattner, 2006-05-06; 1 file, -0/+32)
  llvm-svn: 28138
* Fold trunc(any_ext). This gives stuff like:

    27,28c27
    < movzwl %di, %edi
    < movl %edi, %ebx
    ---
    > movw %di, %bx

  (Chris Lattner, 2006-05-05; 1 file, -1/+2)
  llvm-svn: 28137
* Shrink shifts when possible. (Chris Lattner, 2006-05-05; 1 file, -0/+12)
  llvm-svn: 28136
* Implement ComputeMaskedBits/SimplifyDemandedBits for ISD::TRUNCATE.
  (Chris Lattner, 2006-05-05; 1 file, -0/+18)
  llvm-svn: 28135
* Print a grouping around inline asm blocks so that we can tell when we are
  using them. (Chris Lattner, 2006-05-05; 1 file, -1/+2)
  llvm-svn: 28134
* Print *some* grouping around inline asm blocks so we know where they are.
  (Chris Lattner, 2006-05-05; 1 file, -1/+2)
  llvm-svn: 28133
* Indent multiline asm strings more nicely. (Chris Lattner, 2006-05-05; 1 file, -5/+9)
  llvm-svn: 28132
* Teach the code generator to use cvtss2sd as extload f32 -> f64.
  (Chris Lattner, 2006-05-05; 2 files, -5/+1)
  llvm-svn: 28131
* Fold (fpext (load x)) -> (extload x). (Chris Lattner, 2006-05-05; 1 file, -0/+14)
  llvm-svn: 28130
* More aggressively sink GEP offsets into loops. For example, before we
  generated:

      movl 8(%esp), %eax
      movl %eax, %edx
      addl $4316, %edx
      cmpb $1, %cl
      ja LBB1_2   #cond_false
  LBB1_1:         #cond_true
      movl L_QuantizationTables720$non_lazy_ptr, %ecx
      movl %ecx, (%edx)
      movl L_QNOtoQuantTableShift720$non_lazy_ptr, %edx
      movl %edx, 4460(%eax)
      ret
      ...

  Now we generate:

      movl 8(%esp), %eax
      cmpb $1, %cl
      ja LBB1_2   #cond_false
  LBB1_1:         #cond_true
      movl L_QuantizationTables720$non_lazy_ptr, %ecx
      movl %ecx, 4316(%eax)
      movl L_QNOtoQuantTableShift720$non_lazy_ptr, %ecx
      movl %ecx, 4460(%eax)
      ret
      ...

  which uses one fewer register. (Chris Lattner, 2006-05-05; 1 file, -56/+115)
  llvm-svn: 28129
* Fix an infinite loop compiling oggenc last night. (Chris Lattner, 2006-05-05; 1 file, -6/+9)
  llvm-svn: 28128
* Need extload patterns after Chris' DAG combiner changes. (Evan Cheng, 2006-05-05; 1 file, -1/+11)
  llvm-svn: 28127
* Implement InstCombine/cast.ll:test29. (Chris Lattner, 2006-05-05; 1 file, -0/+40)
  llvm-svn: 28126
* New testcase. (Chris Lattner, 2006-05-05; 1 file, -0/+8)
  llvm-svn: 28125
* Fold some common code. (Chris Lattner, 2006-05-05; 1 file, -14/+2)
  llvm-svn: 28124
* Implement:

    // fold (and (sext x), (sext y)) -> (sext (and x, y))
    // fold (or (sext x), (sext y)) -> (sext (or x, y))
    // fold (xor (sext x), (sext y)) -> (sext (xor x, y))
    // fold (and (aext x), (aext y)) -> (aext (and x, y))
    // fold (or (aext x), (aext y)) -> (aext (or x, y))
    // fold (xor (aext x), (aext y)) -> (aext (xor x, y))

  (Chris Lattner, 2006-05-05; 1 file, -5/+7)
  llvm-svn: 28123
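The sext folds above are sound because sign extension distributes over bitwise logic: the extended high bits of each operand are copies of its sign bit, and the and/or/xor of two sign bits equals the sign bit of the folded result. A quick exhaustive check of the i8 -> i16 case (an illustrative Python model I am adding, not the DAG combiner itself):

```python
def sext(value, from_bits, to_bits):
    """Sign-extend a from_bits-wide value to to_bits, as an unsigned
    bit pattern (illustrative model of ISD::SIGN_EXTEND)."""
    value &= (1 << from_bits) - 1
    if value >> (from_bits - 1):          # sign bit set
        value -= 1 << from_bits
    return value & ((1 << to_bits) - 1)

# Exhaustively verify the three sext folds for i8 -> i16.
for x in range(256):
    for y in range(256):
        assert (sext(x, 8, 16) & sext(y, 8, 16)) == sext(x & y, 8, 16)
        assert (sext(x, 8, 16) | sext(y, 8, 16)) == sext(x | y, 8, 16)
        assert (sext(x, 8, 16) ^ sext(y, 8, 16)) == sext(x ^ y, 8, 16)
```

The aext (any_extend) variants are even easier to justify: the extended bits are undefined on both sides, so the fold only has to preserve the low bits, which bitwise operations handle independently. Either way, two extends become one.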
* Pull and through and/or/xor. This compiles some bitfield code to:

    mov EAX, DWORD PTR [ESP + 4]
    mov ECX, DWORD PTR [EAX]
    mov EDX, ECX
    add EDX, EDX
    or EDX, ECX
    and EDX, -2147483648
    and ECX, 2147483647
    or EDX, ECX
    mov DWORD PTR [EAX], EDX
    ret

  instead of:

    sub ESP, 4
    mov DWORD PTR [ESP], ESI
    mov EAX, DWORD PTR [ESP + 8]
    mov ECX, DWORD PTR [EAX]
    mov EDX, ECX
    add EDX, EDX
    mov ESI, ECX
    and ESI, -2147483648
    and EDX, -2147483648
    or EDX, ESI
    and ECX, 2147483647
    or EDX, ECX
    mov DWORD PTR [EAX], EDX
    mov ESI, DWORD PTR [ESP]
    add ESP, 4
    ret

  (Chris Lattner, 2006-05-05; 1 file, -4/+6)
  llvm-svn: 28122
* Implement a variety of simplifications for ANY_EXTEND. (Chris Lattner, 2006-05-05; 1 file, -0/+51)
  llvm-svn: 28121
* Factor some code, add these transformations:

    // fold (and (trunc x), (trunc y)) -> (trunc (and x, y))
    // fold (or (trunc x), (trunc y)) -> (trunc (or x, y))
    // fold (xor (trunc x), (trunc y)) -> (trunc (xor x, y))

  (Chris Lattner, 2006-05-05; 1 file, -55/+66)
  llvm-svn: 28120
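The trunc folds are the mirror image of the sext folds: truncation just drops high bits, and and/or/xor operate on each bit independently, so truncating before or after the logic gives the same result and the rewrite replaces two truncates with one. An exhaustive check for 8-bit inputs truncated to 4 bits (an illustrative Python model I am adding, not LLVM code):

```python
def trunc(value, to_bits):
    """Keep only the low to_bits bits (illustrative model of ISD::TRUNCATE)."""
    return value & ((1 << to_bits) - 1)

# Exhaustively verify the three trunc folds over all 8-bit pairs.
for x in range(256):
    for y in range(256):
        assert trunc(x & y, 4) == (trunc(x, 4) & trunc(y, 4))
        assert trunc(x | y, 4) == (trunc(x, 4) | trunc(y, 4))
        assert trunc(x ^ y, 4) == (trunc(x, 4) ^ trunc(y, 4))
```

Note this bit-independence argument does not extend to shifts: a shift can move bits across the truncation boundary, which is why the earlier (trunc (srl x, c)) fold is a separate, more carefully guarded combine.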
* Better implementation of truncate. ISel matches it to a pseudo instruction
  that gets emitted as movl (for r32 to i16, i8) or a movw (for r16 to i8).
  And if the destination gets allocated a subregister of the source operand,
  then the instruction will not be emitted at all. (Evan Cheng, 2006-05-05; 6 files, -240/+162)
  llvm-svn: 28119
* New note, Nate, please check to see if I'm full of it :) (Chris Lattner, 2006-05-05; 1 file, -0/+33)
  llvm-svn: 28118
* Fix VC++ compilation error. (Jeff Cohen, 2006-05-05; 1 file, -1/+1)
  llvm-svn: 28117
* Somehow, I missed this part of the checkin a couple days ago. (Nate Begeman, 2006-05-05; 1 file, -3/+0)
  llvm-svn: 28116
* Sink noop copies into the basic block that uses them. This reduces the
  number of cross-block live ranges, and allows the bb-at-a-time selector to
  always coalesce these away at isel time. This reduces the load on the
  coalescer and register allocator. For example on a codec on X86, we went
  from:

    1643 asm-printer           - Number of machine instrs printed
     419 liveintervals         - Number of loads/stores folded into instructions
    1144 liveintervals         - Number of identity moves eliminated after coalescing
    1022 liveintervals         - Number of interval joins performed
     282 liveintervals         - Number of intervals after coalescing
    1304 liveintervals         - Number of original intervals
      86 regalloc              - Number of times we had to backtrack
    1.90232 regalloc           - Ratio of intervals processed over total intervals
      40 spiller               - Number of values reused
     182 spiller               - Number of loads added
     121 spiller               - Number of stores added
     132 spiller               - Number of register spills
       6 twoaddressinstruction - Number of instructions commuted to coalesce
     360 twoaddressinstruction - Number of two-address instructions

  to:

    1636 asm-printer           - Number of machine instrs printed
     403 liveintervals         - Number of loads/stores folded into instructions
    1155 liveintervals         - Number of identity moves eliminated after coalescing
    1033 liveintervals         - Number of interval joins performed
     279 liveintervals         - Number of intervals after coalescing
    1312 liveintervals         - Number of original intervals
      76 regalloc              - Number of times we had to backtrack
    1.88998 regalloc           - Ratio of intervals processed over total intervals
       1 spiller               - Number of copies elided
      41 spiller               - Number of values reused
     191 spiller               - Number of loads added
     114 spiller               - Number of stores added
     128 spiller               - Number of register spills
       4 twoaddressinstruction - Number of instructions commuted to coalesce
     356 twoaddressinstruction - Number of two-address instructions

  On this testcase, this change provides a modest reduction in spill code,
  regalloc iterations, and total instructions emitted. It increases the
  number of register coalesces. (Chris Lattner, 2006-05-05; 1 file, -4/+77)
  llvm-svn: 28115
* Add a helper method. (Chris Lattner, 2006-05-05; 1 file, -0/+6)
  llvm-svn: 28114
* Wrap long line. (Chris Lattner, 2006-05-04; 1 file, -1/+2)
  llvm-svn: 28113
* Adjust to use proper TargetData copy ctor. (Chris Lattner, 2006-05-04; 2 files, -3/+2)
  llvm-svn: 28112
* Fix this to be a proper copy ctor. (Chris Lattner, 2006-05-04; 1 file, -11/+11)
  llvm-svn: 28111
* Final pass of minor cleanups for MachineInstr. (Chris Lattner, 2006-05-04; 2 files, -9/+5)
  llvm-svn: 28110
* Initial support for register pressure aware scheduling. The register
  reduction scheduler can go into a "vertical mode" (i.e. traversing up the
  two-address chain, etc.) when the register pressure is low. This does seem
  to reduce the number of spills in the cases I've looked at. But with x86,
  it's no guarantee the performance of the code improves. It can be turned on
  with the -sched-vertically option. (Evan Cheng, 2006-05-04; 1 file, -50/+238)
  llvm-svn: 28108
* Remove redundancy and a level of indirection when creating machine operands.
  (Chris Lattner, 2006-05-04; 2 files, -89/+64)
  llvm-svn: 28107
* Move register numbers out of "extra" into "contents". Other minor cleanup.
  (Chris Lattner, 2006-05-04; 1 file, -34/+21)
  llvm-svn: 28106