path: root/llvm/lib/CodeGen
* Fix some significant problems with constant pools that resulted in unnecessary padding between constant pool entries, larger-than-necessary alignments (e.g. 8-byte alignment for .literal4 sections), and potentially other issues. (Evan Cheng, 2009-03-13; 8 files, -67/+69)
  Problems:
  1. The ConstantPoolSDNode alignment field is the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
  2. The MachineConstantPool alignment field is also a log2 value.
  3. However, some places create ConstantPoolSDNodes with plain alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
  4. Constant pool entry offsets are computed when the entries are created. However, the asm printer groups them by section, so the offsets are no longer valid; the asm printer nevertheless uses them to determine the size of the padding between entries.
  5. The asm printer uses an expensive multimap data structure to track constant pool entries by section.
  6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries, which is non-deterministic.
  Solutions:
  1. The ConstantPoolSDNode alignment field is changed to keep a non-log2 value.
  2. The MachineConstantPool alignment field is also changed to keep a non-log2 value.
  3. Functions that create ConstantPool nodes pass in non-log2 alignments.
  4. MachineConstantPoolEntry no longer keeps an offset field; it is replaced with an alignment field. Offsets are no longer computed when constant pool entries are created; they are computed on the fly in the asm printer and the JIT.
  5. The asm printer uses a cheaper data structure to group constant pool entries.
  6. The asm printer computes entry offsets after grouping is done.
  7. Change the JIT code to compute entry offsets on the fly.
  llvm-svn: 66875
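  Illustration (not part of the commit; the 16-byte figure and variable names are assumptions): a minimal C++ sketch of the log2-versus-byte-value confusion described above.
      #include <cstdio>

      int main() {
        unsigned byteAlign = 16; // a vector constant's required alignment, in bytes (assumed figure)
        unsigned log2Align = 4;  // the same requirement expressed as a log2 value
        // Passing a byte count where a log2 value is expected inflates the requirement:
        std::printf("%u\n", 1u << byteAlign); // 65536 -- artificially large
        std::printf("%u\n", 1u << log2Align); // 16    -- what was intended
      }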
* Convert VirtRegMap to a MachineFunctionPass. (Owen Anderson, 2009-03-13; 4 files, -28/+63)
  llvm-svn: 66870
* Oops... I committed too much. (Bill Wendling, 2009-03-13; 6 files, -46/+65)
  llvm-svn: 66867
* Temporarily XFAIL this test. (Bill Wendling, 2009-03-13; 6 files, -65/+46)
  llvm-svn: 66866
* Fix a typo in a comment. (Dan Gohman, 2009-03-12; 1 file, -1/+1)
  llvm-svn: 66843
* Reorganize some #include's. (Owen Anderson, 2009-03-12; 2 files, -5/+4)
  llvm-svn: 66780
* Move 3 "(add (select cc, 0, c), x) -> (select cc, x, (add, x, c))"Chris Lattner2009-03-121-76/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | related transformations out of target-specific dag combine into the ARM backend. These were added by Evan in r37685 with no testcases and only seems to help ARM (e.g. test/CodeGen/ARM/select_xform.ll). Add some simple X86-specific (for now) DAG combines that turn things like cond ? 8 : 0 -> (zext(cond) << 3). This happens frequently with the recently added cp constant select optimization, but is a very general xform. For example, we now compile the second example in const-select.ll to: _test: movsd LCPI2_0, %xmm0 ucomisd 8(%esp), %xmm0 seta %al movzbl %al, %eax movl 4(%esp), %ecx movsbl (%ecx,%eax,4), %eax ret instead of: _test: movl 4(%esp), %eax leal 4(%eax), %ecx movsd LCPI2_0, %xmm0 ucomisd 8(%esp), %xmm0 cmovbe %eax, %ecx movsbl (%ecx), %eax ret This passes multisource and dejagnu. llvm-svn: 66779
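  Illustration (not part of the commit; the function is hypothetical): the source-level pattern the new X86 combine targets, sketched in C++.
      // cond ? 8 : 0 can be computed branchlessly as zext(cond) << 3,
      // because 8 is a power of two and the other arm is zero.
      unsigned select8or0(bool cond) {
        // Source pattern:        cond ? 8u : 0u
        // Branchless equivalent: (unsigned)cond << 3
        return static_cast<unsigned>(cond) << 3;
      }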
* Enable Chris' value propagation change. (Evan Cheng, 2009-03-12; 1 file, -3/+1)
  It makes known sign, zero, and one bit information available for values that are live out of basic blocks. The goal is to eliminate unnecessary sext, zext, and truncate of values that are live-in to blocks. This does not handle PHI nodes yet.
  llvm-svn: 66777
* update (Gabor Greif, 2009-03-11; 1 file, -0/+1)
  llvm-svn: 66733
* Reorganization: Move the Spiller out of VirtRegMap.cpp into its own files. No (intended) functionality change. (Owen Anderson, 2009-03-11; 6 files, -1886/+1971)
  llvm-svn: 66720
* My last coalescer fix introduced a subtler one. (Evan Cheng, 2009-03-11; 1 file, -5/+11)
  It was aborting a commuting optimization too late and left the live intervals out of sync with the instructions. This fixes 8b10b.
  llvm-svn: 66715
* It makes no sense to have an ODR version of common linkage, so remove it. (Duncan Sands, 2009-03-11; 1 file, -2/+1)
  llvm-svn: 66690
* Add parentheses to pacify gcc-4.3. (Duncan Sands, 2009-03-11; 1 file, -1/+1)
  llvm-svn: 66653
* Reapply my previous patch (r66358) with a tweak to set the alignment of the generated constant pool entry to the desired alignment of a type. (Chris Lattner, 2009-03-11; 1 file, -2/+55)
  If we don't do this, we end up trying to do a movsd from 4-byte-aligned memory. This fixes 450.soplex and 456.hmmer.
  llvm-svn: 66641
* Put the assignment back at the top of this method. (Bill Wendling, 2009-03-11; 1 file, -2/+2)
  llvm-svn: 66611
* Two coalescer fixes in one. (Evan Cheng, 2009-03-11; 2 files, -8/+61)
  1. Use the same value# to represent unknown values being merged into sub-registers.
  2. When the coalescer commutes an instruction and the destination is a physical register, update its sub-registers by merging in the extended ranges.
  llvm-svn: 66610
* Make ivars private. Other cleanup. No functionality change. (Bill Wendling, 2009-03-10; 1 file, -59/+27)
  llvm-svn: 66607
* Just make the Dwarf timer group static inside of the getter function. No need to alloc/dealloc. (Bill Wendling, 2009-03-10; 1 file, -7/+5)
  llvm-svn: 66591
* Don't put static functions in anonymous namespace. (Bill Wendling, 2009-03-10; 1 file, -4/+0)
  llvm-svn: 66589
* These should *stop* the timer, not start it again. (Bill Wendling, 2009-03-10; 1 file, -2/+2)
  llvm-svn: 66586
* Fix misspelled method name. (Bill Wendling, 2009-03-10; 1 file, -11/+5)
  - Remove unused method.
  llvm-svn: 66585
* Create GetOrCreateSourceID from getOrCreateSourceID. (Bill Wendling, 2009-03-10; 1 file, -79/+82)
  - GetOrCreateSourceID is the untimed version of getOrCreateSourceID; getOrCreateSourceID calls GetOrCreateSourceID, of course.
  - Move some methods into the "private" section. Constify at least one method.
  - General clean-ups.
  llvm-svn: 66582
* Refine the Dwarf writer timers so that they measure exception writing and debug writing individually. (Bill Wendling, 2009-03-10; 1 file, -143/+169)
  llvm-svn: 66577
* Revert 66358 for now. It's breaking povray, 450.soplex, and 456.hmmer on x86 / Darwin. (Evan Cheng, 2009-03-10; 1 file, -53/+2)
  llvm-svn: 66574
* Add a timer to the DwarfWriter pass that measures the total time it takes to emit exception and debug Dwarf info. (Bill Wendling, 2009-03-10; 1 file, -8/+110)
  llvm-svn: 66571
* Fix a post-RA scheduling liveness bug. (Dan Gohman, 2009-03-10; 1 file, -9/+22)
  When a basic block is scheduled in multiple regions, liveness data used by the anti-dependence breaker is carried from one region to the next; however, the information reflects the state of the instructions before scheduling. After scheduling, there may be new live-range overlaps. Handle this by pessimizing the liveness data carried between regions to the point where it will be conservatively correct no matter how the earlier region is scheduled. This fixes a miscompilation in 176.gcc with the post-RA scheduler enabled.
  llvm-svn: 66558
* Wire up support for emitting "special" values from inline asm format strings with the standard ${:foo} syntax. (Chris Lattner, 2009-03-10; 1 file, -1/+20)
  llvm-svn: 66527
* Fix PR3763 by using proper APInt methods instead of uint64_t's. (Chris Lattner, 2009-03-09; 1 file, -3/+4)
  llvm-svn: 66434
* Yet another case where the spiller marked two uses of the same register on the same instruction as kill. This fixes PR3706. (Evan Cheng, 2009-03-09; 1 file, -19/+10)
  llvm-svn: 66428
* Just remove the use_empty() check entirely; the only reason it existed was for llvm-gcc 3.4 (which used the __main hack), which is really, really long dead. (Chris Lattner, 2009-03-09; 1 file, -14/+8)
  llvm-svn: 66417
* Make the code generator rip off dead constant expr uses before deciding whether a global is dead or not. (Chris Lattner, 2009-03-09; 1 file, -10/+16)
  This should fix PR3749 - linker adds spurious use to appending globals. I can't reasonably add a testcase for this, because the bc writer/reader strip dead constant users.
  llvm-svn: 66404
* Pass in a std::string when getting the names of debugging things. (Bill Wendling, 2009-03-09; 5 files, -35/+60)
  This cuts down on the number of times a std::string is created and copied.
  llvm-svn: 66396
* If a MI uses the same register more than once, only mark one of them as 'kill'. (Evan Cheng, 2009-03-08; 1 file, -6/+22)
  llvm-svn: 66363
* Implement an optimization to codegen c ? 1.0 : 2.0 as load { 2.0, 1.0 } + c*4. (Chris Lattner, 2009-03-08; 1 file, -2/+53)
  For 2009-03-07-FPConstSelect.ll we now produce:
    _f:
            xorl %eax, %eax
            testl %edi, %edi
            movl $4, %ecx
            cmovne %rax, %rcx
            leaq LCPI1_0(%rip), %rax
            movss (%rcx,%rax), %xmm0
            ret
  previously we produced:
    _f:
            subl $4, %esp
            cmpl $0, 8(%esp)
            movss LCPI1_0, %xmm0
            je LBB1_2   ## entry
    LBB1_1: ## entry
            movss LCPI1_1, %xmm0
    LBB1_2: ## entry
            movss %xmm0, (%esp)
            flds (%esp)
            addl $4, %esp
            ret
  On PPC the code also improves to:
    _f:
            cntlzw r2, r3
            srwi r2, r2, 5
            li r3, lo16(LCPI1_0)
            slwi r2, r2, 2
            addis r3, r3, ha16(LCPI1_0)
            lfsx f1, r3, r2
            blr
  from:
    _f:
            li r2, lo16(LCPI1_1)
            cmplwi cr0, r3, 0
            addis r2, r2, ha16(LCPI1_1)
            beq cr0, LBB1_2 ; entry
    LBB1_1: ; entry
            li r2, lo16(LCPI1_0)
            addis r2, r2, ha16(LCPI1_0)
    LBB1_2: ; entry
            lfs f1, 0(r2)
            blr
  This also improves the existing pic-cpool case from:
    foo:
            subl $12, %esp
            call .Lllvm$1.$piclabel
    .Lllvm$1.$piclabel:
            popl %eax
            addl $_GLOBAL_OFFSET_TABLE_ + [.-.Lllvm$1.$piclabel], %eax
            cmpl $0, 16(%esp)
            movsd .LCPI1_0@GOTOFF(%eax), %xmm0
            je .LBB1_2  # entry
    .LBB1_1:    # entry
            movsd .LCPI1_1@GOTOFF(%eax), %xmm0
    .LBB1_2:    # entry
            movsd %xmm0, (%esp)
            fldl (%esp)
            addl $12, %esp
            ret
  to:
    foo:
            call .Lllvm$1.$piclabel
    .Lllvm$1.$piclabel:
            popl %eax
            addl $_GLOBAL_OFFSET_TABLE_ + [.-.Lllvm$1.$piclabel], %eax
            xorl %ecx, %ecx
            cmpl $0, 4(%esp)
            movl $8, %edx
            cmovne %ecx, %edx
            fldl .LCPI1_0@GOTOFF(%eax,%edx)
            ret
  This triggers a few dozen times in spec FP 2000.
  llvm-svn: 66358
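  Illustration (not part of the commit; both functions are hypothetical): the source-level pattern and the conceptual shape of the lowering, sketched in C++.
      // Before: select between two FP immediates (a branch or cmov in the older codegen).
      float selectConst(int c) { return c ? 1.0f : 2.0f; }

      // After, conceptually: a single indexed load from a two-entry constant table.
      float selectConstLowered(int c) {
        static const float table[2] = {2.0f, 1.0f}; // index 0 is the "c == 0" arm
        return table[c != 0];
      }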
* Random cleanups. (Chris Lattner, 2009-03-08; 1 file, -4/+3)
  llvm-svn: 66357
* Introduce new linkage types linkonce_odr, weak_odr, common_odr and extern_weak_odr. (Duncan Sands, 2009-03-07; 6 files, -17/+25)
  These are the same as the non-odr versions, except that they indicate that the global will only be overridden by an *equivalent* global. In C, a function with weak linkage can be overridden by a function which behaves completely differently. This means that IP passes have to skip weak functions, since any deductions made from the function definition might be wrong, since the definition could be replaced by something completely different at link time. This is not allowed in C++, thanks to the ODR (One-Definition Rule): if a function is replaced by another at link time, then the new function must be the same as the original function. If a language knows that a function or other global can only be overridden by an equivalent global, it can give it the weak_odr linkage type, and the optimizers will understand that it is alright to make deductions based on the function body. The code generators, on the other hand, map weak and weak_odr linkage to the same thing.
  llvm-svn: 66339
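  Illustration (not part of the commit; names are hypothetical): a C++ sketch of why the ODR variants help optimizers.
      // Defined in a header and compiled into many translation units. C++'s
      // One-Definition Rule guarantees every definition is equivalent, so the
      // compiler can give it an *_odr linkage and still reason from its body.
      inline int twice(int x) { return 2 * x; }

      int caller(int v) {
        return twice(v) + 1; // safe to fold to 2 * v + 1 under ODR linkage
      }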
* Fix ScheduleDAGRRList::CopyAndMoveSuccessors' handling of nodes with multiple chain operands. (Dan Gohman, 2009-03-06; 1 file, -7/+7)
  This can occur when the scheduler has added chain operands to a node that already has a chain operand, in order to handle physical register dependencies. This fixes an llvm-gcc bootstrap failure on x86-64 introduced in r66058.
  llvm-svn: 66240
* When we split a basic block, there's a default branch to the newly created BB. (Bill Wendling, 2009-03-06; 1 file, -0/+3)
  Delete this default branch, because we're going to generate our own.
  llvm-svn: 66234
* (Hopefully) silence a warning. (Owen Anderson, 2009-03-05; 1 file, -1/+1)
  llvm-svn: 66158
* Be more careful about choosing restore points when doing restore folding. This fixes some subtle miscompilations. (Owen Anderson, 2009-03-05; 1 file, -5/+28)
  llvm-svn: 66147
* Fix how live-in live intervals are handled. (Evan Cheng, 2009-03-05; 1 file, -4/+9)
  Previously such an interval could end at the MBB start. Sorry, no small test case possible.
  llvm-svn: 66129
* Fix BuildVectorSDNode::isConstantSplat to handle one-element vectors. (Bob Wilson, 2009-03-04; 1 file, -2/+2)
  It is an error to call APInt::zext with a size that is equal to the value's current size, so use zextOrTrunc instead.
  llvm-svn: 66039
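  Illustration (not part of the commit; the helper is hypothetical, and the API constraint is as described in the message above): a minimal sketch of the fix pattern using LLVM's APInt.
      #include "llvm/ADT/APInt.h"

      // Widen V to Bits without tripping zext's requirement (in the APInt of
      // that era) that the target width be strictly greater than the current one.
      llvm::APInt widenSplat(const llvm::APInt &V, unsigned Bits) {
        return V.zextOrTrunc(Bits); // a no-op when Bits == V.getBitWidth()
      }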
* Add a restore folder, which shaves a dozen or so machineinstrs off oggenc. Update a testcase to check this. (Owen Anderson, 2009-03-04; 1 file, -6/+75)
  llvm-svn: 66029
* PR3686: make the legalizer handle bitcast from i80 to x86 long double. (Eli Friedman, 2009-03-04; 2 files, -0/+8)
  llvm-svn: 66021
* Fix PR3701. (Evan Cheng, 2009-03-04; 1 file, -25/+54)
  1. The X86 target renamed the eflags register to flags. This matches what llvm-gcc generates, so codegen knows the flags register is being clobbered by inline asm.
  2. The BURR scheduler should also check whether inline asm nodes can clobber "live" physical registers. Previously it was only checking target nodes with implicit defs.
  llvm-svn: 65996
* The DAG combiner was performing a BT combine. (Bill Wendling, 2009-03-04; 1 file, -11/+22)
  The BT combine had a value of -1, so it changed it into a 31 via the TLO.ShrinkDemandedConstant() call. Then it would go through the DAG combiner again. This time it had a value of 31, which was turned into a -1 by TLI.SimplifyDemandedBits(). This would ping-pong forever.
  Teach the TLO.ShrinkDemandedConstant() call not to lower a value if the demanded value is an XOR of all ones.
  llvm-svn: 65985
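  Illustration (not part of the commit; the function is hypothetical): why -1 and 31 were interchangeable here. For a 32-bit bit test only the low five bits of the index are demanded, so under that demanded-bits mask the two constants describe the same operation, and each simplifier kept rewriting toward its own preferred form.
      #include <cstdint>

      // Only the low 5 bits of n matter for a 32-bit bit test, so masking n
      // with 31 or with -1 (all ones) yields the same observable result.
      bool bitTest(uint32_t v, uint32_t n) {
        return (v >> (n & 31u)) & 1u;
      }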
* Generalize BuildVectorSDNode::isConstantSplat to use APInts and handle arbitrary vector sizes. (Bob Wilson, 2009-03-02; 1 file, -78/+49)
  Add an optional MinSplatBits parameter to specify a minimum for the splat element size. Update the PPC target to use the revised interface.
  llvm-svn: 65899
* Fix a problem with DAGCombine on 64b targets where folding extracts + build_vector into a shuffle would fail, because the type of the new build_vector would not be legal. (Nate Begeman, 2009-03-01; 1 file, -1/+2)
  Try harder to create a legal build_vector type. Note: this will be totally irrelevant once vector_shuffle no longer takes a build_vector for the shuffle mask.
  New:
    _foo:
            xorps %xmm0, %xmm0
            xorps %xmm1, %xmm1
            subps %xmm1, %xmm1
            mulps %xmm0, %xmm1
            addps %xmm0, %xmm1
            movaps %xmm1, 0
  Old:
    _foo:
            xorps %xmm0, %xmm0
            movss %xmm0, %xmm1
            xorps %xmm2, %xmm2
            unpcklps %xmm1, %xmm2
            pshufd $80, %xmm1, %xmm1
            unpcklps %xmm1, %xmm2
            pslldq $16, %xmm2
            pshufd $57, %xmm2, %xmm1
            subps %xmm0, %xmm1
            mulps %xmm0, %xmm1
            addps %xmm0, %xmm1
            movaps %xmm1, 0
  llvm-svn: 65791
* Minor optimization: (Evan Cheng, 2009-03-01; 1 file, -29/+237)
  Look for situations like this:
    %reg1024<def> = MOV r1
    %reg1025<def> = MOV r0
    %reg1026<def> = ADD %reg1024, %reg1025
    r0            = MOV %reg1026
  Commute the ADD to hopefully eliminate an otherwise unavoidable copy.
  llvm-svn: 65752
* Combine PPC's GetConstantBuildVectorBits and isConstantSplat functions into a new method in a BuildVectorSDNode "pseudo-class". (Bob Wilson, 2009-03-01; 1 file, -0/+91)
  llvm-svn: 65747