path: root/llvm/lib/CodeGen
Commit message (Author, Date, Files, Lines changed)
...
* ARM isel bug fix for adds/subs operands. (Andrew Trick, 2011-09-20, 2 files, -8/+2)
  Modified ARMISelLowering::AdjustInstrPostInstrSelection to handle the full gamut of CPSR defs/uses, including instructions whose "optional" cc_out operand is not really optional. This allowed removal of the hasPostISelHook to simplify the .td files and make the implementation more robust. Fixes rdar://10137436: sqlite3 miscompile. llvm-svn: 140134
* whitespace (Andrew Trick, 2011-09-20, 2 files, -30/+30)
  llvm-svn: 140133
* white space cleanups (Nadav Rotem, 2011-09-18, 1 file, -5/+4)
  llvm-svn: 139994
* Namespacify. (Benjamin Kramer, 2011-09-16, 1 file, -0/+2)
  llvm-svn: 139892
* Spill mode: Hoist back-copies locally. (Jakob Stoklund Olesen, 2011-09-16, 1 file, -6/+17)
  The leaveIntvAfter() function normally inserts a back-copy after the requested instruction, making the back-copy kill the live range. In spill mode, try to insert the back-copy before the last use instead. That means the last use becomes the kill instead of the back-copy. This lowers the register pressure because the last use can now redefine the same register it was reading. This will also improve compile time: the back-copy isn't a kill, so hoisting it in hoistCopiesForSize() won't force a recomputation of the source live range. Similarly, if the back-copy isn't hoisted by the splitter, the spiller will not attempt hoisting it locally. llvm-svn: 139883
* Disable local spill hoisting for non-killing copies. (Jakob Stoklund Olesen, 2011-09-16, 1 file, -5/+15)
  If the source register is live after the copy being spilled, there is no point to hoisting it. Hoisting inside a basic block only serves to resolve interferences by shortening the live range of the source. llvm-svn: 139882
* Some legalization fixes for atomic load and store. (Eli Friedman, 2011-09-15, 3 files, -1/+29)
  llvm-svn: 139851
* Add an option to disable spill hoisting. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -1/+5)
  When -split-spill-mode is enabled, spill hoisting is performed by SplitKit instead of by InlineSpiller. This hidden command line option is for testing the splitter spill mode. llvm-svn: 139845
* VirtRegMap is counting spill slots, not register spills. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -3/+3)
  Fix the stats counters to reflect that. llvm-svn: 139819
* Count correctly when a COPY turns into a spill or reload. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -1/+7)
  The number of spills could go negative since a folded COPY is just a spill, and it may be eliminated. llvm-svn: 139815
* Count inserted spills and reloads more accurately. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -14/+22)
  Adjust counters when removing spill and reload instructions. We still don't account for reloads being removed by eliminateDeadDefs(). llvm-svn: 139806
* Trace through sibling PHIs in bulk. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -23/+50)
  When traceSiblingValue() encounters a PHI-def value created by live range splitting, don't look at all the predecessor blocks. That can be very expensive in a complicated CFG. Instead, consider that all the non-PHI defs jointly dominate all the PHI-defs. Tracing directly to all the non-PHI defs is much faster than zipping around in the CFG when there are many PHIs with many predecessors. This significantly improves compile time for indirectbr interpreters. llvm-svn: 139797
* Speed up LiveIntervals::shrinkToUses with some caching. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -6/+8)
  Blocks with multiple PHI successors only need to go on the worklist once. Use a SmallPtrSet to track the live-out blocks that have already been handled. This is a lot faster than the two live range checks we would otherwise do. Also stop recomputing hasPHIKill flags. Like RenumberValues(), it is conservatively correct to leave them in, and they are not used for anything important. llvm-svn: 139792
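  As a generic illustration of the worklist deduplication described in the entry above (a minimal sketch, not the LLVM code; the real change uses llvm::SmallPtrSet, and the Block type and all names here are hypothetical), the pattern is simply to gate worklist pushes on a visited set:

  ```cpp
  #include <unordered_set>
  #include <vector>

  struct Block { int id; }; // hypothetical stand-in for a machine basic block

  // Push each live-out block onto the worklist at most once, even when it is
  // reached as the predecessor of several PHI successors.
  void collectLiveOutBlocks(const std::vector<Block *> &Candidates,
                            std::vector<Block *> &Worklist) {
    std::unordered_set<const Block *> Seen;
    for (Block *B : Candidates)
      if (Seen.insert(B).second) // insert() reports whether B was newly added
        Worklist.push_back(B);
  }
  ```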
* Revert r139782, "RemoveCopyByCommutingDef doesn't need hasPHIKill()." (Jakob Stoklund Olesen, 2011-09-15, 1 file, -8/+8)
  It does, after all. RemoveCopyByCommutingDef rewrites the uses of one particular value number in A. It doesn't know how to rewrite phi uses, so there can't be any. llvm-svn: 139787
* Stop verifying hasPHIKill() flags. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -11/+1)
  There is only one legitimate use remaining, in addIntervalsForSpills(). All other calls to hasPHIKill() are only used to update PHIKill flags. The addIntervalsForSpills() function is part of the old spilling framework, only used by linearscan. llvm-svn: 139783
* RemoveCopyByCommutingDef doesn't need hasPHIKill(). (Jakob Stoklund Olesen, 2011-09-15, 1 file, -8/+8)
  Instead, let HasOtherReachingDefs() test for defs in B that overlap any phi-defs in A as well. This test is slightly different, but almost identical. A perfectly precise test would only check those phi-defs in A that are reachable from AValNo. llvm-svn: 139782
* It is safe to remat a value killed by phis. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -3/+1)
  The source live range is recomputed using shrinkToUses(), which does handle phis correctly. The hasPHIKill() condition was relevant in the old days when ReMaterializeTrivialDef() tried to recompute the live range itself. The shrinkToUses() function will mark the original def as dead when no more uses and phi kills remain. It is then removed by runOnMachineFunction(). llvm-svn: 139781
* Leave hasPHIKill flags alone in LiveInterval::RenumberValues. (Jakob Stoklund Olesen, 2011-09-15, 1 file, -21/+0)
  It is conservatively correct to keep the hasPHIKill flags, even after deleting PHI-defs. The calculation can be very expensive after taildup has created a quadratic number of indirectbr edges in the CFG, and the hasPHIKill flag isn't used for anything after RenumberValues(). llvm-svn: 139780
* [regcoalescing] bug fix for RegistersDefinedFromSameValue. (Andrew Trick, 2011-09-15, 1 file, -2/+5)
  An improper SlotIndex->VNInfo lookup was leading to unsafe copy removal. Fixes PR10920: 401.bzip2 miscompile with no IV rewrite. llvm-svn: 139765
* Add support to emit debug info for C++0x nullptr type. (Devang Patel, 2011-09-14, 1 file, -4/+11)
  llvm-svn: 139751
* Ignore the cloning of unknown registers. (Jakob Stoklund Olesen, 2011-09-14, 1 file, -0/+4)
  The LRE_DidCloneVirtReg callback may be called with virtual registers that RAGreedy doesn't even know about yet. In that case, there are no data structures to update. llvm-svn: 139702
* Hoist back-copies to the least busy dominator. (Jakob Stoklund Olesen, 2011-09-14, 2 files, -2/+66)
  When a back-copy is hoisted to the nearest common dominator, keep looking up the dominator tree for a less loopy dominator, and place the back-copy there instead. Don't do this when a single existing back-copy dominates all the others. Assume the client knows what he is doing, and keep the dominating back-copy. This prevents us from hoisting back-copies into loops in most cases. If a value is defined in a loop with multiple exits, we may still hoist back-copies into that loop. That is the speed/size tradeoff. llvm-svn: 139698
* Add integer promotion support for vselect (Nadav Rotem, 2011-09-14, 2 files, -0/+10)
  llvm-svn: 139692
* Distinguish complex mapped values from forced recomputation. (Jakob Stoklund Olesen, 2011-09-13, 2 files, -53/+40)
  When a ParentVNI maps to multiple defs in a new interval, its live range may still be derived directly from RegAssign by transferValues(). On the other hand, when instructions have been rematerialized or hoisted, it may be necessary to completely recompute live ranges using LiveRangeCalc::extend() to all uses. Use a bit in the value map to indicate that a live range must be recomputed. Rename markComplexMapped() to forceRecompute(). This fixes some live range verification errors when -split-spill-mode=size hoists back-copies by recomputing source ranges when RegAssign kills can't be moved. llvm-svn: 139660
* Implement -split-spill-mode=size. (Jakob Stoklund Olesen, 2011-09-13, 2 files, -0/+164)
  Whenever the complement interval is defined by multiple copies of the same value, hoist those back-copies to the nearest common dominator. This ensures that at most one copy is inserted per value in the complement interval, and no phi-defs are needed. llvm-svn: 139651
* Fix check for unaligned load/store so it doesn't catch over-aligned load/store. (Eli Friedman, 2011-09-13, 1 file, -2/+2)
  llvm-svn: 139649
* Error out on CodeGen of unaligned load/store. Fix test so it isn't accidentally testing that case. (Eli Friedman, 2011-09-13, 1 file, -2/+9)
  llvm-svn: 139641
* Fix the assertion which checks the size of the input operand. (Nadav Rotem, 2011-09-13, 1 file, -1/+1)
  llvm-svn: 139633
* Add vselect target support for targets that do not support blend but do support xor/and/or (for example SSE2). (Nadav Rotem, 2011-09-13, 2 files, -2/+45)
  llvm-svn: 139623
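  As background for that lowering (a sketch of the general identity only, not the actual DAG legalization code; the function names are hypothetical): when each mask lane is all-ones or all-zeros, a vector select can be expressed with only and/or/xor-style operations. A scalar illustration:

  ```cpp
  #include <cstdint>

  // Bit-select: take bits of A where Mask is 1 and bits of B where Mask is 0.
  // Applied lane-wise with an all-ones/all-zeros mask, the same identity
  // implements a vector select without a blend instruction.
  uint32_t bitSelect(uint32_t Mask, uint32_t A, uint32_t B) {
    return (A & Mask) | (B & ~Mask);
  }

  // Equivalent form that trades the NOT for an extra pair of XORs.
  uint32_t bitSelectXor(uint32_t Mask, uint32_t A, uint32_t B) {
    return B ^ ((A ^ B) & Mask);
  }
  ```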
* Use a cache to maintain a list of machine basic blocks for a given UserValue. (Devang Patel, 2011-09-13, 1 file, -10/+33)
  llvm-svn: 139616
* Add SplitEditor::markOverlappedComplement(). (Jakob Stoklund Olesen, 2011-09-13, 2 files, -2/+28)
  This function is used to flag values where the complement interval may overlap other intervals. Call it from overlapIntv, and use the flag to fully recompute those live ranges in transferValues(). llvm-svn: 139612
* Eliminate the extendRange() wrapper. (Jakob Stoklund Olesen, 2011-09-13, 2 files, -20/+15)
  llvm-svn: 139608
* Switch extendInBlock() to take a kill slot instead of the last use slot. (Jakob Stoklund Olesen, 2011-09-13, 4 files, -16/+13)
  Three out of four clients prefer this interface, which is consistent with extendIntervalEndTo() and LiveRangeCalc::extend(). llvm-svn: 139604
* Use a separate LiveRangeCalc for the complement in spill modes. (Jakob Stoklund Olesen, 2011-09-13, 2 files, -11/+30)
  The complement interval may overlap the other intervals created, so use a separate LiveRangeCalc instance to compute its live range. A LiveRangeCalc instance can only be shared among non-overlapping intervals. llvm-svn: 139603
* Unbreak msvc. (NAKAMURA Takumi, 2011-09-13, 2 files, -2/+2)
  llvm-svn: 139581
* Extract live range calculations from SplitKit. (Jakob Stoklund Olesen, 2011-09-13, 5 files, -306/+516)
  SplitKit will soon need two copies of these data structures, and the algorithms will also be useful when LiveIntervalAnalysis becomes independent of LiveVariables. llvm-svn: 139572
* Introduce a bit of a hack. (Bill Wendling, 2011-09-12, 1 file, -15/+44)
  Splitting a landing pad takes considerable care because of PHIs and other nasties. The problem is that the jump table needs to jump to the landing pad block. However, the landing pad block can be jumped to only by an invoke instruction. So we clone the landingpad instruction into its own basic block and have the invoke jump to there. The landingpad instruction's basic block's successor is now the target for the jump table. But because of PHI nodes, we need to create another basic block for the jump table to jump to. This is definitely a hack, because the values for the PHI nodes may not be defined on the edge from the jump table. But that's okay, because the jump table is simply a construct to mimic what is happening in the CFG. So the values are mysteriously there, even though there is no value for the PHI from the jump table's edge (hence calling this a hack). llvm-svn: 139545
* Remove the -compact-regions flag. (Jakob Stoklund Olesen, 2011-09-12, 1 file, -11/+5)
  It has been enabled by default for a while; it was only there to allow performance comparisons. llvm-svn: 139501
* Add an interface for SplitKit complement spill modes. (Jakob Stoklund Olesen, 2011-09-12, 3 files, -5/+49)
  SplitKit always computes a complement live range to cover the places where the original live range was live, but no explicit region has been allocated. Currently, the complement live range is created to be as small as possible - it never overlaps any of the regions. This minimizes register pressure, but if the complement is going to be spilled anyway, that is not very important. The spiller will eliminate redundant spills, and hoist others by making the spill slot live range overlap some of the regions created by splitting. Stack slots are cheap. This patch adds the interface to enable spill modes in SplitKit. In spill mode, SplitKit will assume that the complement is going to spill, so it will allow it to overlap regions in order to avoid back-copies. By doing some of the spiller's work early, the complement live range becomes simpler. In some cases, it can become much simpler because no extra PHI-defs are required. This will speed up both splitting and spilling. This is only the interface to enable spill modes, no implementation yet. llvm-svn: 139500
* Update comments to reflect some (not so) recent changes. (Jakob Stoklund Olesen, 2011-09-12, 1 file, -4/+5)
  llvm-svn: 139498
* Fix asserts in CodeGen from: assert("error"); to: assert(0 && "error"); (Richard Trieu, 2011-09-10, 2 files, -3/+3)
  llvm-svn: 139449
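  A brief aside on why this fix matters (a minimal sketch, not the patched LLVM code; the function name and message are hypothetical): a string literal decays to a non-null pointer, so the original form is always true and can never fire, while ANDing the message with 0 restores the check and keeps the message in the failure output.

  ```cpp
  #include <cassert>

  void unreachableCase() {
    // Broken form: the string literal converts to a non-null pointer, so the
    // condition is always true and this assert can never trigger.
    // assert("unexpected case reached");

    // Fixed form: the condition is now false, so reaching this point always
    // aborts, and the string still appears in the assertion message.
    assert(0 && "unexpected case reached");
  }
  ```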
* tidy up a bit (Chris Lattner, 2011-09-09, 1 file, -7/+5)
  llvm-svn: 139419
* Make the SelectionDAG verify that all the operands of BUILD_VECTOR have the same type. (Eli Friedman, 2011-09-09, 2 files, -24/+36)
  Teach DAGCombiner::visitINSERT_VECTOR_ELT not to make invalid BUILD_VECTORs. Fixes PR10897. llvm-svn: 139407
* Reapply r139247: Cache intermediate results during traceSiblingValue. (Jakob Stoklund Olesen, 2011-09-09, 1 file, -82/+239)
  In some cases such as interpreters using indirectbr, the CFG can be very complicated, and live range splitting may be forced to insert a large number of phi-defs. When that happens, traceSiblingValue can spend a lot of time zipping around in the CFG looking for defs and reloads. This patch causes more information to be cached in SibValues, and the cached values are used to terminate searches early. This speeds up spilling by 20x in one interpreter test case. For more typical code, this is just a 10% speedup of spilling. The previous version had bugs that caused miscompilations. They have been fixed. llvm-svn: 139378
* Directly point debug info to the stack slot of the argument, instead of trying to keep track of the vreg in which the argument is copied. (Devang Patel, 2011-09-08, 3 files, -28/+25)
  LiveDebugVariables can keep track of the variable's ranges. llvm-svn: 139330
* Revert r139247 "Cache intermediate results during traceSiblingValue." (Jakob Stoklund Olesen, 2011-09-07, 1 file, -221/+82)
  It broke the self host and clang-x86_64-darwin10-RA. llvm-svn: 139259
* Cache intermediate results during traceSiblingValue. (Jakob Stoklund Olesen, 2011-09-07, 1 file, -82/+221)
  In some cases such as interpreters using indirectbr, the CFG can be very complicated, and live range splitting may be forced to insert a large number of phi-defs. When that happens, traceSiblingValue can spend a lot of time zipping around in the CFG looking for defs and reloads. This patch causes more information to be cached in SibValues, and the cached values are used to terminate searches early. This speeds up spilling by 20x in one interpreter test case. For more typical code, this is just a 10% speedup of spilling. llvm-svn: 139247
* Refactor instprinter and mcdisassembler to take a SubtargetInfo. Add -mattr= handling to llvm-mc. (James Molloy, 2011-09-07, 1 file, -2/+2)
  Reviewed by Owen Anderson. llvm-svn: 139237
* Relax the MemOperands on atomics a bit. (Eli Friedman, 2011-09-07, 1 file, -2/+17)
  Fixes -verify-machineinstrs failures for atomic load/store on ARM. (The fix for the related failures on x86 is going to be nastier because we actually need Acquire memoperands attached to the atomic load instrs, etc.) llvm-svn: 139221
* While sinking machine instructions, also sink matching DBG_VALUEs; otherwise the live debug variable pass will drop DBG_VALUEs on the floor. (Devang Patel, 2011-09-07, 1 file, -0/+31)
  llvm-svn: 139208