path: root/llvm/lib/CodeGen/LiveIntervalAnalysis.cpp
Commit log: message (Author, Date; files changed, lines -/+; llvm-svn revision)
...
* Fix for PR1831: if all defs of an interval are re-materializable, then it's a preferred spill candidate. (Evan Cheng, 2007-12-06; 1 file, -4/+34; llvm-svn: 44644)
* MachineInstr can change. Store indexes instead. (Evan Cheng, 2007-12-05; 1 file, -2/+12; llvm-svn: 44612)
* If a split live interval is spilled again, remove the kill marker on its last use. (Evan Cheng, 2007-12-05; 1 file, -1/+4; llvm-svn: 44611)
* Clobber more bugs. (Evan Cheng, 2007-12-05; 1 file, -2/+3; llvm-svn: 44610)
* Fix kill info for split intervals. (Evan Cheng, 2007-12-05; 1 file, -10/+20; llvm-svn: 44609)
* - Mark last use of a split interval as kill instead of letting spiller track it. This allows an important optimization to be re-enabled.
  - If all uses / defs of a split interval can be folded, give the interval a low spill weight so it would not be picked in case spilling is needed (avoid pushing other intervals in the same BB to be spilled).
  (Evan Cheng, 2007-12-05; 1 file, -26/+73; llvm-svn: 44601)
* Discard split intervals made empty due to folding. (Evan Cheng, 2007-12-04; 1 file, -5/+16; llvm-svn: 44565)
* Typo. (Evan Cheng, 2007-12-03; 1 file, -1/+1; llvm-svn: 44532)
* Update kill info for uses of split intervals. (Evan Cheng, 2007-12-03; 1 file, -3/+2; llvm-svn: 44531)
* Remove redundant foldMemoryOperand variants and other code cleanup. (Evan Cheng, 2007-12-02; 1 file, -70/+72; llvm-svn: 44517)
* Fix a bug where splitting caused some unnecessary spilling. (Evan Cheng, 2007-12-01; 1 file, -2/+12; llvm-svn: 44482)
* Allow some reloads to be folded in multi-use cases; specifically, testl r, r -> cmpl [mem], 0. (Evan Cheng, 2007-12-01; 1 file, -22/+32; llvm-svn: 44479)
* Do not fold a reload into an instruction with multiple uses; it issues one extra load. (Evan Cheng, 2007-11-30; 1 file, -75/+86; llvm-svn: 44467)
* Do not lose rematerialization info when spilling already split live intervals. (Evan Cheng, 2007-11-29; 1 file, -14/+9; llvm-svn: 44443)
* Fix a major performance issue with splitting. If there is a def (not def/use) in the middle of a split basic block, create a new live interval starting at the def. This avoids artificially extending the live interval over a number of cycles where it is dead. E.g.:

      bb1:
        = vr1204 (use / kill)   <= new interval starts and ends here.
        ...
        vr1204 = (new def)      <= start a new interval here.
        = vr1204 (use)

  (Evan Cheng, 2007-11-29; 1 file, -60/+133; llvm-svn: 44436)
* Replace the odd kill# hack with something less fragile. (Evan Cheng, 2007-11-29; 1 file, -15/+10; llvm-svn: 44434)
* Fixed various live interval splitting bugs / compile time issues. (Evan Cheng, 2007-11-29; 1 file, -110/+200; llvm-svn: 44428)
* Recover compile time regression. (Evan Cheng, 2007-11-28; 1 file, -15/+25; llvm-svn: 44386)
* Live interval splitting: when a live interval is being spilled, rather than creating short, non-spillable intervals for every def / use, split the interval at BB boundaries. That is, for every BB where the live interval is defined or used, create a new interval that covers all the defs and uses in that BB. This is designed to eliminate one common problem: multiple reloads of the same value in a single basic block. Note that it does *not* decrease the number of spills, since no copies are inserted; the split intervals are *connected* through spills and reloads (or rematerialization). A newly created interval can be spilled again; in that case, since it does not span multiple basic blocks, it is spilled in the usual manner. However, it can reuse the same stack slot as the previously split interval. This is currently controlled by -split-intervals-at-bb. (Evan Cheng, 2007-11-17; 1 file, -61/+294; llvm-svn: 44198)
* Fix a thinko in post-allocation coalescer. (Evan Cheng, 2007-11-15; 1 file, -3/+10; llvm-svn: 44166)
* Clean up the sub-register implementation by moving subReg information back to MachineOperand auxInfo. The previous clunky implementation used an external map to track sub-register uses. That worked because the register allocator used a new virtual register for each spilled use. With interval splitting (coming soon), we may have multiple uses of the same register, some of which use different sub-registers than others. It's too fragile to constantly update the information. (Evan Cheng, 2007-11-14; 1 file, -8/+2; llvm-svn: 44104)
* Refactor some code. (Evan Cheng, 2007-11-12; 1 file, -292/+327; llvm-svn: 44010)
* Simplify my (il)logic. (Evan Cheng, 2007-11-07; 1 file, -11/+2; llvm-svn: 43819)
* When the allocator rewrites a spill register with a new virtual register, it replaces other operands of the same register. Watch out for situations where only some of the operands are sub-register uses. (Evan Cheng, 2007-11-06; 1 file, -3/+12; llvm-svn: 43776)
* Fix a bug where a def/use operand isn't being detected as a sub-register use. (Evan Cheng, 2007-11-06; 1 file, -4/+7; llvm-svn: 43763)
* Fix PR1187. (Evan Cheng, 2007-11-05; 1 file, -1/+1; llvm-svn: 43692)
* There are times when the coalescer does not coalesce away a copy, but the copy can still be eliminated by the allocator if the destination and source target the same register. The most common case is when the source and destination registers are in different classes. For example, on x86, mov32to32_ targets GR32_, which contains a subset of the registers in GR32. The allocator can do two things:
  1. Set the preferred allocation for the destination of a copy to that of its source.
  2. After allocation is done, change the allocation of a copy destination (if legal) so the copy can be eliminated.
  This eliminates 443 extra moves from 403.gcc. (Evan Cheng, 2007-11-03; 1 file, -0/+33; llvm-svn: 43662)
* Apply Chris' suggestions. (Evan Cheng, 2007-10-17; 1 file, -1/+1; llvm-svn: 43069)
* Clean up code that calculates MBB live-ins. (Evan Cheng, 2007-10-17; 1 file, -0/+36; llvm-svn: 43060)
* Didn't mean to leave this in. INSERT_SUBREG isn't being coalesced yet. (Evan Cheng, 2007-10-12; 1 file, -2/+1; llvm-svn: 42916)
* EXTRACT_SUBREG coalescing support. The coalescer now treats EXTRACT_SUBREG (almost) like a register copy. However, it always coalesces to the register of the RHS (the super-register). All uses of the result of an EXTRACT_SUBREG are sub-register uses, which adds subtle complications to load folding, spiller rewriting, etc. (Evan Cheng, 2007-10-12; 1 file, -173/+127; llvm-svn: 42899)
* The kill cycle of a live range is always the last use index + 1. (Evan Cheng, 2007-10-08; 1 file, -1/+1; llvm-svn: 42742)
* Use empty() member functions when that's what's being tested for, instead of comparing begin() and end(). (Dan Gohman, 2007-10-03; 1 file, -11/+9; llvm-svn: 42585)
* Remove isReg, isImm, and isMBB, and change all their users to use isRegister, isImmediate, and isMachineBasicBlock, which are equivalent and more popular. (Dan Gohman, 2007-09-14; 1 file, -1/+1; llvm-svn: 41958)
* Fix a memory leak. (Evan Cheng, 2007-09-06; 1 file, -1/+2; llvm-svn: 41739)
* Use a pool allocator for all the VNInfo's to improve memory access locality. This reduces coalescing time on siod (Mac OS X PPC) by 35%. Also remove the back pointer from VNInfo to LiveInterval, and other tweaks. (Evan Cheng, 2007-09-05; 1 file, -21/+22; llvm-svn: 41729)
* Try to fold re-materialized load instructions into their uses. (Evan Cheng, 2007-08-30; 1 file, -11/+22; llvm-svn: 41598)
* Change LiveRange so it keeps a pointer to the VNInfo rather than an index. Change related modules so VNInfo's are not copied. This decreases copy coalescing time by 45% and overall compilation time by 10% on siod. (Evan Cheng, 2007-08-29; 1 file, -51/+56; llvm-svn: 41579)
* Fix some kill info update bugs; add hidden option -disable-rematerialization to turn off remat for debugging. (Evan Cheng, 2007-08-16; 1 file, -1/+10; llvm-svn: 41118)
* Re-implement trivial rematerialization. This allows def MIs whose live intervals are coalesced to be rematerialized. (Evan Cheng, 2007-08-13; 1 file, -113/+236; llvm-svn: 41060)
* Code to maintain kill information during register coalescing. (Evan Cheng, 2007-08-11; 1 file, -4/+8; llvm-svn: 41016)
* Adding kill info to val#. (Evan Cheng, 2007-08-08; 1 file, -3/+10; llvm-svn: 40925)
* - Each val# can have multiple kills.
  - Fix some minor bugs related to special markers on val# def: ~0U means undefined, ~1U means dead val#.
  (Evan Cheng, 2007-08-08; 1 file, -4/+5; llvm-svn: 40916)
* - LiveInterval value#'s now have 3 components: def instruction #, kill instruction #, and source register number (iff the value# is defined by a copy).
  - The def instruction # is now set for every value#, not just for copy-defined ones.
  - Update some outdated code related to inactive live ranges.
  - Kill info not yet set; that's the next patch.
  (Evan Cheng, 2007-08-07; 1 file, -7/+6; llvm-svn: 40913)
* If a live-in is not used in the block, it's live-through. (Evan Cheng, 2007-06-27; 1 file, -5/+8; llvm-svn: 37764)
* Fix an obvious bug. The old code only worked for the entry block. (Evan Cheng, 2007-06-27; 1 file, -3/+4; llvm-svn: 37743)
* Replace M_REMATERIALIZIBLE and the newly-added isOtherReMaterializableLoad with a general target hook to identify rematerializable instructions. Some instructions are only rematerializable with specific operands, such as loads from constant pools, while others are always rematerializable. This hook allows both to be identified as rematerializable with the same mechanism. (Dan Gohman, 2007-06-19; 1 file, -5/+4; llvm-svn: 37644)
* Add a target hook to allow loads from constant pools to be rematerialized, and an implementation for x86. (Dan Gohman, 2007-06-14; 1 file, -2/+4; llvm-svn: 37576)
* Factor live variable analysis so it does not do register coalescing simultaneously. Move that pass to SimpleRegisterCoalescing. This makes it easier to implement alternative register allocation and coalescing strategies while maintaining reuse of the existing live interval analysis. (David Greene, 2007-06-08; 1 file, -1055/+3; llvm-svn: 37520)
* Only worry about an intervening kill if there is more than one live range in the interval. (Evan Cheng, 2007-05-14; 1 file, -3/+5; llvm-svn: 37052)