path: root/llvm/lib/CodeGen
Commit message    Author    Age    Files    Lines
...
* Don't add live ranges for sub-registers when clobbering a physical register.Jakob Stoklund Olesen2011-04-112-15/+7
| | | | | | | | | Both coalescing and register allocation already check aliases for interference, so these extra segments are only slowing us down. This speeds up both linear scan and the greedy register allocator. llvm-svn: 129283
* Speed up LiveIntervalUnion::unify by handling end insertion specially.Jakob Stoklund Olesen2011-04-111-1/+9
| | | | | | This particularly helps with the initial transfer of fixed intervals. llvm-svn: 129277
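A rough illustration of the fast path, with a plain sorted vector standing in for the LiveIntervalUnion map (the Segment type and function name are invented for this sketch, and the incoming batch is assumed to be sorted already):

```cpp
#include <algorithm>
#include <vector>

// Toy segment type; the real code maps slot index ranges to a live interval.
struct Segment { unsigned Start, End; };

// Sketch of the "end insertion" special case: when every incoming segment
// begins at or after the end of the last segment already stored, the whole
// batch can be appended directly. Only the general case pays for a search.
void unifySketch(std::vector<Segment> &Map, const std::vector<Segment> &New) {
  if (New.empty())
    return;
  if (Map.empty() || New.front().Start >= Map.back().End) {
    Map.insert(Map.end(), New.begin(), New.end());  // cheap append
    return;
  }
  // Slow path: merge each segment into its sorted position.
  for (const Segment &S : New) {
    auto It = std::lower_bound(
        Map.begin(), Map.end(), S,
        [](const Segment &A, const Segment &B) { return A.Start < B.Start; });
    Map.insert(It, S);
  }
}
```

The initial transfer of fixed intervals mentioned above is exactly the case where everything lands at the end.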
* Time the initial seeding of live registersJakob Stoklund Olesen2011-04-111-0/+1
| | | | llvm-svn: 129276
* Don't shrink live ranges after dead code elimination unless it is going to help.Jakob Stoklund Olesen2011-04-111-4/+10
| | | | | | In particular, don't repeatedly recompute the PIC base live range after rematerialization. llvm-svn: 129275
* Don't include Operator.h from InstrTypes.h.Jay Foad2011-04-111-0/+1
| | | | llvm-svn: 129271
* Avoid excess precision issues that lead to generating host-compiler-specific ↵Chris Lattner2011-04-091-2/+6
| | | | | | | | code. Switch lowering probably shouldn't be using FP for this. This resolves PR9581. llvm-svn: 129199
* Build the Hopfield network incrementally when splitting global live ranges.Jakob Stoklund Olesen2011-04-096-84/+176
| | | | | | | | | It is common for large live ranges to have few basic blocks with register uses and many live-through blocks without any uses. This approach grows the Hopfield network incrementally around the use blocks, completely avoiding checking interference for some through blocks. llvm-svn: 129188
* Precompute interference for neighbor blocks as long as there is no interference.Jakob Stoklund Olesen2011-04-092-22/+37
| | | | | | This doesn't require seeking in the live interval union, so it is very cheap. llvm-svn: 129187
* have dag combine zap "store undef", which can be formed during call loweringChris Lattner2011-04-091-2/+8
| | | | | | with undef arguments. llvm-svn: 129185
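The combine itself is simple; here is a hedged sketch using a toy node model rather than the real SelectionDAG API (the Node type and combineStore are invented for this illustration):

```cpp
// Minimal stand-in node model (invented for this sketch, not LLVM's SDNode).
struct Node {
  enum Kind { Undef, Store, Other } K = Other;
  Node *Value = nullptr;  // stored value operand (for Store nodes)
  Node *Chain = nullptr;  // incoming chain operand (for Store nodes)
};

// A store whose stored value is undef has no visible effect, so it can be
// replaced by its input chain. Such stores can appear after call lowering
// with undef arguments.
Node *combineStore(Node *N) {
  if (N->K == Node::Store && N->Value && N->Value->K == Node::Undef)
    return N->Chain;  // drop the store; users see only the chain
  return N;           // no change
}
```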
* Simplify array bound checks and clarify comments. One element array can ↵Devang Patel2011-04-081-10/+7
| | | | | | have the same non-zero number as both the lower bound and the upper bound. llvm-svn: 129170
* Do not emit DW_AT_upper_bound and DW_AT_lower_bound for unbounded array.Devang Patel2011-04-081-3/+15
| | | | | | | | If the lower bound is greater than the upper bound, consider it an unbounded array. An array is also unbounded if a non-zero lower bound is the same as the upper bound. If the lower bound and upper bound are both zero, then the array has one element. llvm-svn: 129156
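A small sketch of the classification rule as worded in this entry (a hypothetical helper, not the DwarfDebug code; the newer entry above, r129170, later relaxes the one-element case):

```cpp
#include <cstdint>

// Classification following the wording of r129156 (illustrative only).
enum class ArrayBound { Unbounded, OneElement, Bounded };

ArrayBound classify(int64_t Lo, int64_t Hi) {
  if (Lo > Hi)                  // lower bound greater than upper bound
    return ArrayBound::Unbounded;
  if (Lo != 0 && Lo == Hi)      // non-zero lower bound equal to upper bound
    return ArrayBound::Unbounded;
  if (Lo == 0 && Hi == 0)       // both zero: a single element
    return ArrayBound::OneElement;
  return ArrayBound::Bounded;   // emit DW_AT_lower_bound/DW_AT_upper_bound
}
```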
* Change -arm-trap-func= into a non-arm specific option. Now Intrinsic::trap ↵Evan Cheng2011-04-081-2/+15
| | | | | | is lowered into a call to the specified trap function at sdisel time. llvm-svn: 129152
* llvm.global_[cd]tors is defined to be either external, or appending with an arrayNick Lewycky2011-04-081-11/+9
| | | | | | | of { i32, void ()* }. Teach the verifier to verify that, deleting copies of checks strewn about. llvm-svn: 129128
* Added a check in the preRA scheduler for potential interference on anAndrew Trick2011-04-073-4/+107
| | | | | | | | | induction variable. The preRA scheduler is unaware of induction vars, so we look for potential "virtual register cycles" instead. Fixes <rdar://problem/8946719> Bad scheduling prevents coalescing llvm-svn: 129100
* Recompute hasPHIKill flags when shrinking live intervals.Jakob Stoklund Olesen2011-04-071-1/+3
| | | | | | PHI values may be deleted, causing the flags to be wrong. This fixes PR9616. llvm-svn: 129092
* Avoid moving iterators when the previous block was just visited.Jakob Stoklund Olesen2011-04-071-8/+13
| | | | llvm-svn: 129081
* Prefer multiplications to divisions.Jakob Stoklund Olesen2011-04-071-7/+13
| | | | llvm-svn: 129080
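The pattern here is generic; a minimal illustration of it (not the changed file itself, names invented) replaces repeated divisions by one reciprocal multiply:

```cpp
#include <vector>

// Dividing every element by the same value is slower than computing the
// reciprocal once and multiplying.
void scaleByFrequency(std::vector<float> &Weights, float TotalFreq) {
  const float InvFreq = 1.0f / TotalFreq;  // one division
  for (float &W : Weights)
    W *= InvFreq;                          // cheap multiplications
}
```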
* Extract SpillPlacement::addLinks for handling the special transparent blocks.Jakob Stoklund Olesen2011-04-073-37/+49
| | | | llvm-svn: 129079
* Remove dead code. rdar://9221736.Evan Cheng2011-04-071-5/+0
| | | | llvm-svn: 129044
* Also account for the spill code that would be inserted in live-through ↵Jakob Stoklund Olesen2011-04-061-5/+16
| | | | | | blocks with interference. llvm-svn: 129030
* Abort the constraint calculation early when all positive bias is lost.Jakob Stoklund Olesen2011-04-061-33/+63
| | | | | | | Without any positive bias, there is nothing for the spill placer to do. It will spill everywhere. llvm-svn: 129029
* Keep track of the number of positively biased nodes when adding constraints.Jakob Stoklund Olesen2011-04-063-3/+16
| | | | | | If there are no positive nodes, the algorithm can be aborted early. llvm-svn: 129021
* Break the spill placement algorithm into three parts: prepare, ↵Jakob Stoklund Olesen2011-04-063-30/+39
| | | | | | | | addConstraints, and finish. This will allow us to abort the algorithm early if it is determined to be futile. llvm-svn: 129020
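A minimal sketch tying together the three entries above (hypothetical names and types, not the real SpillPlacement API): constraints are added in batches, positively biased nodes are counted as they change, and the solve is skipped once the count reaches zero, because a network with no positive bias can only answer "spill everywhere":

```cpp
#include <utility>
#include <vector>

class PlacerSketch {
  std::vector<int> Bias;      // per-block bias toward keeping the register
  unsigned PositiveNodes = 0; // blocks that currently prefer a register

public:
  void prepare(unsigned NumBlocks) {
    Bias.assign(NumBlocks, 0);
    PositiveNodes = 0;
  }

  // Returns false when adding these constraints left no positive bias, so the
  // caller can abort early instead of calling finish().
  bool addConstraints(const std::vector<std::pair<unsigned, int>> &Blocks) {
    for (const auto &B : Blocks) {
      bool WasPositive = Bias[B.first] > 0;
      Bias[B.first] += B.second;
      bool IsPositive = Bias[B.first] > 0;
      if (IsPositive && !WasPositive) ++PositiveNodes;
      if (!IsPositive && WasPositive) --PositiveNodes;
    }
    return PositiveNodes != 0;
  }

  bool finish() {
    // The iterative Hopfield-style solve would run here; omitted in the sketch.
    return PositiveNodes != 0;
  }
};
```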
* Oops. Scary.Jakob Stoklund Olesen2011-04-061-1/+1
| | | | llvm-svn: 128986
* Analyze blocks with uses separately from live-through blocks without uses.Jakob Stoklund Olesen2011-04-063-89/+120
| | | | | | | | About 90% of the relevant blocks are live-through without uses, and the only information required about them is their number. This saves memory and enables later optimizations that need to look at only the use-blocks. llvm-svn: 128985
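A hypothetical data layout showing the split: detailed records only for blocks with uses, bare block numbers for the roughly 90% that are live-through (field names invented for this sketch):

```cpp
#include <vector>

struct UseBlockInfo {
  unsigned Number;    // basic block number
  unsigned FirstUse;  // first use slot in the block (illustrative fields)
  unsigned LastUse;   // last use slot in the block
  bool LiveIn, LiveOut;
};

struct SplitAnalysisSketch {
  std::vector<UseBlockInfo> UseBlocks;  // detailed info, few entries
  std::vector<unsigned> ThroughBlocks;  // just the numbers, many entries
};
```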
* Sign errorJakob Stoklund Olesen2011-04-051-1/+1
| | | | llvm-svn: 128963
* Don't crash when a value is defined after the last split point.Jakob Stoklund Olesen2011-04-051-1/+2
| | | | llvm-svn: 128962
* Permit blocks to branch directly to a landing pad.Jakob Stoklund Olesen2011-04-051-0/+5
| | | | | | Treat the landing pad as a normal successor when that happens. llvm-svn: 128961
* Add support to encode function's template parameters.Devang Patel2011-04-051-0/+3
| | | | llvm-svn: 128947
* Run LiveDebugVariables in RegAllocBasic and RegAllocGreedy.Jakob Stoklund Olesen2011-04-052-0/+14
| | | | llvm-svn: 128935
* Refactor.Devang Patel2011-04-052-15/+21
| | | | llvm-svn: 128929
* Add an assertion instead of crashing when the scavenger goes past the endBob Wilson2011-04-051-1/+2
| | | | | | of a basic block. llvm-svn: 128925
* When dead code elimination removes all but one use, try to fold the single ↵Jakob Stoklund Olesen2011-04-052-0/+55
| | | | | | | | def into the remaining use. Rematerialization can leave single-use loads behind that we might as well fold whenever possible. llvm-svn: 128918
* Do not emit empty name.Devang Patel2011-04-051-1/+2
| | | | llvm-svn: 128914
* Ensure all defs referring to a virtual register are marked dead by ↵Jakob Stoklund Olesen2011-04-051-7/+2
| | | | | | | | | | | | addRegisterDead(). There can be multiple defs for a single virtual register when they are defining sub-registers. The missing <dead> flag was stopping the inline spiller from eliminating dead code after rematerialization. llvm-svn: 128888
* Print visibility info for external variables.Rafael Espindola2011-04-051-10/+12
| | | | llvm-svn: 128887
* Use std::unique instead of a SmallPtrSet to ensure unique instructions in ↵Jakob Stoklund Olesen2011-04-052-54/+26
| | | | | | | | UseSlots. This allows us to always keep the smaller slot for an instruction, which is what we want when a register has early clobber defines. Drop the UsingInstrs set and the UsingBlocks map. They are no longer needed. llvm-svn: 128886
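A sketch of the pattern with plain unsigned values standing in for SlotIndexes (the two-slots-per-instruction encoding is an assumption made only for this example): after sorting, std::unique keeps the first, i.e. smallest, slot in each group of duplicates, which is the behaviour wanted for early-clobber slots.

```cpp
#include <algorithm>
#include <vector>

void uniqueUseSlots(std::vector<unsigned> &UseSlots) {
  std::sort(UseSlots.begin(), UseSlots.end());
  // Two slots referring to the same instruction compare equal; unique keeps
  // the earlier one because the list is sorted.
  UseSlots.erase(std::unique(UseSlots.begin(), UseSlots.end(),
                             [](unsigned A, unsigned B) {
                               return A / 2 == B / 2;  // same instruction
                             }),
                 UseSlots.end());
}
```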
* Stop precomputing last split points, query the SplitAnalysis cache on demand.Jakob Stoklund Olesen2011-04-053-21/+17
| | | | llvm-svn: 128875
* Cache the fairly expensive last split point computation and provide a fastJakob Stoklund Olesen2011-04-052-14/+54
| | | | | | | | inlined path for the common case. Most basic blocks don't contain a call that may throw, so the last split point is simply the first terminator. llvm-svn: 128874
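A hypothetical model of the caching scheme (invented types; the real code works on MachineBasicBlock and SlotIndex): the fast path answers directly for blocks without a call that may throw, and the rare other case fills a per-block cache on first use.

```cpp
#include <vector>

struct BlockModel {
  unsigned FirstTerminator;
  bool HasEHCall;  // contains a call that may throw into a landing pad
};

class LastSplitPointCache {
  std::vector<unsigned> Cache;  // indexed by block number, 0 = not computed

  unsigned computeLastSplitPoint(const BlockModel &B) {
    // Placeholder for the expensive backwards walk in the real code.
    return B.FirstTerminator;
  }

public:
  explicit LastSplitPointCache(unsigned NumBlocks) : Cache(NumBlocks, 0) {}

  // The fast path is trivial and suitable for inlining.
  unsigned getLastSplitPoint(unsigned Number, const BlockModel &B) {
    if (!B.HasEHCall)
      return B.FirstTerminator;
    if (!Cache[Number])
      Cache[Number] = computeLastSplitPoint(B);
    return Cache[Number];
  }
};
```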
* Revamp the SjLj "dispatch setup" intrinsic.Bill Wendling2011-04-052-9/+6
| | | | | | | | | | | | It needed to be moved closer to the setjmp statement, because the code directly after the setjmp needs to know about values that are on the stack. Also, the 'bitcast' of the function context was causing a dead load. This wouldn't be too horrible, except that at -O0 it wasn't optimized out, and because it wasn't using the correct base pointer (if there is a VLA), it would try to access a value from a garbage address. <rdar://problem/9130540> llvm-svn: 128873
* Revert 123704; it broke threaded LLVM.Stuart Hastings2011-04-051-9/+15
| | | | llvm-svn: 128868
* Allow coalescing with reserved physregs in certain cases:Jakob Stoklund Olesen2011-04-043-6/+14
| | | | | | | | | | | | | | | | | | | | | | | | | When a virtual register has a single value that is defined as a copy of a reserved register, permit that copy to be joined. These virtual registers are usually copies of the stack pointer: %vreg75<def> = COPY %ESP; GR32:%vreg75 MOV32mr %vreg75, 1, %noreg, 0, %noreg, %vreg74<kill> MOV32mi %vreg75, 1, %noreg, 8, %noreg, 0 MOV32mi %vreg75<kill>, 1, %noreg, 4, %noreg, 0 CALLpcrel32 ... Coalescing these virtual registers early decreases register pressure. Previously, they were coalesced by RALinScan::attemptTrivialCoalescing after register allocation was completed. The lower register pressure causes the mcinst-lowering-cmp0.ll test case to fail because it depends on linear scan spilling a particular register. I am deleting 2008-08-05-SpillerBug.ll because it is counting the number of instructions emitted, and its revision history shows the 'correct' count being edited many times. llvm-svn: 128845
* Extract physreg joining policy to a separate method.Jakob Stoklund Olesen2011-04-042-53/+60
| | | | llvm-svn: 128844
* Stop caching basic block index ranges now that SlotIndexes can keep up.Jakob Stoklund Olesen2011-04-043-30/+33
| | | | llvm-svn: 128821
* Delete leftover data members.Jakob Stoklund Olesen2011-04-041-4/+0
| | | | llvm-svn: 128820
* Use InterferenceCache in RegAllocGreedy.Jakob Stoklund Olesen2011-04-021-94/+46
| | | | llvm-svn: 128765
* Add an InterferenceCache class for caching per-block interference ranges.Jakob Stoklund Olesen2011-04-024-1/+300
| | | | | | | | When the greedy register allocator is splitting multiple global live ranges, it tends to look at the same interference data many times. The InterferenceCache class caches queries for unaltered LiveIntervalUnions. llvm-svn: 128764
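A rough sketch of what such a cache can look like (invented names and fields, not the actual InterferenceCache interface): one entry per physical register, holding per-block interference ranges, reused as long as a tag shows the underlying LiveIntervalUnion is unchanged.

```cpp
#include <unordered_map>
#include <vector>

struct BlockInterference {
  unsigned First = 0, Last = 0;  // interfering slot range, 0/0 = none
};

class InterferenceCacheSketch {
  struct Entry {
    unsigned Tag = 0;  // bumped whenever the union for this physreg changes
    std::vector<BlockInterference> Blocks;
  };
  std::unordered_map<unsigned, Entry> Entries;  // keyed by physreg

public:
  BlockInterference &get(unsigned PhysReg, unsigned BlockNum,
                         unsigned CurrentTag, unsigned NumBlocks) {
    Entry &E = Entries[PhysReg];
    if (E.Tag != CurrentTag || E.Blocks.size() != NumBlocks) {
      E.Tag = CurrentTag;
      E.Blocks.assign(NumBlocks, BlockInterference());
      // A real implementation would recompute interference ranges here.
    }
    return E.Blocks[BlockNum];
  }
};
```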
* Use basic block numbers as indexes when mapping slot index ranges.Jakob Stoklund Olesen2011-04-021-11/+9
| | | | | | This is more compact and faster than using DenseMap. llvm-svn: 128763
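A generic illustration of the trade-off (stand-in types): basic block numbers are small, dense integers, so a vector indexed by number is both more compact and faster to probe than a hash map keyed on the block.

```cpp
#include <vector>

struct IndexRange { unsigned First = 0, Last = 0; };

class BlockRanges {
  std::vector<IndexRange> Ranges;  // index = basic block number

public:
  explicit BlockRanges(unsigned NumBlocks) : Ranges(NumBlocks) {}
  void set(unsigned BlockNum, IndexRange R) { Ranges[BlockNum] = R; }
  const IndexRange &get(unsigned BlockNum) const { return Ranges[BlockNum]; }
};
```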
* Add a RemoveFromWorklist method to DCI. This is needed to do some complicatedCameron Zwarich2011-04-021-0/+4
| | | | | | | | transformations in target-specific DAG combines without causing DAGCombiner to delete the same node twice. If you know of a better way to avoid this (see my next patch for an example), please let me know. llvm-svn: 128758
* Add comments.Evan Cheng2011-04-011-2/+4
| | | | llvm-svn: 128730