path: root/llvm/lib/CodeGen
Commit message | Author | Age | Files | Lines
* Move global variables in TargetMachine into new TargetOptions class. As an API | Nick Lewycky | 2011-12-02 | 8 | -85/+114
  change, now you need a TargetOptions object to create a TargetMachine. Clang patch to follow. One small functionality change in PTX. PTX had commented out the machine verifier parts in their copy of printAndVerify. That now calls the version in LLVMTargetMachine. Users of PTX who need verification disabled should rely on not passing the command-line flag to enable it. llvm-svn: 145714
* make sure ScheduleDAGInstrs::EmitSchedule does not crash when the first instruction in Sequence is a Noop | Hal Finkel | 2011-12-02 | 1 | -5/+5
  llvm-svn: 145677
* CodeGen: fix CMake build | Dylan Noblesmith | 2011-12-01 | 1 | -0/+1
  Missing file from r145629. llvm-svn: 145634
* Add a deterministic finite automaton based packetizer for VLIW architectures | Anshuman Dasgupta | 2011-12-01 | 1 | -0/+98
  llvm-svn: 145629
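  To illustrate the general technique (a toy sketch, not the generated tables or the actual DFAPacketizer API; the slot classes and transition table are invented), a deterministic automaton for a hypothetical two-slot VLIW: states encode which issue slots the current packet has consumed, and an instruction may join the packet only if a transition exists for its slot class.

      #include <cstdio>
      #include <initializer_list>
      #include <map>
      #include <utility>

      enum Slot { SLOT0, SLOT1 };  // hypothetical functional-unit classes

      int main() {
        // States: 0 = empty packet, 1 = slot 0 taken, 2 = slot 1 taken, 3 = both taken.
        std::map<std::pair<int, Slot>, int> delta = {
            {{0, SLOT0}, 1}, {{0, SLOT1}, 2}, {{1, SLOT1}, 3}, {{2, SLOT0}, 3}};

        int state = 0;
        for (Slot s : {SLOT0, SLOT1, SLOT0}) {  // made-up instruction stream
          auto it = delta.find({state, s});
          if (it == delta.end()) {              // no legal transition: close the packet
            printf("-- end packet --\n");
            state = 0;
            it = delta.find({state, s});
          }
          state = it->second;
          printf("issue instruction needing slot %d (now in state %d)\n", (int)s, state);
        }
        return 0;
      }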
* If fast-isel fails, remove dead instructions generated during the failed | Chad Rosier | 2011-11-29 | 1 | -0/+27
  attempt. llvm-svn: 145425
* build/CMake: Finish removal of add_llvm_library_dependencies. | Daniel Dunbar | 2011-11-29 | 3 | -30/+0
  llvm-svn: 145420
* On MachO, the pointer to the personality function should always be in the | Bill Wendling | 2011-11-29 | 1 | -3/+1
  non_lazy_symbol_pointers section (__IMPORT,__pointers). Ignore the 'hidden' part since that will place it in the wrong section. <rdar://problem/10443720> llvm-svn: 145356
* Make SelectionDAG::InferPtrAlignment use llvm::ComputeMaskedBits instead of duplicating the logic for globals. | Eli Friedman | 2011-11-28 | 1 | -18/+9
  Make llvm::ComputeMaskedBits handle GlobalVariables slightly more aggressively, to match what InferPtrAlignment knew how to do. llvm-svn: 145304
* Revert r145273 and fix in SelectionDAG::InferPtrAlignment() instead. | Evan Cheng | 2011-11-28 | 2 | -27/+17
  Conservatively returns zero when the GV neither specifies an alignment nor is initialized. Previously it returned the ABI alignment for the type of the GV. However, if the type is a "packed" type, then the under-specified alignment is attached to the load / store instructions. In that case, the alignment of the type cannot be trusted. rdar://10464621 llvm-svn: 145300
* DAG combine should not increase alignment of loads / stores with alignment less | Evan Cheng | 2011-11-28 | 1 | -12/+26
  than ABI alignment. These are loads / stores from / to "packed" data structures. Their alignments are intentionally under-specified. rdar://10301431 llvm-svn: 145273
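  As background for why raising the alignment is unsafe, a small standalone illustration (my own example, not from the commit; it relies on the GCC/Clang packed attribute): the 32-bit field of a packed struct sits at offset 1, so a load of it may genuinely be only 1-byte aligned even though the type's ABI alignment is 4.

      #include <cstddef>
      #include <cstdint>
      #include <cstdio>

      struct __attribute__((packed)) Packed {
        uint8_t tag;     // offset 0
        uint32_t value;  // offset 1: under-aligned relative to the ABI alignment of 4
      };

      int main() {
        printf("alignof(uint32_t) = %zu\n", alignof(uint32_t));              // typically 4
        printf("offsetof(Packed, value) = %zu\n", offsetof(Packed, value));  // 1
        // A combine that bumped a load of 'value' up to 4-byte alignment would be
        // trusting the type's ABI alignment, which this layout does not provide.
        return 0;
      }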
* 80-column. | Chad Rosier | 2011-11-28 | 1 | -2/+4
  llvm-svn: 145267
* Remove dead llvm.eh.sjlj.dispatchsetup intrinsic. | Bill Wendling | 2011-11-28 | 3 | -8/+0
  llvm-svn: 145263
* Prevent rotating the blocks of a loop (and thus getting a backedge to be | Chandler Carruth | 2011-11-27 | 1 | -0/+16
  fallthrough) in cases where we might fail to rotate an exit to an outer loop onto the end of the loop chain. Having *some* rotation, but not performing this rotation, is the primary fix of the performance regression with -enable-block-placement for Olden/em3d (a whopping 30% regression). Still working on reducing the test case that actually exercises this and the new rotation strategy out of this code, but I want to check if this regresses other test cases first as that may indicate it isn't the correct fix. llvm-svn: 145195
* Take two on rotating the block ordering of loops. My previous attempt | Chandler Carruth | 2011-11-27 | 1 | -85/+103
  was centered around the premise of laying out a loop in a chain, and then rotating that chain. This is good for preserving contiguous layout, but bad for actually making sane rotations. In order to keep it safe, I had to essentially make it impossible to rotate deeply nested loops. The information needed to correctly reason about a deeply nested loop is actually available -- *before* we layout the loop. We know the inner loops are already fused into chains, etc. We lose information the moment we actually lay out the loop. The solution was the other alternative for this algorithm I discussed with Benjamin and some others: rather than rotating the loop after-the-fact, try to pick a profitable starting block for the loop's layout, and then use our existing layout logic. I was worried about the complexity of this "pick" step, but it turns out such complexity is needed to handle all the important cases I keep teasing out of benchmarks. This is, I'm afraid, a bit of a work-in-progress. It is still misbehaving on some likely important cases I'm investigating in Olden. It also isn't really tested. I'm going to try to craft some interesting nested-loop test cases, but it's likely to be extremely time consuming and I don't want to go there until I'm sure I'm testing the correct behavior. Sadly I can't come up with a way of getting simple, fine grained test cases for this logic. We need complex loop structures to even trigger much of it. llvm-svn: 145183
* Fix an impressive type-o / spell-o Duncan noticed. | Chandler Carruth | 2011-11-27 | 1 | -1/+1
  llvm-svn: 145181
* Rework a bit of the implementation of loop block rotation to not rely so | Chandler Carruth | 2011-11-27 | 1 | -21/+31
  heavily on AnalyzeBranch. That routine doesn't behave as we want given that rotation occurs mid-way through re-ordering the function. Instead merely check that there are not unanalyzable branching constructs present, and then reason about the CFG via successor lists. This actually simplifies my mental model for all of this as well. The concrete result is that we now will rotate more loop chains. I've added a test case from Olden highlighting the effect. There is still a bit more to do here though in order to regain all of the performance in Olden. llvm-svn: 145179
* Introduce a loop block rotation optimization to the new block placement | Chandler Carruth | 2011-11-27 | 1 | -3/+92
  pass. This is designed to achieve one of the important optimizations that the old code placement pass did, but more simply. This is a somewhat rough and *very* conservative version of the transform. We could get a lot fancier here if there are profitable cases to do so. In particular, this only looks for a single pattern, it insists that the loop backedge being rotated away is the last backedge in the chain, and it doesn't provide any means of doing better in-loop placement due to the rotation. However, it appears that it will handle the important loops I am finding in the LLVM test suite. llvm-svn: 145158
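  The mechanical part of such a rotation can be pictured with a standalone sketch (plain C++, not the MachineBlockPlacement code; the block names are invented): once a profitable starting block is chosen, the chain is rotated so that block leads and the backedge source ends up last, turning the backedge into a fallthrough.

      #include <algorithm>
      #include <cstdio>
      #include <string>
      #include <vector>

      int main() {
        // A hypothetical loop chain in its current layout; "latch" carries the backedge.
        std::vector<std::string> chain = {"latch", "header", "body", "exiting"};

        auto start = std::find(chain.begin(), chain.end(), "header");
        std::rotate(chain.begin(), start, chain.end());  // header body exiting latch

        for (const std::string &block : chain)
          printf("%s ", block.c_str());
        printf("\n");
        return 0;
      }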
* Move code into anonymous namespaces. | Benjamin Kramer | 2011-11-26 | 1 | -2/+4
  llvm-svn: 145154
* Fix a silly use-after-free issue. A much earlier version of this code | Chandler Carruth | 2011-11-24 | 1 | -2/+2
  needed lots of fanciness around retaining a reference to a Chain's slot in the BlockToChain map, but that's all gone now. We can just go directly to allocating the new chain (which will update the mapping for us) and using it. Somewhat gross mechanically generated test case replicates the issue Duncan spotted when actually testing this out. llvm-svn: 145120
* When adding blocks to the list of those which no longer have any CFG | Chandler Carruth | 2011-11-24 | 1 | -3/+3
  conflicts, we should only be adding the first block of the chain to the list, lest we try to merge into the middle of that chain. Most of the places we were doing this we already happened to be looking at the first block, but there is no reason to assume that, and in some cases it was clearly wrong. I've added a couple of tests here. One already worked, but I like having an explicit test for it. The other is reduced from a test case Duncan reduced for me and used to crash. Now it is handled correctly. llvm-svn: 145119
* Relax an invariant that block placement was trying to assert a bit | Chandler Carruth | 2011-11-23 | 1 | -3/+1
  further. This invariant just wasn't going to work in the face of unanalyzable branches; we need to be resilient to the phenomenon of chains poking into a loop and poking out of a loop. In fact, we already were, we just needed to not assert on it. This was found during a bootstrap with block placement turned on. llvm-svn: 145100
* Handle the case of a no-return invoke correctly. It actually still has | Chandler Carruth | 2011-11-23 | 1 | -0/+8
  successors, they just are all landing pad successors. We handle this the same way as no successors. Comments attached for the next person to wade through here and another lovely test case courtesy of Benjamin Kramer's bugpoint reduction. llvm-svn: 145098
* Enable stack protectors for all arrays, not just char arrays. rdar://5875909 | Bob Wilson | 2011-11-23 | 1 | -6/+1
  Patch by Bill Wendling. llvm-svn: 145097
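  A small example of the kind of code this now covers (my own illustration, not from the patch): an overflow through a non-char array clobbers the stack frame just as a char-buffer overflow does, so it benefits from a protector as well.

      #include <cstddef>

      static void fill(int *dst, size_t n) {
        for (size_t i = 0; i < n; ++i)
          dst[i] = 0x41414141;  // an attacker-controlled n larger than 4 smashes the frame
      }

      int main() {
        int buf[4];             // not a char array; previously left unprotected
        fill(buf, sizeof(buf) / sizeof(buf[0]));
        return buf[0] == 0x41414141 ? 0 : 1;
      }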
* Fix PR11422. | Jakob Stoklund Olesen | 2011-11-23 | 1 | -2/+6
  This was a bug in keeping track of the available domains when merging domain values. The wrong domain mask caused ExecutionDepsFix to try to move VANDPSYrr to the integer domain which is only available in AVX2. Also add an assertion to catch future attempts at emitting AVX2 instructions. llvm-svn: 145096
* Fix a crash in block placement due to an inner loop that happened to be | Chandler Carruth | 2011-11-23 | 1 | -1/+4
  reversed in the function's original ordering, and we happened to encounter it while handling an outer unnatural CFG structure. Thanks to the test case reduced from GCC's source by Benjamin Kramer. This may also fix a crasher in gzip that Duncan reduced for me, but I haven't yet gotten to testing that one. llvm-svn: 145094
* Fix a devilish miscompile exposed by block placement. The | Chandler Carruth | 2011-11-22 | 1 | -2/+8
  updateTerminator code didn't correctly handle EH terminators in one very specific case. AnalyzeBranch would find no terminator instruction, and so the fallback in updateTerminator is to assume fallthrough. This is correct, but the destination of the fallthrough was assumed to be the first successor. This is *almost always* true, but in certain cases the loop transformations will cause the landing pad to be the first successor! Instead of this brittle logic, actually look through the successors for a non-landing-pad successor, and assert if more than one is found. This will hopefully fix some (if not all) of the self host miscompiles with block placement. Thanks to Benjamin Kramer for reporting, Nick Lewycky for an initial stab at a reduction, and Duncan for endless advice on EH (which I know nothing about) as well as reviewing the actual fix. llvm-svn: 145062
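  A standalone restatement of the rule being described (my own sketch with invented types, not the MachineBasicBlock code): pick the fallthrough destination by scanning for the unique successor that is not a landing pad, instead of assuming it is successor number zero.

      #include <cassert>
      #include <cstdio>
      #include <vector>

      struct Block { const char *name; bool isLandingPad; };

      static const Block *fallthroughTarget(const std::vector<const Block *> &succs) {
        const Block *found = nullptr;
        for (const Block *s : succs)
          if (!s->isLandingPad) {
            assert(!found && "expected exactly one non-landing-pad successor");
            found = s;
          }
        return found;
      }

      int main() {
        Block lpad = {"lpad", true}, cont = {"cont", false};
        std::vector<const Block *> succs = {&lpad, &cont};  // landing pad listed first
        printf("fall through to %s\n", fallthroughTarget(succs)->name);
        return 0;
      }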
* Fix an obvious omission in the SelectionDAGBuilder where we were | Chandler Carruth | 2011-11-22 | 1 | -2/+2
  dropping weights on the floor for invokes. This was impeding my writing further test cases for invoke when interacting with probabilities and block placement. No test case as there doesn't appear to be a way to test this stuff. =/ Suggestions for a test case of course welcome. I hope to be able to add test cases that indirectly cover this eventually by adding probabilities to the exceptional edge and reordering blocks as a result. llvm-svn: 145060
* If a register is both an early clobber and part of a tied use, handle the use | Rafael Espindola | 2011-11-22 | 1 | -7/+16
  before the clobber so that we copy the value if needed. Fixes pr11415. llvm-svn: 145056
* The logic for breaking the CFG in the presence of hot successors didn't | Chandler Carruth | 2011-11-20 | 1 | -3/+29
  properly account for the *global* probability of the edge being taken. This manifested as a very large number of unconditional branches to blocks being merged against the CFG even though they weren't particularly hot within the CFG. The fix is to check whether the edge being merged is both locally hot relative to other successors for the source block, and globally hot compared to other (unmerged) predecessors of the destination block. This introduces a new crasher on GCC single-source, but it's currently behind a flag, and Ben has offered to work on the reduction. =] llvm-svn: 145010
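  The local-versus-global distinction can be shown with toy numbers (my own arithmetic, not the pass's actual weights or heuristics): an edge can carry 90% of its source block's outgoing probability yet contribute almost nothing to the destination block's incoming frequency.

      #include <cstdio>

      int main() {
        double srcFreq = 1.0;         // assumed execution frequency of the source block
        double edgeProb = 0.9;        // locally hot: 90% of the source's outgoing probability
        double edgeFreq = srcFreq * edgeProb;

        double otherPredFreq = 50.0;  // another, unmerged predecessor of the destination
        double share = edgeFreq / (edgeFreq + otherPredFreq);

        // Locally hot but globally cold: merging chains across this edge would be
        // optimizing for roughly 2% of the destination's executions.
        printf("edge share of destination frequency: %.3f\n", share);
        return 0;
      }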
* Move the handling of unanalyzable branches out of the loop-driven chain | Chandler Carruth | 2011-11-19 | 1 | -25/+33
  formation phase and into the initial walk of the basic blocks. We essentially pre-merge all blocks where unanalyzable fallthrough exists, as we won't be able to update the terminators effectively after any reorderings. This is quite a bit more principled as there may be CFGs where the second half of the unanalyzable pair has some analyzable predecessor that gets placed first. Then it may get placed next, implicitly breaking the unanalyzable branch even though we never even looked at the part that isn't analyzable. I've included a test case that triggers this (thanks Benjamin yet again!), and I'm hoping to synthesize some more general ones as I dig into related issues. Also, to make this new scheme work we have to be able to handle branches into the middle of a chain, so add this check. We always fall back on the incoming ordering. Finally, this starts to really underscore a known limitation of the current implementation -- we don't consider broken predecessors when merging successors. This can cause major missed opportunities, and is something I'm planning on looking at next (modulo more bug reports). llvm-svn: 144994
* DISubrange supports unsigned lower/upper array bounds, so let's not fake it in the end while emitting DWARF. | Devang Patel | 2011-11-17 | 1 | -4/+4
  If a FE needs to encode signed lower/upper array bounds then we need to extend DISubrange or add DISignedSubrange. llvm-svn: 144937
* When fast iseling a GEP, accumulate the offset rather than emitting a series of | Chad Rosier | 2011-11-17 | 1 | -11/+35
  ADDs. MaxOffs is used as a threshold to limit the size of the offset. Tradeoffs being: (1) If we can't materialize the large constant then we'll cause fast-isel to bail. (2) Too large of an offset can't be directly encoded in the ADD, resulting in a MOV+ADD. Generally not a bad thing because otherwise we would have had ADD+ADD, but on Thumb this turns into a MOVS+MOVT+ADD. Working on a fix for that. (3) Conversely, with too low of a threshold we'll miss opportunities to coalesce ADDs. rdar://10412592 llvm-svn: 144886
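  A simplified, standalone sketch of the accumulation idea (not the actual FastISel code; the offsets and the threshold value are made up): constant index contributions are summed into a running total, and a single ADD is "emitted" when the walk finishes or the total grows past the threshold.

      #include <cstdint>
      #include <cstdio>
      #include <vector>

      int main() {
        std::vector<int64_t> indexOffsets = {16, 4, 256};  // hypothetical constant offsets
        const int64_t MaxOffs = 4096;                      // assumed threshold

        int64_t totalOffs = 0;
        for (int64_t off : indexOffsets) {
          totalOffs += off;
          if (totalOffs > MaxOffs || totalOffs < -MaxOffs) {
            printf("emit ADD base, #%lld\n", (long long)totalOffs);  // flush early
            totalOffs = 0;
          }
        }
        if (totalOffs != 0)
          printf("emit ADD base, #%lld\n", (long long)totalOffs);    // one ADD, not three
        return 0;
      }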
* Make sure to replace the chain properly when DAGCombining a LOAD+EXTRACT_VECTOR_ELT into a single LOAD. | Eli Friedman | 2011-11-16 | 1 | -4/+17
  Fixes PR10747/PR11393. llvm-svn: 144863
* Add fast-isel stats to determine who's doing all the work, the | Chad Rosier | 2011-11-16 | 1 | -0/+7
  target-independent selector or the target-specific selector. llvm-svn: 144833
* Fix the stats collection for fast-isel. The failed count was only accounting | Chad Rosier | 2011-11-16 | 1 | -5/+18
  for a single miss and not all predecessor instructions that get selected by the selection DAG instruction selector. This is still not exact (e.g., overstates misses when folded/dead instructions are present), but it is a step in the right direction. llvm-svn: 144832
* Disable expensive two-address optimizations at -O0. rdar://10453055 | Evan Cheng | 2011-11-16 | 1 | -0/+8
  llvm-svn: 144806
* Disable the assertion again. Looks like fastisel is still generating bad kill markers. | Evan Cheng | 2011-11-16 | 1 | -1/+2
  llvm-svn: 144804
* Sink codegen optimization level into MCCodeGenInfo alongside relocation model | Evan Cheng | 2011-11-16 | 2 | -33/+30
  and code model. This eliminates the need to pass the OptLevel flag all over the place and makes it possible for any codegen pass to use this information. llvm-svn: 144788
* Record landing pads with a SmallSetVector to avoid multiple entries. | Bob Wilson | 2011-11-16 | 1 | -3/+5
  There may be many invokes that share one landing pad, and the previous code would record the landing pad once for each invoke. Besides the wasted effort, a pair of volatile loads gets inserted every time the landing pad is processed. The rest of the code can get optimized away when a landing pad is processed repeatedly, but the volatile loads remain, resulting in code like:
      LBB35_18:
      Ltmp483:
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r2, [r7, #-72]
          ldr r2, [r7, #-68]
          ldr r4, [r7, #-72]
          ldr r2, [r7, #-68]
  llvm-svn: 144787
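  The deduplication can be sketched with standard containers standing in for LLVM's SmallSetVector (my own illustration, not the patched code): insertion order is preserved, but a pad shared by many invokes is recorded, and therefore processed, only once.

      #include <cstdio>
      #include <set>
      #include <vector>

      int main() {
        std::vector<int> landingPads;              // preserves insertion order
        std::set<int> seen;                        // rejects duplicates

        const int invokePads[] = {7, 7, 7, 3, 7};  // hypothetical pad ids, mostly shared
        for (int pad : invokePads)
          if (seen.insert(pad).second)             // record each pad only the first time
            landingPads.push_back(pad);

        printf("pads to process: %zu\n", landingPads.size());  // 2, not 5
        return 0;
      }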
* Update the SP in the SjLj jmpbuf whenever it changes. <rdar://problem/10444602> | Bob Wilson | 2011-11-16 | 1 | -3/+21
  This same basic code was in the older version of the SjLj exception handling, but it was removed in the recent revisions to that code. It needs to be there. llvm-svn: 144782
* Revert r144568 now that r144730 has fixed the fast-isel kill marker bug. | Evan Cheng | 2011-11-16 | 1 | -2/+1
  llvm-svn: 144776
* If the 2addr instruction has other kills, don't move it below any other uses since we don't want to extend other live ranges. | Evan Cheng | 2011-11-16 | 1 | -2/+7
  llvm-svn: 144772
* RescheduleKillAboveMI() must backtrack to before the rescheduled DBG_VALUE instructions. rdar://10451185 | Evan Cheng | 2011-11-16 | 1 | -1/+1
  llvm-svn: 144771
* Process all uses first before defs to accurately capture register liveness. rdar://10449480 | Evan Cheng | 2011-11-16 | 1 | -7/+13
  llvm-svn: 144770
* CONCAT_VECTORS can have more than two operands. PR11389. | Eli Friedman | 2011-11-16 | 1 | -22/+12
  llvm-svn: 144768
* Add a couple asserts so it will be easier to debug if we accidentally pass indexed loads/stores to the legalizer. | Eli Friedman | 2011-11-16 | 1 | -0/+4
  llvm-svn: 144767
* Rename MVT::untyped to MVT::Untyped to match similar nomenclature. | Owen Anderson | 2011-11-16 | 2 | -3/+3
  llvm-svn: 144747
* Stabilize the output of the dwarf accelerator tables. Fixes a comparison | Eric Christopher | 2011-11-15 | 1 | -2/+11
  failure during bootstrap with it turned on. llvm-svn: 144731
* GEPs with all zero indices are trivially coalesced by fast-isel. For example, | Chad Rosier | 2011-11-15 | 1 | -0/+5
      %arrayidx135 = getelementptr inbounds [4 x [4 x [4 x [4 x i32]]]]* %M0, i32 0, i64 0
      %arrayidx136 = getelementptr inbounds [4 x [4 x [4 x i32]]]* %arrayidx135, i32 0, i64 %idxprom134
  Prior to this commit, the GEP instruction that defines %arrayidx136 thought that %arrayidx135 was a trivial kill. The GEP that defines %arrayidx135 doesn't generate any code and thus %M0 gets folded into the second GEP. Thus, we need to look through GEPs with all zero indices. rdar://10443319 llvm-svn: 144730
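  The check this relies on can be stated in a few lines (a standalone sketch, not the FastISel implementation): a GEP whose indices are all zero computes the same address as its base, so it should be looked through rather than treated as a separate definition.

      #include <cstdint>
      #include <cstdio>
      #include <vector>

      static bool hasAllZeroIndices(const std::vector<int64_t> &indices) {
        for (int64_t idx : indices)
          if (idx != 0)
            return false;
        return true;
      }

      int main() {
        printf("{0, 0} -> %d\n", hasAllZeroIndices({0, 0}));  // 1: address is unchanged
        printf("{0, 3} -> %d\n", hasAllZeroIndices({0, 3}));  // 0: a real offset is added
        return 0;
      }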
* Added custom lowering for load->dec->store sequence in x86 when the EFLAGS register is used by later instructions. | Pete Cooper | 2011-11-15 | 1 | -0/+5
  Only done for DEC64m right now. Fixes <rdar://problem/6172640> llvm-svn: 144705
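  A small example of the source pattern involved (my own illustration, not the lowering code): a 64-bit counter is loaded, decremented, stored back, and the flags from the decrement feed a branch, which x86 can express as a single memory-operand DEC followed by a conditional jump.

      #include <cstdint>
      #include <cstdio>

      static void work(void) { puts("tick"); }

      static void drain(int64_t *counter) {
        // Load *counter, subtract 1, store it back, then branch on the result:
        // a candidate for `decq (%rdi)` + `jne` instead of separate load/sub/store.
        while (--*counter)
          work();
      }

      int main() {
        int64_t n = 3;
        drain(&n);
        return 0;
      }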