path: root/llvm/lib/CodeGen/MachineBlockPlacement.cpp
Commit history for this file, newest first. Each entry shows the commit subject, author, commit date, the number of files touched, and the lines removed/added (-/+).
* Split TargetLowering into a CodeGen and a SelectionDAG part. (Benjamin Kramer, 2013-01-11; 1 file, -1/+1)
  This fixes some of the cycles between libCodeGen and libSelectionDAG. It's still a complete mess, but as long as the edges consist of virtual calls it doesn't cause breakage. BasicTTI did static calls and thus broke some build configurations. llvm-svn: 172246
* Remove the Function::getFnAttributes method in favor of using the AttributeSet directly. (Bill Wendling, 2012-12-30; 1 file, -2/+2)
  This is in preparation for removing the use of the 'Attribute' class as a collection of attributes. That will shift to the AttributeSet class instead. llvm-svn: 171253
* Rename the 'Attributes' class to 'Attribute'. It's going to represent a single attribute in the future. (Bill Wendling, 2012-12-19; 1 file, -1/+1)
  llvm-svn: 170502
* Use the new script to sort the includes of every file under lib. (Chandler Carruth, 2012-12-03; 1 file, -5/+5)
  Sooooo many of these had incorrect or strange main module includes. I have manually inspected all of these, and fixed the main module include to be the nearest plausible thing I could find. If you own or care about any of these source files, I encourage you to take some time and check that these edits were sensible. I can't have broken anything (I strictly added headers, and reordered them, never removed), but they may not be the headers you'd really like to identify as containing the API being implemented. Many forward declarations and missing includes were added to header files to allow them to parse cleanly when included first. The main module rule does in fact have its merits. =] llvm-svn: 169131
* Create enums for the different attributes. (Bill Wendling, 2012-10-09; 1 file, -1/+2)
  We use the enums to query whether an Attributes object has that attribute. The opaque layer is responsible for knowing where that specific attribute is stored. llvm-svn: 165488
* Remove the `hasFnAttr' method from Function. (Bill Wendling, 2012-09-26; 1 file, -1/+1)
  The hasFnAttr method has been replaced by querying the Attributes explicitly. No intended functionality change. llvm-svn: 164725
* Remove silly dead store. Patch by Ettl Martin. (Duncan Sands, 2012-09-14; 1 file, -2/+1)
  llvm-svn: 163882
* Add a much more conservative strategy for aligning branch targets. (Chandler Carruth, 2012-08-07; 1 file, -15/+49)
  Previously, MBP essentially aligned every branch target it could. This bloats code quite a bit, especially non-looping code which has no real reason to prefer aligned branch targets so heavily. As Andy said in review, it's still a bit odd to do this without a real cost model, but this at least has much more plausible heuristics. Fixes PR13265. llvm-svn: 161409
* Reverse order of the two branches at end of a basic block if it is profitable. (Manman Ren, 2012-07-31; 1 file, -1/+15)
  We branch to the successor with higher edge weight first. Convert from
      je  LBB4_8    ; to outer loop
      jmp LBB4_14   ; to inner loop
  to
      jne LBB4_14
      jmp LBB4_8
  PR12750, rdar: 11393714. llvm-svn: 161018
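  For context only, here is a minimal standalone sketch of the profitability check this commit describes. The types and names (Succ, EdgeProb, shouldReverseBranches) are invented for illustration and are not the LLVM interfaces; the idea is simply that the conditional branch should target the hotter successor first, leaving the colder edge to the unconditional jump.

    #include <utility>

    // Toy stand-ins; not the LLVM API.
    struct Succ { int TargetBlock; double EdgeProb; };

    // Reverse the two terminating branches only when the successor currently
    // branched to second is strictly hotter than the one branched to first.
    // The caller would also have to invert the branch condition on a swap.
    inline bool shouldReverseBranches(const Succ &First, const Succ &Second) {
      return Second.EdgeProb > First.EdgeProb;
    }

    int main() {
      Succ OuterLoop{8, 0.2}, InnerLoop{14, 0.8};
      if (shouldReverseBranches(OuterLoop, InnerLoop))
        std::swap(OuterLoop, InnerLoop); // the hot inner loop is now targeted first
      return OuterLoop.TargetBlock;      // 14 after the swap
    }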
* Update a bunch of stale comments that dated from when this followed the very first (and worst) placement algorithm. (Chandler Carruth, 2012-06-26; 1 file, -14/+11)
  These should now more accurately reflect the reality of the pass. llvm-svn: 159185
* Fix typos found by http://github.com/lyda/misspell-check (Benjamin Kramer, 2012-06-02; 1 file, -4/+4)
  llvm-svn: 157885
* Add a somewhat hacky heuristic to do something different from whole-loop rotation. (Chandler Carruth, 2012-04-16; 1 file, -3/+78)
  When there is a loop backedge which is an unconditional branch, we will end up with a branch somewhere no matter what. Try placing this backedge in a fallthrough position above the loop header, as that will definitely remove at least one branch from the loop iteration, where whole loop rotation may not. I haven't seen any benchmarks where this is important, but loop-blocks.ll tests for it, and so this will be covered when I flip the default. llvm-svn: 154812
* Tweak the loop rotation logic to check whether the loop is naturally laid out in a form with a fallthrough into the header and a fallthrough out of the bottom. (Chandler Carruth, 2012-04-16; 1 file, -11/+51)
  In that case, leave the loop alone because any rotation will introduce unnecessary branches. If either side looks like it will require an explicit branch, then the rotation won't add any, so do it to ensure the branch occurs outside of the loop (if possible) and maximize the benefit of the fallthrough at the bottom. llvm-svn: 154806
* Rewrite how machine block placement handles loop rotation. (Chandler Carruth, 2012-04-16; 1 file, -66/+70)
  This is a complex change that resulted from a great deal of experimentation with several different benchmarks. The one which proved the most useful is included as a test case, but I don't know that it captures all of the relevant changes, as I didn't have specific regression tests for each; they were more the result of reasoning about what the old algorithm would possibly do wrong. I'm also failing at the moment to craft more targeted regression tests for these changes; if anyone has ideas, it would be welcome.
  The first big thing broken with the old algorithm is the idea that we can take a basic block which has a loop-exiting successor and a looping successor and use the looping successor as the layout top in order to get that particular block to be the bottom of the loop after layout. This happens to work in many cases, but not in all. The second big thing broken was that we didn't try to select the exit which fell into the nearest enclosing loop (to which we exit at all). As a consequence, even if the rotation worked perfectly, it would result in one of two bad layouts. Either the bottom of the loop would get fallthrough, skipping across a nearer enclosing loop and thereby making it discontiguous, or it would be forced to take an explicit jump over the nearest enclosing loop to reach its successor. The point of the rotation is to get fallthrough, so we need it to fall through to the nearest loop it can.
  The fix to the first issue is to actually lay out the loop from the loop header, and then rotate the loop such that the correct exiting edge can be a fallthrough edge. This is actually much easier than I anticipated because we can handle all the hard parts of finding a viable rotation before we do the layout. We just store that, and then rotate after layout is finished. No inner loops get split across the post-rotation backedge because we check for them when selecting the rotation.
  That fix exposed a latent problem with our exiting block selection -- we should allow the backedge to point into the middle of some inner-loop chain as there is no real penalty to it; the whole point is that it *won't* be a fallthrough edge. This may have blocked the rotation at all in some cases; I have no idea and no test case, as I've never seen it in practice -- it was just noticed by inspection.
  Finally, all of these fixes, and studying the loops they produce, highlighted another problem: in rotating loops like this, we sometimes fail to align the destination of these backwards jumping edges. Fix this by actually walking the backwards edges rather than relying on LoopInfo. This fixes regressions on heapsort if block placement is enabled, as well as lots of other cases where the previous logic would introduce an abundance of unnecessary branches into the execution. llvm-svn: 154783
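  A rough illustration of the "lay out from the header, rotate afterwards" idea described above, using invented names and a plain std::vector in place of the pass's chain machinery; it is a sketch of the concept, not the actual implementation.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Blocks are just integers here; in the real pass they are machine basic
    // blocks chained together during layout.
    using Chain = std::vector<int>;

    // After the loop body has been laid out starting at the header, rotate the
    // chain so that the block holding the desired exiting branch ends up last
    // and can fall through out of the loop.
    void rotateLoopChain(Chain &LoopChain, int ExitingBlock) {
      auto It = std::find(LoopChain.begin(), LoopChain.end(), ExitingBlock);
      if (It == LoopChain.end())
        return;                     // no viable rotation was selected earlier
      std::rotate(LoopChain.begin(), It + 1, LoopChain.end());
    }

    int main() {
      Chain Loop = {1, 2, 3, 4};    // laid out from the header (block 1)
      rotateLoopChain(Loop, 2);     // block 2 holds the exiting branch
      for (int B : Loop) std::printf("%d ", B);   // prints: 3 4 1 2
      return 0;
    }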
* Make a somewhat subtle change in the logic of block placement. (Chandler Carruth, 2012-04-10; 1 file, -0/+12)
  Sometimes the loop header has a non-loop predecessor which has been pre-fused into its chain due to unanalyzable branches. In this case, rotating the header into the body of the loop in order to place a loop exit at the bottom of the loop is a Very Bad Idea, as it makes the loop non-contiguous. I'm working on a good test case for this, but it's a bit annoying to craft. I should get one shortly, but I'm submitting this now so I can begin the (lengthy) performance analysis process. An initial run of LNT looks really, really good, but there is too much noise there for me to trust it much. llvm-svn: 154395
* Remove an overzealous assert. (Chandler Carruth, 2012-04-08; 1 file, -1/+0)
  The assert was trying to catch places where a chain outside of the loop block-set ended up in the worklist for scheduling as part of the contiguous loop. However, asserting that the first block in the chain is in the loop-set isn't a valid check -- we may be forced to drag a chain into the worklist due to one block in the chain being part of the loop even though the first block is *not* in the loop. This occurs when we have been forced to form a chain early due to un-analyzable branches.
  No test case here, as I have no idea how to even begin reducing one, and it will be hopelessly fragile. We have to somehow end up with a loop header of an inner loop which is a successor of a basic block with an unanalyzable pair of branch instructions. Ow. Self-host triggers it, so it is unlikely it will regress.
  This at least gets block placement back to passing self-host and the test suite. There are still a lot of slowdowns that I don't like coming out of block placement, although there are now also a lot of speedups. =[ I'm seeing swings in both directions up to 10%. I'm going to try to find time to dig into this and see if we can turn this on for 3.1, as it does a really good job of cleaning up after some loops that degraded with the inliner changes. llvm-svn: 154287
* Add a debug-only 'dump' method to the BlockChain structure to ease debugging. (Chandler Carruth, 2012-04-08; 1 file, -0/+8)
  llvm-svn: 154286
* Codegen pass definition cleanup. No functionality change. (Andrew Trick, 2012-02-08; 1 file, -12/+2)
  Moving toward a uniform style of pass definition to allow easier target configuration. Globally declare Pass ID. Globally declare pass initializer. Use INITIALIZE_PASS consistently. Add a call to the initializer from CodeGen.cpp. Remove redundant "createPass" functions and "getPassName" methods. While cleaning up declarations, cleaned up comments (sorry for large diff). llvm-svn: 150100
* Revert patch from 147090. (Jakub Staszak, 2011-12-21; 1 file, -43/+45)
  There is no point in making code less readable if we don't get any serious benefit there. llvm-svn: 147101
* Change a few operator[] calls to lookup, which is cheaper, and add some constness. (Jakub Staszak, 2011-12-21; 1 file, -45/+43)
  llvm-svn: 147090
* Remove unneeded semicolon. (Jakub Staszak, 2011-12-07; 1 file, -3/+3)
  Skip two lookups into BlockChain. llvm-svn: 146053
* Remove unneeded type. (Jakub Staszak, 2011-12-07; 1 file, -2/+0)
  llvm-svn: 145995
* Remove unneeded #includes, remove unused types/fields, and add some constness. (Jakub Staszak, 2011-12-06; 1 file, -25/+4)
  llvm-svn: 145993
* Prevent rotating the blocks of a loop (and thus getting a backedge to be fallthrough) in cases where we might fail to rotate an exit to an outer loop onto the end of the loop chain. (Chandler Carruth, 2011-11-27; 1 file, -0/+16)
  Having *some* rotation, but not performing this rotation, is the primary fix of the performance regression with -enable-block-placement for Olden/em3d (a whopping 30% regression). Still working on reducing the test case that actually exercises this and the new rotation strategy out of this code, but I want to check if this regresses other test cases first, as that may indicate it isn't the correct fix. llvm-svn: 145195
* Take two on rotating the block ordering of loops. (Chandler Carruth, 2011-11-27; 1 file, -85/+103)
  My previous attempt was centered around the premise of laying out a loop in a chain, and then rotating that chain. This is good for preserving contiguous layout, but bad for actually making sane rotations. In order to keep it safe, I had to essentially make it impossible to rotate deeply nested loops. The information needed to correctly reason about a deeply nested loop is actually available -- *before* we layout the loop. We know the inner loops are already fused into chains, etc. We lose information the moment we actually lay out the loop.
  The solution was the other alternative for this algorithm I discussed with Benjamin and some others: rather than rotating the loop after-the-fact, try to pick a profitable starting block for the loop's layout, and then use our existing layout logic. I was worried about the complexity of this "pick" step, but it turns out such complexity is needed to handle all the important cases I keep teasing out of benchmarks.
  This is, I'm afraid, a bit of a work-in-progress. It is still misbehaving on some likely important cases I'm investigating in Olden. It also isn't really tested. I'm going to try to craft some interesting nested-loop test cases, but it's likely to be extremely time consuming and I don't want to go there until I'm sure I'm testing the correct behavior. Sadly I can't come up with a way of getting simple, fine grained test cases for this logic. We need complex loop structures to even trigger much of it. llvm-svn: 145183
* Fix an impressive type-o / spell-o Duncan noticed. (Chandler Carruth, 2011-11-27; 1 file, -1/+1)
  llvm-svn: 145181
* Rework a bit of the implementation of loop block rotation to not rely so heavily on AnalyzeBranch. (Chandler Carruth, 2011-11-27; 1 file, -21/+31)
  That routine doesn't behave as we want given that rotation occurs mid-way through re-ordering the function. Instead merely check that there are not unanalyzable branching constructs present, and then reason about the CFG via successor lists. This actually simplifies my mental model for all of this as well. The concrete result is that we now will rotate more loop chains. I've added a test case from Olden highlighting the effect. There is still a bit more to do here, though, in order to regain all of the performance in Olden. llvm-svn: 145179
* Introduce a loop block rotation optimization to the new block placement pass. (Chandler Carruth, 2011-11-27; 1 file, -3/+92)
  This is designed to achieve one of the important optimizations that the old code placement pass did, but more simply. This is a somewhat rough and *very* conservative version of the transform. We could get a lot fancier here if there are profitable cases to do so. In particular, this only looks for a single pattern, it insists that the loop backedge being rotated away is the last backedge in the chain, and it doesn't provide any means of doing better in-loop placement due to the rotation. However, it appears that it will handle the important loops I am finding in the LLVM test suite. llvm-svn: 145158
* Fix a silly use-after-free issue. (Chandler Carruth, 2011-11-24; 1 file, -2/+2)
  A much earlier version of this code needed lots of fanciness around retaining a reference to a Chain's slot in the BlockToChain map, but that's all gone now. We can just go directly to allocating the new chain (which will update the mapping for us) and using it. A somewhat gross, mechanically generated test case replicates the issue Duncan spotted when actually testing this out. llvm-svn: 145120
* When adding blocks to the list of those which no longer have any CFG conflicts, we should only be adding the first block of the chain to the list, lest we try to merge into the middle of that chain. (Chandler Carruth, 2011-11-24; 1 file, -3/+3)
  Most of the places we were doing this we already happened to be looking at the first block, but there is no reason to assume that, and in some cases it was clearly wrong. I've added a couple of tests here. One already worked, but I like having an explicit test for it. The other is reduced from a test case Duncan reduced for me and used to crash. Now it is handled correctly. llvm-svn: 145119
* Relax an invariant that block placement was trying to assert a bit further. (Chandler Carruth, 2011-11-23; 1 file, -3/+1)
  This invariant just wasn't going to work in the face of unanalyzable branches; we need to be resilient to the phenomenon of chains poking into a loop and poking out of a loop. In fact, we already were; we just needed to not assert on it. This was found during a bootstrap with block placement turned on. llvm-svn: 145100
* Fix a crash in block placement due to an inner loop that happened to be reversed in the function's original ordering, and we happened to encounter it while handling an outer unnatural CFG structure. (Chandler Carruth, 2011-11-23; 1 file, -1/+4)
  Thanks to the test case reduced from GCC's source by Benjamin Kramer. This may also fix a crasher in gzip that Duncan reduced for me, but I haven't yet gotten to testing that one. llvm-svn: 145094
* The logic for breaking the CFG in the presence of hot successors didn't properly account for the *global* probability of the edge being taken. (Chandler Carruth, 2011-11-20; 1 file, -3/+29)
  This manifested as a very large number of unconditional branches to blocks being merged against the CFG even though they weren't particularly hot within the CFG. The fix is to check whether the edge being merged is both locally hot relative to other successors for the source block, and globally hot compared to other (unmerged) predecessors of the destination block. This introduces a new crasher on GCC single-source, but it's currently behind a flag, and Ben has offered to work on the reduction. =] llvm-svn: 145010
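  A hedged sketch of the two-part hotness check this commit describes. Probabilities are plain doubles, the threshold is an arbitrary placeholder rather than the constant the pass actually uses, and the function name is invented; the only point is the shape of the "locally hot *and* globally hot" test.

    #include <numeric>
    #include <vector>

    // Placeholder threshold; illustrative only.
    constexpr double HotFraction = 0.8;

    // Merge the Pred->Succ edge only if it is hot relative to Pred's other
    // successor edges *and* hot relative to Succ's other (unmerged)
    // predecessor edges.
    bool shouldMergeEdge(double EdgeProb,
                         const std::vector<double> &PredSuccProbs,
                         const std::vector<double> &SuccPredProbs) {
      double SuccSum = std::accumulate(PredSuccProbs.begin(), PredSuccProbs.end(), 0.0);
      double PredSum = std::accumulate(SuccPredProbs.begin(), SuccPredProbs.end(), 0.0);
      bool LocallyHot  = EdgeProb >= HotFraction * SuccSum;
      bool GloballyHot = EdgeProb >= HotFraction * PredSum;
      return LocallyHot && GloballyHot;
    }

    int main() {
      // The edge carries 0.6 of Pred's outgoing weight but only a sliver of
      // Succ's incoming weight, so it is not merged (returns 0).
      return shouldMergeEdge(0.6, {0.6, 0.4}, {0.6, 5.0, 4.0}) ? 1 : 0;
    }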
* Move the handling of unanalyzable branches out of the loop-driven chain formation phase and into the initial walk of the basic blocks. (Chandler Carruth, 2011-11-19; 1 file, -25/+33)
  We essentially pre-merge all blocks where unanalyzable fallthrough exists, as we won't be able to update the terminators effectively after any reorderings. This is quite a bit more principled, as there may be CFGs where the second half of the unanalyzable pair has some analyzable predecessor that gets placed first. Then it may get placed next, implicitly breaking the unanalyzable branch even though we never even looked at the part that isn't analyzable. I've included a test case that triggers this (thanks Benjamin yet again!), and I'm hoping to synthesize some more general ones as I dig into related issues.
  Also, to make this new scheme work we have to be able to handle branches into the middle of a chain, so add this check. We always fall back on the incoming ordering.
  Finally, this starts to really underscore a known limitation of the current implementation -- we don't consider broken predecessors when merging successors. This can cause major missed opportunities, and is something I'm planning on looking at next (modulo more bug reports). llvm-svn: 144994
* Rather than trying to use the loop block sequence *or* the function block sequence when recovering from unanalyzable control flow constructs, *always* use the function sequence. (Chandler Carruth, 2011-11-15; 1 file, -27/+24)
  I'm not sure why I ever went down the path of trying to use the loop sequence; it is fundamentally not the correct sequence to use. We're trying to preserve the incoming layout in the cases of unreasonable control flow, and that is only encoded at the function level. We already have a filter to select *exactly* the sub-set of blocks within the function that we're trying to form into a chain.
  The resulting code layout is also significantly better because of this. In several places we were ending up with completely unreasonable control flow constructs due to the ordering chosen by the loop structure for its internal storage. This change removes a completely wasteful vector of basic blocks, saving memory allocation in the common case even though it costs us CPU in the fairly rare case of unnatural loops. Finally, it fixes the latest crasher reduced out of GCC's single source. Thanks again to Benjamin Kramer for the reduction; my bugpoint skills failed at it. llvm-svn: 144627
* It helps to deallocate memory as well as allocate it. =] (Chandler Carruth, 2011-11-14; 1 file, -0/+1)
  This actually cleans up all the chains allocated during the processing of each function so that for very large inputs we don't just grow memory usage without bound. llvm-svn: 144533
* Remove an over-eager assert that was firing on one of the ARM regression tests when I forcibly enabled block placement. (Chandler Carruth, 2011-11-14; 1 file, -3/+6)
  It is apparently possible for an unanalyzable block to fall through to a non-loop block. I don't actually believe this is correct; I believe that 'canFallThrough' is returning true needlessly for the code construct, and I've left a bit of a FIXME on the verification code to try to track down why this is coming up. Anyways, removing the assert doesn't degrade the correctness of the algorithm. llvm-svn: 144532
* Begin chipping away at one of the biggest quadratic-ish behaviors in this pass. (Chandler Carruth, 2011-11-14; 1 file, -2/+26)
  We're leaving already merged blocks on the worklist, and scanning them again and again only to determine each time through that indeed they aren't viable. We can instead remove them once we're going to have to scan the worklist. This is the easy way to implement removing them. If this remains on the profile (as I somewhat suspect it will), we can get a lot more clever here, as the worklist's order is essentially irrelevant. We can use swapping and fold the two loops to reduce overhead even when there are many blocks on the worklist but only a few of them are removed. llvm-svn: 144531
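  A small sketch of the worklist pruning described above, with a std::vector worklist and a set of already-merged ids standing in for the pass's real data structures (the names here are invented):

    #include <algorithm>
    #include <unordered_set>
    #include <vector>

    // Before rescanning the worklist, drop every block that has already been
    // merged into some chain, so later scans stay roughly linear in the number
    // of still-viable candidates instead of repeatedly re-examining dead entries.
    void pruneWorklist(std::vector<int> &Worklist,
                       const std::unordered_set<int> &MergedBlocks) {
      Worklist.erase(std::remove_if(Worklist.begin(), Worklist.end(),
                                    [&](int B) { return MergedBlocks.count(B) != 0; }),
                     Worklist.end());
    }

    int main() {
      std::vector<int> Worklist = {3, 7, 9, 12};
      pruneWorklist(Worklist, {7, 12});            // leaves {3, 9}
      return static_cast<int>(Worklist.size());    // 2
    }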
* Under the hood, MBPI is doing a linear scan of every successor every time it is queried to compute the probability of a single successor. (Chandler Carruth, 2011-11-14; 1 file, -4/+13)
  This makes computing the probability of every successor of a block in sequence... really really slow. ;] This switches to a linear walk of the successors rather than a quadratic one. One of several quadratic behaviors slowing this pass down.
  I'm not really thrilled with moving the sum code into the public interface of MBPI, but I don't (at the moment) have ideas for a better interface. The direction I'm thinking of for a better interface is to have MBPI actually retain much more state and make *all* of these queries cheap. That's a lot of work, and would require invasive changes. Until then, this seems like the least bad (i.e., least quadratic) solution. Suggestions welcome. llvm-svn: 144530
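  Roughly, the quadratic-to-linear change described above looks like the following sketch, with integer weights standing in for the real edge weights; the function names are invented and this is not the MBPI interface itself.

    #include <cstdio>
    #include <vector>

    // Quadratic overall: each probability query re-walks all successors to
    // recompute the weight sum.
    double probSlow(const std::vector<unsigned> &Weights, size_t SuccIdx) {
      unsigned Sum = 0;
      for (unsigned W : Weights) Sum += W;          // O(n) per query
      return static_cast<double>(Weights[SuccIdx]) / Sum;
    }

    // Linear overall: compute the sum once, then each successor's probability
    // is a single division.
    void probsFast(const std::vector<unsigned> &Weights, std::vector<double> &Out) {
      unsigned Sum = 0;
      for (unsigned W : Weights) Sum += W;          // O(n) once
      Out.clear();
      for (unsigned W : Weights) Out.push_back(static_cast<double>(W) / Sum);
    }

    int main() {
      std::vector<unsigned> Weights = {8, 1, 1};
      std::vector<double> Probs;
      probsFast(Weights, Probs);
      std::printf("%.2f vs %.2f\n", probSlow(Weights, 0), Probs[0]); // both 0.80
      return 0;
    }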
* Teach machine block placement to cope with unnatural loops. (Chandler Carruth, 2011-11-14; 1 file, -21/+60)
  These don't get loop info structures associated with them, and so we need some way to make forward progress selecting and placing basic blocks. The technique used here is pretty brutal -- it just scans the list of blocks looking for the first unplaced candidate. It keeps placing blocks like this until the CFG becomes tractable. The cost is somewhat unfortunate: it requires allocating a vector of all basic block pointers eagerly. I have some ideas about how to simplify and optimize this, but I'm trying to get the logic correct first.
  Thanks to Benjamin Kramer for the reduced test case out of GCC. Sadly there are other bugs that GCC is tickling that I'm reducing and working on now. llvm-svn: 144516
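  A sketch of the "first unplaced candidate" fallback mentioned above, assuming a simple placed flag per block; the struct and function names are invented stand-ins for the pass's real bookkeeping.

    #include <vector>

    struct Block { int Id; bool Placed; };   // toy stand-in for a machine basic block

    // When loop info gives us nothing to work with, fall back to scanning the
    // function's block list in order and returning the first block that has
    // not yet been placed into a chain; returns -1 when everything is placed.
    int firstUnplacedBlock(const std::vector<Block> &FunctionOrder) {
      for (const Block &B : FunctionOrder)
        if (!B.Placed)
          return B.Id;
      return -1;
    }

    int main() {
      std::vector<Block> Blocks = {{0, true}, {1, true}, {2, false}, {3, false}};
      return firstUnplacedBlock(Blocks);   // 2
    }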
* Clean up some 80-column violations and poor formatting. (Chandler Carruth, 2011-11-13; 1 file, -5/+9)
  These snuck by when I was reading through the code for style. llvm-svn: 144513
* Enhance the assertion mechanisms in place to make it easier to catch when we fail to place all the blocks of a loop. (Chandler Carruth, 2011-11-13; 1 file, -5/+28)
  Currently this is happening for unnatural loops, and this logic helps more immediately point to the problem. llvm-svn: 144504
* Teach MBP to force-merge layout successors for blocks with unanalyzable branches that also may involve fallthrough. (Chandler Carruth, 2011-11-13; 1 file, -3/+20)
  In the case of blocks with no fallthrough, we can still re-order the blocks profitably. For example, instruction decoding will in some cases continue past an indirect jump, making laying out its most likely successor there profitable.
  Note, no test case. I don't know how to write a test case that exercises this logic, but it matches the described desired semantics in discussions with Jakob and others. If anyone has a nice example of IR that will trigger this, that would be lovely. Also note, there are still assertion failures in real world code with this. I'm digging into those next, now that I know this isn't the cause. llvm-svn: 144499
* Hoist another gross nested loop into a helper method. (Chandler Carruth, 2011-11-13; 1 file, -23/+44)
  llvm-svn: 144498
* Add a missing doxygen comment for a helper method. (Chandler Carruth, 2011-11-13; 1 file, -0/+6)
  llvm-svn: 144497
* Hoist a nested loop into its own method. (Chandler Carruth, 2011-11-13; 1 file, -33/+53)
  llvm-svn: 144496
* Rewrite #3 of machine block placement. (Chandler Carruth, 2011-11-13; 1 file, -139/+256)
  This is based somewhat on the second algorithm, but only loosely. It is more heavily based on the last discussion I had with Andy. It continues to walk from the inner-most loop outward, but there is a key difference. With this algorithm we ensure that as we visit each loop, the entire loop is merged into a single chain. At the end, the entire function is treated as a "loop", and merged into a single chain. This chain forms the desired sequence of blocks within the function. Switching to a single algorithm removes my biggest problem with the previous approaches -- they had different behavior depending on which system triggered the layout. Now there is exactly one algorithm and one basis for the decision making.
  The other key difference is how the chain is formed. This is based heavily on the idea Andy mentioned of keeping a worklist of blocks that are viable layout successors based on the CFG. Having this set allows us to consistently select the best layout successor for each block. It is expensive though.
  The code here remains very rough. There is a lot that needs to be done to clean up the code, and to make the runtime cost of this pass much lower. Very much WIP, but this was a giant chunk of code and I'd rather folks see it sooner than later. Everything remains behind a flag of course.
  I've added a couple of tests to exercise the issues that this iteration was motivated by: loop structure preservation. I've also fixed one test that was exhibiting the broken behavior of the previous version. llvm-svn: 144495
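  The chain-formation idea can be pictured with the loose, standalone sketch below: keep track of which blocks are viable layout successors and repeatedly pick the best one by branch probability. The CFG representation, block ids, and greedy selection here are all invented for the example; the real pass's worklist and cost decisions are considerably more involved.

    #include <map>
    #include <utility>
    #include <vector>

    // Toy CFG: for each block, its successors and the probability of each edge.
    using CFG = std::map<int, std::vector<std::pair<int, double>>>;

    // Greedily grow a layout chain from Entry: at each step pick the
    // not-yet-placed successor with the highest branch probability.
    std::vector<int> buildChain(const CFG &G, int Entry) {
      std::vector<int> Chain = {Entry};
      std::vector<bool> Placed(64, false);   // assume small block ids for the sketch
      Placed[Entry] = true;
      int Cur = Entry;
      while (true) {
        auto It = G.find(Cur);
        if (It == G.end()) break;
        int Best = -1;
        double BestProb = -1.0;
        for (auto [Succ, Prob] : It->second)
          if (!Placed[Succ] && Prob > BestProb) { Best = Succ; BestProb = Prob; }
        if (Best < 0) break;                 // no viable layout successor left
        Chain.push_back(Best);
        Placed[Best] = true;
        Cur = Best;
      }
      return Chain;
    }

    int main() {
      CFG G = {{0, {{1, 0.9}, {2, 0.1}}}, {1, {{3, 1.0}}}, {2, {{3, 1.0}}}};
      return static_cast<int>(buildChain(G, 0).size());   // chain 0 -> 1 -> 3, so 3
    }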
* Begin collecting some of the statistics for block placement discussed on the mailing list. (Chandler Carruth, 2011-11-02; 1 file, -0/+83)
  Suggestions for other statistics to collect would be awesome. =] Currently these are implemented as a separate pass guarded by a separate flag. I'm not thrilled by that, but I wanted to be able to collect the statistics for the old code placement as well as the new in order to have a point of comparison. I'm planning on folding them into the single pass if / when there is only one pass of interest. llvm-svn: 143537
* Sink an otherwise unused variable's initializer into the asserts that used it. (Chandler Carruth, 2011-10-24; 1 file, -3/+2)
  Fixes an unused variable warning from GCC on release builds. llvm-svn: 142799
* Now that we have comparison on probabilities, add some static functions to get important constant branch probabilities and use them for finding the best branch out of a set of possibilities. (Chandler Carruth, 2011-10-23; 1 file, -8/+5)
  llvm-svn: 142762