path: root/llvm/test/Transforms/PhaseOrdering
Commit message | Author | Age | Files | Lines
* ../llvm/utils/update_test_checks.py --opt-binary bin/opt ../llvm/test/Transforms/PhaseOrdering/min-max-abs-cse.ll (Hans Wennborg, 2020-03-19, 1 file, -6/+24)

* [EarlyCSE] avoid crashing when detecting min/max/abs patterns (PR41083) (Sanjay Patel, 2020-03-19, 1 file, -2/+11)
  As discussed in PR41083 (https://bugs.llvm.org/show_bug.cgi?id=41083), we can assert/crash in EarlyCSE using the current hashing scheme and instructions with flags.

  ValueTracking's matchSelectPattern() may rely on overflow (nsw, etc.) or other flags when detecting patterns such as min/max/abs composed of compare+select. But the value numbering / hashing mechanism used by EarlyCSE intersects those flags to allow more CSE.

  Several alternatives to solve this are discussed in the bug report. This patch avoids the issue by doing simple matching of min/max/abs patterns that never requires instruction flags. We give up some CSE power because of that, but that is not expected to result in much actual performance difference, because InstCombine will canonicalize these patterns when possible. It even has this comment for abs/nabs:

    /// Canonicalize all these variants to 1 pattern.
    /// This makes CSE more likely.

  (This patch also adds PhaseOrdering tests to verify that the expected transforms are still happening in the standard optimization pipelines.)

  I left this code using ValueTracking's "flavor" enum values, so we don't have to change the callers' code. If we decide to go back to using the ValueTracking call (by changing the hashing algorithm instead), it should be obvious how to replace this chunk.

  Differential Revision: https://reviews.llvm.org/D74285
  (cherry picked from commit b8ebc11f032032c7ca449f020a1fe40346e707c8)

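  For reference, a minimal sketch of the flag-free compare+select shape involved (hypothetical IR, not taken from the patch): value numbering that understands min/max flavors can treat the two commuted spellings below as the same smax without consulting nsw/nuw flags.

  ```llvm
  define i32 @smax_commuted(i32 %a, i32 %b) {
    ; smax(a, b) spelled two commuted ways; no flags are needed to match it
    %cmp1 = icmp sgt i32 %a, %b
    %max1 = select i1 %cmp1, i32 %a, i32 %b
    %cmp2 = icmp slt i32 %b, %a
    %max2 = select i1 %cmp2, i32 %a, i32 %b
    %r = add i32 %max1, %max2
    ret i32 %r
  }
  ```
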
* [Transforms] add phase ordering tests for min/max/abs; NFC (Sanjay Patel, 2020-03-19, 1 file, -0/+87)
  Test that instcombine and early-cse can cooperate to reduce sequences of select patterns that are not composed of the same underlying instructions. There's a bug in EarlyCSE (PR41083), and we can test how much a possible fix (D74285) may affect optimization.

  (cherry picked from commit 0ad6e726ec7eee8ef14a89fa288d5a1420d96b1e)

* Reland [DataLayout] Fix occurrences that size and range of pointers are assumed to be the same (Nicola Zaghen, 2019-12-13, 1 file, -0/+70)
  GEP index size can be specified in the DataLayout, introduced in D42123. However, there were still places in which getIndexSizeInBits was used interchangeably with getPointerSizeInBits. This notably caused issues with InstCombine's visitPtrToInt; but the unit test was incorrect, so this remained undiscovered.

  This fixes the buildbot failures.

  Differential Revision: https://reviews.llvm.org/D68328
  Patch by Joseph Faulls!

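  A hedged sketch (not from the patch) of a module where the two sizes diverge, so getPointerSizeInBits and getIndexSizeInBits must not be mixed up; this hypothetical datalayout declares 64-bit pointers whose GEP index width is only 32 bits:

  ```llvm
  target datalayout = "e-p:64:64:64:32"

  define i64 @gep_then_cast(i8* %p, i32 %i) {
    ; the GEP offset is computed at the 32-bit index width ...
    %gep = getelementptr i8, i8* %p, i32 %i
    ; ... while ptrtoint produces the full 64-bit pointer value
    %cast = ptrtoint i8* %gep to i64
    ret i64 %cast
  }
  ```
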
* Temporarily Revert "[DataLayout] Fix occurrences that size and range of pointers are assumed to be the same." (Nicola Zaghen, 2019-12-12, 1 file, -70/+0)
  This reverts commit 5f6208778ff92567c57d7c1e2e740c284d7e69a5.

  This caused failures in Transforms/PhaseOrdering/scev-custom-dl.ll:
    const: Assertion `getBitWidth() == CR.getBitWidth() && "ConstantRange types don't agree!"' failed.

* [DataLayout] Fix occurrences that size and range of pointers are assumed to be the same (Nicola Zaghen, 2019-12-12, 1 file, -0/+70)
  GEP index size can be specified in the DataLayout, introduced in D42123. However, there were still places in which getIndexSizeInBits was used interchangeably with getPointerSizeInBits. This notably caused issues with InstCombine's visitPtrToInt; but the unit test was incorrect, so this remained undiscovered.

  Differential Revision: https://reviews.llvm.org/D68328
  Patch by Joseph Faulls!

* Revert "[Attributor] Move pass after InstCombine to futher eliminate null ↵Dávid Bolvanský2019-11-271-47/+0
| | | | | | pointer checks" This reverts commit 7ca7d62c6ea1680ec0a1861083669596547fdd6f. Commited accidentally.
* [Attributor] Move pass after InstCombine to further eliminate null pointer checks (Dávid Bolvanský, 2019-11-27, 1 file, -0/+47)
  Summary: PR44149

  Reviewers: jdoerfert
  Subscribers: mehdi_amini, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70737

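  A loose sketch of the intended interplay (hypothetical IR, assuming the usual attribute-driven folds rather than anything specific to this patch): once an attribute such as nonnull has been deduced for %p, a later pass over canonicalized IR can fold the explicit null check to false.

  ```llvm
  define i32 @callee(i32* nonnull dereferenceable(4) %p) {
    ; with nonnull on %p, this compare folds to false and the branch dies
    %isnull = icmp eq i32* %p, null
    br i1 %isnull, label %bail, label %ok
  bail:
    ret i32 0
  ok:
    %v = load i32, i32* %p
    ret i32 %v
  }
  ```
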
* Revert "Revert "As a follow-up to my initial mail to llvm-dev here's a first ↵Eric Christopher2019-11-262-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | pass at the O1 described there."" This reapplies: 8ff85ed905a7306977d07a5cd67ab4d5a56fafb4 Original commit message: As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there. This change doesn't include any change to move from selection dag to fast isel and that will come with other numbers that should help inform that decision. There also haven't been any real debuggability studies with this pipeline yet, this is just the initial start done so that people could see it and we could start tweaking after. Test updates: Outside of the newpm tests most of the updates are coming from either optimization passes not run anymore (and without a compelling argument at the moment) that were largely used for canonicalization in clang. Original post: http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html Tags: #llvm Differential Revision: https://reviews.llvm.org/D65410 This reverts commit c9ddb02659e3ece7a0d9d6b4dac7ceea4ae46e6d.
* Revert "As a follow-up to my initial mail to llvm-dev here's a first pass at ↵Muhammad Omair Javaid2019-11-262-6/+6
| | | | | | | | | | | | | | | | | | | | | | | the O1 described there." This reverts commit 8ff85ed905a7306977d07a5cd67ab4d5a56fafb4. This commit introduced 9 new failures on lldb buildbot host at http://lab.llvm.org:8014/builders/lldb-aarch64-ubuntu Following tests were failing: lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq1/TestAmbiguousTailCallSeq1.py lldb-api :: functionalities/tail_call_frames/ambiguous_tail_call_seq2/TestAmbiguousTailCallSeq2.py lldb-api :: functionalities/tail_call_frames/disambiguate_call_site/TestDisambiguateCallSite.py lldb-api :: functionalities/tail_call_frames/disambiguate_paths_to_common_sink/TestDisambiguatePathsToCommonSink.py lldb-api :: functionalities/tail_call_frames/disambiguate_tail_call_seq/TestDisambiguateTailCallSeq.py lldb-api :: functionalities/tail_call_frames/inlining_and_tail_calls/TestInliningAndTailCalls.py lldb-api :: functionalities/tail_call_frames/sbapi_support/TestTailCallFrameSBAPI.py lldb-api :: functionalities/tail_call_frames/thread_step_out_message/TestArtificialFrameStepOutMessage.py lldb-api :: functionalities/tail_call_frames/thread_step_out_or_return/TestSteppingOutWithArtificialFrames.py lldb-api :: functionalities/tail_call_frames/unambiguous_sequence/TestUnambiguousTailCalls.py Tags: #llvm Differential Revision: https://reviews.llvm.org/D65410
* As a follow-up to my initial mail to llvm-dev here's a first pass at the O1 described there. (Eric Christopher, 2019-11-25, 2 files, -6/+6)
  This change doesn't include any change to move from selection dag to fast isel, and that will come with other numbers that should help inform that decision. There also haven't been any real debuggability studies with this pipeline yet; this is just the initial start, done so that people could see it and we could start tweaking after.

  Test updates: outside of the newpm tests, most of the updates come from optimization passes that no longer run (and without a compelling argument for them at the moment) and that were largely used for canonicalization in clang.

  Original post: http://lists.llvm.org/pipermail/llvm-dev/2019-April/131494.html

  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65410

* [NFC][PhaseOrdering] Add end-to-end tests for the 'two shifts by sext' problem (Roman Lebedev, 2019-09-27, 1 file, -0/+125)
  We start with two separate sext's, but EarlyCSE runs before InstCombine, so by the time InstCombine sees them they have been merged into a single sext, and we just ignore that. Likewise, if we start with a single sext, we don't do anything there.

  llvm-svn: 373115

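  A hedged sketch (not the actual test file) of the multi-use shape left behind once EarlyCSE merges the two sexts:

  ```llvm
  define i32 @two_shifts_by_sext(i32 %x, i8 %shamt) {
    ; one sign-extended amount feeding both shifts; with two separate
    ; sexts, InstCombine would have handled each shift on its own
    %wide = sext i8 %shamt to i32
    %shl = shl i32 %x, %wide
    %shr = ashr i32 %shl, %wide
    ret i32 %shr
  }
  ```
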
* Revert "Reland "r364412 [ExpandMemCmp][MergeICmps] Move passes out of ↵Dmitri Gribenko2019-09-108-1446/+0
| | | | | | | | | CodeGen into opt pipeline."" This reverts commit r371502, it broke tests (clang/test/CodeGenCXX/auto-var-init.cpp). llvm-svn: 371507
* Reland "r364412 [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into ↵Clement Courbet2019-09-108-0/+1446
| | | | | | | | opt pipeline." With a fix for sanitizer breakage (see explanation in D60318). llvm-svn: 371502
* [InstSimplify] Drop leftover "division-by-zero guard" around `@llvm.umul.with.overflow` inverted overflow bit (Roman Lebedev, 2019-08-29, 1 file, -8/+4)
  Summary:
  Now that with D65143/D65144 we've produced `@llvm.umul.with.overflow`, and with D65147 we've flattened the CFG, we can now see that the guard, which may have been there to prevent division by zero, is redundant. We can simply drop it:
  ```
  ----------------------------------------
  Name: no overflow or zero
  %iszero = icmp eq i4 %y, 0
  %umul = smul_overflow i4 %x, %y
  %umul.ov = extractvalue {i4, i1} %umul, 1
  %umul.ov.not = xor %umul.ov, -1
  %retval.0 = or i1 %iszero, %umul.ov.not
  ret i1 %retval.0
  =>
  %iszero = icmp eq i4 %y, 0
  %umul = smul_overflow i4 %x, %y
  %umul.ov = extractvalue {i4, i1} %umul, 1
  %umul.ov.not = xor %umul.ov, -1
  %retval.0 = or i1 %iszero, %umul.ov.not
  ret i1 %umul.ov.not

  Done: 1
  Optimization is correct!
  ```
  Note that this is inverted from what we have in a previous patch: here we are looking for the inverted overflow bit. And that inversion is kinda problematic - given this particular pattern, we neither hoist that `not` closer to `ret` (then the pattern would have been identical to the one without inversion, and would have been handled by the previous patch), nor do the opposite transform. But regardless, we should handle this too. I've filed PR42720 (https://bugs.llvm.org/show_bug.cgi?id=42720).

  Reviewers: nikic, spatel, xbolva00, RKSimon
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65151

  llvm-svn: 370351

* [InstSimplify] Drop leftover "division-by-zero guard" around `@llvm.umul.with.overflow` overflow bit (Roman Lebedev, 2019-08-29, 1 file, -8/+4)
  Summary:
  Now that with D65143/D65144 we've produced `@llvm.umul.with.overflow`, and with D65147 we've flattened the CFG, we can now see that the guard, which may have been there to prevent division by zero, is redundant. We can simply drop it:
  ```
  ----------------------------------------
  Name: no overflow and not zero
  %iszero = icmp ne i4 %y, 0
  %umul = umul_overflow i4 %x, %y
  %umul.ov = extractvalue {i4, i1} %umul, 1
  %retval.0 = and i1 %iszero, %umul.ov
  ret i1 %retval.0
  =>
  %iszero = icmp ne i4 %y, 0
  %umul = umul_overflow i4 %x, %y
  %umul.ov = extractvalue {i4, i1} %umul, 1
  %retval.0 = and i1 %iszero, %umul.ov
  ret %umul.ov

  Done: 1
  Optimization is correct!
  ```
  Reviewers: nikic, spatel, xbolva00
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65150

  llvm-svn: 370350

* [SimplifyCFG] FoldTwoEntryPHINode(): don't bail out on i1 PHIs if we can hoist a 'not' from incoming values (Roman Lebedev, 2019-08-29, 1 file, -12/+48)
  Summary:
  As can be seen in the tests in D65143/D65144, even though we have formed an '@llvm.umul.with.overflow' and got rid of the potential for division-by-zero, the control flow remains; we still have that branch.

  We have this condition:
  ```
  // Don't fold i1 branches on PHIs which contain binary operators
  // These can often be turned into switches and other things.
  if (PN->getType()->isIntegerTy(1) &&
      (isa<BinaryOperator>(PN->getIncomingValue(0)) ||
       isa<BinaryOperator>(PN->getIncomingValue(1)) ||
       isa<BinaryOperator>(IfCond)))
    return false;
  ```
  which was added back in rL121764 to help with `select` formation, I think.

  That check prevents us from flattening the CFG here, even though we know we no longer need that guard and will be able to drop everything but the '@llvm.umul.with.overflow' + `not`.

  As can be seen from the tests, we end up here because the `not` is being sunk into the PHI's incoming values by InstCombine, so we can't work around this by hoisting it to after the PHI.

  Thus I suggest that we relax that check to not bail out if we'd get to hoist the `not`.

  Reviewers: craig.topper, spatel, fhahn, nikic
  Reviewed By: spatel
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65147

  llvm-svn: 370349

* [InstCombine] Fold '(-1 u/ %x) u< %y' to '@llvm.umul.with.overflow' + overflow bit extraction (Roman Lebedev, 2019-08-29, 1 file, -14/+59)
  Summary:
  `(-1 u/ %x) u< %y` is one of (3?) common ways to check that some unsigned multiplication (will not) overflow. Currently, we don't catch it. We could:
  ```
  ----------------------------------------
  Name: no overflow
  %o0 = udiv i4 -1, %x
  %r = icmp ult i4 %o0, %y
  =>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %r = extractvalue {i4, i1} %n0, 1

  Done: 1
  Optimization is correct!

  ----------------------------------------
  Name: no overflow, swapped
  %o0 = udiv i4 -1, %x
  %r = icmp ugt i4 %y, %o0
  =>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %r = extractvalue {i4, i1} %n0, 1

  Done: 1
  Optimization is correct!

  ----------------------------------------
  Name: overflow
  %o0 = udiv i4 -1, %x
  %r = icmp uge i4 %o0, %y
  =>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %n1 = extractvalue {i4, i1} %n0, 1
  %r = xor %n1, -1

  Done: 1
  Optimization is correct!

  ----------------------------------------
  Name: overflow
  %o0 = udiv i4 -1, %x
  %r = icmp ule i4 %y, %o0
  =>
  %o0 = udiv i4 -1, %x
  %n0 = umul_overflow i4 %x, %y
  %n1 = extractvalue {i4, i1} %n0, 1
  %r = xor %n1, -1

  Done: 1
  Optimization is correct!
  ```
  As can be observed from the tests, while simply forming the `@llvm.umul.with.overflow` is easy, if we were looking for the inverted answer, then more work needs to be done to clean up the now-pointless control flow that was guarding against division-by-zero. This is being addressed in follow-up patches.

  Reviewers: nikic, spatel, efriedma, xbolva00, RKSimon
  Reviewed By: nikic, xbolva00
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D65143

  llvm-svn: 370347

* Add PhaseOrdering/lifetime-sanitizer.ll tests (Vitaly Buka, 2019-08-27, 1 file, -0/+71)
  Reviewers: lebedev.ri
  Subscribers: llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66761

  llvm-svn: 369996

* [NFC][PhaseOrdering][SimplifyCFG] Add more runlines to umul.with.overflow tests (Roman Lebedev, 2019-07-23, 1 file, -1/+4)
  This way it will be more obvious that the problem is both in the cost threshold and in the hardcoded benefit check, plus it will show how instsimplify cleans this all up in the end.

  llvm-svn: 366800

* [NFC][PhaseOrdering] Add tests showcasing the problems of unsigned multiply overflow check (Roman Lebedev, 2019-07-22, 1 file, -0/+85)
  While we can form the @llvm.mul.with.overflow easily, we are still left with the check that was guarding against div-by-0. And in the second case we won't even flatten the CFG.

  llvm-svn: 366747

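  A hedged sketch (not the committed test) of the first problem: the overflow check still wrapped in its division-by-zero guard. Forming @llvm.umul.with.overflow removes the udiv, but the branch and phi initially survive:

  ```llvm
  define i1 @will_overflow(i32 %x, i32 %y) {
  entry:
    %nonzero = icmp ne i32 %x, 0
    br i1 %nonzero, label %guarded, label %end
  guarded:
    ; if x != 0, then x*y overflows iff y > (-1 /u x)
    %div = udiv i32 -1, %x
    %ov = icmp ult i32 %div, %y
    br label %end
  end:
    %r = phi i1 [ false, %entry ], [ %ov, %guarded ]
    ret i1 %r
  }
  ```
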
* Revert "r364412 [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into ↵Clement Courbet2019-06-268-1446/+0
| | | | | | | | | | | | | | opt pipeline." Breaks sanitizers: libFuzzer :: cxxstring.test libFuzzer :: memcmp.test libFuzzer :: recommended-dictionary.test libFuzzer :: strcmp.test libFuzzer :: value-profile-mem.test libFuzzer :: value-profile-strcmp.test llvm-svn: 364416
* [ExpandMemCmp][MergeICmps] Move passes out of CodeGen into opt pipeline. (Clement Courbet, 2019-06-26, 8 files, -0/+1446)
  This allows later passes (in particular InstCombine) to optimize more cases. Cases that are important to us are `memcmp(p, q, constant) < 0` and `memcmp(p, q, constant) > 0`.

  llvm-svn: 364412

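  A hedged sketch of the kind of input this enables (hypothetical IR): with the expansion running in the middle-end, a small constant-size memcmp can become wide integer loads and compares that InstCombine can then fold together with the surrounding sign test.

  ```llvm
  declare i32 @memcmp(i8*, i8*, i64)

  define i1 @lt4(i8* %p, i8* %q) {
    ; memcmp(p, q, 4) < 0 - expandable to a pair of 32-bit loads plus
    ; compares once ExpandMemCmp runs before InstCombine
    %res = call i32 @memcmp(i8* %p, i8* %q, i64 4)
    %lt = icmp slt i32 %res, 0
    ret i1 %lt
  }
  ```
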
* [Pass Pipeline][NFC] Add a test prior to committing D61726 (Nemanja Ivanovic, 2019-05-13, 1 file, -0/+155)
  This patch just adds a test case to show the differences in code emitted by opt before and after https://reviews.llvm.org/D61726.

  A previous attempt to commit this did not include the registered-target requirement, so it caused buildbot breaks.

  llvm-svn: 360620

* Pull r360426 as it is breaking the build bots. (Nemanja Ivanovic, 2019-05-10, 1 file, -153/+0)
  llvm-svn: 360437

* Another attempt to fix the build bot breaks after r360426 (Nemanja Ivanovic, 2019-05-10, 1 file, -3/+2)
  The test case checks were produced by the update_test_checks.py script, and I assumed that was sufficient. However, the behaviour is different with different default target triples. Specify the triple explicitly in the test case.

  If this doesn't clean up the build bot breaks, I'll remove the test case until I can get to the bottom of why the behaviour on build bots is different from my machine.

  llvm-svn: 360434

* Fix build break after r360426 (Nemanja Ivanovic, 2019-05-10, 1 file, -0/+2)
  llvm-svn: 360433

* [Pass Pipeline][NFC] Add a test prior to committing D61726 (Nemanja Ivanovic, 2019-05-10, 1 file, -0/+152)
  This patch just adds a test case to show the differences in code emitted by opt before and after https://reviews.llvm.org/D61726.

  llvm-svn: 360426

* Revert "Temporarily Revert "Add basic loop fusion pass.""Eric Christopher2019-04-1710-0/+881
| | | | | | | | The reversion apparently deleted the test/Transforms directory. Will be re-reverting again. llvm-svn: 358552
* Temporarily Revert "Add basic loop fusion pass."Eric Christopher2019-04-1710-881/+0
| | | | | | | | As it's causing some bot failures (and per request from kbarton). This reverts commit r358543/ab70da07286e618016e78247e4a24fcb84077fda. llvm-svn: 358546
* [InstCombine] canonicalize raw IR rotate patterns to funnel shift (Sanjay Patel, 2019-01-01, 1 file, -6/+4)
  The final piece of IR-level analysis to allow this was committed with rL350188.

  Using the intrinsics should improve transforms based on cost models like vectorization and inlining. The backend should be prepared too, so we can now canonicalize more sequences of shift/logic to the intrinsics and know that the end result should be equal or better to the original code even if the target does not have an actual rotate instruction.

  llvm-svn: 350199

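  For illustration, a hedged sketch of the canonicalization: the masked shift/or spelling of rotate-left, and the funnel-shift form it becomes (when both data operands of fshl are the same value, it is a rotate).

  ```llvm
  define i32 @rotl_raw(i32 %x, i32 %n) {
    ; UB-free rotate-left written with shifts and masks
    %lowbits = and i32 %n, 31
    %shl = shl i32 %x, %lowbits
    %neg = sub i32 0, %n
    %negbits = and i32 %neg, 31
    %shr = lshr i32 %x, %negbits
    %rot = or i32 %shl, %shr
    ret i32 %rot
  }

  define i32 @rotl_canonical(i32 %x, i32 %n) {
    ; the canonical intrinsic form
    %rot = call i32 @llvm.fshl.i32(i32 %x, i32 %x, i32 %n)
    ret i32 %rot
  }

  declare i32 @llvm.fshl.i32(i32, i32, i32)
  ```
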
* [AggressiveInstCombine] convert rotate with guard branch into funnel shift (PR34924) (Sanjay Patel, 2018-12-17, 1 file, -11/+2)
  Now that we have funnel shift intrinsics, it should be safe to convert this form of rotate to them. In the worst case (a target that doesn't have rotate instructions), we will expand this into a branch-less sequence of ALU ops (neg/and/and/lshr/shl/or) in the backend, so it's still very likely to be a perf improvement over the original code.

  The motivating source code pattern for this is shown in https://bugs.llvm.org/show_bug.cgi?id=34924

  Background: I looked at several different options before deciding where to try this - instcombine, simplifycfg, CGP - because it doesn't fit cleanly anywhere AFAIK. The backend (CGP, SDAG, GlobalISel?) is too late for what we're trying to accomplish. We want to have the IR converted before we reach things like vectorization, because the reduced code can make a loop much simpler to transform.

  Technically, this could be included in instcombine, but it's a large pattern match that includes control flow, so it just felt wrong to stuff it in there (although I have a draft of that patch). Similarly, this could be part of simplifycfg, but all of this pattern matching is a stretch.

  So we're left with our relatively new dumping ground for homeless transforms: aggressive-instcombine. This only runs at -O3, but that seems like a reasonable limitation given that source code has many options to avoid this pattern (including the recently added clang intrinsics for rotates).

  I'm including a PhaseOrdering test because we require the teamwork of 3 passes (aggressive-instcombine, instcombine, simplifycfg) to get this into the minimal IR form that we want. That test shows a bug with the new pass manager that's independent of this change (but it will be masked if we canonicalize harder to funnel shift intrinsics in instcombine).

  Differential Revision: https://reviews.llvm.org/D55604

  llvm-svn: 349396

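  A hedged sketch of the guarded shape being matched - the C idiom `n ? (x << n) | (x >> (32 - n)) : x`, where the branch exists only to avoid shifting by the full bit width:

  ```llvm
  define i32 @rotl_guarded(i32 %x, i32 %n) {
  entry:
    %iszero = icmp eq i32 %n, 0
    br i1 %iszero, label %end, label %rotbb
  rotbb:
    %sub = sub i32 32, %n
    %shr = lshr i32 %x, %sub
    %shl = shl i32 %x, %n
    %rot = or i32 %shl, %shr
    br label %end
  end:
    ; the branch and phi collapse into a single @llvm.fshl call
    %r = phi i32 [ %x, %entry ], [ %rot, %rotbb ]
    ret i32 %r
  }
  ```
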
* [PhaseOrdering] add test for funnel shift (rotate); NFC (Sanjay Patel, 2018-12-12, 1 file, -0/+49)
  As mentioned in D55604, there are 2 bugs here:
  1. The new pass manager is speculating wildly by default.
  2. The old pass manager is not converting this to funnel shift.

  llvm-svn: 348980

* [PhaseOrdering] remove stale comments; NFC (Sanjay Patel, 2018-05-09, 1 file, -4/+0)
  Forgot to update this with rL331937.

  llvm-svn: 331939

* [AggressiveInstCombine] convert a chain of 'and-shift' bits into masked compare (Sanjay Patel, 2018-05-09, 1 file, -17/+8)
  This is a follow-up to D45986. As suggested there, we should match the "all-bits-set" pattern in addition to "any-bits-set".

  This was a little more complicated than I initially thought it would be, because the "and 1" instruction can be anywhere in the chain. Hopefully, the code comments make that logic understandable, but if you see a way to simplify or improve that, it's most appreciated.

  This transforms patterns that emerge from bitfield tests, as seen in PR37098: https://bugs.llvm.org/show_bug.cgi?id=37098

  I think it would also help reduce the large test from D46336 / D46595, but we need something to reassociate that case to the forms we're expecting here first.

  Differential Revision: https://reviews.llvm.org/D46649

  llvm-svn: 331937

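  A hedged sketch of the "all-bits-set" shape with hypothetical bit positions 1 and 3 (so C' = (1<<1)|(1<<3) = 10):

  ```llvm
  define i1 @both_bits_set(i32 %x) {
    ; ((x >> 1) & (x >> 3)) & 1 asks whether bits 1 and 3 are both set,
    ; which folds to: (x & 10) == 10
    %s1 = lshr i32 %x, 1
    %s3 = lshr i32 %x, 3
    %a = and i32 %s1, %s3
    %bit = and i32 %a, 1
    %r = icmp ne i32 %bit, 0
    ret i1 %r
  }
  ```
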
* [AggressiveInstCombine] convert a chain of 'or-shift' bits into masked compare (Sanjay Patel, 2018-05-01, 1 file, -17/+8)
  and (or (lshr X, C), ...), 1 --> (X & C') != 0

  I initially thought about implementing the minimal pattern in instcombine, as mentioned here: https://bugs.llvm.org/show_bug.cgi?id=37098#c6

  ...but we need to do better to catch the more general sequence from the motivating test (more than 2 bits in the compare). And a test-suite run with statistics showed that this pattern only happened 2 times currently. It would potentially happen more often if reassociation worked better (D45842), but it's probably still not too frequent.

  This is small enough that I didn't see a need to create a whole new class/file within AggressiveInstCombine. There are likely other relatively small matchers, like what was discussed in D44266, that would slide under foldUnusualPatterns() (name suggestions welcome). We could potentially also consolidate matchers for ctpop, bswap, etc. under here.

  Differential Revision: https://reviews.llvm.org/D45986

  llvm-svn: 331311

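  The corresponding "any-bits-set" shape, as a hedged sketch with the same hypothetical bit positions:

  ```llvm
  define i1 @either_bit_set(i32 %x) {
    ; ((x >> 1) | (x >> 3)) & 1 asks whether bit 1 or bit 3 is set,
    ; which folds to: (x & 10) != 0
    %s1 = lshr i32 %x, 1
    %s3 = lshr i32 %x, 3
    %o = or i32 %s1, %s3
    %bit = and i32 %o, 1
    %r = icmp ne i32 %bit, 0
    ret i1 %r
  }
  ```
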
* [PhaseOrdering] add tests for bittest patterns from bitfields; NFC (Sanjay Patel, 2018-05-01, 1 file, -0/+152)
  As mentioned in D45986, there's a potential ordering dependency between instcombine and aggressive-instcombine for detecting these, so I'm adding a few tests to confirm that the expected folds occur using -O3 (because aggressive-instcombine only runs at -O3 currently).

  llvm-svn: 331308

* Adding a width of the GEP index to the Data Layout. (Elena Demikhovsky, 2018-02-14, 1 file, -0/+67)
  Making the width of the GEP index, which is used for address calculation, one of the pointer properties in the Data Layout:

    p[address space]:size:memory_size:alignment:pref_alignment:index_size_in_bits

  The index size parameter is optional; if not specified, it is equal to the pointer size.

  Till now, the InstCombiner normalized GEPs and extended the index operand to the pointer width. That works fine if you can convert the pointer to an integer for address calculation, and all registered targets do this. But some ISAs have a very restricted instruction set for pointer calculation. During discussions it was decided to retrieve information for the GEP index from the Data Layout: http://lists.llvm.org/pipermail/llvm-dev/2018-January/120416.html

  I added an interface to the Data Layout, and I changed the InstCombiner and some other passes to take the index width into account. This change does not affect any in-tree target. I added tests to cover data layouts with explicitly specified index size.

  Differential Revision: https://reviews.llvm.org/D42123

  llvm-svn: 325102

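  A hedged concrete reading of that spec for a hypothetical target (assuming the in-tree p:<size>:<abi align>:<pref align>:<index width> form, all in bits):

  ```llvm
  ; 64-bit pointers with 64-bit alignment, but GEP index
  ; arithmetic performed at only 32 bits
  target datalayout = "p:64:64:64:32"
  ```
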
* [SimplifyCFG] don't sink common insts too soon (PR34603) (Sanjay Patel, 2017-12-14, 1 file, -4/+2)
  This should solve https://bugs.llvm.org/show_bug.cgi?id=34603 by preventing SimplifyCFG from altering redundant instructions before early-cse has a chance to run. It changes the default (canonical-forming) behavior of SimplifyCFG, so we're only doing the sinking transform later in the optimization pipeline.

  Differential Revision: https://reviews.llvm.org/D38566

  llvm-svn: 320749

* [PassManager, SimplifyCFG] add test for PR34603 / D38566; NFC (Sanjay Patel, 2017-11-15, 1 file, -0/+40)
  This is a recommit of r316908, which was reverted by r317444.

  llvm-svn: 318300

* [(new) Pass Manager] instantiate SimplifyCFG with the same options as the old PM (Sanjay Patel, 2017-11-15, 1 file, -54/+25)
  This is a recommit of r316869, which was speculatively reverted with r317444 and subsequently shown not to be the cause of PR35210. That crash should be fixed after r318237.

  Original commit message:
  The old PM sets the options of what used to be known as "latesimplifycfg" on the instantiation after the vectorizers have run, so that's what we're doing here. FWIW, there's a later SimplifyCFGPass instantiation in both PMs where we do not set the "late" options. I'm not sure if that's intentional or not.

  Differential Revision: https://reviews.llvm.org/D39407

  llvm-svn: 318299

* [PassManager, SimplifyCFG] Revert r316908 and r316869. (David L. Jones, 2017-11-06, 1 file, -66/+55)
  These cause Clang to crash with a segfault. See PR35210 for details.

  llvm-svn: 317444

* [PassManager, SimplifyCFG] add test for PR34603 / D38566; NFC (Sanjay Patel, 2017-10-30, 1 file, -1/+41)
  Sinking common insts and converting to select early can inhibit better folds in other passes.

  llvm-svn: 316908

* [(new) Pass Manager] instantiate SimplifyCFG with the same options as the old PM (Sanjay Patel, 2017-10-29, 1 file, -54/+25)
  The old PM sets the options of what used to be known as "latesimplifycfg" on the instantiation after the vectorizers have run, so that's what we're doing here. FWIW, there's a later SimplifyCFGPass instantiation in both PMs where we do not set the "late" options. I'm not sure if that's intentional or not.

  Differential Revision: https://reviews.llvm.org/D39407

  llvm-svn: 316869

* [PassManager] add test to show the new PM uses -latesimplifycfg early; NFC (Sanjay Patel, 2017-10-23, 1 file, -0/+95)
  llvm-svn: 316351

* Make globalaa-retained.ll test catch more cases. (Nikolai Bozhenov, 2017-04-18, 1 file, -3/+43)
  Summary:
  * Add checks for store. That is needed because GlobalsAA is called twice in the current pipeline, with different sets of Function passes following it. However, the loads are eliminated using instcombine, which happens everywhere. On the other hand, DeadStoreElimination is performed only once, so by checking for store we'll be able to catch more cases when GlobalsAA is invalidated unintentionally.
  * Add an empty function above/below the test so that we don't depend on the relative order of instcombine/dead-store-elimination and the pass that invalidates the analysis (inside the same FunctionPassManager).

  Reviewers: kristof.beyls
  Reviewed By: kristof.beyls
  Subscribers: llvm-commits, n.bozhenov
  Differential Revision: https://reviews.llvm.org/D32015
  Patch by Andrei Elovikov <andrei.elovikov@intel.com>

  llvm-svn: 300553

* Mark that SpeculativeExecution preserves Globals Alias Analysis. (Kristof Beyls, 2016-05-03, 1 file, -0/+26)
  A few benchmarks with lots of accesses to global variables in hot loops regressed a lot since r266399, which added the SpeculativeExecution pass to the default pipeline. The problem is that this pass doesn't mark Globals Alias Analysis as preserved.

  Globals Alias Analysis is computed in a module pass, whereas SpeculativeExecution is a function pass, and a lot of the passes that depend on Globals Alias Analysis to optimize these benchmarks are also function passes. As such, the Globals Alias Analysis information cannot be recomputed between SpeculativeExecution and the following function passes needing that information.

  SpeculativeExecution doesn't invalidate Globals Alias Analysis, so mark it as such to fix those performance regressions.

  Differential Revision: http://reviews.llvm.org/D19806

  llvm-svn: 268370

* Move the personality function from LandingPadInst to Function (David Majnemer, 2015-06-17, 1 file, -2/+2)
  The personality routine currently lives in the LandingPadInst.

  This isn't desirable because:
  - All LandingPadInsts in the same function must have the same personality routine. This means that each LandingPadInst beyond the first has an operand which produces no additional information.
  - There is ongoing work to introduce EH IR constructs other than LandingPadInst. Moving the personality routine off of any one particular Instruction and onto the parent function seems a lot better than having N different places a personality function can sneak onto an exceptional function.

  Differential Revision: http://reviews.llvm.org/D10429

  llvm-svn: 239940

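  A hedged sketch of the resulting IR shape (typed-pointer syntax of that era): the personality becomes part of the function definition, and the landingpad no longer names it.

  ```llvm
  declare i32 @__gxx_personality_v0(...)
  declare void @may_throw()

  define void @f() personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) {
  entry:
    invoke void @may_throw()
            to label %cont unwind label %lpad
  cont:
    ret void
  lpad:
    ; the personality is no longer an operand here; it moved to @f
    %lp = landingpad { i8*, i32 }
            cleanup
    resume { i8*, i32 } %lp
  }
  ```
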
* [opaque pointer type] Add textual IR support for explicit type parameter to the call instruction (David Blaikie, 2015-04-16, 1 file, -2/+2)
  See r230786 and r230794 for similar changes to gep and load respectively.

  Call is a bit different because it often doesn't have a single explicit type - usually the type is deduced from the arguments, and just the return type is explicit. In those cases there's no need to change the IR.

  When that's not the case, the IR usually contains the pointer type of the first operand - but since typed pointers are going away, that representation is insufficient, so I'm just stripping the "pointerness" of the explicit type away.

  This does make the IR a bit weird - it /sort of/ reads like the type of the first operand: "call void () %x(" but %x is actually of type "void ()*" and will eventually be just of type "ptr". But this seems not too bad, and I don't think it would benefit from repeating the type ("void (), void () * %x(" and then eventually "void (), ptr %x(") as has been done with gep and load.

  This also has a side benefit: since the explicit type is no longer a pointer, there's no ambiguity between an explicit type and a function that returns a function pointer. Previously this case needed an explicit type (eg: a function returning a void() function was written as "call void () () * @x(" rather than "call void () * @x(" because of the ambiguity between a function returning a pointer to a void() function and a function returning void). No ambiguity means even function pointer return types can just be written alone, without writing the whole function's type. This leaves /only/ the varargs case where the explicit type is required.

  Given the special type syntax in call instructions, the regex-fu used for migration was a bit more involved in its own unique way (as every one of these is), so here it is. Use it in conjunction with the apply.sh script and associated find/xargs commands I've provided in r230786 to migrate your out-of-tree tests. Do let me know if any of this doesn't cover your cases & we can iterate on a more general script/regexes to help others with out-of-tree tests.

  About 9 test cases couldn't be automatically migrated - half of those were functions returning function pointers, where I just had to manually delete the function argument types now that we didn't need an explicit function type there. The other half were typedefs of function types used in calls - just had to manually drop the * from those.

    import fileinput
    import sys
    import re

    pat = re.compile(r'((?:=|:|^|\s)call\s(?:[^@]*?))(\s*$|\s*(?:(?:\[\[[a-zA-Z0-9_]+\]\]|[@%](?:(")?[\\\?@a-zA-Z0-9_.]*?(?(3)"|)|{{.*}}))(?:\(|$)|undef|inttoptr|bitcast|null|asm).*$)')
    addrspace_end = re.compile(r"addrspace\(\d+\)\s*\*$")
    func_end = re.compile("(?:void.*|\)\s*)\*$")

    def conv(match, line):
      if not match or re.search(addrspace_end, match.group(1)) or not re.search(func_end, match.group(1)):
        return line
      return line[:match.start()] + match.group(1)[:match.group(1).rfind('*')].rstrip() + match.group(2) + line[match.end():]

    for line in sys.stdin:
      sys.stdout.write(conv(re.search(pat, line), line))

  llvm-svn: 235145

* [opaque pointer type] Add textual IR support for explicit type parameter to the gep operator (David Blaikie, 2015-03-13, 2 files, -3/+3)
  Similar to the gep (r230786) and load (r230794) changes.

  A similar migration script can be used to update test cases; it successfully migrated all of LLVM and Polly, but about 4 test cases needed manual changes in Clang.

  (This script will read the contents of stdin and massage it into stdout - wrap it in the 'apply.sh' script shown in previous commits + xargs to apply it over a large set of test cases.)

    import fileinput
    import sys
    import re

    rep = re.compile(r"(getelementptr(?:\s+inbounds)?\s*\()((<\d*\s+x\s+)?([^@]*?)(|\s*addrspace\(\d+\))\s*\*(?(3)>)\s*)(?=$|%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|zeroinitializer|<|\[\[[a-zA-Z]|\{\{)", re.MULTILINE | re.DOTALL)

    def conv(match):
      line = match.group(1)
      line += match.group(4)
      line += ", "
      line += match.group(2)
      return line

    line = sys.stdin.read()
    off = 0
    for match in re.finditer(rep, line):
      sys.stdout.write(line[off:match.start()])
      sys.stdout.write(conv(match))
      off = match.end()
    sys.stdout.write(line[off:])

  llvm-svn: 232184

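  For illustration, a hedged before/after sketch of the migration on a constant gep expression over a hypothetical global @g:

  ```llvm
  @g = global [4 x i32] zeroinitializer

  ; before: the pointee type was implied by the pointer operand
  ;   @p = global i32* getelementptr ([4 x i32]* @g, i64 0, i64 1)
  ; after: the element type is spelled explicitly as the first parameter
  @p = global i32* getelementptr ([4 x i32], [4 x i32]* @g, i64 0, i64 1)
  ```
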