path: root/llvm/test/Transforms
Commit history (each entry: commit message; author, date; files changed, lines changed)
* [LV] Sink casts to unravel first order recurrence (Ayal Zaks, 2017-06-30; 1 file, -0/+80)
  Check if a single cast is preventing handling a first-order-recurrence Phi, because the
  scheduling constraints it imposes on the first-order-recurrence shuffle are infeasible; but
  they can be made feasible by moving the cast downwards. Record such casts and move them when
  vectorizing the loop.
  Differential Revision: https://reviews.llvm.org/D33058
  llvm-svn: 306884
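  For illustration only, a minimal sketch (not taken from the commit's tests; names and types
  are hypothetical) of the kind of loop this targets: the previous iteration's value is live
  across the back-edge and reaches its use only through a cast, which the patch records and
  sinks.

    define void @recur(i16* %a, i32* %b, i32 %n) {
    entry:
      br label %loop

    loop:
      %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
      %rec = phi i16 [ 0, %entry ], [ %cur, %loop ]
      %rec.ext = sext i16 %rec to i32            ; cast of the recurrence's previous value
      %a.gep = getelementptr inbounds i16, i16* %a, i32 %i
      %cur = load i16, i16* %a.gep
      %b.gep = getelementptr inbounds i32, i32* %b, i32 %i
      store i32 %rec.ext, i32* %b.gep
      %i.next = add nuw nsw i32 %i, 1
      %cond = icmp eq i32 %i.next, %n
      br i1 %cond, label %exit, label %loop

    exit:
      ret void
    }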
* Fix ODR violations due to abuse of LLVM_YAML_IS_(FLOW_)?SEQUENCE_VECTOR (Richard Smith, 2017-06-30; 1 file, -2/+7)
  This is a short-term fix for PR33650 aimed to get the modules build bots green again.
  Remove all the places where we use the LLVM_YAML_IS_(FLOW_)?SEQUENCE_VECTOR macros to try to
  locally specialize a global template for a global type. That's not how C++ works.
  Instead, we now centrally define how to format vectors of fundamental types and of string
  (std::string and StringRef). We use flow formatting for the former cases, since that's the
  obvious right thing to do; in the latter case, it's less clear what the right choice is, but
  flow formatting is really bad for some cases (due to very long strings), so we pick block
  formatting. (Many of the cases that were using flow formatting for strings are improved by
  this change.)
  Other than the flow -> block formatting change for some vectors of strings, this should
  result in no functionality change.
  Differential Revision: https://reviews.llvm.org/D34907
  Corresponding updates to clang, clang-tools-extra, and lld to follow.
  llvm-svn: 306878
* [Hexagon] Guard the generation of lookup table (Sumanth Gundapaneni, 2017-06-30; 2 files, -0/+67)
  The llvm flag "-hexagon-emit-lookup-tables" guards the generation of lookup tables from a
  switch statement.
  Differential Revision: https://reviews.llvm.org/D34819
  llvm-svn: 306877
* [SimplifyCFG] Update the name of switch-generated lookup table (Sumanth Gundapaneni, 2017-06-30; 3 files, -18/+18)
  This patch appends the name of the function to the switch-generated lookup table, which
  eases visual debugging by identifying the function the table was generated from.
  Differential Revision: https://reviews.llvm.org/D34817
  llvm-svn: 306867
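  A minimal, illustrative-only sketch of the effect on the emitted symbol (the function name
  @foo and table contents are hypothetical):

    ; before: a generic name shared by all switch lookup tables
    @switch.table = private unnamed_addr constant [4 x i32] [i32 10, i32 20, i32 30, i32 40]
    ; after: the originating function's name is appended
    @switch.table.foo = private unnamed_addr constant [4 x i32] [i32 10, i32 20, i32 30, i32 40]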
* [RuntimeUnrolling] Add logic for loops with multiple exit blocks (Anna Thomas, 2017-06-30; 1 file, -0/+279)
  Summary: Runtime unrolling is done for loops with a single exit block and a single exiting
  block (and this exiting block should be the latch block). This patch adds logic to support
  unrolling in the presence of multiple exit blocks (which also means multiple exiting
  blocks).
  Currently this is under an off-by-default option and is supported when epilog code is
  generated. Support in presence of prolog code will be in a future patch (we just need to add
  more tests, and update comments).
  This patch is essentially an implementation patch. I have not added any heuristic (in terms
  of branches added or code size) to decide when this should be enabled.
  Reviewers: mkuper, sanjoy, reames, evstupac
  Reviewed by: reames
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33001
  llvm-svn: 306846
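  For illustration only (not from the patch; names are hypothetical), a sketch of the loop
  shape the new support addresses: two exiting blocks and therefore two exit blocks, with the
  latch as one of the exiting blocks.

    define i32 @multi_exit(i32* %p, i32 %n) {
    entry:
      br label %header

    header:                                         ; exiting block #1
      %i = phi i32 [ 0, %entry ], [ %i.next, %latch ]
      %g = getelementptr inbounds i32, i32* %p, i32 %i
      %v = load i32, i32* %g
      %early = icmp eq i32 %v, 0
      br i1 %early, label %exit.early, label %latch

    latch:                                          ; exiting block #2 (the latch)
      %i.next = add nuw nsw i32 %i, 1
      %cond = icmp ult i32 %i.next, %n
      br i1 %cond, label %header, label %exit.normal

    exit.early:
      ret i32 -1

    exit.normal:
      ret i32 0
    }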
* [SLP] A test for limiting vectorization of instructions, NFC (Alexey Bataev, 2017-06-30; 1 file, -0/+70)
  llvm-svn: 306828
* Revert of r306525: "Canonicalize clamp of float types to minmax" (Nikolai Bozhenov, 2017-06-30; 1 file, -28/+27)
  llvm-svn: 306815
* fix trivial typos, NFC (Hiroshi Inoue, 2017-06-30; 1 file, -2/+2)
  llvm-svn: 306808
* [LV] Optimize for size when vectorizing loops with tiny trip count (Ayal Zaks, 2017-06-30; 2 files, -5/+32)
  It may be detrimental to vectorize loops with very small trip count, as various costs of the
  vectorized loop body as well as enclosing overheads including runtime tests and scalar
  iterations may outweigh the gains of vectorizing. The current cost model measures the cost
  of the vectorized loop body only, expecting it will amortize other costs, and loops with
  known or expected very small trip counts are not vectorized at all.
  This patch allows loops with very small trip counts to be vectorized, but under OptForSize
  constraints, which ensure the cost of the loop body is dominant, having no runtime guards
  nor scalar iterations.
  Patch inspired by D32451.
  Differential Revision: https://reviews.llvm.org/D34373
  llvm-svn: 306803
* [InstCombine] Add test cases to demonstrate failure to fold (a | b) ^ (~a | ~b) --> ~(a ^ b) and its commuted variants (Craig Topper, 2017-06-30; 2 files, -0/+152)
  llvm-svn: 306801
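  A sketch, for illustration only (names are hypothetical, not from the added tests), of the
  pattern these tests document as not yet folded:

    define i32 @or_xor_or_not(i32 %a, i32 %b) {
      %or1  = or i32 %a, %b
      %nota = xor i32 %a, -1
      %notb = xor i32 %b, -1
      %or2  = or i32 %nota, %notb
      %r    = xor i32 %or1, %or2
      ; desired canonical form once the fold lands:
      ;   %x = xor i32 %a, %b
      ;   %r = xor i32 %x, -1
      ret i32 %r
    }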
* [InstCombine] In foldXorToXor, move the commutable matcher from the LHS match to the RHS match. No meaningful change intended (Craig Topper, 2017-06-30; 2 files, -8/+8)
  There are two conditions ORed here with similar checks, and each contains two matches that
  must be true for the if to succeed. With the commutable match on the first half of the OR,
  both ifs basically have the same first part and only the second part distinguishes them.
  With this change we move the commutable match to the second half and make the first half
  unique.
  This caused some tests to change because we now produce a commuted result, but this
  shouldn't matter in practice.
  llvm-svn: 306800
* Remove the BBVectorize pass (Chandler Carruth, 2017-06-30; 31 files, -2831/+0)
  It served us well, helped kick-start much of the vectorization efforts in LLVM, etc. Its
  time has come and past.
  Back in 2014: http://lists.llvm.org/pipermail/llvm-dev/2014-November/079091.html
  Time to actually let go and move forward. =]
  I've updated the release notes both about the removal and the deprecation of the
  corresponding C API.
  llvm-svn: 306797
* Revert "r306473 - re-commit r306336: Enable vectorizer-maximize-bandwidth by default." (Daniel Jasper, 2017-06-30; 11 files, -76/+67)
  This still breaks PPC tests we have. I'll forward reproduction instructions to dehao.
  llvm-svn: 306792
* [CodeGenPrepare] Don't create inttoptr for ni ptrs (Keno Fischer, 2017-06-29; 1 file, -0/+68)
  Summary: Arguably non-integral pointers probably shouldn't show up here at all, but since
  the backend doesn't complain and this takes valid (according to the Verifier) IR and makes
  it invalid, make sure not to introduce any inttoptr instructions if we're dealing with
  non-integral pointers.
  Reviewed By: sanjoy
  Differential Revision: https://reviews.llvm.org/D33110
  llvm-svn: 306737
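  A rough, illustrative-only sketch of the setting (the datalayout string and function are
  hypothetical, not from the commit's test): address space 1 is declared non-integral, so
  CodeGenPrepare must not rewrite address computations on such pointers through
  ptrtoint/inttoptr.

    target datalayout = "e-p:64:64-p1:64:64-ni:1"

    define i8 @load_ni(i8 addrspace(1)* %p) {
      %g = getelementptr inbounds i8, i8 addrspace(1)* %p, i64 16
      %v = load i8, i8 addrspace(1)* %g
      ret i8 %v
    }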
* [AliasSetTracker] Don't drop AA MD so eagerly (Keno Fischer, 2017-06-29; 1 file, -0/+90)
  Summary: When we have patterns like
    loop:
      %la = load %ptr, !tbaa
      %lba = load %ptr, !tbaa !noalias
  AliasSetTracker would previously think that the two types of annotation for the pointer
  conflict, dropping both for the purpose of determining alias sets. That is clearly way too
  conservative, as the tbaa is still valid whether or not one of the memory accesses has
  additional AA metadata.
  We could go one step further and attempt to properly merge the AA metadata, but it's not
  clear that that would be worth it since that may introduce additional MD nodes, which may be
  undesirable since this is merely an Analysis.
  Reviewers: hfinkel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D32139
  llvm-svn: 306727
* [ConstantHoisting] Avoid hoisting constants in GEPs that index into a struct type (Leo Li, 2017-06-29; 1 file, -0/+37)
  Summary: Indices for GEPs that index into a struct type should always be constants. This
  added more checks in `collectConstantCandidates:` which make sure constants for GEP pointer
  type are not hoisted. This fixed Bug https://bugs.llvm.org/show_bug.cgi?id=33538
  Reviewers: ributzka, rnk
  Reviewed By: ributzka
  Subscribers: efriedma, llvm-commits, srhines, javed.absar, pirama
  Differential Revision: https://reviews.llvm.org/D34576
  llvm-svn: 306704
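  For illustration only, a sketch of the constraint involved (the struct and function are
  hypothetical): the i32 1 selecting the struct field below must remain a literal constant,
  so ConstantHoisting has to skip such GEP indices.

    %struct.S = type { i64, i64 }

    define i64 @field(%struct.S* %s) {
      %f = getelementptr inbounds %struct.S, %struct.S* %s, i64 0, i32 1
      %v = load i64, i64* %f
      ret i64 %v
    }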
* Restore original intent of memset instcombine test (Daniel Neilson, 2017-06-29; 1 file, -4/+13)
  Summary: The original intent of test/Transforms/InstCombine/memset.ll was to test for
  lowering of llvm.memset into stores when the size of the memset is 1, 2, 4, or 8. Sometime
  between then and now the test has stopped testing for that, but remained passing due to
  testing for the absence of llvm.memset calls rather than the presence of store instructions.
  Right now this test ends up with an empty function body because the alloca is eliminated as
  safe-to-remove, which results in the llvm.memset calls being eliminated due to their pointer
  args being undef; so it is not testing for conversion of llvm.memset into store instructions
  at all.
  This change alters the test to verify that store instructions are created, and moves the
  target of the memset to an arg of the proc to avoid it being eliminated as unused.
  Reviewers: anna, efriedma
  Reviewed By: efriedma
  Subscribers: efriedma, llvm-commits
  Differential Revision: https://reviews.llvm.org/D34642
  llvm-svn: 306681
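  A minimal sketch, for illustration only, of what the restored test intends to check (the
  intrinsic signature is shown as of this era and the names are hypothetical): a fixed 4-byte
  llvm.memset of a non-eliminable destination should become a single store.

    declare void @llvm.memset.p0i8.i32(i8* nocapture, i8, i32, i32, i1)

    define void @set4(i8* %p) {
      call void @llvm.memset.p0i8.i32(i8* %p, i8 1, i32 4, i32 1, i1 false)
      ; expected after instcombine, roughly:
      ;   %q = bitcast i8* %p to i32*
      ;   store i32 16843009, i32* %q, align 1
      ret void
    }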
* Explicitly check for presence of correct results in instcombine memmove test (Daniel Neilson, 2017-06-29; 1 file, -17/+31)
  Summary: Rather than testing for expected results, test/Transforms/InstCombine/memmove.ll is
  testing for the absence of calls to llvm.memmove. In the case of test3, the test has stopped
  testing for materialization of loads/stores, but remained passing due to testing for the
  absence of llvm.memset calls rather than the presence of load/store instructions. Right now
  this test ends up with an empty function body because the alloca is eliminated as
  safe-to-remove, which results in the llvm.memmove calls being eliminated due to a pointer
  arg being undef; so it is not testing for conversion of llvm.memmove into load/store
  instructions at all.
  Reviewers: eli.friedman, anna, efriedma
  Reviewed By: efriedma
  Subscribers: efriedma, llvm-commits
  Differential Revision: https://reviews.llvm.org/D34645
  llvm-svn: 306679
* [InstCombine] Retain TBAA when narrowing memory accesses (Keno Fischer, 2017-06-28; 1 file, -0/+45)
  Summary: As discussed on the mailing list it is legal to propagate TBAA to loads/stores
  from/to smaller regions of a larger load tagged with TBAA. Do so for
  (load->extractvalue)=>(gep->load) and similar foldings.
  Reviewed By: sanjoy
  Differential Revision: https://reviews.llvm.org/D31954
  llvm-svn: 306615
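  For illustration only, a sketch of the (load->extractvalue) => (gep->load) folding with the
  !tbaa tag retained on the narrower load (the types and metadata nodes are hypothetical):

    define i32 @narrow({ i32, i32 }* %p) {
      %agg = load { i32, i32 }, { i32, i32 }* %p, !tbaa !0
      %x = extractvalue { i32, i32 } %agg, 0
      ; roughly becomes:
      ;   %gep = getelementptr inbounds { i32, i32 }, { i32, i32 }* %p, i64 0, i32 0
      ;   %x   = load i32, i32* %gep, !tbaa !0
      ret i32 %x
    }

    !0 = !{!1, !1, i64 0}
    !1 = !{!"omnipotent char", !2, i64 0}
    !2 = !{!"Simple C/C++ TBAA"}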
* [InstCombine] add tests for icmp with bitreversed ops; NFC (Sanjay Patel, 2017-06-28; 1 file, -0/+30)
  This is similar enough to bswap that we might as well handle them together in one patch.
  llvm-svn: 306591
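  An illustrative sketch (not from the commit; names are hypothetical) of the kind of case
  being covered: comparing two bit-reversed values is equivalent to comparing the originals.

    declare i32 @llvm.bitreverse.i32(i32)

    define i1 @cmp_bitreverse(i32 %x, i32 %y) {
      %rx = call i32 @llvm.bitreverse.i32(i32 %x)
      %ry = call i32 @llvm.bitreverse.i32(i32 %y)
      %cmp = icmp eq i32 %rx, %ry   ; foldable to: icmp eq i32 %x, %y
      ret i1 %cmp
    }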
* [AArch64][Falkor] Try to avoid exhausting HW prefetcher resources when unrolling (Geoff Berry, 2017-06-28; 1 file, -0/+169)
  Reviewers: t.p.northover, mcrosier
  Subscribers: aemerson, rengolin, javed.absar, kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D34533
  llvm-svn: 306584
* [BBVectorize][X86] Regenerate simple tests (Simon Pilgrim, 2017-06-28; 4 files, -152/+240)
  llvm-svn: 306578
* [InstCombine] Remove 64-bit bit width restriction from m_ConstantInt(uint64_t*&) (Craig Topper, 2017-06-28; 1 file, -8/+3)
  I think we only need to make sure the value fits in 64 bits, not that the bit width is
  64 bits. This helps places that use this for shift amounts, since the shift amount needs to
  be the same bit width as the LHS but can't be larger than the bit width.
  Differential Revision: https://reviews.llvm.org/D34737
  llvm-svn: 306577
* [LV] Fix PR33613 - retain order of insertelement per part (Ayal Zaks, 2017-06-28; 1 file, -4/+55)
  r306381 caused PR33613 by reversing the order in which insertelements were generated per
  unroll part. This patch fixes PR33613 by retaining this order, placing each set of
  insertelements per part immediately after the last scalar being packed for this part.
  Includes a test case derived from PR33613.
  Reference: https://bugs.llvm.org/show_bug.cgi?id=33613
  Differential Revision: https://reviews.llvm.org/D34760
  llvm-svn: 306575
* [BBVectorize] Regenerate simple tests (Simon Pilgrim, 2017-06-28; 4 files, -578/+609)
  llvm-svn: 306571
* [LoopUnroll] Fix bug in computeUnrollCount causing it to not honor MaxCount (Geoff Berry, 2017-06-28; 1 file, -0/+31)
  Reviewers: sanjoy, anna, reames, apilipenko, igor-laevsky, mkuper
  Subscribers: mcrosier, llvm-commits, mzolotukhin
  Differential Revision: https://reviews.llvm.org/D34532
  llvm-svn: 306564
* [InstCombine] add tests for icmp with bswapped operands; NFC (Sanjay Patel, 2017-06-28; 1 file, -0/+30)
  llvm-svn: 306563
* Revert r306528 (Nikolai Bozhenov, 2017-06-28; 1 file, -1/+1)
  llvm-svn: 306536
* [ValueTracking] Enabling existing ValueTracking patch by default (Nikolai Bozhenov, 2017-06-28; 1 file, -1/+1)
  The original patch was an improvement to IR ValueTracking on non-negative integers. It had
  been checked in to trunk (D18777, r284022) but was disabled by default due to performance
  regressions. The performance impact has since improved, so the patch is now enabled by
  default.
  Reviewers: reames
  Differential Revision: https://reviews.llvm.org/D34101
  Patch by: Olga Chupina <olga.chupina@intel.com>
  llvm-svn: 306528
* [InstCombine] Canonicalize clamp of float types to minmax in fast mode (Nikolai Bozhenov, 2017-06-28; 1 file, -27/+28)
  Summary: This commit allows matchSelectPattern to recognize clamp of float arguments in the
  presence of FMF, the same way as already done for integers.
  This case is a little different though. With integers, once the min/max pattern is
  recognized, DAGBuilder starts selecting MIN/MAX "automatically". That is not the case for
  float, because for them only full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they
  do care about NaNs. On the other hand, some backends (e.g. X86) have only FMIN/FMAX nodes
  that do not care about NaNs and the former NAN/NUM nodes are illegal, thus selection is not
  happening. So I decided to do this kind of transformation in IR (InstCombiner) instead of
  complicating the logic in the backend.
  Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper
  Reviewed By: efriedma
  Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits
  Patch by Andrei Elovikov <andrei.elovikov@intel.com>
  Differential Revision: https://reviews.llvm.org/D33186
  llvm-svn: 306525
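  For illustration only, a rough sketch of a float clamp (to the range [1.0, 255.0]; values
  and names are hypothetical) carrying fast-math flags, the kind of pattern matchSelectPattern
  can now treat as min/max; the exact form InstCombine produces is not shown here.

    define float @clamp(float %x) {
      %lo = fcmp fast olt float %x, 1.000000e+00
      %t  = select i1 %lo, float 1.000000e+00, float %x
      %hi = fcmp fast ogt float %t, 2.550000e+02
      %r  = select i1 %hi, float 2.550000e+02, float %t
      ret float %r
    }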
* Add tests to document current InstCombine behavior for clamp pattern (Nikolai Bozhenov, 2017-06-28; 1 file, -0/+500)
  Summary: This commit adds the tests for clamp pattern as a prerequisite of D33186 to make
  the impact of that fix more clear and also to document current behavior.
  Reviewers: spatel, jmolloy
  Reviewed By: spatel
  Subscribers: n.bozhenov, llvm-commits
  Patch by Andrei Elovikov <andrei.elovikov@intel.com>
  Differential Revision: https://reviews.llvm.org/D34350
  llvm-svn: 306524
* [InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst -> icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC (Craig Topper, 2017-06-28; 1 file, -0/+21)
  llvm-svn: 306510
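  An illustrative sketch (names and constants are hypothetical) of the shape being documented:
  here the shift is done in an i128, which the existing fold (limited to narrower shift types)
  does not handle.

    define i1 @trunc_lshr_wide(i128 %x) {
      %sh  = lshr i128 %x, 64
      %tr  = trunc i128 %sh to i8
      %cmp = icmp eq i8 %tr, 5
      ; the fold would canonicalize this to an icmp of (and %x, mask) against a constant
      ret i1 %cmp
    }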
* Revert r306508 "[InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst -> icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC" (Craig Topper, 2017-06-28; 1 file, -21/+0)
  I accidentally had an extra change in there.
  llvm-svn: 306509
* [InstCombine] Add test case demonstrating that we don't handle icmp eq (trunc (lshr(X, cst1)), cst -> icmp (and X, mask), cst when the shift type is larger than 64-bits. NFC (Craig Topper, 2017-06-28; 1 file, -0/+21)
  llvm-svn: 306508
* [CGP] add specialization for memcmp expansion with only one basic block (Sanjay Patel, 2017-06-27; 1 file, -90/+33)
  llvm-svn: 306485
* [NewPM/Inliner] Reduce threshold for cold callsites in the non-PGO case (Easwaran Raman, 2017-06-27; 2 files, -43/+90)
  Differential Revision: https://reviews.llvm.org/D34312
  llvm-svn: 306484
* [AArch64] Inline callee if its target-features are a subset of the caller (Florian Hahn, 2017-06-27; 1 file, -0/+40)
  Summary: Similar to X86, it should be safe to inline callees if their target-features are a
  subset of the caller. This change matches GCC's inlining behavior with respect to
  attributes [1].
  [1] https://gcc.gnu.org/onlinedocs/gcc/AArch64-Function-Attributes.html#AArch64-Function-Attributes
  Reviewers: kristof.beyls, javed.absar, rengolin, t.p.northover
  Reviewed By: t.p.northover
  Subscribers: aemerson, eraman, llvm-commits
  Differential Revision: https://reviews.llvm.org/D34698
  llvm-svn: 306478
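  A sketch, for illustration only (the feature strings and function names are hypothetical),
  of the subset rule: @callee's "target-features" are contained in @caller's, so the call may
  be considered inlinable.

    define void @callee() #0 {
      ret void
    }

    define void @caller() #1 {
      call void @callee()
      ret void
    }

    attributes #0 = { "target-features"="+neon" }
    attributes #1 = { "target-features"="+neon,+crc" }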
* re-commit r306336: Enable vectorizer-maximize-bandwidth by default (Dehao Chen, 2017-06-27; 11 files, -67/+76)
  Differential Revision: https://reviews.llvm.org/D33341
  llvm-svn: 306473
* [CGP] eliminate a sub instruction in memcmp expansion (Sanjay Patel, 2017-06-27; 1 file, -30/+25)
  As noted in D34071, there are some IR optimization opportunities that could be handled by
  normal IR passes if this expansion wasn't happening so late in CGP.
  Regardless of that, it seems wasteful to knowingly produce suboptimal IR here, so I'm
  proposing this change:
    %s = sub i32 %x, %y
    %r = icmp ne %s, 0
      =>
    %r = icmp ne %x, %y
  Changing the predicate to 'eq' mimics what InstCombine would do, so that's just an
  efficiency improvement if we decide this expansion should happen sooner.
  The fact that the PowerPC backend doesn't eliminate the 'subf.' might be something for PPC
  folks to investigate separately.
  Differential Revision: https://reviews.llvm.org/D34416
  llvm-svn: 306471
* Clean up a test case (Xinliang David Li, 2017-06-27; 1 file, -34/+41)
  llvm-svn: 306468
* [InstCombine] Propagate nsw flag when turning mul by pow2 into shift when the constant is a vector splat or the scalar bit width is larger than 64-bits (Craig Topper, 2017-06-27; 1 file, -5/+3)
  The check to see if we can propagate the nsw flag used m_ConstantInt(uint64_t*&), which
  doesn't work with splat vectors and has a restriction that the bit width of the ConstantInt
  must be 64 bits or less. This patch changes it to use m_APInt to remove both these issues.
  Differential Revision: https://reviews.llvm.org/D34699
  llvm-svn: 306457
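  An illustrative sketch (not taken from the patch; names are hypothetical) of the two newly
  handled cases, a splat-vector power-of-two multiplier and a wider-than-64-bit scalar one,
  where nsw is now kept on the resulting shift:

    define <2 x i32> @mul_pow2_splat(<2 x i32> %x) {
      %m = mul nsw <2 x i32> %x, <i32 8, i32 8>
      ; expected: shl nsw <2 x i32> %x, <i32 3, i32 3>
      ret <2 x i32> %m
    }

    define i128 @mul_pow2_wide(i128 %x) {
      %m = mul nsw i128 %x, 8
      ; expected: shl nsw i128 %x, 3
      ret i128 %m
    }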
* [CodeExtractor] Prevent extraction of block involving blockaddress (Serge Guelton, 2017-06-27; 2 files, -0/+79)
  A BlockAddress is only valid within its function context, which does not interact well with
  CodeExtractor. Detect this case and prevent it.
  Differential Revision: https://reviews.llvm.org/D33839
  llvm-svn: 306448
* [SROA] Fix APInt size when alloca address space is not 0 (Yaxun Liu, 2017-06-27; 1 file, -1/+30)
  SROA assumes the alloca address space is 0, which causes an assertion failure. This patch
  fixes that.
  Differential Revision: https://reviews.llvm.org/D34104
  llvm-svn: 306440
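  A rough, illustrative-only sketch of the configuration involved (the datalayout string and
  function are hypothetical, not from the commit's test): allocas live in address space 5,
  whose pointer width differs from address space 0, which previously tripped the APInt width
  mismatch in SROA.

    target datalayout = "e-p:64:64-p5:32:32-A5"

    define i32 @promote() {
      %a = alloca i32, addrspace(5)
      store i32 7, i32 addrspace(5)* %a
      %v = load i32, i32 addrspace(5)* %a
      ret i32 %v
    }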
* [InstCombine] canonicalize icmp predicate feeding select (Sanjay Patel, 2017-06-27; 8 files, -63/+68)
  This canonicalization was suggested in D33172 as a way to make InstCombine behavior more
  uniform. We have this transform for icmp+br, so unless there's some reason that icmp+select
  should be treated differently, we should do the same thing here.
  The benefit comes from increasing the chances of creating identical instructions. This is
  shown in the tests in logical-select.ll (PR32791). InstCombine doesn't fold those directly,
  but EarlyCSE can simplify the identical cmps, and then InstCombine can fold the selects
  together.
  The possible regression for the tests in select.ll raises questions about poison/undef:
  http://lists.llvm.org/pipermail/llvm-dev/2017-May/113261.html
  ...but that transform is just as likely to be triggered by this canonicalization as it is to
  be missed, so we're just pointing out a commutation deficiency in the pattern matching:
  https://reviews.llvm.org/rL228409
  Differential Revision: https://reviews.llvm.org/D34242
  llvm-svn: 306435
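  An illustrative sketch (not from the patch; names are hypothetical) of the canonicalization:
  when the icmp feeding a select uses a non-canonical predicate, the predicate is inverted and
  the select arms are swapped, mirroring what is already done for icmp+br.

    define i32 @sel(i32 %x, i32 %y, i32 %a, i32 %b) {
      %cmp = icmp sge i32 %x, %y
      %r = select i1 %cmp, i32 %a, i32 %b
      ; roughly becomes:
      ;   %cmp = icmp slt i32 %x, %y
      ;   %r   = select i1 %cmp, i32 %b, i32 %a
      ret i32 %r
    }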
* [InstCombine] Add test case demonstrating that we don't propagate nsw flag when converting mul by pow2 to shl when the type is larger than 64-bits. NFC (Craig Topper, 2017-06-27; 1 file, -0/+10)
  llvm-svn: 306427
* [InstCombine] Add test cases to show that we don't propagate 'nsw' flags when converting mul by pow2 constant to shl for splat vectors. NFC (Craig Topper, 2017-06-27; 1 file, -0/+20)
  llvm-svn: 306426
* [PatternMatch] Remove 64-bit or less restriction from m_SpecificInt (Craig Topper, 2017-06-27; 1 file, -8/+4)
  Not sure why this restriction existed, but it seems like we should support any size Constant
  here. The particular pattern in the tests is not the only use of this matcher in the tree.
  There's one in CodeGenPrepare and one in InstSimplify as well.
  Differential Revision: https://reviews.llvm.org/D34666
  llvm-svn: 306417
* [JumpThreading] Add test case that was supposed to go with r306085 (Craig Topper, 2017-06-27; 1 file, -0/+125)
  Looks like I forgot to 'git add' when I submitted the commit. Thanks to Chandler for
  noticing.
  llvm-svn: 306416
* [SROA] Fix PR32902 by more carefully propagating !nonnull metadata (Chandler Carruth, 2017-06-27; 1 file, -0/+47)
  This is based heavily on the work done in D34285. I mostly wanted to do test cleanup for the
  author to save them some time, but I had a really hard time understanding why it was so hard
  to write better test cases for these issues.
  The problem is that because SROA does a second rewrite of the loads and because we *don't*
  propagate !nonnull for non-pointer loads, we first introduced invalid !nonnull metadata and
  then stripped it back off just in time to avoid most ways of this PR manifesting. Moving to
  the more careful utility only fixes this by changing the predicate to look at the new load's
  type rather than the target type. However, that *does* fix the bug, and the utility is much
  nicer, including adding range metadata to model the nonnull property after a conversion to
  an integer.
  However, we have bigger problems because we don't actually propagate *range* metadata, and
  the utility to do this extracted from instcombine isn't really in good shape to do this
  currently. It *only* handles the case of copying range metadata from an integer load to a
  pointer load. It doesn't even handle the trivial cases of propagating from one integer load
  to another when they are the same width! This utility will need to be beefed up prior to
  using in this location to get the metadata to fully survive.
  And even then, we need to go and teach things to turn the range metadata into an assume the
  way we do with nonnull so that when we *promote* an integer we don't lose the information.
  All of this will require a new test case that looks kind-of like `preserve-nonnull.ll` does
  here but focuses on range metadata. It will also likely require more testing because it
  needs to correctly handle changes to the integer width, especially as SROA actively tries to
  change the integer width!
  Last but not least, I'm a little worried about hooking the range metadata up here because
  the instcombine logic for converting from a range metadata *to* a nonnull metadata node
  seems broken in the face of non-zero address spaces where null is not mapped to the integer
  `0`. So that probably needs to get fixed with test cases both in SROA and in instcombine to
  cover it.
  But this *does* extract the core PR fix from D34285 of preventing the !nonnull metadata from
  being propagated in a broken state just long enough to feed into promotion and crash value
  tracking.
  On D34285 there is some discussion of zero-extend handling because it isn't necessary.
  First, the new load size covers all of the non-undef (ie, possibly initialized) bits. This
  may even extend past the original alloca if loading those bits could produce valid data. The
  only way it's valid for us to zero-extend an integer load in SROA is if the original code
  had a zero extend or those bits were undef. And we get to assume things like undef *never*
  satisfies nonnull, so non-undef bits can participate here. No need to special case the
  zero-extend handling, it just falls out correctly.
  The original credit goes to Ariel Ben-Yehuda! I'm mostly landing this to save a few rounds
  of trivial edits fixing style issues and test case formulation.
  Differential Revision: D34285
  llvm-svn: 306379
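  For illustration only, a minimal sketch (names and metadata nodes are hypothetical) of why
  the propagation has to be careful: !nonnull is only meaningful on loads of pointer type, so
  when SROA rewrites the load below as an integer load it must either drop the metadata or
  model the fact another way (e.g. as !range).

    define i8* @copy_ptr(i8** %pp) {
      %p = load i8*, i8** %pp, !nonnull !0
      ret i8* %p
    }

    !0 = !{}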
* [Reassociate] Make sure EraseInst sets MadeChange (Mikael Holmen, 2017-06-27; 1 file, -0/+29)
  Summary: EraseInst didn't report that it made IR changes through MadeChange. It is essential
  that changes to the IR are reported correctly, since for example ReassociatePass::run() will
  indicate that all analyses are preserved otherwise. And the CGPassManager determines if the
  CallGraph is up-to-date based on status from InstructionCombiningPass::runOnFunction().
  Reviewers: craig.topper, rnk, davide
  Reviewed By: rnk, davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34616
  llvm-svn: 306368