path: root/llvm/test/Transforms
Commit message · Author · Age · Files · Lines changed
* Revert rL305578. There is still some buildbot failure to be fixed. (Wei Mi, 2017-06-16, 3 files, -135/+4)
  llvm-svn: 305603
* [InstCombine] Set correct insertion point for selects generated while folding phis (Anna Thomas, 2017-06-16, 1 file, -0/+29)
  Summary: When we fold vector constants that are operands of phi's that feed into select, we need to set the correct insertion point for the *new* selects that get generated. The correct insertion point is the incoming block for the phi. Such cases can occur with patch r298845, which fixed folding of vector constants, but the new selects could be inserted incorrectly (as the added test case shows).
  Reviewers: majnemer, spatel, sanjoy
  Reviewed by: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34162
  llvm-svn: 305591
* Change YAML traits for vector<string> to flow_vector. (Evgeniy Stepanov, 2017-06-16, 1 file, -7/+2)
  This is a workaround for an ODR conflict with the definition in AMDGPUCodeObjectMetadata.cpp.
  llvm-svn: 305584
* [GVN] Recommit the patch "Add phi-translate support in scalarpre". (Wei Mi, 2017-06-16, 3 files, -4/+135)
  The recommit fixes two bugs: The first one is to use CurrentBlock instead of PREInstr's Parent as the param of performScalarPREInsertion, because the Parent of a clone instruction may be uninitialized. The second one is to stop PRE when the edge from CurrentBlock to its predecessor is a backedge and an operand of CurInst is defined inside of CurrentBlock. The same value defined inside of the loop in the last iteration cannot be regarded as available.
  Right now scalarpre doesn't have phi-translate support, so it will miss some simple PRE opportunities. Like the following testcase, current scalarpre cannot recognize that the last "a * b" is fully redundant because a and b used by the last "a * b" expr are both defined by phis.

    long a[100], b[100], g1, g2, g3;
    __attribute__((pure)) long goo();

    void foo(long a, long b, long c, long d) {
      g1 = a * b;
      if (__builtin_expect(g2 > 3, 0)) {
        a = c;
        b = d;
        g2 = a * b;
      }
      g3 = a * b; // fully redundant.
    }

  The patch adds phi-translate support in scalarpre. This is only a temporary solution before the newpre based on newgvn is available.
  Differential Revision: https://reviews.llvm.org/D32252
  llvm-svn: 305578
* [InstCombine] Add test cases to show missed opportunities due to overly conservative single use checks. NFC (Craig Topper, 2017-06-16, 2 files, -0/+77)
  llvm-svn: 305562
* [Atomics] Rename and change prototype for atomic memcpy intrinsic (Daniel Neilson, 2017-06-16, 3 files, -31/+37)
  Summary: Background: http://lists.llvm.org/pipermail/llvm-dev/2017-May/112779.html
  This change is to alter the prototype for the atomic memcpy intrinsic. The prototype itself is being changed to more closely resemble the semantics and parameters of the llvm.memcpy intrinsic -- to ease later combination of the llvm.memcpy and atomic memcpy intrinsics. Furthermore, the name of the atomic memcpy intrinsic is being changed to make it clear that it is not a generic atomic memcpy, but specifically a memcpy that is unordered atomic.
  Reviewers: reames, sanjoy, efriedma
  Reviewed By: reames
  Subscribers: mzolotukhin, anna, llvm-commits, skatkov
  Differential Revision: https://reviews.llvm.org/D33240
  llvm-svn: 305558
* [InstCombine] Fold (!iszero(A & K1) & !iszero(A & K2)) -> (A & (K1 | K2)) == (K1 | K2) if K1 and K2 are a 1-bit mask (Craig Topper, 2017-06-16, 1 file, -12/+8)
  Summary: This is the demorganed version of the case we already handle for the OR of iszero.
  Reviewers: spatel
  Reviewed By: spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34244
  llvm-svn: 305548
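  At the source level this fold merges two single-bit tests into one masked compare; a minimal C sketch, with constants 0x1 and 0x4 chosen purely for illustration (not taken from the patch):

      /* Both single-bit tests must pass; the fold rewrites that as a
       * single masked compare: (a & (K1 | K2)) == (K1 | K2).          */
      int both_bits_set(unsigned a) {
        int before = (a & 0x1) != 0 && (a & 0x4) != 0; /* two ands, two cmps */
        int after  = (a & 0x5) == 0x5;                 /* one and, one cmp   */
        return before == after;                        /* always 1           */
      }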
* [cfi] CFI-ICall for ThinLTO. (Evgeniy Stepanov, 2017-06-16, 4 files, -0/+152)
  Implement ControlFlowIntegrity for indirect function calls in ThinLTO. Design follows the RFC in llvm-dev, see https://groups.google.com/d/msg/llvm-dev/MgUlaphu4Qc/kywu0AqjAQAJ
  llvm-svn: 305533
* [InstCombine] Add test cases to demonstrate instcombine increasing instruction count when trying to fold (select (icmp eq (and X, C1), 0), Y, (or Y, C2))->(or (shl (and X, C1), C3), y) when the pieces have multiple uses. (Craig Topper, 2017-06-15, 1 file, -0/+237)
  llvm-svn: 305509
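  For reference, the select-to-or/shl rewrite named in the title is valid when C1 is a single bit and C3 is chosen so that C1 << C3 equals C2. A hedged C sketch with made-up constants (C1 = 0x4, C2 = 0x8, C3 = 1), not taken from the test file:

      unsigned select_form(unsigned x, unsigned y) {
        return ((x & 0x4) == 0) ? y : (y | 0x8);
      }
      unsigned or_shl_form(unsigned x, unsigned y) {
        return ((x & 0x4) << 1) | y;  /* (x & C1) << C3 is either 0 or C2 */
      }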
* [InstCombine] Pre-commit test cases for the transform proposed in D34244. (Craig Topper, 2017-06-15, 1 file, -0/+58)
  llvm-svn: 305492
* [InstCombine] Handle (iszero(A & K1) | iszero(A & K2)) -> (A & (K1 | K2)) != (K1 | K2) when one of the Ands is commuted relative to the other (Craig Topper, 2017-06-15, 1 file, -9/+4)
  Currently we expect A to be on the same side in both Ands, but nothing guarantees that. While there, also switch to using matchers for some of the code.
  Differential Revision: https://reviews.llvm.org/D34230
  llvm-svn: 305487
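  This is the OR counterpart of the 1-bit-mask fold above; in C terms, again with illustrative constants rather than anything from the patch:

      /* "At least one of the two bits is clear" collapses to a single
       * masked compare: (a & (K1 | K2)) != (K1 | K2).                  */
      int some_bit_clear(unsigned a) {
        int before = (a & 0x1) == 0 || (a & 0x4) == 0;
        int after  = (a & 0x5) != 0x5;
        return before == after;  /* always 1 */
      }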
* [BasicAA] Add test case that goes with r305481. (Craig Topper, 2017-06-15, 1 file, -0/+53)
  Forgot to 'git add' the file.
  llvm-svn: 305483
* [InstCombine] auto-generate complete checks; NFC (Sanjay Patel, 2017-06-15, 1 file, -50/+106)
  llvm-svn: 305474
* [InstCombine] Add a test case to show a case where we don't handle a partially commuted IR. NFC (Craig Topper, 2017-06-15, 1 file, -0/+27)
  llvm-svn: 305438
* PredicateInfo: Don't insert conditional info when a conditional branch jumps to the same target regardless of condition (Daniel Berlin, 2017-06-14, 2 files, -0/+161)
  llvm-svn: 305416
* [EarlyCSE] Make PhiToCheck in removeMSSA() a set. (Davide Italiano, 2017-06-14, 1 file, -0/+26)
  This way we end up not looking at PHI args already removed. MemSSA now goes through the updater so we can prune it to avoid having redundant MemoryPHI arguments, but that doesn't quite work for the general case.
  Discussed with Daniel Berlin, fixes PR33406.
  llvm-svn: 305409
* [ValueTracking] Correct early out in computeKnownBitsFromOperator to work with non power of 2 bit widths (Craig Topper, 2017-06-14, 1 file, -0/+10)
  There's an early out that's trying to detect when we don't know any bits that make up the legal range of a shift. The code subtracts one from BitWidth, which creates a mask in the lower bits for power of 2 bit widths. This is then ANDed with the known bits to see if any of those bits are known. If the bit width isn't a power of 2, this creates a non-sensical mask. This patch corrects this by rounding up to a power of 2 before doing the subtract and mask.
  Differential Revision: https://reviews.llvm.org/D34165
  llvm-svn: 305400
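  A quick worked example of why the rounding matters, using a made-up 24-bit width; the helper below is only a sketch of the mask computation, not code from the patch:

      /* BitWidth = 24: 24 - 1 = 23 = 0b10111 is not a contiguous low-bit
       * mask, so ANDing it with the known bits can miss bit 3. Rounding
       * 24 up to 32 first gives 31 = 0b11111, which covers every legal
       * shift amount.                                                    */
      unsigned legal_shift_mask(unsigned bit_width) {
        unsigned p = 1;
        while (p < bit_width)  /* round up to a power of 2 */
          p <<= 1;
        return p - 1;
      }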
* Align definition of DW_OP_plus with DWARF spec [3/3] (Florian Hahn, 2017-06-14, 4 files, -12/+12)
  Summary: This patch is part of 3 patches that together form a single patch, but must be introduced in stages in order not to break things.
  The way that LLVM interprets DW_OP_plus in DIExpression nodes is basically that of the DW_OP_plus_uconst operator, since LLVM expects an unsigned constant operand. This unnecessarily restricts the DW_OP_plus operator, preventing it from being used to describe the evaluation of runtime values on the expression stack. These patches try to align the semantics of DW_OP_plus and DW_OP_minus with that of the DWARF definition, which pops two elements off the expression stack, performs the operation and pushes the result back on the stack.
  This is done in three stages:
  • The first patch (LLVM) adds support for DW_OP_plus_uconst.
  • The second patch (Clang) changes all its uses from DW_OP_plus to DW_OP_plus_uconst.
  • The third patch (LLVM) changes the semantics of DW_OP_plus and DW_OP_minus to be in line with their DWARF meaning. This patch includes the bitcode upgrade from legacy DIExpressions.
  Patch by Sander de Smalen.
  Reviewers: echristo, pcc, aprantl
  Reviewed By: aprantl
  Subscribers: fhahn, javed.absar, aprantl, llvm-commits
  Differential Revision: https://reviews.llvm.org/D33894
  llvm-svn: 305386
* [InstCombine] Add test cases demonstrating failure to handle (select (icmp eq (and X, C1), 0), Y, (or Y, C2)) when the icmp portion gets turned into a truncate and a signed compare with 0. (Craig Topper, 2017-06-13, 1 file, -0/+31)
  InstCombine has an optimization that recognizes an and with the sign bit of legal type size and turns it into a truncate and compare that checks the sign bit. But the select handling code doesn't recognize this idiom.
  llvm-svn: 305338
* [PGO] Update VP metadata after memory intrinsic optimization (Teresa Johnson, 2017-06-13, 1 file, -2/+7)
  Summary: Leave updated VP metadata on the fallback memcpy intrinsic after specialization. This can be used for later possible expansion based on the average of the remaining values.
  Reviewers: davidxl
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D34164
  llvm-svn: 305321
* Inliner: Don't remove calls to readnone+nounwind (but not always_inline) functions in the AlwaysInliner (David Blaikie, 2017-06-12, 1 file, -0/+11)
  llvm-svn: 305245
* [RS4GC] Drop invalid metadata after pointers are relocated (Anna Thomas, 2017-06-12, 1 file, -0/+92)
  Summary: After RS4GC, we should drop metadata that is no longer valid. This metadata is used by optimizations scheduled after RS4GC, and can cause a miscompile. One such metadata is invariant.load, which is used by the LICM sinking transform. After rewriting statepoints, the address of a load may be relocated. With invariant.load metadata on a load instruction, LICM sinking assumes the loaded value (from a dereferenceable address) to be invariant, and rematerializes the load operand and the load at the exit block. This transforms the IR to have an unrelocated use of the address after a statepoint, which is incorrect. Other metadata we conservatively remove is related to dereferenceability and noalias metadata.
  This patch drops such metadata on store and load instructions after rewriting statepoints.
  Reviewers: reames, sanjoy, apilipenko
  Reviewed by: reames
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33756
  llvm-svn: 305234
* [InstCombine] lshr (sext iM X to iN), N-M --> zext (ashr X, min(N-M, M-1)) to iN (Sanjay Patel, 2017-06-12, 1 file, -5/+14)
  This is a follow-up to https://reviews.llvm.org/D33879 / https://reviews.llvm.org/rL304939 , and was discussed in https://reviews.llvm.org/D33338.
  We prefer this form because a narrower shift may be cheaper, and we can more easily fold a zext than a sext.
  http://rise4fun.com/Alive/slVe

    Name: shz
    %s = sext i8 %x to i12
    %r = lshr i12 %s, 4
    =>
    %a = ashr i8 %x, 4
    %r = zext i8 %a to i12

  llvm-svn: 305190
* [PartialInlining] Support shrinkwrap life_range markers (Xinliang David Li, 2017-06-11, 5 files, -0/+359)
  Differential Revision: http://reviews.llvm.org/D33847
  llvm-svn: 305170
* [X86][SLM] Add SLM arithmetic vectorization tests (Simon Pilgrim, 2017-06-10, 4 files, -37/+333)
  As discussed on D33983, since SLM has so many custom costs it's worth testing as well.
  llvm-svn: 305151
* [InstSimplify] Don't constant fold or DCE calls that are marked nobuiltin (Andrew Kaylor, 2017-06-09, 2 files, -0/+24)
  Differential Revision: https://reviews.llvm.org/D33737
  llvm-svn: 305132
* [SROA] Fix APInt size when load/store have different address space (Yaxun Liu, 2017-06-09, 1 file, -0/+28)
  Currently there is a bug in SROA::presplitLoadsAndStores which causes an assertion in GEPOperator::accumulateConstantOffset. Basically it does not consider the situation that the pointer operand of a load or store may be in a non-zero address space and its size may be different from the size of a pointer in address space 0.
  This patch fixes the assertion when compiling Blender Cycles kernels for the amdgpu backend.
  Differential Revision: https://reviews.llvm.org/D33298
  llvm-svn: 305107
* [Sink] Fix predicate in legality check (Keno Fischer, 2017-06-09, 1 file, -0/+18)
  Summary: isSafeToSpeculativelyExecute is the wrong predicate to use here. All that checks for is whether it is safe to hoist a value due to unaligned/un-dereferencable accesses. However, not only are we doing sinking rather than hoisting, our concern is that the location we're loading from may have been modified. Instead forbid sinking any load across a critical edge.
  Reviewers: majnemer
  Subscribers: davide, llvm-commits
  Differential Revision: https://reviews.llvm.org/D33179
  llvm-svn: 305102
* [IndVars] Add an option to be able to disable LFTR (Serguei Katkov, 2017-06-09, 1 file, -0/+28)
  This change adds an option, disable-lftr, to be able to disable the Linear Function Test Replace optimization. By default the option is off, so current behavior is not changed.
  Reviewers: reames, sanjoy, wmi, andreadb, apilipenko
  Reviewed By: sanjoy
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33979
  llvm-svn: 305055
* [LoopVectorize] Don't preserve nsw/nuw flags on shrunken ops. (George Burgess IV, 2017-06-09, 1 file, -9/+37)
  If we're shrinking a binary operation, it may be the case that the new operation wraps where the old one didn't. If this happens, the behavior should be well-defined. So, we can't always carry wrapping flags with us when we shrink operations. If we do, we get incorrect optimizations in cases like:

    void foo(const unsigned char *from, unsigned char *to, int n) {
      for (int i = 0; i < n; i++)
        to[i] = from[i] - 128;
    }

  which gets optimized to:

    void foo(const unsigned char *from, unsigned char *to, int n) {
      for (int i = 0; i < n; i++)
        to[i] = from[i] | 128;
    }

  Because:
  - InstCombine turned `sub i32 %from.i, 128` into `add nuw nsw i32 %from.i, 128`.
  - LoopVectorize vectorized the add to be `add nuw nsw <16 x i8>` with a vector full of `i8 128`s.
  - InstCombine took advantage of the fact that the newly-shrunken add "couldn't wrap", and changed the `add` to an `or`.
  InstCombine seems happy to figure out whether we can add nuw/nsw on its own, so I just decided to drop the flags. There are already a number of places in LoopVectorize where we rely on InstCombine to clean up.
  llvm-svn: 305053
* Inliner: Don't touch indirect calls (David Blaikie, 2017-06-09, 1 file, -0/+24)
  Other comments/implications are that this isn't intended behavior (nor preserved/reimplemented in the new inliner) & complicates fixing the 'inlining' of trivially dead calls without consulting the cost function first.
  llvm-svn: 305052
* [CFI] Remove LinkerSubsectionsViaSymbols. (Evgeniy Stepanov, 2017-06-08, 1 file, -15/+1)
  Since D17854 LinkerSubsectionsViaSymbols is unnecessary. It is interfering with the ThinLTO implementation of CFI-ICall, where the aliases used on the !LinkerSubsectionsViaSymbols branch are needed to export jump tables to ThinLTO backends.
  This is the second attempt to land this change after fixing PR33316.
  llvm-svn: 305031
* Write summaries for merged modules when splitting modules for ThinLTO. (Peter Collingbourne, 2017-06-08, 1 file, -0/+4)
  This is to prepare to allow for dead stripping of globals in the merged modules.
  Differential Revision: https://reviews.llvm.org/D33921
  llvm-svn: 305027
* [CGP, x86] add tests for potential memcmp expansion; NFC (Sanjay Patel, 2017-06-08, 1 file, -0/+337)
  No IR tests were added with rL304313 ( https://reviews.llvm.org/D28637 ), so I want these for extra coverage if we enable memcmp expansion for x86. As shown, nothing is expanded for x86 in CGP yet.
  Also, fundamentally, we're doing an IR transform, so we should have IR tests for just that part. If something goes wrong, we need to know if the bug is in CGP or later lowering.
  llvm-svn: 305011
* Do not early-inline recursive calls in sample profile loader. (Dehao Chen, 2017-06-08, 2 files, -0/+16)
  Summary: Early-inlining of recursive calls makes the code size bloat exponentially. We should not do it.
  Reviewers: davidxl, dnovillo, iteratee
  Reviewed By: iteratee
  Subscribers: iteratee, llvm-commits, sanjoy
  Differential Revision: https://reviews.llvm.org/D34017
  llvm-svn: 305009
* Changed a comparison operator for std::stable_sort to implement strict weak ordering. (Galina Kistanova, 2017-06-08, 1 file, -27/+30)
  This is a temporary fix which needs additional work, as it triggers a test3 failure. test3 is commented out till then.
  llvm-svn: 304993
* InferAddressSpaces: Avoid assertion failure with replacing identical cloned constexpr (Nirav Dave, 2017-06-08, 1 file, -0/+36)
  Have cloneConstantExprWithNewAddressSpaces return nullptr when returning the initial ConstantExpr.
  Reviewers: arsenm
  Subscribers: jholewinski, wdng, llvm-commits
  Differential Revision: https://reviews.llvm.org/D33995
  llvm-svn: 304975
* Regenerate test (Simon Pilgrim, 2017-06-08, 1 file, -24/+24)
  llvm-svn: 304973
* [InstCombine] fold lshr (sext X), C1 --> zext (lshr X, C2) (Sanjay Patel, 2017-06-07, 1 file, -17/+26)
  This was discussed in D33338. We have larger pattern-matching ending in a truncate that we can reduce or remove by handling these smaller patterns first. Further motivation is that narrower shift ops are easier for value tracking and zext is better than sext.
  http://rise4fun.com/Alive/rhh

    Name: boolshift
    %sext = sext i1 %x to i8
    %r = lshr i8 %sext, 7
    =>
    %r = zext i1 %x to i8

    Name: noboolshift
    %sext = sext i3 %x to i8
    %r = lshr i8 %sext, 7
    =>
    %sh = lshr i3 %x, 2
    %r = zext i3 %sh to i8

  Differential Revision: https://reviews.llvm.org/D33879
  llvm-svn: 304939
* Fix builtin_expect lowering bug (Xinliang David Li, 2017-06-07, 1 file, -0/+22)
  PR33346: Skip cases when the expected value is not a constant int.
  llvm-svn: 304933
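  A reduced C example of the kind of input that can hit this case (hypothetical, not the PR33346 reproducer): the second argument to __builtin_expect is a loaded value rather than a constant int, so the lowering must skip it.

      long expected; /* not a compile-time constant */
      int f(long x) {
        /* The "expected value" operand is a load of a global, not a
         * ConstantInt, so the lowering has nothing to compare against. */
        return __builtin_expect(x, expected) != 0;
      }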
* LowerTypeTests: Generate simpler IR for br(llvm.type.test, then, else). (Peter Collingbourne, 2017-06-07, 1 file, -0/+37)
  This makes it so that the code quality for CFI checks when compiling with -O2 and linking with --lto-O0 is similar to that of the rest of the code.
  Reduces the size of a chrome binary built with -O2/--lto-O0 by about 750KB.
  Differential Revision: https://reviews.llvm.org/D33925
  llvm-svn: 304921
* Introduce the new feature "abi-breaking-checks" to satisfy -reverse-iterate in llvm/test/Transforms/Util/PredicateInfo/ (NAKAMURA Takumi, 2017-06-07, 2 files, -2/+2)
  A few tests in llvm/test/Transforms/Util/PredicateInfo/ are using -reverse-iterate. The option -reverse-iterate is enabled with +Asserts in usual cases, but it can be turned on/off regardless of LLVM_ENABLE_ASSERTIONS. I wonder if this might be incompatible with https://reviews.llvm.org/D33908 (r304757).
  Differential Revision: https://reviews.llvm.org/D33854
  llvm-svn: 304851
* Added tests for X86InterleavedStore. (Evgeny Stupachenko, 2017-06-06, 1 file, -1/+60)
  Reviewers: RKSimon, DavidKreitzer
  Differential Revision: https://reviews.llvm.org/D33684
  Patch by: Aleen Farhana <Farhana.aleen@gmail.com>
  llvm-svn: 304834
* [SLP] Change extension of the test, NFC. (Alexey Bataev, 2017-06-06, 1 file, -0/+0)
  llvm-svn: 304829
* [SLP] Add a test for fix of PR32164, NFC. (Alexey Bataev, 2017-06-06, 1 file, -0/+138)
  llvm-svn: 304826
* Fix PR23384 (part 3 of 3) (Evgeny Stupachenko, 2017-06-06, 6 files, -22/+26)
  Summary: The patch makes instruction count the highest priority for the LSR solution for X86 (previously registers had highest priority).
  Reviewers: qcolombet
  Differential Revision: http://reviews.llvm.org/D30562
  From: Evgeny Stupachenko <evstupac@gmail.com>
  llvm-svn: 304824
* [LoopIdiom] Move X86 specific atomic memcpy test to the X86 directory (Anna Thomas, 2017-06-06, 1 file, -0/+0)
  Patch https://reviews.llvm.org/rL304806 was causing failures on AArch64 and multiple other targets since the test should be run on X86 only. Specifying the target triple is not enough. Moving the testcase to the X86 target directory in LoopIdiom.
  llvm-svn: 304809
* NewGVN: Fix PR/33187. This is a bug caused by two things: (Daniel Berlin, 2017-06-06, 4 files, -4/+150)
  1. When there is no perfect iteration order, we can't let phi nodes put themselves in terms of things that come later in the iteration order, or we will endlessly cycle (the normal RPO algorithm clears the hashtable to avoid this issue).
  2. We are sometimes erasing the wrong expression (causing pessimism) because our equality says loads and stores are the same. We introduce an exact equality function and use it when erasing to make sure we erase only identical expressions, not equivalent ones.
  llvm-svn: 304807
* [Atomics][LoopIdiom] Recognize unordered atomic memcpy (Anna Thomas, 2017-06-06, 2 files, -0/+480)
  Summary: Expanding the loop idiom test for memcpy to also recognize unordered atomic memcpy. The only difference for recognizing an unordered atomic memcpy instead of a normal memcpy is that the loads and/or stores involved are unordered atomic operations.
  Background: http://lists.llvm.org/pipermail/llvm-dev/2017-May/112779.html
  Patch by Daniel Neilson!
  Reviewers: reames, anna, skatkov
  Reviewed By: reames, anna
  Subscribers: llvm-commits, mzolotukhin
  Differential Revision: https://reviews.llvm.org/D33243
  llvm-svn: 304806
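  For context, the loop shape being recognized is the usual byte-copy idiom; a C sketch is below. The atomic variant differs only in that the element loads/stores are unordered atomic at the IR level, which plain C cannot express directly.

      /* Memcpy-shaped loop that LoopIdiom can turn into a memcpy intrinsic;
       * with unordered atomic element accesses it becomes the atomic form. */
      void byte_copy(unsigned char *dst, const unsigned char *src, unsigned n) {
        for (unsigned i = 0; i < n; ++i)
          dst[i] = src[i];
      }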
* [IRCE] Canonicalize pre/post loops after the blocks are added into parent loop (Anna Thomas, 2017-06-06, 1 file, -0/+182)
  Summary: We were canonicalizing the pre loop (into loop-simplify form) before the post loop blocks were added into the parent loop. This is incorrect when IRCE is done on a subloop. The post-loop blocks are created, but not yet added to the parent loop. So, loop-simplification on the pre-loop incorrectly updates LoopInfo.
  This patch corrects the ordering so that pre and post loop blocks are added to the parent loop (if any), and then the loops are canonicalized to LCSSA and LoopSimplifyForm.
  Reviewers: reames, sanjoy, apilipenko
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D33846
  llvm-svn: 304800