path: root/llvm/lib/Transforms/InstCombine
Commit message | Author | Age | Files | Lines
...
* [InstCombine] rename variables in shifted-shift helper function (NFCI) | Sanjay Patel | 2016-04-11 | 1 | -17/+20
  This is step 3 of refactoring to solve PR26760:
  https://llvm.org/bugs/show_bug.cgi?id=26760
  llvm-svn: 265954
* [InstCombine] add helper function for shift-shift optimization (NFCI) | Sanjay Patel | 2016-04-11 | 1 | -24/+37
  This is step 2 of refactoring to solve PR26760:
  https://llvm.org/bugs/show_bug.cgi?id=26760
  llvm-svn: 265951
* [InstCombine] Fix miscompile in FoldSPFofSPF | David Majnemer | 2016-04-08 | 1 | -0/+3
  We had a select of a cast of a select but attempted to replace the outer
  select with the inner select despite their incompatible types.
  Patch by Anton Korobeynikov!
  This fixes PR27236.
  llvm-svn: 265805
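For illustration, a hypothetical IR sketch of the problematic shape (names and types are invented here, not taken from the commit's test case): the inner and outer selects have different types, so substituting the inner select for the outer one is invalid.

```
define i64 @spf_of_spf(i32 %x) {
  ; inner select: an smax(%x, 1) pattern of type i32
  %cmp1 = icmp sgt i32 %x, 1
  %inner = select i1 %cmp1, i32 %x, i32 1
  ; the inner select is only visible through a cast
  %ext = sext i32 %inner to i64
  ; outer select: another smax-style pattern, but of type i64;
  ; replacing %outer with %inner would mix i64 and i32 values
  %cmp2 = icmp sgt i64 %ext, 1
  %outer = select i1 %cmp2, i64 %ext, i64 1
  ret i64 %outer
}
```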
* [InstCombine] Add a peephole for redundant assumes | David Majnemer | 2016-04-08 | 1 | -2/+7
  Two or more identical assumes are occasionally next to each other in a
  basic block. While our generic machinery will turn a redundant assume
  into a no-op, it is not super cheap. We can perform a simpler check to
  achieve the same result for this case.
  llvm-svn: 265801
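A minimal sketch of the pattern being cleaned up cheaply (hypothetical values): two identical assumes back to back, where the second adds no information.

```
declare void @llvm.assume(i1)

define i32 @redundant_assumes(i32 %x) {
  %cond = icmp sgt i32 %x, 0
  call void @llvm.assume(i1 %cond)
  call void @llvm.assume(i1 %cond)  ; identical to the previous assume; removable
  ret i32 %x
}
```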
* Don't IPO over functions that can be de-refined | Sanjoy Das | 2016-04-08 | 1 | -1/+1
  Summary:
  Fixes PR26774.

  If you're aware of the issue, feel free to skip the "Motivation"
  section and jump directly to "This patch".

  Motivation:
  I define "refinement" as discarding behaviors from a program that the
  optimizer has license to discard. So transforming:

  ```
  void f(unsigned x) {
    unsigned t = 5 / x;
    (void)t;
  }
  ```

  to

  ```
  void f(unsigned x) { }
  ```

  is refinement, since the behavior went from "if x == 0 then undefined
  else nothing" to "nothing" (the optimizer has license to discard
  undefined behavior).

  Refinement is a fundamental aspect of many mid-level optimizations done
  by LLVM. For instance, transforming `x == (x + 1)` to `false` also
  involves refinement since the expression's value went from "if x is
  `undef` then { `true` or `false` } else { `false` }" to "`false`" (by
  definition, the optimizer has license to fold `undef` to any
  non-`undef` value).

  Unfortunately, refinement implies that the optimizer cannot assume that
  the implementation of a function it can see has all of the behavior an
  unoptimized or a differently optimized version of the same function can
  have. This is a problem for functions with comdat linkage, where a
  function can be replaced by an unoptimized or a differently optimized
  version of the same source level function.

  For instance, FunctionAttrs cannot assume a comdat function is actually
  `readnone` even if it does not have any loads or stores in it; there
  may have been loads and stores in the "original function" that were
  refined out in the currently visible variant, and at the link step the
  linker may in fact choose an implementation with a load or a store. As
  an example, consider a function that does two atomic loads from the
  same memory location, and writes to memory only if the two values are
  not equal. The optimizer is allowed to refine this function by first
  CSE'ing the two loads, and then folding the comparison to always report
  that the two values are equal. Such a refined variant will look like it
  is `readonly`. However, the unoptimized version of the function can
  still write to memory (since the two loads //can// result in different
  values), and selecting the unoptimized version at link time will
  retroactively invalidate transforms we may have done under the
  assumption that the function does not write to memory.

  Note: this is not just a problem with atomics or with linking
  differently optimized object files. See PR26774 for more realistic
  examples that involved neither.

  This patch:
  This change introduces a new predicate, `GlobalValue::mayBeDerefined`,
  that returns true if the linkage type allows a function to be replaced
  by a differently optimized variant at link time. It then changes a set
  of IPO passes to bail out if they see such a function.

  Reviewers: chandlerc, hfinkel, dexonsmith, joker.eph, rnk
  Subscribers: mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18634
  llvm-svn: 265762
* [InstCombine] Don't sink an instr after a catchswitch | David Majnemer | 2016-04-01 | 1 | -1/+5
  A catchswitch is a terminator; instructions cannot be inserted after it.
  llvm-svn: 265158
* AMDGPU: Add frexp_exp intrinsic | Matt Arsenault | 2016-03-30 | 1 | -5/+16
  llvm-svn: 264944
* AMDGPU: Constant folding for frexp_mant | Matt Arsenault | 2016-03-30 | 1 | -0/+14
  llvm-svn: 264943
* Minor code cleanup. NFC. | Junmo Park | 2016-03-23 | 1 | -1/+1
  llvm-svn: 264124
* [InstCombine] Don't insert instructions before a catch switch | David Majnemer | 2016-03-19 | 1 | -0/+3
  CatchSwitches are not splittable; we cannot insert casts, etc. before them.
  This fixes PR26992.
  llvm-svn: 263874
* [InstCombine] Combine A->B->A BitCast | Guozhi Wei | 2016-03-17 | 2 | -0/+107
  This patch enhances InstCombine to handle the following case:
    A -> B bitcast
    PHI
    B -> A bitcast
  llvm-svn: 263734
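A sketch of that shape in IR (hypothetical types and block names): a value of type A is bitcast to B, fed through a PHI, and bitcast back to A, so the PHI can be rewritten over A and both bitcasts dropped.

```
define float @bitcast_phi(i1 %c, float %a, float %other) {
entry:
  %b = bitcast float %a to i32          ; A -> B
  br i1 %c, label %then, label %merge
then:
  %b2 = bitcast float %other to i32     ; A -> B
  br label %merge
merge:
  %phi = phi i32 [ %b, %entry ], [ %b2, %then ]
  %r = bitcast i32 %phi to float        ; B -> A
  ret float %r
}
```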
* Also handle the new Rust pers fn to isCatchAll() | Bjorn Steinbrink | 2016-03-15 | 1 | -2/+3
  llvm-svn: 263585
* [attrs] Handle convergent CallSites. | Justin Lebar | 2016-03-14 | 1 | -1/+10
  Summary:
  Previously we had a notion of convergent functions but not of convergent
  calls. This is insufficient to correctly analyze calls where the target
  is unknown, e.g. indirect calls.

  Now a call is convergent if it targets a known-convergent function, or
  if it's explicitly marked as convergent. As usual, we can remove
  convergent where we can prove that no convergent operations are
  performed in the call.

  Originally landed as r261544, then reverted in r261549 for (incidental)
  build breakage. Re-landed here with no changes.

  Reviewers: chandlerc, jingyue
  Subscribers: llvm-commits, tra, jhen, hfinkel
  Differential Revision: http://reviews.llvm.org/D17739
  llvm-svn: 263481
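A sketch of what this means at the IR level (hypothetical names): an indirect call can carry the convergent attribute even though the callee is unknown, and the attribute may only be dropped when no convergent operation can be reached.

```
; The indirect call must be treated as convergent unless proven otherwise.
define void @caller(void ()* %fp) convergent {
  call void %fp() #0
  ret void
}

attributes #0 = { convergent }
```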
* Remove PreserveNames template parameter from IRBuilder | Mehdi Amini | 2016-03-13 | 2 | -4/+4
  This reapplies r263258, which was reverted in r263321 because of issues
  on Clang side.
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 263393
* [x86, InstCombine] delete x86 SSE2 masked store with zero mask | Sanjay Patel | 2016-03-12 | 1 | -0/+6
  This follows up on the related AVX instruction transforms, but this one
  is too strange to do anything more with. Intel's behavioral description
  of this instruction in its Software Developer's Manual is tragi-comic.
  llvm-svn: 263340
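A sketch of the dead case (intrinsic operand order as assumed here, value/mask/pointer): an all-zero mask means maskmovdqu stores nothing, so the call can simply be erased.

```
declare void @llvm.x86.sse2.maskmov.dqu(<16 x i8>, <16 x i8>, i8*)

define void @dead_masked_store(<16 x i8> %v, i8* %p) {
  ; an all-zero mask selects no bytes, so this store is a no-op and is deleted
  call void @llvm.x86.sse2.maskmov.dqu(<16 x i8> %v, <16 x i8> zeroinitializer, i8* %p)
  ret void
}
```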
* Temporarily revert: | Eric Christopher | 2016-03-12 | 2 | -4/+4
  commit ae14bf6488e8441f0f6d74f00455555f6f3943ac
  Author: Mehdi Amini <mehdi.amini@apple.com>
  Date:   Fri Mar 11 17:15:50 2016 +0000

    Remove PreserveNames template parameter from IRBuilder

    Summary: Following r263086, we are now relying on a flag on the
    Context to discard Value names in release builds.

    Reviewers: chandlerc
    Subscribers: mzolotukhin, llvm-commits
    Differential Revision: http://reviews.llvm.org/D18023
    From: Mehdi Amini <mehdi.amini@apple.com>

    git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@263258 91177308-0d34-0410-b5e6-96231b3b80d8

  until we can figure out what to do about clang and Release build testing.
  This reverts commit 263258.
  llvm-svn: 263321
* Remove PreserveNames template parameter from IRBuilder | Mehdi Amini | 2016-03-11 | 2 | -4/+4
  Summary: Following r263086, we are now relying on a flag on the Context
  to discard Value names in release builds.
  Reviewers: chandlerc
  Subscribers: mzolotukhin, llvm-commits
  Differential Revision: http://reviews.llvm.org/D18023
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 263258
* [PM] Make the AnalysisManager parameter to run methods a reference. | Chandler Carruth | 2016-03-11 | 1 | -5/+5
  This was originally a pointer to support pass managers which didn't use
  AnalysisManagers. However, that doesn't realistically come up much and
  the complexity of supporting it doesn't really make sense. In fact,
  *many* parts of the pass manager were just assuming the pointer was
  never null already. This at least makes it much more explicit and clear.
  llvm-svn: 263219
* [InstCombine] Use Twines to generate names. | Benjamin Kramer | 2016-03-11 | 1 | -15/+5
  Since the names are used in a loop this does more work in debug builds.
  In release builds value names are generally discarded so we don't have
  to do the concatenation at all. It's also simpler code, no functional
  change intended.
  llvm-svn: 263215
* [ValueTracking] Extract isKnownPositive [NFCI] | Philip Reames | 2016-03-09 | 1 | -2/+2
  Extract out a generic interface from a recently landed patch and
  document a TODO in case compile time becomes a problem.
  llvm-svn: 263062
* [InstCombine] (icmp sgt smin(PosA, B) 0) -> (icmp sgt B 0) | Philip Reames | 2016-03-09 | 1 | -0/+13
  When checking whether an smin is positive, we can move the comparison
  to one of the inputs if the other is known positive. If the known
  positive input is the min, then the other input can't be negative
  either. If the other input is the min, then comparing the min against
  zero is the same as comparing that input against zero.
  Differential Revision: http://reviews.llvm.org/D17873
  llvm-svn: 263059
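A sketch in IR, using the select form of smin that InstCombine matches (hypothetical names): when one input is known positive, testing smin(PosA, B) > 0 reduces to testing B > 0.

```
define i1 @smin_positive(i32 %a.in, i32 %b) {
  %a.small = and i32 %a.in, 255
  %a.pos = or i32 %a.small, 1                ; %a.pos is in [1, 255], known positive
  %cmp = icmp slt i32 %a.pos, %b
  %min = select i1 %cmp, i32 %a.pos, i32 %b  ; smin(%a.pos, %b)
  %res = icmp sgt i32 %min, 0                ; folds to: icmp sgt i32 %b, 0
  ret i1 %res
}
```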
* InstCombine: Restrict computeKnownBits() on all Values to OptLevel > 2 | Matthias Braun | 2016-03-09 | 2 | -11/+22
  As part of r251146, InstCombine was extended to call computeKnownBits
  on every value in the function to determine whether it happens to be
  constant. This increases typical compile time by 1-3% (5% in irgen+opt
  time) in my measurements. On the other hand, this case did not trigger
  once in the whole llvm-testsuite.

  This patch introduces the notion of ExpensiveCombines, which are only
  enabled for OptLevel > 2. I removed the check in InstructionSimplify as
  that is called from various places where the OptLevel is not known, but
  given the rarity of the situation I think a check in InstCombine is
  enough.
  Differential Revision: http://reviews.llvm.org/D16835
  llvm-svn: 263047
* Reland r262337 "calculate builtin_object_size if arg is a removable pointer" | Petar Jovanovic | 2016-03-09 | 1 | -8/+25
  Original commit message:
  calculate builtin_object_size if argument is a removable pointer

  This patch fixes calculating the correct value for the
  builtin_object_size function when the pointer is used only in a
  builtin_object_size function call and never after that.
  Patch by Strahinja Petrovic.
  Differential Revision: http://reviews.llvm.org/D17337

  Reland the original change with a small modification (first do a null
  check and then do the cast) to satisfy ubsan.
  llvm-svn: 263011
* Revert "[InstCombine] Combine A->B->A BitCast"Junmo Park2016-03-082-104/+0
| | | | | | This reverts commit r262670 due to compile failure. llvm-svn: 262916
* [InstCombine] Combine A->B->A BitCast | Guozhi Wei | 2016-03-03 | 2 | -0/+104
  This patch enhances InstCombine to handle the following case:
    A -> B bitcast
    PHI
    B -> A bitcast
  llvm-svn: 262670
* [InstCombine] transform bitcasted bitwise logic ops with constants (PR26702) | Sanjay Patel | 2016-03-03 | 1 | -7/+28
  Given that we're not actually reducing the instruction count in the
  included regression tests, I think we would call this a canonicalization
  step.

  The motivation comes from the example in PR26702:
  https://llvm.org/bugs/show_bug.cgi?id=26702

  If we hoist the bitwise logic ahead of the bitcast, the previously
  unoptimizable example of:

  define <4 x i32> @is_negative(<4 x i32> %x) {
    %lobit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
    %not = xor <4 x i32> %lobit, <i32 -1, i32 -1, i32 -1, i32 -1>
    %bc = bitcast <4 x i32> %not to <2 x i64>
    %notnot = xor <2 x i64> %bc, <i64 -1, i64 -1>
    %bc2 = bitcast <2 x i64> %notnot to <4 x i32>
    ret <4 x i32> %bc2
  }

  Simplifies to the expected:

  define <4 x i32> @is_negative(<4 x i32> %x) {
    %lobit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
    ret <4 x i32> %lobit
  }

  Differential Revision: http://reviews.llvm.org/D17583
  llvm-svn: 262645
* Explode store of arrays in instcombine | Amaury Sechet | 2016-03-02 | 1 | -1/+33
  Summary: This is the last step toward supporting aggregate memory
  access in instcombine. This explodes stores of arrays into a series of
  stores to each element, allowing them to be optimized.
  Reviewers: joker.eph, reames, hfinkel, majnemer, mgrang
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D17828
  llvm-svn: 262530
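A minimal sketch of the rewrite (hypothetical names; the exact GEP/extractvalue sequence InstCombine emits may differ): a store of a two-element array becomes one store per element.

```
; Before: a single aggregate store.
define void @store_array([2 x i32] %v, [2 x i32]* %p) {
  store [2 x i32] %v, [2 x i32]* %p
  ret void
}

; After (roughly): one store per element.
define void @store_array_exploded([2 x i32] %v, [2 x i32]* %p) {
  %v0 = extractvalue [2 x i32] %v, 0
  %v1 = extractvalue [2 x i32] %v, 1
  %p0 = getelementptr inbounds [2 x i32], [2 x i32]* %p, i64 0, i64 0
  %p1 = getelementptr inbounds [2 x i32], [2 x i32]* %p, i64 0, i64 1
  store i32 %v0, i32* %p0
  store i32 %v1, i32* %p1
  ret void
}
```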
* Unpack array of all sizes in InstCombine | Amaury Sechet | 2016-03-02 | 1 | -5/+38
  Summary: This is another step toward improving fca support. This
  unpacks loads of arrays into a series of loads of the array's elements.
  Reviewers: chandlerc, joker.eph, majnemer, reames, hfinkel
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D15890
  llvm-svn: 262521
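The load-side counterpart, sketched the same way (hypothetical names): an aggregate load becomes per-element loads stitched back together with insertvalue.

```
; Before: %v = load [2 x i32], [2 x i32]* %p
; After (roughly):
define [2 x i32] @load_array_unpacked([2 x i32]* %p) {
  %p0 = getelementptr inbounds [2 x i32], [2 x i32]* %p, i64 0, i64 0
  %p1 = getelementptr inbounds [2 x i32], [2 x i32]* %p, i64 0, i64 1
  %e0 = load i32, i32* %p0
  %e1 = load i32, i32* %p1
  %v0 = insertvalue [2 x i32] undef, i32 %e0, 0
  %v1 = insertvalue [2 x i32] %v0, i32 %e1, 1
  ret [2 x i32] %v1
}
```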
* revert r262424 because there's a *clang test* for AArch64 that checks -O3 asm output that is broken by this change | Sanjay Patel | 2016-03-02 | 1 | -17/+5
  llvm-svn: 262440
* [InstCombine] convert 'isPositive' and 'isNegative' vector comparisons to shifts (PR26701) | Sanjay Patel | 2016-03-01 | 1 | -5/+17
  As noted in the code comment, I don't think we can do the same transform
  that we do for *scalar* integer comparisons to *vector* integer
  comparisons because it might pessimize the general case.

  Exhibit A for an incomplete integer comparison ISA remains x86 SSE/AVX:
  it only has EQ and GT for integer vectors.

  But we should now recognize all the variants of this construct and
  produce the optimal code for the cases shown in:
  https://llvm.org/bugs/show_bug.cgi?id=26701
  llvm-svn: 262424
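A sketch of the 'isNegative' case in IR (hypothetical names): a sign-bit test whose result is sign-extended back to the original vector width can be expressed as an arithmetic shift, avoiding a vector comparison the target may not have.

```
; Before: compare-and-sign-extend.
define <4 x i32> @is_negative_splat(<4 x i32> %x) {
  %cmp = icmp slt <4 x i32> %x, zeroinitializer
  %ext = sext <4 x i1> %cmp to <4 x i32>
  ret <4 x i32> %ext
}
; After: a single arithmetic shift that smears the sign bit.
;   %ext = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
```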
* Perform InstructionCombiningPass before SampleProfile pass. | Dehao Chen | 2016-03-01 | 1 | -20/+0
  Summary: SampleProfile pass needs to be performed after
  InstructionCombiningPass, which helps eliminate un-inlinable function
  calls.
  Reviewers: davidxl, dnovillo
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D17742
  llvm-svn: 262419
* Fix an issue where fast math flags were dropped during scalarization. | Owen Anderson | 2016-03-01 | 1 | -2/+4
  Most portions of InstCombine properly propagate fast math flags, but
  apparently the vector scalarization section was overlooked.
  llvm-svn: 262376
* Revert "calculate builtin_object_size if argument is a removable pointer"Petar Jovanovic2016-03-011-19/+6
| | | | | | | Revert r262337 as "check-llvm ubsan" step failed on sanitizer-x86_64-linux-fast buildbot. llvm-svn: 262349
* calculate builtin_object_size if argument is a removable pointer | Petar Jovanovic | 2016-03-01 | 1 | -6/+19
  This patch fixes calculating the correct value for the
  builtin_object_size function when the pointer is used only in a
  builtin_object_size function call and never after that.
  Patch by Strahinja Petrovic.
  Differential Revision: http://reviews.llvm.org/D17337
  llvm-svn: 262337
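A rough IR sketch of the situation (intrinsic and malloc declarations as assumed here): the allocation is otherwise unused and will be deleted, so the object size has to be computed from the allocation before the pointer disappears.

```
declare i8* @malloc(i64)
declare i64 @llvm.objectsize.i64.p0i8(i8*, i1)

define i64 @object_size_of_removable_alloc() {
  ; %p is used only by llvm.objectsize, so the malloc itself is removable;
  ; the size (32 here) should be folded in before the allocation is erased.
  %p = call i8* @malloc(i64 32)
  %size = call i64 @llvm.objectsize.i64.p0i8(i8* %p, i1 false)
  ret i64 %size
}
```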
* [x86, InstCombine] transform more x86 masked loads to LLVM intrinsics | Sanjay Patel | 2016-02-29 | 1 | -1/+7
  Continuation of:
  http://reviews.llvm.org/rL262269
  llvm-svn: 262273
* [x86, InstCombine] transform x86 AVX masked loads to LLVM intrinsics | Sanjay Patel | 2016-02-29 | 1 | -1/+39
  The intended effect of this patch in conjunction with:
  http://reviews.llvm.org/rL259392
  http://reviews.llvm.org/rL260145
  is that customers using the AVX intrinsics in C will benefit from
  combines when the load mask is constant:

  __m128 mload_zeros(float *f) {
    return _mm_maskload_ps(f, _mm_set1_epi32(0));
  }

  __m128 mload_fakeones(float *f) {
    return _mm_maskload_ps(f, _mm_set1_epi32(1));
  }

  __m128 mload_ones(float *f) {
    return _mm_maskload_ps(f, _mm_set1_epi32(0x80000000));
  }

  __m128 mload_oneset(float *f) {
    return _mm_maskload_ps(f, _mm_set_epi32(0x80000000, 0, 0, 0));
  }

  ...so none of the above will actually generate a masked load for
  optimized code.

  This is the masked load counterpart to:
  http://reviews.llvm.org/rL262064
  llvm-svn: 262269
* [InstCombine] Be more conservative about removing stackrestore | Reid Kleckner | 2016-02-27 | 1 | -1/+7
  We ended up removing a save/restore pair around an inalloca call,
  leading to a miscompile in Chromium.
  llvm-svn: 262095
* [x86, InstCombine] transform x86 AVX2 masked stores to LLVM intrinsics | Sanjay Patel | 2016-02-26 | 1 | -1/+4
  Replicate everything for integers...because x86.
  Continuation of:
  http://reviews.llvm.org/rL262064
  llvm-svn: 262077
* [x86, InstCombine] transform x86 AVX masked stores to LLVM intrinsics | Sanjay Patel | 2016-02-26 | 1 | -0/+49
  The intended effect of this patch in conjunction with:
  http://reviews.llvm.org/rL259392
  http://reviews.llvm.org/rL260145
  is that customers using the AVX intrinsics in C will benefit from
  combines when the store mask is constant:

  void mstore_zero_mask(float *f, __m128 v) {
    _mm_maskstore_ps(f, _mm_set1_epi32(0), v);
  }

  void mstore_fake_ones_mask(float *f, __m128 v) {
    _mm_maskstore_ps(f, _mm_set1_epi32(1), v);
  }

  void mstore_ones_mask(float *f, __m128 v) {
    _mm_maskstore_ps(f, _mm_set1_epi32(0x80000000), v);
  }

  void mstore_one_set_elt_mask(float *f, __m128 v) {
    _mm_maskstore_ps(f, _mm_set_epi32(0x80000000, 0, 0, 0), v);
  }

  ...so none of the above will actually generate a masked store for
  optimized code.
  Differential Revision: http://reviews.llvm.org/D17485
  llvm-svn: 262064
* [InstCombine] enable optimization of casted vector xor instructions | Sanjay Patel | 2016-02-24 | 1 | -18/+8
  This is part of the payoff for the refactoring in:
  http://reviews.llvm.org/rL261649
  http://reviews.llvm.org/rL261707

  In addition to removing a pile of duplicated code, the xor case was
  missing the optimization for vector types because it checked
  "SrcTy->isIntegerTy()" rather than "SrcTy->isIntOrIntVectorTy()" like
  'and' and 'or' were already doing.

  This solves part of:
  https://llvm.org/bugs/show_bug.cgi?id=26702
  llvm-svn: 261750
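A sketch of the kind of case the vector check now lets through (hypothetical values, not taken from the commit's tests): an xor of two casts of vector integer type can be performed on the narrower source type with a single cast afterward, mirroring the existing 'and'/'or' handling.

```
define <4 x i32> @xor_of_zexts(<4 x i16> %a, <4 x i16> %b) {
  %za = zext <4 x i16> %a to <4 x i32>
  %zb = zext <4 x i16> %b to <4 x i32>
  %x = xor <4 x i32> %za, %zb
  ret <4 x i32> %x
}
; With the vector type check fixed, this can be narrowed to an xor on the
; source type followed by one zext:
;   %x.narrow = xor <4 x i16> %a, %b
;   %x        = zext <4 x i16> %x.narrow to <4 x i32>
```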
* NFC. Move isDereferenceable to Loads.h/cpp | Artur Pilipenko | 2016-02-24 | 1 | -0/+1
  This is a part of the refactoring to unify the isSafeToLoadUnconditionally
  and isDereferenceablePointer functions. In a subsequent change I'm going
  to eliminate isDereferenceableAndAlignedPointer from the Loads API,
  leaving isSafeToLoadSpeculatively as the only function for checking
  whether a load instruction can be speculated.
  Reviewed By: hfinkel
  Differential Revision: http://reviews.llvm.org/D16180
  llvm-svn: 261736
* [InstCombine] refactor visitOr() to use foldCastedBitwiseLogic() | Sanjay Patel | 2016-02-23 | 1 | -47/+31
  Note: The 'and' case in foldCastedBitwiseLogic() is inheriting one extra
  check from the nearly identical 'or' case:
    if ((!isa<ICmpInst>(Cast0Src) || !isa<ICmpInst>(Cast1Src))
  But I'm not sure how to expose that difference in a regression test.
  Without that check, the 'or' path will infinite loop on:
  test/Transforms/InstCombine/zext-or-icmp.ll
  because the zext-or-icmp fold is attempting a reverse transform.

  The refactoring should extend to the 'xor' case next to solve part of
  PR26702.
  llvm-svn: 261707
* [InstCombine] improve readability; NFCI | Sanjay Patel | 2016-02-23 | 1 | -30/+36
  Less indenting, named local variables, more descriptive names.
  llvm-svn: 261659
* [InstCombine] less indenting; NFC | Sanjay Patel | 2016-02-23 | 1 | -31/+32
  llvm-svn: 261652
* [InstCombine] add helper function to foldCastedBitwiseLogic(); NFCI | Sanjay Patel | 2016-02-23 | 2 | -29/+41
  This is a straight cut and paste of the existing code and is intended to
  be the first step in solving part of PR26702:
  https://llvm.org/bugs/show_bug.cgi?id=26702

  We should be able to reuse most of this and delete the nearly identical
  existing code in visitOr(). Then, we can enhance visitXor() to use the
  same code too.
  llvm-svn: 261649
* Revert "[attrs] Handle convergent CallSites."Justin Lebar2016-02-221-10/+1
| | | | | | | This reverts r261544, which was causing a test failure in Transforms/FunctionAttrs/readattrs.ll. llvm-svn: 261549
* [attrs] Handle convergent CallSites. | Justin Lebar | 2016-02-22 | 1 | -1/+10
  Summary:
  Previously we had a notion of convergent functions but not of convergent
  calls. This is insufficient to correctly analyze calls where the target
  is unknown, e.g. indirect calls.

  Now a call is convergent if it targets a known-convergent function, or
  if it's explicitly marked as convergent. As usual, we can remove
  convergent where we can prove that no convergent operations are
  performed in the call.

  Reviewers: chandlerc, jingyue
  Subscribers: hfinkel, jhen, tra, llvm-commits
  Differential Revision: http://reviews.llvm.org/D17317
  llvm-svn: 261544
* fix inaccurate comment; NFC | Sanjay Patel | 2016-02-21 | 1 | -2/+1
  llvm-svn: 261484
* [InstCombine] add getNegativeIsTrueBoolVec() helper function; NFC | Sanjay Patel | 2016-02-21 | 1 | -22/+20
  Originally part of:
  http://reviews.llvm.org/D17485
  We need this when simplifying masked memory ops too.
  llvm-svn: 261483
* [InstCombine] SSE/SSE2 (u)comiss/(u)comisd comparison intrinsics only use the lowest vector element | Simon Pilgrim | 2016-02-20 | 1 | -0/+40
  llvm-svn: 261460
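A sketch of what "only the lowest element is used" enables (intrinsic name and signature as assumed here): work that only affects the upper lanes of the comparison operands can be dropped by demanded-elements simplification.

```
declare i32 @llvm.x86.sse.comieq.ss(<4 x float>, <4 x float>)

define i32 @comi_demanded(<4 x float> %a, <4 x float> %b) {
  ; only lane 0 of %a and %b feeds the comparison, so this upper-lane
  ; insert is dead and can be removed
  %a.upper = insertelement <4 x float> %a, float 1.0, i32 3
  %r = call i32 @llvm.x86.sse.comieq.ss(<4 x float> %a.upper, <4 x float> %b)
  ret i32 %r
}
```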