path: root/llvm/lib/Transforms/InstCombine
Commit message (author, date, files changed, lines changed)
...
* Remove alignment argument from memcpy/memmove/memset in favour of alignment attributes (Step 1)  (Daniel Neilson, 2018-01-19, 1 file, -0/+1)
  Summary:
  This is a resurrection of work first proposed and discussed in Aug 2015:
    http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
  and initially landed (but then backed out) in Nov 2015:
    http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument
  which is required to be a constant integer. It represents the alignment of the
  dest (and source), and so must be the minimum of the actual alignment of the two.

  This change is the first in a series that allows source and dest to each have
  their own alignments by using the alignment attribute on their arguments.

  In this change we:
  1) Remove the alignment argument.
  2) Add alignment attributes to the source & dest arguments. We, temporarily,
     require that the alignments for source & dest be equal.

  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)

  Downstream users may have to update their lit tests that check for
  @llvm.memcpy/memmove/memset call/declaration patterns. The following extended
  sed script may help with updating the majority of your tests, but it does not
  catch all possible patterns so some manual checking and updating will be required.

    s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g

  The remaining changes in the series will:
  Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with
          differing source and dest alignments.
  Step 3) Update Clang to use the new IRBuilder API.
  Step 4) Update Polly to use the new IRBuilder API.
  Step 5) Update LLVM passes that create memcpy/memmove calls to use the new
          IRBuilder API, and those that use MemIntrinsicInst::[get|set]Alignment()
          to use getDestAlignment() and getSourceAlignment() instead.
  Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
          MemIntrinsicInst::[get|set]Alignment() methods.

  Reviewers: pete, hfinkel, lhames, reames, bollu
  Reviewed By: reames
  Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm, dschuff,
  dylanmckay, mehdi_amini, sdardis, nemanjai, david2050, nhaehnle, javed.absar,
  sbc100, jgravelle-google, eraman, aheejin, kbarton, JDevlieghere, asb, rbar,
  johnrusso, simoncook, jordy.potman.lists, apazos, sabuasal, llvm-commits
  Differential Revision: https://reviews.llvm.org/D41675
  llvm-svn: 322965

* [InstCombine] Make foldSelectOpOp able to handle two-operand getelementptr  (John Brawn, 2018-01-19, 1 file, -7/+19)
  Three (or more) operand getelementptrs could plausibly also be handled, but
  handling only two-operand fits in easily with the existing BinaryOperator handling.
  Differential Revision: https://reviews.llvm.org/D39958
  llvm-svn: 322930

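  An illustrative sketch of the kind of select-of-GEPs this now covers
  (hypothetical IR with invented value names; not taken from the patch or its tests):

    define i32* @sel_of_geps(i32* %base, i64 %i, i64 %j, i1 %cond) {
      %g1 = getelementptr i32, i32* %base, i64 %i
      %g2 = getelementptr i32, i32* %base, i64 %j
      ; with a common pointer operand, the select can be folded onto the
      ; differing index operand:
      ;   %idx = select i1 %cond, i64 %i, i64 %j
      ;   %sel = getelementptr i32, i32* %base, i64 %idx
      %sel = select i1 %cond, i32* %g1, i32* %g2
      ret i32* %sel
    }
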
* [InstCombine] fix demanded-bits propagation for zext/trunc  (Sanjay Patel, 2018-01-17, 1 file, -1/+1)
  I was comparing the demanded-bits implementations between InstCombine and
  TargetLowering as part of investigating questions in D42088 and noticed that
  this was wrong in IR. We were losing all of the prior known bits when we got
  back to the 'zext'.
  llvm-svn: 322662

* [NFC] Change MemIntrinsicInst::setAlignment() to take an unsigned instead of a Constant  (Daniel Neilson, 2018-01-12, 1 file, -3/+2)
  Summary: In preparation for https://reviews.llvm.org/D41675, this NFC changes
  the prototype of MemIntrinsicInst::setAlignment() to accept an unsigned instead
  of a Constant.
  llvm-svn: 322403

* [InstCombine] Apply the fix from r322284 for sin / cos -> tan too  (Benjamin Kramer, 2018-01-11, 1 file, -2/+3)
  llvm-svn: 322285

* [InstCombine] For cos/sin -> tan copy attributes from cos instead of the parent function  (Benjamin Kramer, 2018-01-11, 1 file, -2/+3)
  Ideally we should merge the attributes from the functions somehow, but this is
  obviously an improvement over taking random attributes from the caller, which
  will trip up the verifier if they're nonsensical for a unary intrinsic call.
  llvm-svn: 322284

* [InstCombine] Missed optimization in math expression: sin(x) / cos(x) => tan(x)  (Dmitry Venikov, 2018-01-11, 1 file, -0/+35)
  Summary: This patch enables folding sin(x) / cos(x) -> tan(x) and
  cos(x) / sin(x) -> 1 / tan(x) under the -ffast-math flag.
  Reviewers: hfinkel, spatel
  Reviewed By: spatel
  Subscribers: andrew.w.kaylor, efriedma, scanon, llvm-commits
  Differential Revision: https://reviews.llvm.org/D41286
  llvm-svn: 322255

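  A rough before/after sketch of the new fold (hypothetical IR; assumes the
  fast-math conditions the patch checks for):

    declare double @sin(double)
    declare double @cos(double)

    define double @sin_div_cos(double %x) {
      %s = call fast double @sin(double %x)
      %c = call fast double @cos(double %x)
      ; folds to: %r = call fast double @tan(double %x)
      %r = fdiv fast double %s, %c
      ret double %r
    }
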
* [InstCombine] weaken assertions for icmp folds (PR35846)  (Sanjay Patel, 2018-01-09, 1 file, -10/+4)
  Because of potential UB (known bits conflicts with an llvm.assume), we have to
  check rather than assert here because InstSimplify doesn't kill the compare:
  https://bugs.llvm.org/show_bug.cgi?id=35846
  llvm-svn: 322104

* [InstCombine] Check for out of range ashr values using APInt before calling getZExtValue  (Simon Pilgrim, 2018-01-09, 1 file, -3/+5)
  Reduced from oss-fuzz #5032 test case
  llvm-svn: 322078

* [InstCombine] fold min/max tree with common operand (PR35717)  (Sanjay Patel, 2018-01-08, 1 file, -0/+60)
  There is precedent for factorization transforms in instcombine for FP ops with
  fast-math. We also have similar logic in foldSPFofSPF().
  It would take more work to add this to reassociate because that's specialized
  for binops, and min/max are not binops (or even single instructions). Also, I
  don't have evidence that larger min/max trees than this exist in real code, but
  if we find that's true, we might want to reorganize where/how we do this optimization.
  In the motivating example from https://bugs.llvm.org/show_bug.cgi?id=35717 we have:

    int test(int xc, int xm, int xy) {
      int xk;
      if (xc < xm)
        xk = xc < xy ? xc : xy;
      else
        xk = xm < xy ? xm : xy;
      return xk;
    }

  This patch solves that problem because we recognize more min/max patterns after rL321672.
  https://rise4fun.com/Alive/Qjne
  https://rise4fun.com/Alive/3yg
  Differential Revision: https://reviews.llvm.org/D41603
  llvm-svn: 321998

* [InstCombine] relax use constraint for min/max (~a, ~b) --> ~min/max(a, b)  (Sanjay Patel, 2018-01-06, 1 file, -2/+2)
  In the minimal case, this won't remove instructions, but it still improves uses
  of existing values.
  In the motivating example from PR35834, it does remove instructions, and sets
  that case up to be optimized by something like D41603:
  https://reviews.llvm.org/D41603
  llvm-svn: 321936

* [InstCombine] add folds for min(~a, b) --> ~max(a, b)  (Sanjay Patel, 2018-01-05, 1 file, -22/+12)
  Besides the bug of omitting the inverse transform of max(~a, ~b) --> ~min(a, b),
  the use checking and operand creation were off. We were potentially creating
  repeated identical instructions of existing values. This led to infinite looping
  after I added the extra folds.
  By using the simpler m_Not matcher and not creating new 'not' ops for a and b,
  we avoid that problem. It's possible that not using IsFreeToInvert() here is
  more limiting than the simpler matcher, but there are no tests for anything more exotic.
  It's also possible that we should relax the use checking further to handle a
  case like PR35834:
  https://bugs.llvm.org/show_bug.cgi?id=35834
  ...but we can make that a follow-up if it is needed.
  llvm-svn: 321882

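  A sketch of this fold family in the icmp+select form that InstCombine matches
  (hypothetical IR with invented names, not from the patch's tests):

    define i32 @smax_of_nots(i32 %a, i32 %b) {
      %na = xor i32 %a, -1
      %nb = xor i32 %b, -1
      %cmp = icmp sgt i32 %na, %nb
      ; smax(~a, ~b) --> ~smin(a, b), i.e. this pair becomes
      ;   %cmp2 = icmp slt i32 %a, %b
      ;   %min  = select i1 %cmp2, i32 %a, i32 %b
      ;   %max  = xor i32 %min, -1
      %max = select i1 %cmp, i32 %na, i32 %nb
      ret i32 %max
    }
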
* [InstCombine] safely create a constant of the right type (PR35794)  (Sanjay Patel, 2018-01-04, 1 file, -4/+4)
  llvm-svn: 321801

* [InstCombine] Check for out of range shift values using APInt before calling getZExtValue  (Simon Pilgrim, 2018-01-03, 1 file, -4/+4)
  Reduced from oss-fuzz #4871 test case
  llvm-svn: 321748

* [InstCombine] Missed optimization in math expression: squashing sqrt functions  (Dmitry Venikov, 2018-01-02, 1 file, -0/+17)
  Summary: This patch enables folding sqrt(a) * sqrt(b) -> sqrt(a*b) under the
  -ffast-math flag.
  Reviewers: hfinkel, spatel, davide
  Reviewed By: spatel, davide
  Subscribers: davide, llvm-commits
  Differential Revision: https://reviews.llvm.org/D41322
  llvm-svn: 321637

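  Roughly, the fold looks like this (hypothetical IR; fast-math flags shown
  because the transform requires them):

    declare double @llvm.sqrt.f64(double)

    define double @sqrt_times_sqrt(double %a, double %b) {
      %sa = call fast double @llvm.sqrt.f64(double %a)
      %sb = call fast double @llvm.sqrt.f64(double %b)
      ; folds to: %ab = fmul fast double %a, %b
      ;           %r  = call fast double @llvm.sqrt.f64(double %ab)
      %r = fmul fast double %sa, %sb
      ret double %r
    }
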
* [InstCombine] Check for isa<Instruction> before using cast<>  (Simon Pilgrim, 2017-12-28, 1 file, -1/+1)
  Protects against casts from constexpr etc.
  Reduced from oss-fuzz #4788 test case
  llvm-svn: 321515

* [InstCombine] Gracefully handle out of range extractelement indices  (Simon Pilgrim, 2017-12-27, 1 file, -3/+5)
  InstSimplify is responsible for handling these, but we shouldn't just assert here.
  Reduced from oss-fuzz #4808 test case
  llvm-svn: 321489

* [instcombine] add powi(x, 2) -> x * x  (Philip Reames, 2017-12-27, 1 file, -0/+4)
  llvm-svn: 321468

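  In IR terms the new fold is roughly (hypothetical example; any guard conditions
  in the actual code are omitted):

    declare double @llvm.powi.f64(double, i32)

    define double @powi_two(double %x) {
      ; becomes: %r = fmul double %x, %x
      %r = call double @llvm.powi.f64(double %x, i32 2)
      ret double %r
    }
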
* Sink a couple of transforms from instcombine into instsimplify.  (Philip Reames, 2017-12-27, 1 file, -24/+2)
  llvm-svn: 321467

* [NFC] Extract out a helper function for SimplifyCall(CS, Q)  (Philip Reames, 2017-12-27, 1 file, -3/+1)
  This simplifies code, but the real motivation is that it lets me clean up some
  downstream code.
  llvm-svn: 321466

* [InstCombine] fix miscompile of frem with 0.0 operand (PR34870)  (Sanjay Patel, 2017-12-26, 1 file, -4/+0)
  We might want to select NAN here or do this transform with fast-math, but this
  should at least fix the miscompile.
  llvm-svn: 321461

* [InstCombine] Add debug location to new caller.  (Florian Hahn, 2017-12-20, 1 file, -0/+1)
  Reviewers: rnk, aprantl, majnemer
  Reviewed By: aprantl
  Differential Revision: https://reviews.llvm.org/D414
  llvm-svn: 321191

* [InstCombine] canonicalize shifty abs(): ashr+add+xor --> cmp+neg+sel  (Sanjay Patel, 2017-12-16, 1 file, -0/+20)
  We want to do this for 2 reasons:
  1. Value tracking does not recognize the ashr variant, so it would fail to
     match for cases like D39766.
  2. DAGCombiner does better at producing optimal codegen when we have the
     cmp+sel pattern.

  More detail about what happens in the backend:
  1. DAGCombiner has a generic transform for all targets to convert the scalar
     cmp+sel variant of abs into the shift variant. That is the opposite of this
     IR canonicalization.
  2. DAGCombiner has a generic transform for all targets to convert the vector
     cmp+sel variant of abs into either an ABS node or the shift variant. That is
     again the opposite of this IR canonicalization.
  3. DAGCombiner has a generic transform for all targets to convert the exact
     shift variants produced by #1 or #2 into an ISD::ABS node. Note: It would be
     an efficiency improvement if we had #1 go directly to an ABS node when that's
     legal/custom.
  4. The pattern matching above is incomplete, so it is possible to escape the
     intended/optimal codegen in a variety of ways.
     a. For #2, the vector path is missing the case for setlt with a '1' constant.
     b. For #3, we are missing a match for commuted versions of the shift variants.
  5. Therefore, this IR canonicalization can only help get us to the optimal
     codegen. The version of cmp+sel produced by this patch will be recognized in
     the DAG and converted to an ABS node when possible or the shift sequence when not.
  6. In the following examples with this patch applied, we may get conditional
     moves rather than the shift produced by the generic DAGCombiner transforms.
     The conditional move is created using a target-specific decision for any
     given target. Whether it is optimal or not for a particular subtarget may be
     up for debate.

    define i32 @abs_shifty(i32 %x) {
      %signbit = ashr i32 %x, 31
      %add = add i32 %signbit, %x
      %abs = xor i32 %signbit, %add
      ret i32 %abs
    }

    define i32 @abs_cmpsubsel(i32 %x) {
      %cmp = icmp slt i32 %x, zeroinitializer
      %sub = sub i32 zeroinitializer, %x
      %abs = select i1 %cmp, i32 %sub, i32 %x
      ret i32 %abs
    }

    define <4 x i32> @abs_shifty_vec(<4 x i32> %x) {
      %signbit = ashr <4 x i32> %x, <i32 31, i32 31, i32 31, i32 31>
      %add = add <4 x i32> %signbit, %x
      %abs = xor <4 x i32> %signbit, %add
      ret <4 x i32> %abs
    }

    define <4 x i32> @abs_cmpsubsel_vec(<4 x i32> %x) {
      %cmp = icmp slt <4 x i32> %x, zeroinitializer
      %sub = sub <4 x i32> zeroinitializer, %x
      %abs = select <4 x i1> %cmp, <4 x i32> %sub, <4 x i32> %x
      ret <4 x i32> %abs
    }

  > $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=x86_64 -mattr=avx
  > abs_shifty:
  >   movl %edi, %eax
  >   negl %eax
  >   cmovll %edi, %eax
  >   retq
  >
  > abs_cmpsubsel:
  >   movl %edi, %eax
  >   negl %eax
  >   cmovll %edi, %eax
  >   retq
  >
  > abs_shifty_vec:
  >   vpabsd %xmm0, %xmm0
  >   retq
  >
  > abs_cmpsubsel_vec:
  >   vpabsd %xmm0, %xmm0
  >   retq
  >
  > $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=aarch64
  > abs_shifty:
  >   cmp w0, #0 // =0
  >   cneg w0, w0, mi
  >   ret
  >
  > abs_cmpsubsel:
  >   cmp w0, #0 // =0
  >   cneg w0, w0, mi
  >   ret
  >
  > abs_shifty_vec:
  >   abs v0.4s, v0.4s
  >   ret
  >
  > abs_cmpsubsel_vec:
  >   abs v0.4s, v0.4s
  >   ret
  >
  > $ ./opt -instcombine shiftyabs.ll -S | ./llc -o - -mtriple=powerpc64le
  > abs_shifty:
  >   srawi 4, 3, 31
  >   add 3, 3, 4
  >   xor 3, 3, 4
  >   blr
  >
  > abs_cmpsubsel:
  >   srawi 4, 3, 31
  >   add 3, 3, 4
  >   xor 3, 3, 4
  >   blr
  >
  > abs_shifty_vec:
  >   vspltisw 3, -16
  >   vspltisw 4, 15
  >   vsubuwm 3, 4, 3
  >   vsraw 3, 2, 3
  >   vadduwm 2, 2, 3
  >   xxlxor 34, 34, 35
  >   blr
  >
  > abs_cmpsubsel_vec:
  >   vspltisw 3, -16
  >   vspltisw 4, 15
  >   vsubuwm 3, 4, 3
  >   vsraw 3, 2, 3
  >   vadduwm 2, 2, 3
  >   xxlxor 34, 34, 35
  >   blr

  Differential Revision: https://reviews.llvm.org/D40984
  llvm-svn: 320921

* [PM][InstCombine] fixing omission of AliasAnalysis in new-pass-manager's version of InstCombine  (Fedor Sergeev, 2017-12-14, 1 file, -2/+3)
  Summary: Passing AliasAnalysis results instead of nullptr appears to work just
  fine. A couple of new-pass-manager tests were updated to align with the new
  order of analyses.
  Reviewers: chandlerc, spatel, craig.topper
  Reviewed By: chandlerc
  Subscribers: mehdi_amini, eraman, llvm-commits
  Differential Revision: https://reviews.llvm.org/D41203
  llvm-svn: 320687

* Remove redundant includes from lib/Transforms.  (Michael Zolotukhin, 2017-12-13, 3 files, -4/+0)
  llvm-svn: 320628

* Reintroduce r320049, r320014 and r319894.  (Igor Laevsky, 2017-12-13, 1 file, -0/+4)
  OpenGL issues should be fixed by now.
  llvm-svn: 320568

* [InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast.  (Alexey Bataev, 2017-12-12, 1 file, -4/+20)
  Summary: If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  &V2)))), bitcast)` but the load is used in other instructions, it leads to
  looping in InstCombiner. The patch adds an additional check that all users of
  the load instructions are stores, and then replaces all uses of the load
  instruction by the new one with the new type.
  Reviewers: RKSimon, spatel, majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320525

* Revert "[InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast."  (Alexey Bataev, 2017-12-12, 1 file, -29/+11)
  This reverts commit r320510 - the sanitizer buildbots are failing again.
  llvm-svn: 320513

* [InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast.  (Alexey Bataev, 2017-12-12, 1 file, -11/+29)
  Summary: If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  &V2)))), bitcast)` but the load is used in other instructions, it leads to
  looping in InstCombiner. The patch adds an additional check that all users of
  the load instructions are stores, and then replaces all uses of the load
  instruction by the new one with the new type.
  Reviewers: RKSimon, spatel, majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320510

* Revert "[InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast."  (Alexey Bataev, 2017-12-12, 1 file, -21/+4)
  This reverts commit r320499 again to resolve the problem with the sanitizer buildbots.
  llvm-svn: 320501

* [InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast.  (Alexey Bataev, 2017-12-12, 1 file, -4/+21)
  Summary: If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  &V2)))), bitcast)` but the load is used in other instructions, it leads to
  looping in InstCombiner. The patch adds an additional check that all users of
  the load instructions are stores, and then replaces all uses of the load
  instruction by the new one with the new type.
  Reviewers: RKSimon, spatel, majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320499

* Revert "[InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast."  (Alexey Bataev, 2017-12-12, 1 file, -29/+11)
  This reverts commit r320496 to solve the problems with the sanitizer buildbots.
  llvm-svn: 320498

* [InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast.  (Alexey Bataev, 2017-12-12, 1 file, -11/+29)
  Summary: If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  &V2)))), bitcast)` but the load is used in other instructions, it leads to
  looping in InstCombiner. The patch adds an additional check that all users of
  the load instructions are stores, and then replaces all uses of the load
  instruction by the new one with the new type.
  Reviewers: RKSimon, spatel, majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320496

* Revert "[InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast."  (Alexey Bataev, 2017-12-12, 1 file, -20/+4)
  This reverts commit r320488 because of the failed asan buildbots.
  llvm-svn: 320490

* [InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast.  (Alexey Bataev, 2017-12-12, 1 file, -4/+20)
  Summary: If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  &V2)))), bitcast)` but the load is used in other instructions, it leads to
  looping in InstCombiner. The patch adds an additional check that all users of
  the load instructions are stores, and then replaces all uses of the load
  instruction by the new one with the new type.
  Reviewers: RKSimon, spatel, majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320488

* Revert "[InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast."  (Alexey Bataev, 2017-12-12, 1 file, -17/+2)
  This reverts commit r320483 because of the failed Windows buildbots.
  llvm-svn: 320485

* [InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast.  (Alexey Bataev, 2017-12-12, 1 file, -2/+17)
  If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  &V2)))), bitcast)` but the load is used in other instructions, it leads to
  looping in InstCombiner. The patch adds an additional check that all users of
  the load instructions are stores, and then replaces all uses of the load
  instruction by the new one with the new type.
  Reviewers: RKSimon, spatel, majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320483

* [InstCombineLoadStoreAlloca] Optimize stores to GEP off null base  (Anna Thomas, 2017-12-12, 1 file, -1/+12)
  Summary:
  Currently, in InstCombineLoadStoreAlloca, we have simplification rules for the
  following cases:
    1. load off a null
    2. load off a GEP with null base
    3. store to a null
  This patch adds support for the fourth case, which is a store into a GEP with
  null base. Since this is UB as well (and directly analogous to the load off a
  GEP with null base), we can substitute the stored val with undef in instcombine,
  so that SimplifyCFG can optimize this code into unreachable code.
  Note: Right now, simplifyCFG hasn't been taught about optimizing this to
  unreachable and adding an llvm.trap (this is already done for the above 3 cases).
  Reviewers: majnemer, hfinkel, sanjoy, davide
  Reviewed by: sanjoy, davide
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41026
  llvm-svn: 320480

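  A minimal sketch of the newly handled case (hypothetical IR with invented names;
  the exact shape left behind by the pass is not shown):

    define void @store_to_gep_off_null(i32 %v) {
      ; a store through a GEP whose base is null (default address space) is UB,
      ; so the stored value can be replaced with undef, letting SimplifyCFG
      ; later turn this path into unreachable code
      %p = getelementptr i32, i32* null, i64 4
      store i32 %v, i32* %p
      ret void
    }
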
* Revert r320049, r320014 and r319894  (Igor Laevsky, 2017-12-12, 1 file, -4/+0)
  They were causing failures of the piglit OpenGL tests with AMD GPUs using the
  Mesa radeonsi driver.
  llvm-svn: 320466

* Revert r320407 "[InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast."  (Hans Wennborg, 2017-12-11, 1 file, -17/+2)
  The tests fail (opt asserts) on Windows.
  > Summary:
  > If we have pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  > &V2)))), bitcast)`, but the load is used in other instructions, it leads
  > to looping in InstCombiner. Patch adds additional check that all users
  > of the load instructions are stores and then replaces all uses of load
  > instruction by the new one with new type.
  >
  > Reviewers: RKSimon, spatel, majnemer
  >
  > Subscribers: llvm-commits
  >
  > Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320421

* [InstCombine] Fix PR35618: Instcombine hangs on single minmax load bitcast.  (Alexey Bataev, 2017-12-11, 1 file, -2/+17)
  Summary: If we have the pattern `store (load(bitcast(select (cmp(V1, V2), &V1,
  &V2)))), bitcast)` but the load is used in other instructions, it leads to
  looping in InstCombiner. The patch adds an additional check that all users of
  the load instructions are stores, and then replaces all uses of the load
  instruction by the new one with the new type.
  Reviewers: RKSimon, spatel, majnemer
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D41072
  llvm-svn: 320407

* [InstCombine] Fix SimplifyDemandedUseBits SHL handling (PR35515)  (Simon Pilgrim, 2017-12-09, 1 file, -6/+5)
  Don't assume that the pattern-matched SRL can be cast to an Instruction (it
  might be a ConstantExpr etc.).
  llvm-svn: 320270

* Hardware-assisted AddressSanitizer (llvm part).  (Evgeniy Stepanov, 2017-12-09, 1 file, -1/+2)
  Summary:
  This is LLVM instrumentation for the new HWASan tool. It is basically a
  stripped-down copy of ASan at this point, without stack or global support.
  Instrumentation adds a global constructor and runtime callbacks for every load
  and store.
  HWASan comes with its own IR attribute.
  A brief design document can be found in
  clang/docs/HardwareAssistedAddressSanitizerDesign.rst (submitted earlier).
  Reviewers: kcc, pcc, alekseyshl
  Subscribers: srhines, mehdi_amini, mgorny, javed.absar, eraman, llvm-commits, hiraditya
  Differential Revision: https://reviews.llvm.org/D40932
  llvm-svn: 320217

* [InstCombine] PR35354: Convert store(bitcast, load bitcast (select (Cond, &V1, &V2)) --> store (, load (select(Cond, load &V1, load &V2)))  (Alexey Bataev, 2017-12-08, 1 file, -1/+56)
  Summary:
  If we have code like this:
  ```
  float a, b;
  a = std::max(a, b);
  ```
  it is converted into something like this:
  ```
  %call = call dereferenceable(4) float* @_ZSt3maxIfERKT_S2_S2_(float* nonnull dereferenceable(4) %a.addr, float* nonnull dereferenceable(4) %b.addr)
  %1 = bitcast float* %call to i32*
  %2 = load i32, i32* %1, align 4
  %3 = bitcast float* %a.addr to i32*
  store i32 %2, i32* %3, align 4
  ```
  After inlining, this code is converted to the following:
  ```
  %1 = load float, float* %a.addr
  %2 = load float, float* %b.addr
  %cmp.i = fcmp fast olt float %1, %2
  %__b.__a.i = select i1 %cmp.i, float* %a.addr, float* %b.addr
  %3 = bitcast float* %__b.__a.i to i32*
  %4 = load i32, i32* %3, align 4
  %5 = bitcast float* %arrayidx to i32*
  store i32 %4, i32* %5, align 4
  ```
  This pattern is not recognized as a minmax pattern.
  The patch solves this problem by converting the sequence
  ```
  store (bitcast, (load bitcast (select ((cmp V1, V2), &V1, &V2))))
  ```
  to the sequence
  ```
  store (, load (select((cmp V1, V2), &V1, &V2)))
  ```
  After this, the code is recognized as a minmax pattern.
  Reviewers: RKSimon, spatel
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D40304
  llvm-svn: 320157

* [InstCombine] Don't crash on out of bounds index in the insertelement  (Igor Laevsky, 2017-12-07, 1 file, -0/+4)
  Differential Revision: https://reviews.llvm.org/D40390
  llvm-svn: 320049

* [InstCombine] canonicalize constant-minus-boolean to select-of-constants  (Sanjay Patel, 2017-12-06, 1 file, -1/+6)
  This restores the half of:
  https://reviews.llvm.org/rL75531
  that was reverted at:
  https://reviews.llvm.org/rL159230
  For the x86 case mentioned there, we now produce:
    leal 1(%rdi), %eax
    subl %esi, %eax
  We have target hooks to invert this in DAGCombiner (and x86 is enabled) with:
  https://reviews.llvm.org/rL296977
  https://reviews.llvm.org/rL311731
  AArch64 and possibly other targets would probably benefit from enabling those
  hooks too. See PR30327:
  https://bugs.llvm.org/show_bug.cgi?id=30327#c2
  Differential Revision: https://reviews.llvm.org/D40612
  llvm-svn: 319964

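  A sketch of the canonicalization (hypothetical IR; the constant 42 is just for
  illustration):

    define i32 @const_minus_bool(i1 %b) {
      %ext = zext i1 %b to i32
      ; canonicalizes to: %r = select i1 %b, i32 41, i32 42
      %r = sub i32 42, %ext
      ret i32 %r
    }
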
* [InstCombine] use 'auto' with 'dyn_cast'; NFC  (Sanjay Patel, 2017-11-27, 1 file, -3/+2)
  llvm-svn: 319067

* [InstCombine] include 'sub' in the list of narrow-able binops  (Sanjay Patel, 2017-11-16, 1 file, -10/+7)
  // trunc (binop X, C) --> binop (trunc X, C')
  // trunc (binop (ext X), Y) --> binop X, (trunc Y)
  I'm grouping sub with the other binops because that makes the code simpler and
  the transforms are valid:
  https://rise4fun.com/Alive/UeF
  ...so even though we don't expect a sub with constant Op1 or any of the other
  opcodes with constant Op0 due to canonicalization rules, we might as well handle
  those situations if non-canonical code somehow reaches this point (it should
  just make instcombine more efficient in reaching its end goal).
  This should solve the problem that later manifests in the vectorizers in PR35295:
  https://bugs.llvm.org/show_bug.cgi?id=35295
  llvm-svn: 318404

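  The first form in IR terms (hypothetical example; the one-use and
  shouldChangeType() checks mentioned in the related commits are omitted):

    define i32 @narrow_sub(i64 %x) {
      %w = sub i64 %x, 15
      ; narrows to: %n = trunc i64 %x to i32
      ;             %t = sub i32 %n, 15
      %t = trunc i64 %w to i32
      ret i32 %t
    }
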
* [InstCombine] trunc (binop X, C) --> binop (trunc X, C')  (Sanjay Patel, 2017-11-15, 1 file, -4/+17)
  Note that one-use and shouldChangeType() are checked ahead of the switch.
  Without the narrowing folds, we can produce inferior vector code as shown in PR35299:
  https://bugs.llvm.org/show_bug.cgi?id=35299
  llvm-svn: 318323

* [InstCombine] Salvage debug info during initial DCE  (Reid Kleckner, 2017-11-15, 1 file, -0/+1)
  InstCombine salvages debug info for every instruction it erases from its
  worklist, but it wasn't doing it during its initial DCE when populating its
  worklist. This fixes that.
  This should help improve availability of 'this' in optimized debug info when
  casts are necessary.
  llvm-svn: 318320