path: root/llvm/test/Transforms
* [InstCombine] allow icmp (shr/shl) folds for vectors
  Sanjay Patel, 2016-09-15 (2 files, -0/+20)
  These 2 helper functions were already using APInt internally, so just change the API and caller
  to allow folds for splats. The scalar regression tests look quite thorough, so I just added a
  couple of tests to prove that vectors are handled too. These folds should be grouped with the
  other cmp+shift folds though. That can be an NFC follow-up.
  llvm-svn: 281663
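  A minimal sketch of the kind of splat fold this enables (hypothetical function name and
  constants, not the actual tests that were added):

    define <2 x i1> @lshr_exact_eq_vec(<2 x i8> %x) {
      %sh = lshr exact <2 x i8> %x, <i8 3, i8 3>
      %cmp = icmp eq <2 x i8> %sh, <i8 4, i8 4>
      ret <2 x i1> %cmp
    }
    ; with splat constants now handled, this can fold to: icmp eq <2 x i8> %x, <i8 32, i8 32>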
* regenerate checks
  Sanjay Patel, 2016-09-15 (1 file, -227/+310)
  llvm-svn: 281655
* [GlobalOpt] Dead Eliminate declarations
  Mehdi Amini, 2016-09-15 (1 file, -0/+7)
  GlobalOpt already dead-code-eliminates global definitions. With this change it also takes care
  of declarations, which should hopefully make it a strict superset of GlobalDCE.
  This is important for LTO/ThinLTO as we don't want the linker to see "undefined reference" when
  it processes the input files: it could prevent proper internalization (or even load an extra
  file from a static archive, changing the behavior of the program!).
  llvm-svn: 281653
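  As an illustration (a hypothetical module, not taken from the new test), an unreferenced
  declaration can now be deleted so the linker never sees it:

    declare void @unused_helper()      ; declaration with no remaining uses

    define void @foo() {
      ret void
    }
    ; GlobalOpt can now drop the dead declaration of @unused_helper instead of
    ; leaving the undefined reference for the linker to resolve.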
* [InstCombine] Do not RAUW a constant GEP
  David Majnemer, 2016-09-15 (1 file, -0/+20)
  canRewriteGEPAsOffset expects to process instructions, not constants.
  This fixes PR30342.
  llvm-svn: 281650
* [InstCombine] allow icmp (sub nsw) folds for vectors
  Sanjay Patel, 2016-09-15 (1 file, -8/+4)
  Also, clean up the code and comments for the existing folds in foldICmpSubConstant().
  llvm-svn: 281631
* [InstCombine] add vector tests for icmp (sub nsw)
  Sanjay Patel, 2016-09-15 (1 file, -0/+44)
  llvm-svn: 281630
* [InstCombine] allow (icmp sgt smin(PosA, B), 0) fold for vectors
  Sanjay Patel, 2016-09-15 (1 file, -11/+3)
  llvm-svn: 281624
* [InstCombine] add vector tests for icmp sgt smin
  Sanjay Patel, 2016-09-15 (1 file, -6/+59)
  llvm-svn: 281623
* [InstCombine] auto-generate checks
  Sanjay Patel, 2016-09-15 (1 file, -6/+16)
  llvm-svn: 281621
* [InstCombine] use m_APInt to allow icmp folds using known bits for splat constant vectors
  Sanjay Patel, 2016-09-15 (1 file, -28/+7)
  llvm-svn: 281613
* llvm/test/Transforms/CorrelatedValuePropagation/alloca.ll REQUIRES +Asserts.
  NAKAMURA Takumi, 2016-09-15 (1 file, -0/+1)
  llvm-svn: 281598
* Add some shortcuts in LazyValueInfo to reduce compile time of Correlated Value Propagation.
  Wei Mi, 2016-09-15 (1 file, -0/+48)
  The patch is to partially fix PR10584. Correlated Value Propagation queries LVI to check
  non-null for the pointer params of each callsite. If we know the def of a param is an alloca
  instruction, we know it is non-null and can return early from LVI. Similarly, CVP queries LVI
  to check whether the pointer for each mem access is constant. If the def of the pointer is an
  alloca instruction, we know it is not a constant pointer. These shortcuts can reduce the cost
  of CVP significantly.
  Differential Revision: https://reviews.llvm.org/D18066
  llvm-svn: 281586
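  A sketch of the early-exit case (hypothetical IR, names are illustrative): when the pointer
  argument is directly defined by an alloca, LVI can answer the non-null query without any
  further analysis.

    declare void @callee(i32*)

    define void @caller() {
      %buf = alloca i32
      ; CVP asks LVI whether %buf is non-null at this callsite; since the def of
      ; %buf is an alloca, LVI can return "non-null" immediately.
      call void @callee(i32* %buf)
      ret void
    }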
* [InstCombine] add vector tests for foldICmpUsingKnownBits()
  Sanjay Patel, 2016-09-14 (1 file, -3/+97)
  llvm-svn: 281559
* [LV] Process pointer IVs with PHINodes in collectLoopUniforms
  Matthew Simpson, 2016-09-14 (1 file, -0/+169)
  This patch moves the processing of pointer induction variables in collectLoopUniforms from the
  consecutive pointer phase of the analysis to the phi node phase. Previously, if a pointer
  induction variable was used by both a scalarized non-memory instruction as well as a vectorized
  memory instruction, we would incorrectly identify the pointer as uniform. Pointer induction
  variables should be treated the same as other phi nodes. That is, they are uniform if all users
  of the induction variable and induction variable update are uniform.
  Differential Revision: https://reviews.llvm.org/D24511
  llvm-svn: 281485
* [InstCombine] Merged two test files and regenerated checks using update_test_checks.py. NFC.
  Andrea Di Biagio, 2016-09-14 (3 files, -34/+49)
  llvm-svn: 281478
* [ObjCARC] Traverse chain downwards to replace uses of argument passed to ObjC library call with call return.
  Akira Hatanaka, 2016-09-13 (1 file, -0/+18)
  ARC contraction tries to replace uses of an argument passed to an objective-c library call with
  the call return value. For example, in the following IR, it replaces uses of argument %9 and
  uses of the values discovered traversing the chain upwards (%7 and %8) with the call return
  %10, if they are dominated by the call to @objc_autoreleaseReturnValue. This transformation
  enables code-gen to tail-call the call to @objc_autoreleaseReturnValue, which is necessary to
  enable auto release return value optimization.

    %7 = tail call i8* @objc_loadWeakRetained(i8** %6)
    %8 = bitcast i8* %7 to %0*
    %9 = bitcast %0* %8 to i8*
    %10 = tail call i8* @objc_autoreleaseReturnValue(i8* %9)
    ret %0* %8

  Since r276727, llvm started removing redundant bitcasts and as a result started feeding the
  following IR to ARC contraction:

    %7 = tail call i8* @objc_loadWeakRetained(i8** %6)
    %8 = bitcast i8* %7 to %0*
    %9 = tail call i8* @objc_autoreleaseReturnValue(i8* %7)
    ret %0* %8

  ARC contraction no longer does the optimization described above since it only traverses the
  chain upwards and fails to recognize that the function return can be replaced by the call
  return. This commit changes ARC contraction to traverse the chain downwards too and replace
  uses of bitcasts with the call return.

  rdar://problem/28011339
  Differential Revision: https://reviews.llvm.org/D24523
  llvm-svn: 281419
* add tests for PR28672
  Sanjay Patel, 2016-09-13 (1 file, -0/+30)
  I'm not sure if we actually want to transform all of these in InstCombine yet, so I'm not
  labeling these with FIXME.
  llvm-svn: 281386
* Reapply "InstCombine: Reduce trunc (shl x, K) width."Matt Arsenault2016-09-133-24/+247
| | | | | | | This reapplies r272987 with a fix for infinitely looping when the truncated value is another shift of a constant. llvm-svn: 281379
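  The reduced pattern looks roughly like this (a hypothetical example, assuming the shift amount
  K is smaller than the destination width):

    define i32 @narrow_shl(i64 %x) {
      %sh = shl i64 %x, 5
      %tr = trunc i64 %sh to i32
      ret i32 %tr
    }
    ; expected result: perform the shift in the narrow type instead:
    ;   %tr = trunc i64 %x to i32
    ;   %sh = shl i32 %tr, 5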
* [ConstantFold] Improve the bitcast folding logic for constant vectors.
  Andrea Di Biagio, 2016-09-13 (2 files, -23/+22)
  The constant folder didn't know how to always fold bitcasts of constant integer vectors. In
  particular, it was unable to handle the case where a constant vector had some undef elements,
  and the resulting (i.e. bitcasted) vector type had more elements than the original vector type.
  Example:
    %cast = bitcast <2 x i64><i64 undef, i64 2> to <4 x i32>
  On a little endian target, %cast could have been folded to:
    <4 x i32><i32 undef, i32 undef, i32 2, i32 0>
  This patch improves the folding logic by teaching it how to correctly propagate undef elements
  in the folded vector.
  Differential Revision: https://reviews.llvm.org/D24301
  llvm-svn: 281343
* [InstSimplify] Add tests to show missed bitcast folding opportunities.
  Andrea Di Biagio, 2016-09-13 (1 file, -0/+144)
  InstSimplify doesn't always know how to fold a bitcast of a constant vector. In particular, the
  logic in InstSimplify doesn't know how to handle the case where the constant vector in input
  contains some undef elements, and the number of elements is smaller than the number of elements
  of the bitcast vector type.
  llvm-svn: 281332
* Remove InstCombine test file
  Sam Parker, 2016-09-13 (1 file, -17/+0)
  My previous commit should have removed a test file but I missed it.
  llvm-svn: 281326
* Enable simplify libcalls for ARM PCS
  Sam Parker, 2016-09-13 (2 files, -0/+229)
  Teach SimplifyLibcalls that it can treat functions annotated with apcs, aapcs or aapcs_vfp like
  normal C functions if they only take and return integer or pointer values, and the target is
  not iOS.
  Differential Revision: https://reviews.llvm.org/D24453
  llvm-svn: 281322
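  For example (a hypothetical case, assuming a 32-bit ARM triple where size_t is i32), a strlen
  call using the aapcs convention only takes and returns pointer/integer values, so it can now be
  simplified like a normal C call:

    @hello = private constant [6 x i8] c"hello\00"
    declare arm_aapcscc i32 @strlen(i8*)

    define arm_aapcscc i32 @hello_len() {
      %p = getelementptr inbounds [6 x i8], [6 x i8]* @hello, i32 0, i32 0
      %len = call arm_aapcscc i32 @strlen(i8* %p)
      ret i32 %len
    }
    ; SimplifyLibcalls can now fold the strlen of a constant string to the constant 5.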
* DebugInfo: New metadata representation for global variables.
  Peter Collingbourne, 2016-09-13 (9 files, -26/+49)
  This patch reverses the edge from DIGlobalVariable to GlobalVariable. This will allow us to
  more easily preserve debug info metadata when manipulating global variables.
  Fixes PR30362. A program for upgrading test cases is attached to that bug.
  Differential Revision: http://reviews.llvm.org/D20147
  llvm-svn: 281284
* add more tests for PR30273
  Sanjay Patel, 2016-09-12 (1 file, -2/+32)
  llvm-svn: 281270
* [InstCombine] add test for PR30327
  Sanjay Patel, 2016-09-12 (1 file, -0/+14)
  llvm-svn: 281248
* [InstCombine] regenerate checks
  Sanjay Patel, 2016-09-12 (1 file, -4/+7)
  llvm-svn: 281247
* [InstCombine] use m_APInt to allow icmp X, C folds for splat constant vectors
  Sanjay Patel, 2016-09-12 (2 files, -5/+3)
  isSignBitCheck could be changed to take a pointer param to avoid the 'UnusedBit' ugliness.
  llvm-svn: 281231
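  One shape this covers (a hypothetical example; the canonicalization shown is the usual scalar
  one, which may now also apply to splats):

    define <2 x i1> @ugt_zero(<2 x i32> %x) {
      %cmp = icmp ugt <2 x i32> %x, zeroinitializer
      ret <2 x i1> %cmp
    }
    ; with m_APInt matching splat constants, this can canonicalize to:
    ;   icmp ne <2 x i32> %x, zeroinitializer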
* [FunctionAttrs] Don't try to infer returned if it is already on an argument
  David Majnemer, 2016-09-12 (1 file, -0/+12)
  Trying to infer the 'returned' attribute if an argument is already 'returned' can lead to
  verification failure: inference might determine that a different argument is passed through,
  which would result in two different arguments marked as 'returned'.
  This fixes PR30350.
  llvm-svn: 281221
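  A sketch of the problematic shape (hypothetical IR): one argument already carries 'returned',
  and inference could decide that a different argument is the one passed through.

    define i32* @pick(i32* %a, i32* returned %b) {
      ret i32* %a
    }
    ; If inference also marked %a as 'returned', the function would have two
    ; 'returned' arguments and fail the verifier. With this change, inference
    ; skips functions that already have a 'returned' argument.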
* [InstCombine] add tests to show missing vector folds
  Sanjay Patel, 2016-09-12 (2 files, -1/+29)
  llvm-svn: 281219
* [InstCombine] regenerate checks
  Sanjay Patel, 2016-09-12 (1 file, -52/+55)
  llvm-svn: 281186
* [InstCombine] regenerate checks
  Sanjay Patel, 2016-09-12 (1 file, -98/+108)
  llvm-svn: 281185
* [SimplifyCFG] Be even more conservative in SinkThenElseCodeToEnd
  James Molloy, 2016-09-11 (2 files, -14/+25)
  This should *actually* fix PR30244. This cranks up the workaround for PR30188 so that we never
  sink loads or stores of allocas. The idea is that these should be removed by SROA/Mem2Reg, and
  any movement of them may well confuse SROA or just cause unwanted code churn. It's not ideal
  that the midend should be crippled like this, but that unwanted churn can really cause
  significant regressions in important workloads (tsan).
  llvm-svn: 281162
* [SimplifyCFG] Harden up the profitability heuristic for block splitting during sinking
  James Molloy, 2016-09-11 (1 file, -3/+52)
  Exposed by PR30244: we currently split a block if we think we can sink at least one
  instruction. However this isn't right - the reason we split predecessors is so that we can sink
  instructions that otherwise couldn't be sunk because it isn't safe to do so - stores, for
  example. So, change the heuristic to only split if it thinks it can sink at least one
  non-speculatable instruction.
  Should fix PR30244.
  llvm-svn: 281160
* Add handling of !invariant.load to PropagateMetadata.
  Justin Lebar, 2016-09-11 (1 file, -0/+17)
  Summary: This will let e.g. the load/store vectorizer propagate this metadata appropriately.
  Reviewers: arsenm
  Subscribers: tra, jholewinski, hfinkel, mzolotukhin
  Differential Revision: https://reviews.llvm.org/D23479
  llvm-svn: 281153
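  A rough sketch (hypothetical IR) of what this enables: if the load/store vectorizer merges two
  invariant loads, the combined load can keep the !invariant.load metadata.

    define i32 @sum(i32* %p) {
      %p1 = getelementptr inbounds i32, i32* %p, i64 1
      %a = load i32, i32* %p, align 8, !invariant.load !0
      %b = load i32, i32* %p1, align 4, !invariant.load !0
      %s = add i32 %a, %b
      ret i32 %s
    }
    !0 = !{}
    ; if these are vectorized into a single <2 x i32> load, PropagateMetadata can
    ; now place !invariant.load on the wide load.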
* InstCombine: Don't combine loads/stores from swifterror to a new type
  Arnold Schwaighofer, 2016-09-10 (2 files, -0/+33)
  This generates invalid IR: the only users of swifterror can be call arguments, loads, and
  stores.
  rdar://28242257
  llvm-svn: 281144
* Inliner: Don't mark swifterror allocas with lifetime markers
  Arnold Schwaighofer, 2016-09-09 (1 file, -0/+17)
  This would create a bitcast use, which fails the verifier: swifterror values may only be used
  by loads, stores, and as function arguments.
  rdar://28233244
  llvm-svn: 281114
* LSV: Fix incorrectly increasing alignment
  Matt Arsenault, 2016-09-09 (1 file, -0/+129)
  If the unaligned access has a dynamic offset, it may be odd, which would make the adjusted
  alignment incorrect to use.
  llvm-svn: 281110
* [InstCombine] use m_APInt to allow icmp ult X, C folds for splat constant vectors
  Sanjay Patel, 2016-09-09 (4 files, -10/+14)
  llvm-svn: 281107
* Do not widen load for different variable in GVN.
  Dehao Chen, 2016-09-09 (5 files, -15/+19)
  Summary: Widening a load in GVN happens too early, because it will block other optimizations
  like PRE and LICM. https://llvm.org/bugs/show_bug.cgi?id=29110

  The SPECCPU2006 benchmark impact of this patch:

    Reference: o2_nopatch   (1): o2_patched

    Benchmark                          Base:Reference      (1)
    -----------------------------------------------------------
    spec/2006/fp/C++/444.namd                    25.2    -0.08%
    spec/2006/fp/C++/447.dealII                 45.92    +1.05%
    spec/2006/fp/C++/450.soplex                  41.7    -0.26%
    spec/2006/fp/C++/453.povray                 35.65    +1.68%
    spec/2006/fp/C/433.milc                     23.79    +0.42%
    spec/2006/fp/C/470.lbm                      41.88    -1.12%
    spec/2006/fp/C/482.sphinx3                  47.94    +1.67%
    spec/2006/int/C++/471.omnetpp               22.46    -0.36%
    spec/2006/int/C++/473.astar                 21.19    +0.24%
    spec/2006/int/C++/483.xalancbmk             36.09    -0.11%
    spec/2006/int/C/400.perlbench               33.28    +1.35%
    spec/2006/int/C/401.bzip2                   22.76    -0.04%
    spec/2006/int/C/403.gcc                     32.36    +0.12%
    spec/2006/int/C/429.mcf                     41.04    -0.41%
    spec/2006/int/C/445.gobmk                   26.94    +0.04%
    spec/2006/int/C/456.hmmer                    24.5    -0.20%
    spec/2006/int/C/458.sjeng                      28    -0.46%
    spec/2006/int/C/462.libquantum              55.25    +0.27%
    spec/2006/int/C/464.h264ref                 45.87    +0.72%
    geometric mean                                       +0.23%

  For most benchmarks, it's a wash, but we do see stable improvements on some benchmarks,
  e.g. 447, 453, 482, 400.

  Reviewers: davidxl, hfinkel, dberlin, sanjoy, reames
  Subscribers: gberry, junbuml
  Differential Revision: https://reviews.llvm.org/D24096
  llvm-svn: 281074
* [InstCombine] add tests to show pattern matching failures due to commutation
  Sanjay Patel, 2016-09-09 (3 files, -0/+148)
  I was looking to fix a bug in getComplexity(), and these cases showed up as obvious failures.
  I'm not sure how to find these in general though.
  llvm-svn: 281055
* [Coroutines] Part13: Handle single edge PHINodes across suspends
  Gor Nishanov, 2016-09-09 (1 file, -0/+48)
  Summary: If one of the uses of the value is a single edge PHINode, handle it.

  Original:
    %val = something
    <suspend>
    %p = PHINode [%val]

  After Spill + Part13:
    %val = something
    %slot = gep val.spill.slot
    store %val, %slot
    <suspend>
    %p = load %slot

  Plus tiny fixes/changes:
    * use correct index for coro.free in CoroCleanup
    * fixup id parameter in coro.free to allow authoring coroutine in plain C with __builtins

  Reviewers: majnemer
  Subscribers: mehdi_amini, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24242
  llvm-svn: 281020
* Remove debug info when hoisting instruction from then/else branch.
  Dehao Chen, 2016-09-08 (1 file, -0/+83)
  Summary: The hoisted instruction is executed speculatively. Keeping its debug location could
  affect the debugging experience, as the user would see gdb step into code that may not be
  expected to execute. It would also affect sample profile accuracy by assigning incorrect
  frequency to source within the then/else branch.
  Reviewers: davidxl, dblaikie, chandlerc, kcc, echristo
  Subscribers: mehdi_amini, probinson, eric_niebler, andreadb, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24164
  llvm-svn: 280995
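  Roughly (hypothetical IR, with the debug metadata itself elided): when SimplifyCFG speculates
  an instruction out of a conditional block, the source location it carried is dropped.

    define i32 @hoist(i1 %c, i32 %a) {
    entry:
      br i1 %c, label %then, label %end
    then:
      %t = add i32 %a, 1        ; carries a !dbg location from the 'then' branch
      br label %end
    end:
      %r = phi i32 [ %t, %then ], [ %a, %entry ]
      ret i32 %r
    }
    ; if %t is hoisted into 'entry' and the branch turned into a select, the !dbg
    ; location on the hoisted add is now removed rather than kept.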
* [InstCombine] regenerate checks
  Sanjay Patel, 2016-09-08 (1 file, -228/+284)
  llvm-svn: 280993
* [LV] Ensure proper handling of multi-use case when collecting uniforms
  Matthew Simpson, 2016-09-08 (1 file, -1/+4)
  The test case included in r280979 wasn't checking what it was supposed to be checking for the
  predicated store case. Fixing the test revealed that the multi-use case (when a pointer is used
  by both vectorized and scalarized memory accesses) wasn't being handled properly. We can't skip
  over non-consecutive-like pointers since they may have looked consecutive-like with a different
  memory access.
  llvm-svn: 280992
* [InstCombine] regenerate checks
  Sanjay Patel, 2016-09-08 (1 file, -60/+77)
  llvm-svn: 280991
* [LV] Don't mark pointers used by scalarized memory accesses uniform
  Matthew Simpson, 2016-09-08 (1 file, -0/+268)
  Previously, all consecutive pointers were marked uniform after vectorization. However, if a
  consecutive pointer is used by a memory access that is eventually scalarized, the pointer won't
  remain uniform after all. An example is predicated stores: even though a predicated store may
  be consecutive, it will still be scalarized, making its pointer operand non-uniform.
  This patch updates the logic in collectLoopUniforms to consider the cases where a memory access
  may be scalarized. If a memory access may be scalarized, its pointer operand is not marked
  uniform. The determination of whether a given memory instruction will be scalarized or not has
  been moved into a common function that is used by the vectorizer, cost model, and legality
  analysis.
  Differential Revision: https://reviews.llvm.org/D24271
  llvm-svn: 280979
* Add unittest for r280760
  Dehao Chen, 2016-09-08 (1 file, -0/+31)
  llvm-svn: 280963
* [InstCombine][X86] Regenerate masked memory op combine tests
  Simon Pilgrim, 2016-09-08 (1 file, -88/+114)
  llvm-svn: 280960
* [InstCombine][X86] Regenerate vperm2f128/vperm2i128 combine tests
  Simon Pilgrim, 2016-09-08 (1 file, -86/+116)
  llvm-svn: 280959
* [InstCombine][X86] Regenerate insertps combine tests
  Simon Pilgrim, 2016-09-08 (1 file, -43/+59)
  llvm-svn: 280957