path: root/llvm/lib/Analysis
Commit message | Author | Date | Files | Lines
...
* [LoopPred] Fix two subtle issues found by inspection | Philip Reames | 2019-11-06 | 1 | -0/+11
  This patch fixes two issues noticed by inspection when going to enable the loop predication code in IndVarSimplify.
  Issue 1 - Both the LoopPredication transform and the already-on-by-default optimizeLoopExits transform modify the exit count of the exits they modify (either to 0 or Infinity). Looking at the code more closely, this was not reflected into SCEV and we were instead running later transforms with incorrect SCEVs. Fixing this requires forgetting the loop, weakening a too-strong assert, and updating SCEV to not pessimize results when a loop is provably untaken. I haven't been able to find a test case to demonstrate the miscompile.
  Issue 2 - For modules without a data layout, we can end up with unsized pointer-typed exit counts. Just bail out of this case.
  I think these are the last two issues which need to be addressed before we enable this by default. The code has already survived a decent amount of fuzzing without revealing either of the above.
  Differential Revision: https://reviews.llvm.org/D69695
* [TTI][LV] preferPredicateOverEpilogue | Sjoerd Meijer | 2019-11-06 | 1 | -0/+6
  We have two ways to steer creating a predicated vector body over creating a scalar epilogue. To force this, we have 1) a command line option and 2) a pragma available. This adds a third: a target hook to TargetTransformInfo that can be queried whether predication is preferred or not, which allows the vectoriser to make the decision without forcing it.
  While this change behaves as a non-functional change for now, it shows the required TTI plumbing, usage of this new hook in the vectoriser, and the beginning of an ARM MVE implementation. I will follow up on this with:
  - a complete MVE implementation, see D69845.
  - a patch to disable this, i.e. we should respect "vector_predicate(disable)" and its corresponding loop hint.
  Differential Revision: https://reviews.llvm.org/D69040
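  A minimal standalone C++ sketch of the decision order described above (names are illustrative assumptions, not the actual TTI or LoopVectorize interfaces): the command line option and the pragma still force the choice, and the new target hook is only consulted when neither is set.

      #include <cstdio>

      enum class TailFoldingHint { Default, Force, Disable };

      // Illustrative model: returns true if a predicated (tail-folded) vector
      // body should be used instead of a scalar epilogue.
      bool preferPredicatedBody(TailFoldingHint Option, TailFoldingHint Pragma,
                                bool TargetPrefersPredication) {
        if (Option != TailFoldingHint::Default)
          return Option == TailFoldingHint::Force;   // 1) command line option
        if (Pragma != TailFoldingHint::Default)
          return Pragma == TailFoldingHint::Force;   // 2) loop pragma
        return TargetPrefersPredication;             // 3) the new, non-forcing TTI hook
      }

      int main() {
        std::printf("%d\n", preferPredicatedBody(TailFoldingHint::Default,
                                                 TailFoldingHint::Default,
                                                 /*TargetPrefersPredication=*/true));
      }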
* [InstSimplify] use FMF to improve fcmp+select fold | Sanjay Patel | 2019-11-04 | 1 | -7/+10
  This is part of a series of patches needed to solve PR39535:
  https://bugs.llvm.org/show_bug.cgi?id=39535
* [SCEV] Fixed 'Uninitialized variable 'ContainsAddRec' used.' warning. NFCI. | Dávid Bolvanský | 2019-11-03 | 1 | -1/+1
* [MemorySSA] Fixed null check after dereferencing warning. NFCI. | Dávid Bolvanský | 2019-11-03 | 1 | -0/+1
* [LV] Generalize conditions for sinking instrs for first order recurrences. | Florian Hahn | 2019-11-02 | 1 | -14/+26
  If the recurrence PHI node has a single user, we can sink any instruction without side effects, given that all users are dominated by the instruction computing the incoming value of the next iteration ('Previous'). We can sink instructions that may cause traps, because that only causes the trap to occur later, but not on any new paths.
  With the relaxed check, we also have to make sure that we do not have a direct cycle (meaning the PHI user == 'Previous'), which indicates a reduction relation, which potentially gets missed by ReductionDescriptor.
  As follow-ups, we can also sink stores, iff they do not alias with other instructions we move them across, and we could also support sinking chains of instructions and multiple users of the PHI.
  Fixes PR43398.
  Reviewers: hsaito, dcaballe, Ayal, rengolin
  Reviewed By: Ayal
  Differential Revision: https://reviews.llvm.org/D69228
* [PGO][PGSO] TargetLowering/TargetTransformationInfo/SwitchLoweringUtils part. | Hiroshi Yamauchi | 2019-10-31 | 2 | -4/+6
  Summary: (Split off of D67120) TargetLowering/TargetTransformationInfo/SwitchLoweringUtils changes for profile guided size optimization.
  Reviewers: davidxl
  Subscribers: eraman, hiraditya, haicheng, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69580
* [ValueTracking] Allow context-sensitive nullness check for non-pointers | Johannes Doerfert | 2019-10-31 | 2 | -7/+12
  Same as D60846 but with a fix for the problem encountered there, which was a missing context adjustment in the handling of PHI nodes. The test that caused D60846 to be reverted was added in e15ab8f277c7.
  Reviewers: nikic, nlopes, mkazantsev, spatel, dlrobertson, uabelho, hakzsam
  Subscribers: hiraditya, bollu, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69571
* [FIX] Make LSan happy by *not* leaking memory | Johannes Doerfert | 2019-10-31 | 1 | -4/+13
  I left a memory leak in a printer pass which made LSan sad, so I remove the memory leak now to make LSan happy.
  Reported and tested by vlad.tsyrklevich.
* [MustExecute] Silence clang warning about unused captured 'this' | Mikael Holmen | 2019-10-31 | 1 | -2/+2
  New code introduced in fe799c97fa caused clang to complain with:
    ../lib/Analysis/MustExecute.cpp:360:34: error: lambda capture 'this' is not used [-Werror,-Wunused-lambda-capture]
      GetterTy<LoopInfo> LIGetter = [this](const Function &F) {
                                    ^~~~
    ../lib/Analysis/MustExecute.cpp:365:44: error: lambda capture 'this' is not used [-Werror,-Wunused-lambda-capture]
      GetterTy<PostDominatorTree> PDTGetter = [this](const Function &F) {
                                              ^~~~
    2 errors generated.
* [MustExecute] Forward iterate over conditional branches | Johannes Doerfert | 2019-10-31 | 1 | -1/+187
  Summary: If a conditional branch is encountered, we can try to find a join block where the execution is known to continue. This means finding a suitable block, e.g., the immediate post dominator of the conditional branch, and proving control will always reach that block. This patch implements different techniques that work with and without provided analysis.
  Reviewers: uenoku, sstefan1, hfinkel
  Subscribers: hiraditya, bollu, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68933
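  A plain C++ illustration of the join-block idea (illustrative only, not LLVM code): whichever way the branch goes, execution is known to continue at the immediate post-dominator, provided neither arm can throw, abort, or fail to terminate.

      // Both arms fall through unconditionally, so the statement after the
      // if/else (the immediate post-dominator of the branch) is known to
      // execute whenever the branch itself executes.
      int joinExample(bool Cond, int X) {
        if (Cond)
          X += 1;      // no effect that could stop execution
        else
          X -= 1;      // likewise
        return X * 2;  // the join block: always reached from the branch
      }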
* [Alignment][NFC] getMemoryOpCost uses MaybeAlign | Guillaume Chatelet | 2019-10-25 | 1 | -5/+5
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: nemanjai, hiraditya, kbarton, MaskRay, jsji, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69307
* [SCEV] Expose and use maximum constant exit counts for individual loop exits | Philip Reames | 2019-10-24 | 1 | -3/+15
  We were already going to all of the trouble of computing maximum constant exit counts for each loop exit; we might as well expose them through the API.
  The change in IndVars is mostly to demonstrate that the wired-up code works, but it also very slightly strengthens the transform. The strengthened case is rather narrow though: it requires one exactly analyzable exit, one imprecisely analyzable exit (with the upper bound less than the precise one), and one unanalyzable exit. I couldn't construct a reasonably stable test case.
  This does increase the memory usage of the BackedgeTakenCount by a factor of 2 in the worst case.
  I also noticed the loop in IndVars is O(#Exits ^ 2). This doesn't change with this patch. A future patch will cache this result inside of SCEV to avoid re-querying.
* Fix Clang -Wcovered-switch-default warning by moving llvm_unreachable default to after the switch | David Blaikie | 2019-10-24 | 1 | -5/+2
* [SCEV] Start reworking backedge taken count APIs to unify max handling [NFC] | Philip Reames | 2019-10-24 | 1 | -13/+21
  This is a first step in figuring out a proper API for maximum (non-constant) exit counts. This may evolve a bit as we get experience with the API needs; suggestions very welcome. This patch just tries to provide a framework that we can later add maximums to in a clean and obvious way.
* Fix MSVC "switch statement contains 'default' but no 'case' labels" warning. NFCI. | Simon Pilgrim | 2019-10-24 | 1 | -7/+4
* [LVI][NFC] Factor solveBlockValueSaturatingIntrinsic() out of solveBlockValueIntrinsic() | Roman Lebedev | 2019-10-23 | 1 | -11/+25
  Now that there's a SaturatingInst class, this is cleaner.
* [LVI][CVP] LazyValueInfoImpl::solveBlockValueBinaryOp(): use no-wrap flags from `add` op | Roman Lebedev | 2019-10-23 | 1 | -2/+16
  Summary: This was suggested in https://reviews.llvm.org/D69277#1717210
  In this form (this is what was suggested, right?), the results aren't staggering (especially given LVI's cross-block focus); this does catch some things (as per the test-suite), but not too much:
  | statistic | old | new | delta | % change |
  | correlated-value-propagation.NumAddNSW | 4981 | 4982 | 1 | 0.0201% |
  | correlated-value-propagation.NumAddNW | 12125 | 12126 | 1 | 0.0082% |
  | correlated-value-propagation.NumCmps | 1199 | 1202 | 3 | 0.2502% |
  | correlated-value-propagation.NumDeadCases | 112 | 111 | -1 | -0.8929% |
  | correlated-value-propagation.NumMulNSW | 275 | 278 | 3 | 1.0909% |
  | correlated-value-propagation.NumMulNUW | 1323 | 1326 | 3 | 0.2268% |
  | correlated-value-propagation.NumMulNW | 1598 | 1604 | 6 | 0.3755% |
  | correlated-value-propagation.NumNSW | 7158 | 7167 | 9 | 0.1257% |
  | correlated-value-propagation.NumNUW | 13304 | 13310 | 6 | 0.0451% |
  | correlated-value-propagation.NumNW | 20462 | 20477 | 15 | 0.0733% |
  | correlated-value-propagation.NumOverflows | 4 | 7 | 3 | 75.0000% |
  | correlated-value-propagation.NumPhis | 15366 | 15381 | 15 | 0.0976% |
  | correlated-value-propagation.NumSExt | 6273 | 6277 | 4 | 0.0638% |
  | correlated-value-propagation.NumShlNSW | 1172 | 1171 | -1 | -0.0853% |
  | correlated-value-propagation.NumShlNUW | 2793 | 2794 | 1 | 0.0358% |
  | correlated-value-propagation.NumSubNSW | 730 | 736 | 6 | 0.8219% |
  | correlated-value-propagation.NumSubNUW | 2044 | 2046 | 2 | 0.0978% |
  | correlated-value-propagation.NumSubNW | 2774 | 2782 | 8 | 0.2884% |
  | instcount.NumAddInst | 277586 | 277569 | -17 | -0.0061% |
  | instcount.NumAndInst | 66056 | 66054 | -2 | -0.0030% |
  | instcount.NumBrInst | 709147 | 709146 | -1 | -0.0001% |
  | instcount.NumCallInst | 528579 | 528576 | -3 | -0.0006% |
  | instcount.NumExtractValueInst | 18307 | 18301 | -6 | -0.0328% |
  | instcount.NumOrInst | 102660 | 102665 | 5 | 0.0049% |
  | instcount.NumPHIInst | 318008 | 318007 | -1 | -0.0003% |
  | instcount.NumSelectInst | 46373 | 46370 | -3 | -0.0065% |
  | instcount.NumSExtInst | 79496 | 79488 | -8 | -0.0101% |
  | instcount.NumShlInst | 40654 | 40657 | 3 | 0.0074% |
  | instcount.NumTruncInst | 62251 | 62249 | -2 | -0.0032% |
  | instcount.NumZExtInst | 68211 | 68221 | 10 | 0.0147% |
  | instcount.TotalBlocks | 843910 | 843909 | -1 | -0.0001% |
  | instcount.TotalInsts | 7387448 | 7387423 | -25 | -0.0003% |
  Reviewers: nikic, reames
  Reviewed By: nikic
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69321
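  A self-contained C++ model of the idea (illustrative only, not the LVI code; the toy 8-bit range type and names are assumptions): when the add carries nuw, inputs that would overflow yield poison, so the computed range can be clamped instead of falling back to the full set.

      #include <cstdint>
      #include <cstdio>
      #include <optional>

      // Toy unsigned i8 interval [Lo, Hi], inclusive and non-wrapping.
      struct Range { uint8_t Lo, Hi; };

      // With "nuw", any input that overflows produces poison, so the result
      // range only has to cover the non-overflowing inputs.
      std::optional<Range> addNUW(Range X, uint8_t C) {
        unsigned Lo = unsigned(X.Lo) + C, Hi = unsigned(X.Hi) + C;
        if (Lo > 255)
          return std::nullopt;   // every input overflows: result is poison
        return Range{uint8_t(Lo), uint8_t(Hi > 255 ? 255 : Hi)};
      }

      int main() {
        // x in [100, 200]; "x + 100 nuw" is only defined for x <= 155, so the
        // result can be reported as [200, 255] rather than "unknown".
        if (auto R = addNUW({100, 200}, 100))
          std::printf("[%u, %u]\n", unsigned(R->Lo), unsigned(R->Hi));
      }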
* [Alignment][NFC] Finish transition for `Loads` | Guillaume Chatelet | 2019-10-21 | 3 | -51/+48
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: hiraditya, asbirlea, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69253
  llvm-svn: 375419
* Use Align for TFL::TransientStackAlignment | Guillaume Chatelet | 2019-10-21 | 1 | -1/+1
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: arsenm, dschuff, jyknight, sdardis, jvesely, nhaehnle, sbc100, jgravelle-google, hiraditya, aheejin, fedor.sergeev, jrtc27, atanasyan, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D69216
  llvm-svn: 375398
* [SCEV] Simplify umin/max of zext and sext of the same value | Philip Reames | 2019-10-19 | 1 | -1/+34
  This is a common idiom which arises after induction variables are widened, and we have two or more exit conditions. Interestingly, we don't have instcombine or instsimplify support for this either.
  Differential Revision: https://reviews.llvm.org/D69006
  llvm-svn: 375349
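  A self-contained brute-force check of the identity this commit exploits (illustrative, not the SCEV code): for any i8 value x, zext(x) is unsigned-less-or-equal to sext(x) when both are widened to i16, so the umin of the pair is the zext and the umax is the sext.

      #include <cassert>
      #include <cstdint>

      int main() {
        for (int V = 0; V < 256; ++V) {
          uint16_t ZExt = uint16_t(uint8_t(V));            // zero-extend i8 -> i16
          uint16_t SExt = uint16_t(int16_t(int8_t(V)));    // sign-extend i8 -> i16
          assert(ZExt <= SExt);  // unsigned compare: umin is ZExt, umax is SExt
        }
        return 0;
      }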
* [SCEV] Removing deprecated comment in ScalarEvolutionExpander | Victor Campos | 2019-10-18 | 1 | -3/+0
  Removing a comment in the ScalarEvolutionExpander.cpp file that was about the class SCEVSDivExpr, which has been long gone from LLVM.
  llvm-svn: 375232
* Reland: Dead Virtual Function Elimination | Oliver Stannard | 2019-10-17 | 1 | -0/+32
  Remove dead virtual functions from vtables with replaceNonMetadataUsesWith, so that CGProfile metadata gets cleaned up correctly.
  Original commit message:
  Currently, it is hard for the compiler to remove unused C++ virtual functions, because they are all referenced from vtables, which are referenced by constructors. This means that if the constructor is called from any live code, then we keep every virtual function in the final link, even if there are no call sites which can use it.
  This patch allows unused virtual functions to be removed during LTO (and regular compilation in limited circumstances) by using type metadata to match virtual function call sites to the vtable slots they might load from. This information can then be used in the global dead code elimination pass instead of the references from vtables to virtual functions, to more accurately determine which functions are reachable.
  To make this transformation safe, I have changed clang's code-generation to always load virtual function pointers using the llvm.type.checked.load intrinsic, instead of regular load instructions. I originally tried writing this using clang's existing code-generation, which uses the llvm.type.test and llvm.assume intrinsics after doing a normal load. However, it is possible for optimisations to obscure the relationship between the GEP, load and llvm.type.test, causing GlobalDCE to fail to find virtual function call sites.
  The existing linkage and visibility types don't accurately describe the scope in which a virtual call could be made which uses a given vtable. This is wider than the visibility of the type itself, because a virtual function call could be made using a more-visible base class. I've added a new !vcall_visibility metadata type to represent this, described in TypeMetadata.rst. The internalization pass and libLTO have been updated to change this metadata when linking is performed.
  This doesn't currently work with ThinLTO, because it needs to see every call to llvm.type.checked.load in the linkage unit. It might be possible to extend this optimisation to be able to use the ThinLTO summary, as was done for devirtualization, but until then that combination is rejected in the clang driver.
  To test this, I've written a fuzzer which generates random C++ programs with complex class inheritance graphs, and virtual functions called through object and function pointers of different types. The programs are spread across multiple translation units and DSOs to test the different visibility restrictions.
  I've also tried doing bootstrap builds of LLVM to test this. This isn't ideal, because only classes in anonymous namespaces can be optimised with -fvisibility=default, and some parts of LLVM (plugins and bugpoint) do not work correctly with -fvisibility=hidden. However, there are only 12 test failures when building with -fvisibility=hidden (and an unmodified compiler), and this change does not cause any new failures for either value of -fvisibility.
  On the 7 C++ sub-benchmarks of SPEC2006, this gives a geomean code-size reduction of ~6%, over a baseline compiled with "-O2 -flto -fvisibility=hidden -fwhole-program-vtables". The best cases are reductions of ~14% in 450.soplex and 483.xalancbmk, and there are no code size increases.
  I've also run this on a set of 8 mbed-os examples compiled for Armv7M, which show a geomean size reduction of ~3%, again with no size increases.
  I had hoped that this would have no effect on performance, which would allow it to always be enabled (when using -fwhole-program-vtables). However, the changes in clang to use the llvm.type.checked.load intrinsic are causing ~1% performance regression in the C++ parts of SPEC2006. It should be possible to recover some of this perf loss by teaching optimisations about the llvm.type.checked.load intrinsic, which would make it worth turning this on by default (though it's still dependent on -fwhole-program-vtables).
  Differential revision: https://reviews.llvm.org/D63932
  llvm-svn: 375094
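  A plain C++ example of the situation being optimised (illustrative): Impl::f is never called, but its address sits in the vtable that the constructor references, so without vtable-aware global DCE it is kept in the final link.

      struct Base {
        virtual ~Base() = default;
        virtual int f() = 0;
        virtual int g() = 0;
      };

      struct Impl : Base {
        int f() override { return 1; }  // no call site anywhere: a dead virtual function
        int g() override { return 2; }  // reachable through the virtual call below
      };

      int use(Base &B) { return B.g(); }

      int main() {
        Impl I;        // the constructor references Impl's vtable, which references f()
        return use(I);
      }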
* [Alignment][NFC] Value::getPointerAlignment returns MaybeAlign | Guillaume Chatelet | 2019-10-15 | 2 | -16/+19
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet, jdoerfert
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68398
  llvm-svn: 374889
* Revert "Dead Virtual Function Elimination"Jorge Gorbe Moya2019-10-141-32/+0
| | | | | | This reverts commit 9f6a873268e1ad9855873d9d8007086c0d01cf4f. llvm-svn: 374844
* [NFC][TTI] Add Alignment for isLegalMasked[Load/Store] | Sam Parker | 2019-10-14 | 1 | -4/+6
  Add an extra parameter so the backend can take the alignment into consideration.
  Differential Revision: https://reviews.llvm.org/D68400
  llvm-svn: 374763
* recommit: [LoopVectorize][PowerPC] Estimate int and float register pressure separately in loop-vectorize | Zi Xuan Wu | 2019-10-12 | 1 | -2/+10
  In loop-vectorize, interleave count and vector factor depend on the target register number. Currently, it does not estimate register pressure for different register classes separately (in particular, scalar float types should not be lumped together with int types), so it is not accurate. Specifically, it causes too much interleaving/unrolling, resulting in too many register spills in the loop body and hurting performance.
  So we need to classify the register classes at the IR level. Importantly, these are abstract register classes, not the target register classes the backend provides in its .td files. They are used to establish the mapping between the types of IR values and the number of simultaneous live ranges to which we'd like to limit some set of those types.
  For example, on the POWER target the register count is special when VSX is enabled: the number of int scalar registers is 32 (GPR) and float is 64 (VSR), but int and float vector registers are both 64 (VSR). So there should be 2 kinds of register class when VSX is enabled, and 3 kinds when VSX is NOT enabled.
  On the POWER target this makes a big (+~30%) performance improvement in one specific benchmark (503.bwaves_r) of SPEC2017, with no other obvious regressions.
  Differential revision: https://reviews.llvm.org/D67148
  llvm-svn: 374634
* Dead Virtual Function Elimination | Oliver Stannard | 2019-10-11 | 1 | -0/+32
  Currently, it is hard for the compiler to remove unused C++ virtual functions, because they are all referenced from vtables, which are referenced by constructors. This means that if the constructor is called from any live code, then we keep every virtual function in the final link, even if there are no call sites which can use it.
  This patch allows unused virtual functions to be removed during LTO (and regular compilation in limited circumstances) by using type metadata to match virtual function call sites to the vtable slots they might load from. This information can then be used in the global dead code elimination pass instead of the references from vtables to virtual functions, to more accurately determine which functions are reachable.
  To make this transformation safe, I have changed clang's code-generation to always load virtual function pointers using the llvm.type.checked.load intrinsic, instead of regular load instructions. I originally tried writing this using clang's existing code-generation, which uses the llvm.type.test and llvm.assume intrinsics after doing a normal load. However, it is possible for optimisations to obscure the relationship between the GEP, load and llvm.type.test, causing GlobalDCE to fail to find virtual function call sites.
  The existing linkage and visibility types don't accurately describe the scope in which a virtual call could be made which uses a given vtable. This is wider than the visibility of the type itself, because a virtual function call could be made using a more-visible base class. I've added a new !vcall_visibility metadata type to represent this, described in TypeMetadata.rst. The internalization pass and libLTO have been updated to change this metadata when linking is performed.
  This doesn't currently work with ThinLTO, because it needs to see every call to llvm.type.checked.load in the linkage unit. It might be possible to extend this optimisation to be able to use the ThinLTO summary, as was done for devirtualization, but until then that combination is rejected in the clang driver.
  To test this, I've written a fuzzer which generates random C++ programs with complex class inheritance graphs, and virtual functions called through object and function pointers of different types. The programs are spread across multiple translation units and DSOs to test the different visibility restrictions.
  I've also tried doing bootstrap builds of LLVM to test this. This isn't ideal, because only classes in anonymous namespaces can be optimised with -fvisibility=default, and some parts of LLVM (plugins and bugpoint) do not work correctly with -fvisibility=hidden. However, there are only 12 test failures when building with -fvisibility=hidden (and an unmodified compiler), and this change does not cause any new failures for either value of -fvisibility.
  On the 7 C++ sub-benchmarks of SPEC2006, this gives a geomean code-size reduction of ~6%, over a baseline compiled with "-O2 -flto -fvisibility=hidden -fwhole-program-vtables". The best cases are reductions of ~14% in 450.soplex and 483.xalancbmk, and there are no code size increases.
  I've also run this on a set of 8 mbed-os examples compiled for Armv7M, which show a geomean size reduction of ~3%, again with no size increases.
  I had hoped that this would have no effect on performance, which would allow it to always be enabled (when using -fwhole-program-vtables). However, the changes in clang to use the llvm.type.checked.load intrinsic are causing ~1% performance regression in the C++ parts of SPEC2006. It should be possible to recover some of this perf loss by teaching optimisations about the llvm.type.checked.load intrinsic, which would make it worth turning this on by default (though it's still dependent on -fwhole-program-vtables).
  Differential revision: https://reviews.llvm.org/D63932
  llvm-svn: 374539
* [SCEV] Add stricter verification option. | Florian Hahn | 2019-10-11 | 1 | -5/+8
  Currently -verify-scev only fails if there is a constant difference between two BE counts. This misses a lot of cases. This patch adds a -verify-scev-strict option, which fails for any non-zero difference, if used together with -verify-scev.
  With the stricter checking, some unit tests fail because of mismatches, especially around IndVarSimplify. If there is no reason I am missing for just checking constant deltas, I am planning on looking into the various failures.
  Reviewers: efriedma, sanjoy.google, reames, atrick
  Reviewed By: sanjoy.google
  Differential Revision: https://reviews.llvm.org/D68592
  llvm-svn: 374535
* [MemorySSA] Update Phi simplification. | Alina Sbirlea | 2019-10-10 | 1 | -5/+12
  When simplifying a Phi to the unique value found incoming, check that there wasn't a Phi already created to break a cycle. If so, remove it.
  Resolves PR43541.
  Some additional nits included.
  llvm-svn: 374471
* [ValueTracking] Improve pointer offset computation for cases of same base | Rong Xu | 2019-10-10 | 1 | -9/+39
  This patch improves the handling of pointer offsets in GEP expressions where one argument is the base pointer. isPointerOffset() is used by memcpyopt, where the current code synthesizes 32 bytes of consecutive stores into one store plus two memset intrinsic calls. With this patch, we convert the stores to a single memset intrinsic.
  Differential Revision: https://reviews.llvm.org/D67989
  llvm-svn: 374454
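  The kind of source pattern this analysis serves (plain C++, illustrative): a run of consecutive stores of the same byte value, which memcpyopt can merge into a single memset once every store's offset from the common base pointer is known.

      // Eight consecutive byte stores off the same base pointer; with precise
      // offset computation these become one memset(P, 0, 8).
      void zeroEight(unsigned char *P) {
        P[0] = 0; P[1] = 0; P[2] = 0; P[3] = 0;
        P[4] = 0; P[5] = 0; P[6] = 0; P[7] = 0;
      }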
* [MemorySSA] Additional handling of unreachable blocks. | Alina Sbirlea | 2019-10-10 | 1 | -0/+4
  Summary: Whenever we get the previous definition, the assumption is that the recursion starts in a reachable block. If the recursion starts in an unreachable block, we may recurse indefinitely. Handle this case by returning LoE if the block is unreachable.
  Resolves PR43426.
  Reviewers: george.burgess.iv
  Subscribers: Prazek, sanjoy.google, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68809
  llvm-svn: 374447
* [System Model] [TTI] Move default cache/prefetch implementations | David Greene | 2019-10-10 | 1 | -28/+0
  Move the default implementations of cache and prefetch queries to TargetTransformInfoImplBase and delete them from NoTTIImpl. This brings these interfaces in line with how other TTI interfaces work.
  Differential Revision: https://reviews.llvm.org/D68804
  llvm-svn: 374446
* [Alignment][NFC] Make VectorUtils use llvm::Align | Guillaume Chatelet | 2019-10-10 | 1 | -6/+6
  Summary: This patch is part of a series to introduce an Alignment type.
  See this thread for context: http://lists.llvm.org/pipermail/llvm-dev/2019-July/133851.html
  See this patch for the introduction of the type: https://reviews.llvm.org/D64790
  Reviewers: courbet
  Subscribers: hiraditya, rogfer01, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68784
  llvm-svn: 374330
* [System Model] [TTI] Update cache and prefetch TTI interfaces | David Greene | 2019-10-09 | 1 | -0/+28
  Re-apply 9fdfb045ae8b/r365676 with fixes for PPC and Hexagon. This involved moving defaults from TargetTransformInfoImplBase to MCSubtargetInfo.
  Rework the TTI cache and software prefetching APIs to prepare for the introduction of a general system model. Changes include:
  - Marking existing interfaces const and/or override as appropriate
  - Adding comments
  - Adding BasicTTIImpl interfaces that delegate to a subtarget implementation
  - Moving the default TargetTransformInfoImplBase implementation to a default MCSubtarget implementation
  Only a handful of targets use these interfaces currently: AArch64, Hexagon, PPC and SystemZ. AArch64 already has a custom subtarget implementation, so its custom TTI implementation is migrated to use the new facilities in BasicTTIImpl to invoke its custom subtarget implementation. The custom TTI implementations continue to exist for the other targets with this change. They are not moved over to subtarget-based implementations.
  The end goal is to have the default subtarget implementation defer to the system model defined by the target. With this change, the default MCSubtargetInfo implementation essentially returns the defaults that TargetTransformInfoImplBase used to return. Existing users of TTI defaults will hit the defaults now in MCSubtargetInfo. Targets that define their own custom TTI implementations won't use the BasicTTIImpl implementations that route to the subtarget.
  Once system models are in place for the targets that use these interfaces, their custom TTI implementations can be removed.
  Differential Revision: https://reviews.llvm.org/D63614
  llvm-svn: 374205
* [MemorySSA] Make the use of moveAllAfterMergeBlocks consistent. | Alina Sbirlea | 2019-10-09 | 1 | -15/+22
  Summary: The rule for the moveAllAfterMergeBlocks API is for all instructions from `From` to have been moved to `To`, while keeping the CFG edges (and block terminators) unchanged. Update all the callsites of moveAllAfterMergeBlocks to follow this.
  Pending follow-up: since the same behavior is needed every time, merge all callsites into one. The common denominator may be the call to `MergeBlockIntoPredecessor`.
  Resolves PR43569.
  Reviewers: george.burgess.iv
  Subscribers: Prazek, sanjoy.google, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68659
  llvm-svn: 374177
* Revert "[LoopVectorize][PowerPC] Estimate int and float register pressure ↵Jinsong Ji2019-10-081-10/+2
| | | | | | | | | | | | | | separately in loop-vectorize" Also Revert "[LoopVectorize] Fix non-debug builds after rL374017" This reverts commit 9f41deccc0e648a006c9f38e11919f181b6c7e0a. This reverts commit 18b6fe07bcf44294f200bd2b526cb737ed275c04. The patch is breaking PowerPC internal build, checked with author, reverting on behalf of him for now due to timezone. llvm-svn: 374091
* [SVE][IR] Scalable Vector size queries and IR instruction support | Graham Hunter | 2019-10-08 | 1 | -2/+4
  * Adds a TypeSize struct to represent the known minimum size of a type, along with a flag to indicate that the runtime size is an integer multiple of that size
  * Converts existing size query functions from Type.h and DataLayout.h to return a TypeSize result
  * Adds convenience methods (including a transparent conversion operator to uint64_t) so that most existing code 'just works' as if the return values were still scalars
  * Uses the new size queries along with ElementCount to ensure that all supported instructions used with scalable vectors can be constructed in IR
  Reviewers: hfinkel, lattner, rkruppe, greened, rovka, rengolin, sdesmalen
  Reviewed By: rovka, sdesmalen
  Differential Revision: https://reviews.llvm.org/D53137
  llvm-svn: 374042
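  A standalone C++ model of the new size-query result described above (illustrative assumptions, not the actual llvm::TypeSize class): it pairs the known minimum size with a scalable flag, and a transparent conversion keeps fixed-size callers compiling unchanged.

      #include <cstdint>

      struct TypeSizeModel {
        uint64_t MinSize;  // known minimum size in bits
        bool Scalable;     // true: runtime size is MinSize times vscale

        // Lets existing code that expects a plain integer 'just work'; a real
        // implementation would reject this conversion for scalable sizes.
        operator uint64_t() const { return MinSize; }
      };

      int main() {
        TypeSizeModel FixedV4I32{128, false};    // <4 x i32>
        TypeSizeModel ScalableV4I32{128, true};  // <vscale x 4 x i32>
        uint64_t Bits = FixedV4I32;              // transparent conversion
        return Bits == 128 && ScalableV4I32.Scalable ? 0 : 1;
      }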
* [LoopVectorize][PowerPC] Estimate int and float register pressure separately in loop-vectorize | Zi Xuan Wu | 2019-10-08 | 1 | -2/+10
  In loop-vectorize, interleave count and vector factor depend on the target register number. Currently, it does not estimate register pressure for different register classes separately (in particular, scalar float types should not be lumped together with int types), so it is not accurate. Specifically, it causes too much interleaving/unrolling, resulting in too many register spills in the loop body and hurting performance.
  So we need to classify the register classes at the IR level. Importantly, these are abstract register classes, not the target register classes the backend provides in its .td files. They are used to establish the mapping between the types of IR values and the number of simultaneous live ranges to which we'd like to limit some set of those types.
  For example, on the POWER target the register count is special when VSX is enabled: the number of int scalar registers is 32 (GPR) and float is 64 (VSR), but int and float vector registers are both 64 (VSR). So there should be 2 kinds of register class when VSX is enabled, and 3 kinds when VSX is NOT enabled.
  On the POWER target this makes a big (+~30%) performance improvement in one specific benchmark (503.bwaves_r) of SPEC2017, with no other obvious regressions.
  Differential revision: https://reviews.llvm.org/D67148
  llvm-svn: 374017
* Second attempt to add iterator_range::empty() | Jordan Rose | 2019-10-07 | 1 | -1/+1
  Doing this makes MSVC complain that `empty(someRange)` could refer to either C++17's std::empty or LLVM's llvm::empty, which previously we avoided via SFINAE because std::empty is defined in terms of an empty member rather than begin and end. So, switch callers over to the new method as it is added.
  https://reviews.llvm.org/D68439
  llvm-svn: 373935
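  A standalone sketch mirroring what the commit describes (illustrative, not the LLVM header): empty() is a member defined via begin()/end(), so callers ask the range directly rather than using a free function that MSVC could confuse with C++17's std::empty.

      #include <cassert>
      #include <vector>

      template <typename IteratorT> struct SimpleRange {
        IteratorT BeginIt, EndIt;
        IteratorT begin() const { return BeginIt; }
        IteratorT end() const { return EndIt; }
        bool empty() const { return BeginIt == EndIt; }  // the new member
      };

      int main() {
        std::vector<int> V;
        SimpleRange<std::vector<int>::iterator> R{V.begin(), V.end()};
        assert(R.empty());
        return 0;
      }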
* Revert "[SLP] avoid reduction transform on patterns that the backend can ↵Martin Storsjo2019-10-071-53/+0
| | | | | | | | | | load-combine" This reverts SVN r373833, as it caused a failed assert "Non-zero loop cost expected" on building numerous projects, see PR43582 for details and reproduction samples. llvm-svn: 373882
* [LOOPGUARD] Remove asserts in getLoopGuardBranch | Whitney Tsang | 2019-10-06 | 1 | -3/+9
  Summary: The assertion in getLoopGuardBranch can be replaced by a 'return nullptr' under an if condition.
  Authored By: DTharun
  Reviewer: Whitney, fhahn
  Reviewed By: Whitney, fhahn
  Subscribers: fhahn, llvm-commits
  Tag: LLVM
  Differential Revision: https://reviews.llvm.org/D66084
  llvm-svn: 373857
* [SLP] avoid reduction transform on patterns that the backend can load-combine | Sanjay Patel | 2019-10-05 | 1 | -0/+53
  I don't see an ideal solution to these 2 related, potentially large, perf regressions:
  https://bugs.llvm.org/show_bug.cgi?id=42708
  https://bugs.llvm.org/show_bug.cgi?id=43146
  We decided that load combining was unsuitable for IR because it could obscure other optimizations in IR. So we removed the LoadCombiner pass and deferred to the backend. Therefore, preventing SLP from destroying load combine opportunities requires that it recognizes patterns that could be combined later, but not do the optimization itself (it's not a vector combine anyway, so it's probably out of scope for SLP).
  Here, we add a scalar cost model adjustment with a conservative pattern match and cost summation for a multi-instruction sequence that can probably be reduced later. This should prevent SLP from creating a vector reduction unless that sequence is extremely cheap.
  In the x86 tests shown (and discussed in more detail in the bug reports), SDAG combining will produce a single instruction on these tests like:
    movbe rax, qword ptr [rdi]
  or:
    mov rax, qword ptr [rdi]
  Not some (half) vector monstrosity as we currently do using SLP:
    vpmovzxbq ymm0, dword ptr [rdi + 1] # ymm0 = mem[0],zero,zero,..
    vpsllvq ymm0, ymm0, ymmword ptr [rip + .LCPI0_0]
    movzx eax, byte ptr [rdi]
    movzx ecx, byte ptr [rdi + 5]
    shl rcx, 40
    movzx edx, byte ptr [rdi + 6]
    shl rdx, 48
    or rdx, rcx
    movzx ecx, byte ptr [rdi + 7]
    shl rcx, 56
    or rcx, rdx
    or rcx, rax
    vextracti128 xmm1, ymm0, 1
    vpor xmm0, xmm0, xmm1
    vpshufd xmm1, xmm0, 78 # xmm1 = xmm0[2,3,0,1]
    vpor xmm0, xmm0, xmm1
    vmovq rax, xmm0
    or rax, rcx
    vzeroupper
    ret
  Differential Revision: https://reviews.llvm.org/D67841
  llvm-svn: 373833
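  The scalar idiom the new cost-model check tries to leave alone (plain C++, illustrative): an or-of-shifted byte loads that SelectionDAG can later combine into a single wide load (the movbe/mov x86 output quoted above).

      #include <cstdint>

      // Little-endian 8-byte load assembled from single-byte loads; the backend
      // can collapse this into one 64-bit load, so SLP should not vectorize it.
      uint64_t loadLE64(const uint8_t *P) {
        uint64_t V = 0;
        for (unsigned I = 0; I < 8; ++I)
          V |= uint64_t(P[I]) << (8 * I);
        return V;
      }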
* [MemorySSA] Update Phi creation when inserting a Def. | Alina Sbirlea | 2019-10-02 | 1 | -37/+40
  MemoryPhis should be added in the IDF of the blocks newly gaining Defs. This includes the blocks that gained a Phi and the block gaining a Def, if the block did not have one before.
  Resolves PR43427.
  llvm-svn: 373505
* Fix: Actually erase/remove the elements from AssumeHandles | Aditya Kumar | 2019-10-02 | 1 | -1/+4
  Reviewers: sdmitriev, tejohnson
  Reviewed by: tejohnson
  Subscribers: llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D68318
  llvm-svn: 373494
* MemorySSAUpdater::applyInsertUpdates - silence static analyzer dyn_cast<MemoryAccess> null dereference warning. NFCI. | Simon Pilgrim | 2019-10-02 | 1 | -1/+1
  The static analyzer is warning about a potential null dereference, but we should be able to use cast<MemoryAccess> directly, and if not, the assert will fire for us.
  llvm-svn: 373467
* MemorySSA tryOptimizePhi - assert that we've found a DefChainEnd. NFCI. | Simon Pilgrim | 2019-10-02 | 1 | -0/+1
  Silences static analyzer null dereference warning.
  llvm-svn: 373466
* LoopAccessAnalysis isConsecutiveAccess() - silence static analyzer dyn_cast<SCEVConstant> null dereference warning. NFCI. | Simon Pilgrim | 2019-10-02 | 1 | -2/+2
  The static analyzer is warning about potential null dereferences, but in these cases we should be able to use cast<SCEVConstant> directly, and if not, the assert will fire for us.
  llvm-svn: 373465
* [InstCombine] Simplify fma multiplication to NaN for undef or NaN operands. | Florian Hahn | 2019-10-02 | 1 | -3/+3
  In similar fashion to D67721, we can simplify FMA multiplications if any of the operands is NaN or undef. In instcombine, we will simplify the FMA to an fadd with a NaN operand, which in turn gets folded to NaN. Note that this just changes SimplifyFMAFMul, so we still do not catch the case where only the Add part of the FMA is NaN/undef.
  Reviewers: cameron.mcinally, mcberg2017, spatel, arsenm
  Reviewed By: cameron.mcinally
  Differential Revision: https://reviews.llvm.org/D68265
  llvm-svn: 373459
* [InstSimplify] fold fma/fmuladd with a NaN or undef operand | Sanjay Patel | 2019-10-02 | 1 | -0/+9
  This is intended to be similar to the constant folding results from D67446 and earlier, but not all operands are constant in these tests, so the responsibility for folding is left to InstSimplify.
  Differential Revision: https://reviews.llvm.org/D67721
  llvm-svn: 373455
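  The arithmetic fact behind both fma folds above, checked with plain C++ (illustrative, not the InstSimplify code): if any operand of fma is NaN, the result is NaN, so the call can be folded to a NaN constant (an undef operand can simply be chosen to be NaN).

      #include <cmath>
      #include <cstdio>

      int main() {
        double R = std::fma(1.0, NAN, 2.0);          // one NaN operand
        std::printf("isnan: %d\n", std::isnan(R));   // prints 1
        return 0;
      }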