path: root/llvm/lib/Transforms
* Create SampleProfileLoader pass in llvm instead of clang (Dehao Chen, 2016-12-14, 1 file, -0/+5)
  Summary: We used to create the SampleProfileLoader pass in clang. This makes LTO/ThinLTO unable to add this pass in the linker plugin. This patch moves the SampleProfileLoader pass creation from clang to the llvm pass manager builder.
  Reviewers: tejohnson, davidxl, dnovillo
  Subscribers: llvm-commits, mehdi_amini
  Differential Revision: https://reviews.llvm.org/D27743
  llvm-svn: 289669
* Replace APFloatBase static fltSemantics data members with getter functions (Stephan Bergmann, 2016-12-14, 3 files, -11/+11)
  At least the plugin used by the LibreOffice build (<https://wiki.documentfoundation.org/Development/Clang_plugins>) indirectly uses those members (through inline functions in LLVM/Clang include files in turn using them), but they are not exported by utils/extract_symbols.py on Windows, and accessing data across DLL/EXE boundaries on Windows is generally problematic.
  Differential Revision: https://reviews.llvm.org/D26671
  llvm-svn: 289647
* [X86][InstCombine] Handle demanded elements for operand of AVX-512 scalar floating point to integer conversion intrinsics. (Craig Topper, 2016-12-14, 1 file, -1/+17)
  llvm-svn: 289639
* [X86][InstCombine] Teach SimplifyDemandedVectorElts to handle masked scalar add/sub/mul/div/max/min intrinsics better. (Craig Topper, 2016-12-14, 2 files, -20/+14)
  Now we can remove these intrinsics if element 0 isn't used. Also fix undef element tracking.
  llvm-svn: 289636
* [X86][InstCombine] Handle scalar fmadd intrinsics correctly in SimplifyDemandedVectorElts. (Craig Topper, 2016-12-14, 2 files, -15/+22)
  Now we pass a modified version of DemandedElts to each operand and we calculate undef elts correctly.
  llvm-svn: 289632
* [X86][InstCombine] Teach SimplifyDemandedVectorElts to handle scalar round intrinsics more correctly. (Craig Topper, 2016-12-14, 2 files, -38/+21)
  Now we only pass bit 0 of the DemandedElts to optimize operand 1 as we recurse, since the upper bits are unused. Similarly, we clear bit 0 when optimizing operand 0. Also calculate UndefElts correctly. Simplify InstCombineCalls for these intrinsics to just call SimplifyDemandedVectorElts for the call instruction to reuse this support.
  llvm-svn: 289629
* [X86][InstCombine] Teach SimplifyDemandedVectorElts to handle scalar min/max/cmp intrinsics more correctly. (Craig Topper, 2016-12-14, 2 files, -20/+34)
  Now we only pass bit 0 of the DemandedElts to optimize operand 1 as we recurse, since the upper bits are unused. Also calculate UndefElts correctly. Simplify InstCombineCalls for these intrinsics to just call SimplifyDemandedVectorElts for the call instruction to reuse this support.
  llvm-svn: 289628
* Change CoverageTracker from a global variable to a member variable to avoid breaking thread-safety. (NFC) (Dehao Chen, 2016-12-13, 1 file, -52/+52)
  llvm-svn: 289603
* [IRCE] Avoid loop optimizations on pre and post loops (Anna Thomas, 2016-12-13, 1 file, -0/+34)
  Summary: This patch will add loop metadata on the pre and post loops generated by IRCE. Currently, we have metadata for disabling optimizations such as vectorization, unrolling, loop distribution and LICM versioning (and confirmed that these optimizations check for the metadata before proceeding with the transformation). The pre and post loops generated by IRCE need not go through loop opts (since these are slow paths). Added two test cases as well.
  Reviewers: sanjoy, reames
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D26806
  llvm-svn: 289588
* [LV] Don't vectorize when we have a small static bound on trip count (Michael Kuperstein, 2016-12-13, 1 file, -2/+2)
  We currently check if the exact trip count is known and is smaller than the "tiny loop" bound. We should be checking the maximum bound on the trip count instead.
  Differential Revision: https://reviews.llvm.org/D27690
  llvm-svn: 289583
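
A minimal standalone sketch of the changed check, written as plain C++ rather than the actual LoopVectorize code; the threshold value and the "0 means unknown" convention are assumptions for illustration:

    #include <cstdint>

    // Illustrative "tiny loop" bound; the real pass has its own threshold.
    static const uint64_t TinyTripCountThreshold = 16;

    // Old check: bail out only when the *exact* trip count is known and tiny.
    bool isTinyLoopOld(uint64_t ExactTripCount /* 0 if unknown */) {
      return ExactTripCount != 0 && ExactTripCount < TinyTripCountThreshold;
    }

    // New check: bail out when the *maximum* trip-count bound is known and tiny,
    // which also covers loops whose exact count is unknown but provably small.
    bool isTinyLoopNew(uint64_t MaxTripCountBound /* 0 if unknown */) {
      return MaxTripCountBound != 0 && MaxTripCountBound < TinyTripCountThreshold;
    }
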
* [ADCE] Add code to remove dead branches (David Callahan, 2016-12-13, 1 file, -54/+227)
  Summary: This is the last in a series of patches to evolve ADCE.cpp to support removal of unnecessary control flow. This patch adds the code to update the control and data flow graphs to remove the dead control flow. Also update unit tests to test the capability to remove dead, may-be-infinite loops, which is enabled by the switch -adce-remove-loops.
  Previous patches:
  D23824 [ADCE] Add handling of PHI nodes when removing control flow
  D23559 [ADCE] Add control dependence computation
  D23225 [ADCE] Modify data structures to support removing control flow
  D23065 [ADCE] Refactor anticipating new functionality (NFC)
  D23102 [ADCE] Refactoring for new functionality (NFC)
  Reviewers: dberlin, majnemer, nadav, mehdi_amini
  Subscribers: llvm-commits, david2050, freik, twoh
  Differential Revision: https://reviews.llvm.org/D24918
  llvm-svn: 289548
* [X86][InstCombine] Fix SimplifyDemandedVectorElts to handle frcz scalar intrinsics correctly. (Craig Topper, 2016-12-13, 2 files, -0/+18)
  Only the lower bits of the input element are used. And only the lower element can be undef since the upper bits are zeroed. Have InstCombineCalls call SimplifyDemandedVectorElts for these intrinsics to reuse this support.
  llvm-svn: 289523
* [PGO] Fix insane counts due to nonreturn calls (Rong Xu, 2016-12-13, 1 file, -2/+11)
  Summary: Since we don't break BBs for function calls, we might get some insane counts (wrap of unsigned) in the presence of noreturn calls. This patch sets these counts to zero instead of the wrapped number.
  Reviewers: davidxl
  Subscribers: xur, eraman, llvm-commits
  Differential Revision: https://reviews.llvm.org/D27602
  llvm-svn: 289521
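
A small standalone model (plain C++, not the actual PGOInstrumentation code) of why the counts go insane and what the fix does: a count derived by unsigned subtraction becomes inconsistent around a noreturn call, wraps to a huge value, and is clamped to zero instead.

    #include <cstdint>
    #include <iostream>

    // Hypothetical derivation of a count by subtraction; with a noreturn call
    // in the block, the subtrahend can exceed the minuend, so the unsigned
    // result wraps around.
    uint64_t deriveCount(uint64_t OutCount, uint64_t Subtracted) {
      uint64_t Count = OutCount - Subtracted;   // may wrap
      if (Count > OutCount)                     // wrapped: impossible "insane" value
        Count = 0;                              // clamp to zero instead
      return Count;
    }

    int main() {
      std::cout << deriveCount(100, 120) << "\n"; // prints 0, not 18446744073709551596
    }
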
* [SCCP] Debug diagnostic goes under DEBUG(). NFCI. (Davide Italiano, 2016-12-13, 1 file, -1/+1)
  llvm-svn: 289519
* [SLP] Fix sign-extends for type-shrinking (Matthew Simpson, 2016-12-12, 1 file, -15/+62)
  This patch ensures the correct minimum bit width during type-shrinking. Previously when type-shrinking, we always sign-extended values back to their original width. However, if we are going to sign-extend, and the sign bit is unknown, we have to increase the minimum bit width by one bit so the sign-extend will fill the upper bits correctly. If the sign bit is known to be zero, we can perform a zero-extend instead. This should fix PR31243.
  Reference: https://llvm.org/bugs/show_bug.cgi?id=31243
  Differential Revision: https://reviews.llvm.org/D27466
  llvm-svn: 289470
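
A standalone sketch of the bit-width rule (illustrative names, not the SLPVectorizer API), assuming we already know how many significant bits the value needs and whether its sign bit is known to be zero:

    #include <algorithm>

    // Decide the minimum bit width to shrink a value to.
    // - Sign bit known zero: we can zero-extend back, no extra bit needed.
    // - Sign bit unknown: we must sign-extend back, so reserve one extra bit
    //   so the extension fills the upper bits correctly.
    unsigned chooseMinBitWidth(unsigned SignificantBits, bool SignBitKnownZero,
                               unsigned OriginalWidth) {
      unsigned MinBW = SignificantBits;
      if (!SignBitKnownZero)
        MinBW += 1;                          // room for the sign bit
      return std::min(MinBW, OriginalWidth); // never grow past the original type
    }
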
* [ThinLTO] Remove useless code (NFC) (Teresa Johnson, 2016-12-12, 1 file, -4/+0)
  Should have been removed in r288446.
  llvm-svn: 289466
* [InstCombine] fix bug when offsetting case values of a switch (PR31260) (Sanjay Patel, 2016-12-12, 1 file, -25/+15)
  We could truncate the condition and then try to fold the add into the original condition value causing wrong case constants to be used. Move the offset transform ahead of the truncate transform and return after each transform, so there's no chance of getting confused values.
  Fix for: https://llvm.org/bugs/show_bug.cgi?id=31260
  llvm-svn: 289442
* [InstCombine] clean up range-for-loops in visitSwitchInst(); NFCI (Sanjay Patel, 2016-12-12, 1 file, -7/+7)
  llvm-svn: 289439
* [InstCombine][XOP] The instructions for the scalar frcz intrinsics are defined to put 0 in the upper bits, not pass bits through like other intrinsics. So we should return a zero vector instead. (Craig Topper, 2016-12-11, 1 file, -2/+14)
  llvm-svn: 289411
* [SCCP] Use the appropriate helper function. NFCI. (Davide Italiano, 2016-12-11, 1 file, -2/+2)
  llvm-svn: 289406
* [X86][InstCombine] Add support for scalar FMA intrinsics to SimplifyDemandedVectorElts. (Craig Topper, 2016-12-11, 1 file, -0/+29)
  This teaches SimplifyDemandedElts that the FMA can be removed if the lower element isn't used. It also teaches it that if upper elements of the first operand aren't used then we can simplify them.
  llvm-svn: 289377
* [X86][InstCombine] Teach InstCombineCalls to simplify demanded elements for scalar FMA intrinsics. (Craig Topper, 2016-12-11, 1 file, -0/+8)
  These intrinsics don't read the upper bits of their second and third inputs so we can try to simplify them.
  llvm-svn: 289372
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar cmp intrinsics with masking and rounding. (Craig Topper, 2016-12-11, 1 file, -1/+3)
  These intrinsics don't read the upper elements of their first and second input. They are slightly different from the SSE version, which does use the upper bits of its first element as passthru bits since the result goes to an XMM register. For AVX-512 the result goes to a mask register instead.
  llvm-svn: 289371
* [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar add,div,mul,sub,max,min intrinsics with masking and rounding. (Craig Topper, 2016-12-11, 1 file, -0/+31)
  These intrinsics don't read the upper bits of their second input. And the third input is the passthru for masking and that only uses the lower element as well.
  llvm-svn: 289370
* [AVX-512][InstCombine] Add 512-bit vpermilvar intrinsics to InstCombineCalls to match 128 and 256-bit. (Craig Topper, 2016-12-11, 1 file, -10/+10)
  llvm-svn: 289354
* [X86][InstCombine] Teach InstCombineCalls to turn pshufb intrinsic into a shufflevector if the indices are constant. (Craig Topper, 2016-12-11, 1 file, -2/+3)
  llvm-svn: 289348
* [InstCombine] add helper for shift-by-shift folds; NFCI (Sanjay Patel, 2016-12-10, 1 file, -150/+162)
  These are currently limited to integer types, but we should be able to extend to splat vectors and possibly general vectors.
  llvm-svn: 289343
* [SCCP] Teach the pass about `mul %x 0` even if %x is overdefined. (Davide Italiano, 2016-12-09, 1 file, -2/+5)
  The motivating example is:

    extern int patatino;
    int goo() {
      int x = 0;
      for (int i = 0; i < 1000000; ++i) {
        x *= patatino;
      }
      return x;
    }

  Currently SCCP will not realize that this function always returns zero, and will therefore try to unroll and vectorize the loop at -O3, producing an awful lot of (useless) code. With this change, it will just produce:

    0000000000000000 <g>:
      xor %eax,%eax
      retq

  llvm-svn: 289175
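
A toy model of the new lattice rule (plain C++, assuming the usual three-level SCCP lattice of unknown/constant/overdefined; not the actual SCCP.cpp code): a constant zero on either side of a multiply pins the result to zero even if the other operand is overdefined.

    #include <cstdint>
    #include <iostream>

    // Minimal SCCP-style lattice value for an integer.
    struct LatticeVal {
      enum Kind { Unknown, Constant, Overdefined };
      Kind K;
      int64_t C;
    };

    // Transfer function for 'mul': even if one operand is overdefined, a
    // constant zero on the other side makes the result the constant zero.
    LatticeVal visitMul(LatticeVal A, LatticeVal B) {
      if ((A.K == LatticeVal::Constant && A.C == 0) ||
          (B.K == LatticeVal::Constant && B.C == 0))
        return {LatticeVal::Constant, 0};
      if (A.K == LatticeVal::Constant && B.K == LatticeVal::Constant)
        return {LatticeVal::Constant, A.C * B.C};
      if (A.K == LatticeVal::Overdefined || B.K == LatticeVal::Overdefined)
        return {LatticeVal::Overdefined, 0};
      return {LatticeVal::Unknown, 0};
    }

    int main() {
      LatticeVal X = {LatticeVal::Overdefined, 0};  // e.g. the value loaded from 'patatino'
      LatticeVal Zero = {LatticeVal::Constant, 0};
      std::cout << (visitMul(X, Zero).K == LatticeVal::Constant) << "\n"; // prints 1
    }
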
* WholeProgramDevirt: Teach the pass to handle structs of arrays. (Peter Collingbourne, 2016-12-09, 1 file, -23/+22)
  This will become necessary in some cases once D22296 lands.
  llvm-svn: 289165
* Make WholeProgramDevirt understand ConstStruct vtables. (Peter Collingbourne, 2016-12-09, 1 file, -13/+37)
  Based on a patch by LemonBoy!
  Differential Revision: https://reviews.llvm.org/D26581
  llvm-svn: 289162
* [SCCP] Make sure SCCP and ConstantFolding agree on undef >> a. (Davide Italiano, 2016-12-08, 1 file, -2/+2)
  Currently SCCP folds the value to -1, while ConstantProp folds to 0. This changes SCCP to do what ConstantFolding does.
  llvm-svn: 289147
* [SLP] Fix for PR6246: vectorization for scalar ops on vector elements. (Alexey Bataev, 2016-12-08, 1 file, -66/+80)
  When trying to vectorize trees that start at insertelement instructions, the function tryToVectorizeList() uses a vectorization factor calculated as MinVecRegSize/ScalarTypeSize. But sometimes it does not work, as the tree cost for this fixed vectorization factor is too high. The patch tries to improve the situation: it tries different vectorization factors from max(PowerOf2Floor(NumberOfVectorizedValues), MinVecRegSize/ScalarTypeSize) down to MinVecRegSize/ScalarTypeSize and chooses the best one.
  Differential Revision: https://reviews.llvm.org/D27215
  llvm-svn: 289043
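
A sketch of that search over vectorization factors, written as plain C++ with an assumed cost callback (negative cost = profitable); the real logic lives in tryToVectorizeList() in SLPVectorizer.cpp:

    #include <algorithm>
    #include <functional>

    // Largest power of two less than or equal to V (V > 0).
    static unsigned powerOf2Floor(unsigned V) {
      unsigned P = 1;
      while (P * 2 <= V)
        P *= 2;
      return P;
    }

    // Try vectorization factors from max(PowerOf2Floor(NumValues), MinVF) down
    // to MinVF (halving each step), where MinVF = MinVecRegSize/ScalarTypeSize,
    // and keep the most profitable one. 'getTreeCost' is an assumed callback
    // returning the cost of vectorizing with a given VF.
    int chooseBestVF(unsigned NumValues, unsigned MinVecRegSize,
                     unsigned ScalarTypeSizeBits,
                     const std::function<int(unsigned)> &getTreeCost,
                     unsigned &BestVF) {
      unsigned MinVF = MinVecRegSize / ScalarTypeSizeBits;
      unsigned MaxVF = std::max(powerOf2Floor(NumValues), MinVF);
      int BestCost = 0;   // only accept profitable (negative-cost) trees
      BestVF = 0;
      for (unsigned VF = MaxVF; VF >= MinVF && VF >= 2; VF /= 2) {
        int Cost = getTreeCost(VF);
        if (Cost < BestCost) {
          BestCost = Cost;
          BestVF = VF;
        }
      }
      return BestCost;
    }
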
* CFI-icall on Thumb (Evgeniy Stepanov, 2016-12-08, 1 file, -4/+14)
  Replace @progbits in the section directive with %progbits, because "@" starts a comment on arm/thumb. Use the b.w branch instruction. Use .thumb_function and .thumb_set for proper arm/thumb interwork. This way jumptable entry addresses on thumb have bit 0 set (correctly). This does not affect CFI check math, because the address of the jumptable start also has that bit set.
  This does not work on thumbv5, because it does not support b.w, and the linker would not insert a veneer (trampoline?) to extend the range of b.n. We may need to do full-range plt-style jumptables on thumbv54, which are 12 bytes per entry. Another option is "push lr; bl; pop pc" (4 bytes) but that needs unwinding instructions, etc.
  Differential Revision: https://reviews.llvm.org/D27499
  llvm-svn: 289008
* [BDCE] Skip metadata while replacing uses. (Davide Italiano, 2016-12-07, 1 file, -2/+3)
  The fix committed in r288851 doesn't cover all the cases. In particular, if we have an instruction with side effects which has no non-dbg use not depending on the bits, we still perform RAUW, destroying the dbg.value's first argument. Prevent metadata from being replaced here to avoid the issue.
  Differential Revision: https://reviews.llvm.org/D27534
  llvm-svn: 288987
* [GVNHoist] Invalidate MemDep when an instruction is moved. (Eli Friedman, 2016-12-07, 1 file, -0/+1)
  See also r279907. Fixes https://llvm.org/bugs/show_bug.cgi?id=30991.
  Differential Revision: https://reviews.llvm.org/D27493
  llvm-svn: 288968
* [LV] Scalarize operands of predicated instructions (Matthew Simpson, 2016-12-07, 1 file, -7/+210)
  This patch attempts to scalarize the operand expressions of predicated instructions if they were conditionally executed in the original loop. After scalarization, the expressions will be sunk inside the blocks created for the predicated instructions. The transformation essentially performs un-if-conversion on the operands.
  The cost model has been updated to determine if scalarization is profitable. It compares the cost of a vectorized instruction, assuming it will be if-converted, to the cost of the scalarized instruction, assuming that the instructions corresponding to each vector lane will be sunk inside a predicated block, possibly avoiding execution. If it's more profitable to scalarize the entire expression tree feeding the predicated instruction, the expression will be scalarized; otherwise, it will be vectorized. We only consider the cost of the entire expression to accurately estimate the cost of the required insertelement and extractelement instructions.
  Differential Revision: https://reviews.llvm.org/D26083
  llvm-svn: 288909
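
A rough, purely illustrative model of that profitability comparison (not the actual LoopVectorize cost model): scalarized copies are discounted by the probability that the predicated block executes, while the if-converted vector form always pays full price.

    // Cost of the expression if it is vectorized and if-converted: every lane
    // executes it unconditionally as part of the vector instruction.
    double vectorizedCost(double VectorExprCost) { return VectorExprCost; }

    // Cost of the expression if it is scalarized and sunk into the predicated
    // block: one scalar copy per lane, each executed only when its lane's
    // predicate is true (modeled by BlockProbability in [0, 1]).
    double scalarizedCost(double ScalarExprCost, unsigned VF,
                          double BlockProbability) {
      return ScalarExprCost * VF * BlockProbability;
    }

    // Scalarize the whole expression tree feeding the predicated instruction
    // only when that is the cheaper option.
    bool shouldScalarize(double VectorExprCost, double ScalarExprCost,
                         unsigned VF, double BlockProbability) {
      return scalarizedCost(ScalarExprCost, VF, BlockProbability) <
             vectorizedCost(VectorExprCost);
    }
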
* Try unbreaking the MSVC build. (Benjamin Kramer, 2016-12-07, 1 file, -1/+1)
  llvm-svn: 288907
* [LowerTypeTests] Use the TrailingObjects infrastructure for trailing objects. (Benjamin Kramer, 2016-12-07, 1 file, -6/+10)
  Also avoid allocating ~3x as much memory as needed.
  llvm-svn: 288904
* When GVN removes a redundant load, it should not modify the debug location of the dominating load. (Andrea Di Biagio, 2016-12-07, 1 file, -1/+4)
  In the case of a fully redundant load LI dominated by an equivalent load V, GVN should always preserve the original debug location of V. Otherwise, we risk introducing incorrect stepping. If V has debug info, then clearly it should not be modified. If V has a null debugloc, then it is still potentially incorrect to propagate LI's debugloc because LI may not post-dominate V.
  Differential Revision: https://reviews.llvm.org/D27468
  llvm-svn: 288903
* [InlineFunction] Refactor code in function `fixupLineNumbers' as suggested by David in D27462. NFC (Andrea Di Biagio, 2016-12-07, 1 file, -16/+18)
  llvm-svn: 288901
* [InlineFunction] Do not propagate the callsite debug location to instructions inlined from functions with debug info. (Andrea Di Biagio, 2016-12-07, 1 file, -3/+8)
  When a function F is inlined, InlineFunction extends the debug location of every instruction inlined from F by adding an InlinedAt. However, if an instruction has a 'null' debug location, InlineFunction would propagate the callsite debug location to it. This behavior existed since revision 210459.
  Revision 210459 was originally committed specifically to work around the lack of debug information for instructions inlined from intrinsic functions (which are usually declared with attributes `__always_inline__, __nodebug__`). The problem with revision 210459 is that it doesn't make any sort of distinction between instructions inlined from a 'nodebug' function and instructions which are inlined from a function built with debug info. This issue may lead to incorrect stepping in the debugger.
  This patch works under the assumption that a nodebug function does not have a DISubprogram. When a function F is inlined into another function G, InlineFunction checks if F has debug info associated with it. For nodebug functions, the InlineFunction logic is unchanged (i.e. it would still propagate the callsite debugloc to the inlined instructions). Otherwise, InlineFunction no longer propagates the callsite debug location.
  Differential Revision: https://reviews.llvm.org/D27462
  llvm-svn: 288895
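
The decision boils down to something like the following sketch (a model of the rule with hypothetical names, not the exact InlineFunction code):

    // Returns true if the call-site debug location should be attached to an
    // inlined instruction that currently has no debug location.
    bool shouldPropagateCallSiteLoc(bool CalleeHasDebugInfo, bool InstHasDebugLoc) {
      if (InstHasDebugLoc)
        return false;            // never overwrite an existing location
      // Keep the old behavior only for 'nodebug' callees (no DISubprogram
      // attached); for callees built with debug info, leave the location empty.
      return !CalleeHasDebugInfo;
    }
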
* LowerTypeTests: Improve performance by optimising type metadata queries. (Peter Collingbourne, 2016-12-06, 1 file, -88/+129)
  Requesting metadata for a global is a relatively expensive operation as it involves a map lookup, but it's one that we need to do relatively frequently in this pass to collect the list of type metadata nodes associated with a global. This change improves the performance of type metadata queries by prebuilding data structures that keep the global together with its list of type metadata, and changing the pass to use that data structure wherever we were previously passing global references around.
  This change also eliminates some O(N^2) behavior by collecting the list of globals associated with each type identifier during the first pass over the list of globals rather than visiting each global to compute that list every time we add a new type identifier.
  Reduces pass runtime on a module containing Chrome's vtables from over 60s to 0.9s.
  Differential Revision: https://reviews.llvm.org/D27484
  llvm-svn: 288859
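
A sketch of the shape of those prebuilt structures (illustrative stand-in types, not the pass's actual classes): each global carries its pre-fetched type metadata list, and the per-type-identifier global lists are built in a single pass, avoiding repeated metadata lookups and the O(N^2) re-scan.

    #include <cstddef>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct TypeMetadataRef { std::string TypeId; };   // stand-in for an MDNode

    // A global bundled with its pre-fetched type metadata list, so later phases
    // never have to re-query metadata for the same global.
    struct GlobalTypeMember {
      std::string GlobalName;                          // stand-in for GlobalObject*
      std::vector<TypeMetadataRef> Types;
    };

    // One pass over the globals builds the per-type-identifier index.
    void buildIndex(const std::vector<GlobalTypeMember> &Globals,
                    std::unordered_map<std::string, std::vector<size_t>> &GlobalsByTypeId) {
      for (size_t I = 0; I < Globals.size(); ++I)
        for (const TypeMetadataRef &T : Globals[I].Types)
          GlobalsByTypeId[T.TypeId].push_back(I);      // globals per type identifier
    }
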
* [BDCE/DebugInfo] Preserve llvm.dbg.value's argument. (Davide Italiano, 2016-12-06, 1 file, -0/+5)
  BDCE has two phases:
  1. It asks SimplifyDemandedBits if all the bits of an instruction are dead, and if so, replaces all its uses with the constant zero.
  2. Then, it asks SimplifyDemandedBits again if the instruction is really dead (no side effects etc..) and if so, eliminates it.
  Now, in 1) if all the bits of an instruction are dead, we may end up replacing a dbg use:

    %call = tail call i32 (...) @g() #4, !dbg !15
    tail call void @llvm.dbg.value(metadata i32 %call, i64 0, metadata !8, metadata !16), !dbg !17
  ->
    %call = tail call i32 (...) @g() #4, !dbg !15
    tail call void @llvm.dbg.value(metadata i32 0, i64 0, metadata !8, metadata !16), !dbg !17

  but not eliminating the call because it may have arbitrary side effects. In other words, we lose some debug information. This patch fixes the problem by making sure that BDCE does nothing with the instruction if it has side effects and no non-dbg uses.
  Differential Revision: https://reviews.llvm.org/D27471
  llvm-svn: 288851
* Revert "[SCCP] Remove manual folding of terminator instructions."Davide Italiano2016-12-061-2/+27
| | | | | | This reverts commit r288725 as it broke a bot. llvm-svn: 288759
* [SCCP] Remove manual folding of terminator instructions. (Davide Italiano, 2016-12-05, 1 file, -27/+2)
  There are two cases handled here:
  1) a branch on undef
  2) a switch with an undef condition.
  Both cases are currently handled by ResolvedUndefsIn. If we have a branch on undef, we force its value to false (which is trivially foldable). If we have a switch on undef, we force to the first constant (which is also foldable).
  llvm-svn: 288725
* [DIExpression] Introduce a dedicated DW_OP_LLVM_fragment operation (Adrian Prantl, 2016-12-05, 2 files, -29/+31)
  so we can stop using DW_OP_bit_piece with the wrong semantics. The entire back story can be found here: http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161114/405934.html
  The gist is that in LLVM we've been misinterpreting DW_OP_bit_piece's offset field to mean the offset into the source variable rather than the offset into the location at the top of the DWARF expression stack. In order to be able to fix this in a subsequent patch, this patch introduces a dedicated DW_OP_LLVM_fragment operation with the semantics that we used to apply to DW_OP_bit_piece, which is what we actually need while inside of LLVM. This patch is complete with a bitcode upgrade for expressions using the old format. It does not yet fix the DWARF backend to use DW_OP_bit_piece correctly.
  Implementation note: We discussed several options for implementing this, including reserving a dedicated field in DIExpression for the fragment size and offset, but using a custom operator at the end of the expression works just fine and is more efficient because we then only pay for it when we need it.
  Differential Revision: https://reviews.llvm.org/D27361
  rdar://problem/29335809
  llvm-svn: 288683
* [InstCombine] change select type to eliminate bitcasts (Sanjay Patel, 2016-12-03, 1 file, -0/+47)
  This solves a secondary problem seen in PR6137: https://llvm.org/bugs/show_bug.cgi?id=6137#c6
  This is similar to the bitwise logic op fold added with: https://reviews.llvm.org/rL287707
  And like that patch, I'm artificially restricting the transform from vector <-> scalar types until we're sure that the backend can handle that.
  llvm-svn: 288584
* Remove stale comment. NFC. (Michael Kuperstein, 2016-12-03, 1 file, -3/+0)
  llvm-svn: 288572
* [sanitizer-coverage] use IRB.SetCurrentDebugLocation after IRB.SetInsertPoint (Kostya Serebryany, 2016-12-03, 1 file, -1/+1)
  llvm-svn: 288568
* [PGO] Fix PGO use ICE when there are unreachable BBs (Rong Xu, 2016-12-02, 2 files, -21/+53)
  For -O0 there might be unreachable BBs, which breaks the assumption that all the BBs have an auxiliary data structure. In this patch, we add another interface called findBBInfo() so that a nullptr can be returned for the unreachable BBs (and the callers can ignore those BBs).
  This fixes the bug reported at https://llvm.org/bugs/show_bug.cgi?id=31209
  Differential Revision: https://reviews.llvm.org/D27280
  llvm-svn: 288528
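
A minimal model of the added interface (the names follow the description above, but this is a sketch, not the actual PGOInstrumentation code): getBBInfo() keeps asserting that auxiliary data exists, while findBBInfo() returns nullptr for blocks, such as unreachable ones at -O0, that never got any, so callers can simply skip them.

    #include <cassert>
    #include <cstdint>
    #include <unordered_map>

    struct BasicBlockStub {};                  // stand-in for llvm::BasicBlock
    struct BBInfo { uint64_t CountValue = 0; };

    struct FuncInfo {
      std::unordered_map<const BasicBlockStub *, BBInfo> BBInfos;

      // Pre-existing interface: assumes every block has auxiliary data.
      BBInfo &getBBInfo(const BasicBlockStub *BB) {
        auto It = BBInfos.find(BB);
        assert(It != BBInfos.end() && "unreachable BB has no BBInfo");
        return It->second;
      }

      // New interface: tolerates blocks without auxiliary data.
      BBInfo *findBBInfo(const BasicBlockStub *BB) {
        auto It = BBInfos.find(BB);
        return It == BBInfos.end() ? nullptr : &It->second;
      }
    };
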