path: root/llvm/lib/CodeGen/CodeGenPrepare.cpp
* Clarify that we match BSwap in InstCombine and BitReverse in CGP. NFC.
  Chad Rosier, 2016-05-25 (1 file changed, -1/+1)

  Also, rename recognizeBitReverseOrBSwapIdiom to recognizeBSwapOrBitReverseIdiom,
  so the ordering of the MatchBSwaps and MatchBitReversals arguments is
  consistent with the function name.

  llvm-svn: 270715

* Rename getLargestLegalIntTypeSize to getLargestLegalIntTypeSizeInBits(). NFC.
  Jun Bum Lim, 2016-05-13 (1 file changed, -1/+1)

  Summary: Rename DataLayout::getLargestLegalIntTypeSize to
  DataLayout::getLargestLegalIntTypeSizeInBits() to prevent a recurrence of
  the kind of mistake fixed in r269433.

  Reviewers: joker.eph, mcrosier
  Subscribers: mcrosier, llvm-commits
  Differential Revision: http://reviews.llvm.org/D20248

  llvm-svn: 269456

* [CGP] avoid crashing from weightlessness
  Sanjay Patel, 2016-05-09 (1 file changed, -3/+5)

  It's possible that we have branch weights with 0 values. In that case, don't
  try to create an impossible BranchProbability.

  llvm-svn: 268935

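  As an illustration (a minimal hand-written sketch, not taken from the
  patch), profile metadata in which every weight is zero looks like this;
  naively building a BranchProbability from one weight over their sum would
  divide by zero:

    define i32 @f(i1 %cmp) {
    entry:
      br i1 %cmp, label %a, label %b, !prof !0
    a:
      ret i32 1
    b:
      ret i32 0
    }
    !0 = !{!"branch_weights", i32 0, i32 0}
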
* [CodeGenPrepare] Don't sink a cast past its user
  David Majnemer, 2016-04-27 (1 file changed, -0/+5)

  The sink cast machinery is supposed to sink casts as close to their user as
  possible. However, an EH pad is the first instruction in its basic block.
  Don't sink if the user is an EH pad.

  This fixes PR27536.

  llvm-svn: 267767

* [CodeGenPrepare] use branch weight metadata to decide if a select should be turned into a branch
  Sanjay Patel, 2016-04-26 (1 file changed, -11/+21)

  This is part of solving PR27344:
  https://llvm.org/bugs/show_bug.cgi?id=27344

  CGP should undo the SimplifyCFG transform for the same reason that earlier
  patches have used this same mechanism: it's possible that passes between
  SimplifyCFG and CGP may be able to optimize the IR further with a select in
  place.

  For the TLI hook default, >99% taken or not taken is chosen as the default
  threshold for a highly predictable branch. Even the most limited HW branch
  predictors will be correct on this branch almost all the time, so even a
  massive mispredict penalty perf loss would be overcome by the win from all
  the times the branch was predicted correctly.

  As a follow-up, we could make the default target hook less conservative by
  using the SchedMachineModel's MispredictPenalty. Or we could just let
  targets override the default by implementing the hook with that and other
  target-specific options.

  Note that trying to statically determine mispredict rates for
  close-to-balanced profile weight data is generally impossible if the HW is
  sufficiently advanced. Ie, 50/50 taken/not-taken might still be 100%
  predictable.

  Finally, note that this patch as-is will not solve PR27344 because the
  current __builtin_unpredictable() branch weight default values are 4 and 64.
  A proposal to change that is in D19435.

  Differential Revision: http://reviews.llvm.org/D19488

  llvm-svn: 267572

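  To make the trigger concrete, here is a hand-written sketch (not from the
  patch) of a select carrying lopsided branch_weights metadata; under this
  change, weights this skewed make the select a candidate to be expanded back
  into a branch:

    define i32 @pick(i32 %x, i32 %a, i32 %b) {
      %cmp = icmp sgt i32 %x, 0
      %sel = select i1 %cmp, i32 %a, i32 %b, !prof !0
      ret i32 %sel
    }
    !0 = !{!"branch_weights", i32 1000, i32 1}
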
* [CodeGenPrepare] don't convert an unpredictable select into control flow
  Sanjay Patel, 2016-04-26 (1 file changed, -1/+2)

  Suggested in the review of D19488:
  http://reviews.llvm.org/D19488

  llvm-svn: 267504

* replace duplicated static functions for profile metadata access with BranchInst member function; NFCI
  Sanjay Patel, 2016-04-23 (1 file changed, -25/+2)

  llvm-svn: 267295

* Re-commit optimization bisect support (r267022) without new pass manager support.
  Andrew Kaylor, 2016-04-22 (1 file changed, -1/+1)

  The original commit was reverted because of a buildbot problem with
  LazyCallGraph::SCC handling (not related to the OptBisect handling).

  Differential Revision: http://reviews.llvm.org/D19172

  llvm-svn: 267231

* Revert "Initial implementation of optimization bisect support."Vedant Kumar2016-04-221-1/+1
| | | | | | | | This reverts commit r267022, due to an ASan failure: http://lab.llvm.org:8080/green/job/clang-stage2-cmake-RgSan_check/1549 llvm-svn: 267115
* Initial implementation of optimization bisect support.
  Andrew Kaylor, 2016-04-21 (1 file changed, -1/+1)

  This patch implements an optimization bisect feature, which will allow
  optimizations to be selectively disabled at compile time in order to track
  down test failures that are caused by incorrect optimizations.

  The bisection is enabled using a new command line option (-opt-bisect-limit).
  Individual passes that may be skipped call the OptBisect object (via an
  LLVMContext) to see if they should be skipped based on the bisect limit. A
  finer level of control (disabling individual transformations) can be managed
  through an additional OptBisect method, but this is not yet used.

  The skip checking in this implementation is based on (and replaces) the
  skipOptnoneFunction check. Where that check was being called, a new call has
  been inserted in its place which checks the bisect limit and the optnone
  attribute. A new function call has been added for module and SCC passes that
  behaves in a similar way.

  Differential Revision: http://reviews.llvm.org/D19172

  llvm-svn: 267022

* Calculate __builtin_object_size when pointer depends on a condition
  Petar Jovanovic, 2016-04-13 (1 file changed, -3/+12)

  This patch fixes calculation of __builtin_object_size when it depends on a
  condition. Before this patch, the compiler did not know how to calculate the
  object size when it found a condition that could not be eliminated. This
  patch enables calculating __builtin_object_size even when the condition
  cannot be eliminated, by choosing the minimum or maximum value as the
  result. Whether the minimum or maximum is chosen is based on the second
  argument of __builtin_object_size.

  Patch by Strahinja Petrovic.

  Differential Revision: http://reviews.llvm.org/D18438

  llvm-svn: 266193

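  A minimal IR sketch of the situation (hand-written here, assuming the
  two-argument llvm.objectsize signature of this era; not taken from the
  patch):

    declare i64 @llvm.objectsize.i64.p0i8(i8*, i1)

    define i64 @size_of_conditional_ptr(i1 %c) {
      %small = alloca [16 x i8]
      %big = alloca [64 x i8]
      %ps = getelementptr inbounds [16 x i8], [16 x i8]* %small, i64 0, i64 0
      %pb = getelementptr inbounds [64 x i8], [64 x i8]* %big, i64 0, i64 0
      %p = select i1 %c, i8* %ps, i8* %pb
      ; second argument true = report the minimum, so when %c cannot be
      ; resolved this can now fold to 16 instead of "unknown"
      %size = call i64 @llvm.objectsize.i64.p0i8(i8* %p, i1 true)
      ret i64 %size
    }
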
* use range-loops; NFCI
  Sanjay Patel, 2016-04-11 (1 file changed, -13/+8)

  llvm-svn: 265985

* Don't delete empty preheaders in CodeGenPrepare if it would create a critical edge
  Chuang-Yu Cheng, 2016-04-05 (1 file changed, -0/+25)

  Presently, CodeGenPrepare deletes all nearly empty (only phi and branch)
  basic blocks. This pass can delete loop preheaders which frequently creates
  critical edges. A preheader can be a convenient place to spill registers to
  the stack. If the entrance to a loop body is a critical edge, then spills
  may occur in the loop body rather than immediately before it. This patch
  protects loop preheaders from deletion in CodeGenPrepare even if they are
  nearly empty.

  Since the patch alters the CFG, it affects a large number of test cases. In
  most cases, the changes are merely cosmetic (basic blocks have different
  names or instruction orders change slightly). I am somewhat concerned about
  the test/CodeGen/Mips/brdelayslot.ll test case. If the loop preheader is not
  deleted, then the MIPS backend does not take advantage of a branch delay
  slot. Consequently, I would like some close review by a MIPS expert.

  The patch also partially subsumes D16893 from George Burgess IV. George
  correctly notes that CodeGenPrepare does not actually preserve the dominator
  tree. I think the dominator tree was usually not valid when CodeGenPrepare
  ran, but I am using LoopInfo to mark preheaders, so the dominator tree is
  now always valid before CodeGenPrepare.

  Author: Tom Jablin (tjablin)
  Reviewers: hfinkel george.burgess.iv vkalintiris dsanders kbarton cycheng

  http://reviews.llvm.org/D16984

  llvm-svn: 265397

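  To see why deleting a nearly empty preheader creates a critical edge,
  consider this hand-written sketch (not from the patch): if %preheader is
  removed, the edge entry -> loop runs from a block with two successors to a
  block with two predecessors, so it becomes critical, and there is no longer
  a loop-free block in which to place spill code:

    define void @f(i1 %c, i32* %p) {
    entry:
      br i1 %c, label %preheader, label %exit

    preheader:                 ; only a branch, so CGP used to delete it
      br label %loop

    loop:
      %i = phi i32 [ 0, %preheader ], [ %i.next, %loop ]
      store i32 %i, i32* %p
      %i.next = add i32 %i, 1
      %done = icmp eq i32 %i.next, 10
      br i1 %done, label %exit, label %loop

    exit:
      ret void
    }
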
* [CodeGenPrepare] Fix r265264 (again).
  Peter Zotov, 2016-04-03 (1 file changed, -3/+3)

  Don't require TLI for SinkCmpExpression, matching its behavior before
  r265264.

  llvm-svn: 265271

* [CodeGenPrepare] Fix r265264.
  Peter Zotov, 2016-04-03 (1 file changed, -3/+3)

  The case where there was no TargetLowering was not handled, leading to null
  pointer dereferences.

  llvm-svn: 265265

* [CodeGenPrepare] Avoid sinking soft-FP comparisons
  Peter Zotov, 2016-04-03 (1 file changed, -5/+9)

  Sinking comparisons in CGP can undo the job of hoisting them done earlier by
  LICM, and soft-FP makes this an expensive mistake.

  A common pattern that produces floating point comparisons uniform over a
  loop is an explicit check for division by zero. If the divisor is hoisted
  out of the loop, the comparison can also be, but hoisting the call that
  unwinds is never legal, since that may cause side effects in the loop body
  prior to the unwind to not be executed.

  Differential Revision: http://reviews.llvm.org/D18744

  llvm-svn: 265264

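  A hand-written IR sketch of the assumed pattern (not from the patch): LICM
  hoists the loop-invariant fcmp into the entry block, but the guarded
  unwinding call must stay in the loop, so the comparison result stays live
  across the body. On a soft-float target, sinking the fcmp back next to its
  use would re-run a libcall-expanded comparison on every iteration:

    define double @sum_quotients(double* %a, i64 %n, double %d) {
    entry:
      %div.check = fcmp oeq double %d, 0.000000e+00   ; hoisted by LICM
      br label %loop

    loop:
      %i = phi i64 [ 0, %entry ], [ %i.next, %ok ]
      %acc = phi double [ 0.000000e+00, %entry ], [ %acc.next, %ok ]
      br i1 %div.check, label %fail, label %ok

    ok:
      %p = getelementptr double, double* %a, i64 %i
      %x = load double, double* %p
      %q = fdiv double %x, %d
      %acc.next = fadd double %acc, %q
      %i.next = add i64 %i, 1
      %done = icmp eq i64 %i.next, %n
      br i1 %done, label %exit, label %loop

    fail:
      call void @raise_div_by_zero()
      unreachable

    exit:
      ret double %acc
    }

    declare void @raise_div_by_zero()
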
* Keep CodeGenPrepare from preserving the domtree.
  George Burgess IV, 2016-03-22 (1 file changed, -1/+2)

  CGP modifies the domtree in some cases, so saying that it preserves the
  domtree is a lie. We'll be able to selectively preserve it with the new pass
  manager.

  Differential Revision: http://reviews.llvm.org/D16893

  llvm-svn: 264099

* Minor code cleanups. NFC.
  Junmo Park, 2016-03-11 (1 file changed, -3/+3)

  llvm-svn: 263200

* [CGP] Duplicate addressing computation in cold paths if required to sink addressing mode
  Philip Reames, 2016-03-09 (1 file changed, -8/+45)

  This patch teaches CGP to duplicate addressing mode computations into cold
  paths (detected via an explicit cold attribute on calls) if required to let
  the addressing mode be safely sunk into the basic block containing each load
  and store.

  In general, duplicating code into cold blocks may result in code growth, but
  should not affect performance. In this case, it's better to duplicate some
  code than to put extra pressure on the register allocator by making it keep
  the address through the entirety of the fast path.

  This patch only handles addressing computations, but in principle, we could
  implement a more general cold-path scheduling heuristic which tries to
  reduce register pressure in the fast path by duplicating code into the cold
  path. Getting the profitability of the general case right seemed likely to
  be challenging, so I stuck to the existing case (addressing computation) we
  already had.

  Differential Revision: http://reviews.llvm.org/D17652

  llvm-svn: 263074

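  A hand-written sketch of the shape this enables (an assumed example, not
  from the patch): the cold use of %addr previously blocked sinking the GEP
  into %hot; with this change CGP can clone the GEP into %cold so each memory
  access has an address computation in its own block, foldable into the
  access:

    define void @f(i64* %base, i64 %idx, i1 %rare) {
    entry:
      %addr = getelementptr inbounds i64, i64* %base, i64 %idx
      br i1 %rare, label %cold, label %hot

    cold:
      call void @report() cold   ; explicit cold attribute on the call
      store i64 1, i64* %addr
      br label %hot

    hot:
      store i64 0, i64* %addr
      ret void
    }

    declare void @report()
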
* [CodeGenPrepare] Remove load-based heuristic
  Junmo Park, 2016-02-25 (1 file changed, -11/+0)

  Summary: Both the hardware and LLVM have changed since 2012. Now, the
  load-based heuristic no longer shows big differences on out-of-order cores.
  There are no notable regressions or improvements on spec2000/2006
  (Cortex-A57, Core i5).

  Reviewers: spatel, zansari
  Differential Revision: http://reviews.llvm.org/D16836

  llvm-svn: 261809

* ADT: Stop using getNodePtrUnchecked on end() iterators
  Duncan P. N. Exon Smith, 2016-02-21 (1 file changed, -5/+3)

  Stop using `getNodePtrUnchecked()` when building IR. Eventually a
  dereference will be required to get at the downcast node, since the iterator
  will only store an `ilist_node_base` of some sort.

  This should have no functionality change for now, but is a path towards
  removing some more UB from ilist.

  llvm-svn: 261495

* CodeGen: Avoid getNodePtrUnchecked() where we need a Value, NFC
  Duncan P. N. Exon Smith, 2016-02-21 (1 file changed, -4/+6)

  `ilist_iterator<NodeTy>::getNodePtrUnchecked()` is documented as being for
  internal use only, but CodeGenPrepare was using it anyway. This code relies
  on pulling out the `Value*` pointer even after the lifetime of the iterator
  is over. But having this pointer available in ilist_iterator depends on UB
  in the first place.

  Instead, safely pull out the `Value*` when the iterator is alive and stop
  using the internal-only API.

  There should be no functionality change here.

  llvm-svn: 261493

* Annotate dump() methods with LLVM_DUMP_METHOD, addressing Richard Smith r259192 post commit comment.
  Yaron Keren, 2016-01-29 (1 file changed, -1/+1)

  clang part in r259232, this is the LLVM part of the patch.

  llvm-svn: 259240

* Minor code cleanups. NFC.
  Junmo Park, 2016-01-28 (1 file changed, -1/+1)

  llvm-svn: 259033

* function names start with a lowercase letter; NFC
  Sanjay Patel, 2016-01-22 (1 file changed, -8/+8)

  llvm-svn: 258552

* fix formatting; NFC
  Sanjay Patel, 2016-01-20 (1 file changed, -3/+3)

  llvm-svn: 258330

* GlobalValue: use getValueType() instead of getType()->getPointerElementType().
  Manuel Jacob, 2016-01-16 (1 file changed, -1/+1)

  Reviewers: mjacob
  Subscribers: jholewinski, arsenm, dsanders, dblaikie

  Patch by Eduard Burtescu.

  Differential Revision: http://reviews.llvm.org/D16260

  llvm-svn: 257999

* Stop increasing alignment of externally-visible globals on ELF platforms.
  James Y Knight, 2016-01-15 (1 file changed, -2/+2)

  With ELF, the alignment of a global variable in a shared library will get
  copied into any executable linked against it, if the executable even
  accesses the variable. So, it's not possible to implicitly increase
  alignment based on access patterns, or you'll break existing binaries.

  This happened to affect libc++'s std::cout symbol, for example. See thread:
  http://thread.gmane.org/gmane.comp.compilers.clang.devel/45311

  (This is a re-commit of r257719, without the bug reported in PR26144. I've
  tweaked the code to not assert-fail in enforceKnownAlignment when
  computeKnownBits doesn't recurse far enough to find the underlying
  Alloca/GlobalObject value.)

  Differential Revision: http://reviews.llvm.org/D16145

  llvm-svn: 257902

* [CodeGenPrepare] Try and appease sanitizers
  James Molloy, 2016-01-15 (1 file changed, -1/+1)

  dupRetToEnableTailCallOpts(BB) can invalidate BB. It must run *after* we
  iterate across BB!

  llvm-svn: 257886

* [InstCombine] Rewrite bswap/bitreverse handling completely.
  James Molloy, 2016-01-15 (1 file changed, -0/+29)

  There are several requirements that ended up with this design;

  1. Matching bitreversals is too heavyweight for InstCombine and doesn't
     really need to be done so early.
  2. Bitreversals and byteswaps are very related in their matching logic.
  3. We want to implement support for matching more advanced bswap/bitreverse
     patterns like partial bswaps/bitreverses.
  4. Bswaps are best matched early in InstCombine.

  The result of these is that a new utility function is created in
  Transforms/Utils/Local.h that can be configured to search for bswaps,
  bitreverses or both. InstCombine uses it to find only bswaps, CGP uses it to
  find only bitreversals.

  We can then extend the matching logic in one place only.

  llvm-svn: 257875

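  For reference, a classic byte-swap idiom of the kind such matchers collapse
  into a single llvm.bswap.i32 call (a hand-written example, not from the
  patch):

    define i32 @swap32(i32 %x) {
      %b0 = shl i32 %x, 24        ; byte 0 -> byte 3
      %t1 = and i32 %x, 65280
      %b1 = shl i32 %t1, 8        ; byte 1 -> byte 2
      %t2 = lshr i32 %x, 8
      %b2 = and i32 %t2, 65280    ; byte 2 -> byte 1
      %b3 = lshr i32 %x, 24       ; byte 3 -> byte 0
      %o1 = or i32 %b0, %b1
      %o2 = or i32 %o1, %b2
      %r = or i32 %o2, %b3
      ret i32 %r
    }
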
* Revert "Stop increasing alignment of externally-visible globals on ELF ↵James Y Knight2016-01-141-2/+2
| | | | | | | | platforms." This reverts commit r257719, due to PR26144. llvm-svn: 257775
* Stop increasing alignment of externally-visible globals on ELF platforms.
  James Y Knight, 2016-01-13 (1 file changed, -2/+2)

  With ELF, the alignment of a global variable in a shared library will get
  copied into any executable linked against it, if the executable even
  accesses the variable. So, it's not possible to implicitly increase
  alignment based on access patterns, or you'll break existing binaries.

  This happened to affect libc++'s std::cout symbol, for example. See thread:
  http://thread.gmane.org/gmane.comp.compilers.clang.devel/45311

  llvm-svn: 257719

* Remove extra whitespace. NFC.
  Junmo Park, 2016-01-08 (1 file changed, -8/+8)

  llvm-svn: 257144

* [Statepoints] Refactor GCRelocateOperands into an intrinsic wrapper. NFC.
  Manuel Jacob, 2016-01-05 (1 file changed, -25/+20)

  Summary: This commit renames GCRelocateOperands to GCRelocateInst and makes
  it an intrinsic wrapper, similar to e.g. MemCpyInst. Also, all users of
  GCRelocateOperands were changed to use the new intrinsic wrapper instead.

  Reviewers: sanjoy, reames
  Subscribers: reames, sanjoy, llvm-commits
  Differential Revision: http://reviews.llvm.org/D15762

  llvm-svn: 256811

* Clarify that the bypassSlowDivision optimization operates on a single BB [v2]
  Eric Christopher, 2016-01-04 (1 file changed, -2/+8)

  Update some comments to be more explicit.

  Change bypassSlowDivision and the functions it calls so that they take
  BasicBlock*s and Instruction*s, rather than Function::iterator&s and
  BasicBlock::iterator&s.

  Change the APIs so that the caller is responsible for updating the iterator,
  rather than the callee. This makes control flow much easier to follow.

  Patch by Justin Lebar!

  llvm-svn: 256789

* Remove unnecessary casts. NFC.
  Manuel Jacob, 2015-12-19 (1 file changed, -6/+5)

  llvm-svn: 256101

* getParent() ^ 3 == getModule() ; NFCI
  Sanjay Patel, 2015-12-14 (1 file changed, -1/+1)

  llvm-svn: 255511

* [CGP] Reimplement r255055 a different way
  Reid Kleckner, 2015-12-08 (1 file changed, -0/+4)

  llvm-svn: 255070

* Revert "[CGP] Check that we have an insert point before moving ↵Reid Kleckner2015-12-081-8/+4
| | | | | | | | | | llvm.dbg.value around" This reverts commit r255055. Breakage has been reported. llvm-svn: 255063
* [CGP] Check that we have an insert point before moving llvm.dbg.value around
  Reid Kleckner, 2015-12-08 (1 file changed, -4/+8)

  llvm-svn: 255055

* [WinEH] Fix problem where CodeGenPrepare incorrectly sinks a bitcast into an EH pad.
  Andrew Kaylor, 2015-11-23 (1 file changed, -0/+6)

  Differential Revision: http://reviews.llvm.org/D14842

  llvm-svn: 253902

* [CodeGenPrepare] Create more extloads and fewer ands
  Geoff Berry, 2015-11-20 (1 file changed, -1/+190)

  Summary: Add and instructions immediately after loads that only have their
  low bits used, assuming that the (and (load x) c) will be matched as an
  extload and the ands/truncs fed by the extload will be removed by isel.

  Reviewers: mcrosier, qcolombet, ab
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D14584

  llvm-svn: 253722

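  A minimal hand-written illustration of the (and (load x) c) shape the
  summary refers to (not a test from the patch): on most targets, isel can
  fold this pair into a single zero-extending narrow load:

    define i32 @low_byte(i32* %p) {
      %v = load i32, i32* %p
      %b = and i32 %v, 255      ; only the low 8 bits are demanded
      ret i32 %b
    }
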
* [CGP] despeculate expensive cttz/ctlz intrinsics
  Sanjay Patel, 2015-11-19 (1 file changed, -0/+84)

  This is another step towards allowing SimplifyCFG to speculate harder, but
  then have CGP clean things up if the target doesn't like it.

  Previous patches in this series:
  http://reviews.llvm.org/D12882
  http://reviews.llvm.org/D13297

  D13297 should catch most expensive ops, but speculation of cttz/ctlz
  requires special handling because of weirdness in the intrinsic definition
  for handling a zero input (that definition can probably be blamed on x86).

  For example, if we have the usual speculated-by-select expensive op pattern
  like this:

    %tobool = icmp eq i64 %A, 0
    %0 = tail call i64 @llvm.cttz.i64(i64 %A, i1 true)   ; is_zero_undef == true
    %cond = select i1 %tobool, i64 64, i64 %0
    ret i64 %cond

  There's an instcombine that will turn it into:

    %0 = tail call i64 @llvm.cttz.i64(i64 %A, i1 false)  ; is_zero_undef == false

  This CGP patch is looking for that case and despeculating it back into:

    entry:
      %tobool = icmp eq i64 %A, 0
      br i1 %tobool, label %cond.end, label %cond.true

    cond.true:
      %0 = tail call i64 @llvm.cttz.i64(i64 %A, i1 true) ; is_zero_undef == true
      br label %cond.end

    cond.end:
      %cond = phi i64 [ %0, %cond.true ], [ 64, %entry ]
      ret i64 %cond

  This unfortunately may lead to poorer codegen (see the changes in the
  existing x86 test), but if we increase speculation in SimplifyCFG (the next
  step in this patch series), then we should avoid those kinds of cases in the
  first place.

  The need for this patch was originally mentioned here:
  http://reviews.llvm.org/D7506
  with follow-up here:
  http://reviews.llvm.org/D7554

  Differential Revision: http://reviews.llvm.org/D14630

  llvm-svn: 253573

* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-2/+2
| | | | | | | | | | This reverts commit r253511. This likely broke the bots in http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202 http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787 llvm-svn: 253543
* Change memcpy/memset/memmove to have dest and source alignments.
  Pete Cooper, 2015-11-18 (1 file changed, -2/+2)

  Note, this was reviewed (and more details are in)
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument which is
  required to be a constant integer. It represents the alignment of the source
  and dest, and so must be the minimum of those.

  This change allows source and dest to each have their own alignments by
  using the alignment attribute on their arguments. The alignment argument
  itself is removed.

  There are a few places in the code for which the code needs to be checked by
  an expert as to whether using only src/dest alignment is safe. For those
  places, they currently take the minimum of src/dest alignments which matches
  the current behaviour.

  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
  will now read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)

  For out of tree owners, I was able to strip alignment from calls using sed
  by replacing:
    (call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
  with:
    $1i1 false)
  and similarly for memmove and memcpy. I then added back in alignment to test
  cases which needed it.

  A similar commit will be made to clang which actually has many differences
  in alignment as now IRBuilder can generate different source/dest alignments
  on calls.

  In IRBuilder itself, a new argument was added. Instead of calling:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
  you now call:
    CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)

  There is a temporary class (IntegerAlignment) which takes the source
  alignment and rejects implicit conversion from bool. This is to prevent
  isVolatile here from passing its default parameter to the source alignment.

  Note, changes in future can now be made to codegen. I didn't change anything
  here, but this change should enable better memcpy code sequences.

  Reviewed by Hal Finkel.

  llvm-svn: 253511

* [CodegenPrepare] Do not rematerialize gc.relocates across different basic blocks
  Igor Laevsky, 2015-11-03 (1 file changed, -0/+8)

  Differential Revision: http://reviews.llvm.org/D14258

  llvm-svn: 251957

* [CGP] widen switch condition and case constants to target's register width (2nd try)
  Sanjay Patel, 2015-11-02 (1 file changed, -0/+47)

  This is a redo of r251849 except the tests have been split into
  arch-specific folders to hopefully make the bots happy.

  This is a follow-up from the discussion in D12965. The block-at-a-time
  limitation of SelectionDAG also came up in D13297.

  Without the InstCombine change from D12965, I don't expect this patch to
  make any difference in the real world because InstCombine does not shrink
  cases like this in visitSwitchInst(). But we need to have this CGP safety
  harness in place before proceeding with any shrinkage in D12965, so we won't
  generate extra extends for compares.

  I've opted for IR regression tests in the patch because that seems like a
  clearer way to test the transform, but PowerPC CodeGen for an i16 widening
  test is shown below. x86 will need more work to solve:
  https://llvm.org/bugs/show_bug.cgi?id=22473

  Before:
    BB#0:
      mr 4, 3
      extsh. 3, 4
      ble 0, .LBB0_5
    BB#1:
      cmpwi 3, 99
      bgt 0, .LBB0_9
    BB#2:
      rlwinm 4, 4, 0, 16, 31   <--- 32-bit mask/extend
      li 3, 0
      cmplwi 4, 1
      beqlr 0
    BB#3:
      cmplwi 4, 10
      bne 0, .LBB0_12
    BB#4:
      li 3, 1
      blr
    .LBB0_5:
      rlwinm 3, 4, 0, 16, 31   <--- 32-bit mask/extend
      cmplwi 3, 65436
      beq 0, .LBB0_13
    BB#6:
      cmplwi 3, 65526
      beq 0, .LBB0_15
    BB#7:
      cmplwi 3, 65535
      bne 0, .LBB0_12
    BB#8:
      li 3, 4
      blr
    .LBB0_9:
      rlwinm 3, 4, 0, 16, 31   <--- 32-bit mask/extend
      cmplwi 3, 100
      beq 0, .LBB0_14
    ...

  After:
    BB#0:
      rlwinm 4, 3, 0, 16, 31   <--- mask/extend to 32-bit and then use that for comparisons
      cmpwi 4, 999
      ble 0, .LBB0_5
    BB#1:
      lis 3, 0
      ori 3, 3, 65525
      cmpw 4, 3
      bgt 0, .LBB0_9
    BB#2:
      cmplwi 4, 1000
      beq 0, .LBB0_14
    BB#3:
      cmplwi 4, 65436
      bne 0, .LBB0_13
    BB#4:
      li 3, 6
      blr
    .LBB0_5:
      li 3, 0
      cmplwi 4, 1
      beqlr 0
    BB#6:
      cmplwi 4, 10
      beq 0, .LBB0_12
    BB#7:
      cmplwi 4, 100
      bne 0, .LBB0_13
    BB#8:
      li 3, 2
      blr
    .LBB0_9:
      cmplwi 4, 65526
      beq 0, .LBB0_15
    BB#10:
      cmplwi 4, 65535
      bne 0, .LBB0_13
    ...

  Differential Revision: http://reviews.llvm.org/D13532

  llvm-svn: 251857

* revert r251849; need to move tests to arch-specific folders
  Sanjay Patel, 2015-11-02 (1 file changed, -47/+0)

  llvm-svn: 251851

* [CGP] widen switch condition and case constants to target's register width
  Sanjay Patel, 2015-11-02 (1 file changed, -0/+47)

  This is a follow-up from the discussion in D12965. The block-at-a-time
  limitation of SelectionDAG also came up in D13297.

  Without the InstCombine change from D12965, I don't expect this patch to
  make any difference in the real world because InstCombine does not shrink
  cases like this in visitSwitchInst(). But we need to have this CGP safety
  harness in place before proceeding with any shrinkage in D12965, so we won't
  generate extra extends for compares.

  I've opted for IR regression tests in the patch because that seems like a
  clearer way to test the transform, but PowerPC CodeGen for an i16 widening
  test is shown below. x86 will need more work to solve:
  https://llvm.org/bugs/show_bug.cgi?id=22473

  Before:
    BB#0:
      mr 4, 3
      extsh. 3, 4
      ble 0, .LBB0_5
    BB#1:
      cmpwi 3, 99
      bgt 0, .LBB0_9
    BB#2:
      rlwinm 4, 4, 0, 16, 31   <--- 32-bit mask/extend
      li 3, 0
      cmplwi 4, 1
      beqlr 0
    BB#3:
      cmplwi 4, 10
      bne 0, .LBB0_12
    BB#4:
      li 3, 1
      blr
    .LBB0_5:
      rlwinm 3, 4, 0, 16, 31   <--- 32-bit mask/extend
      cmplwi 3, 65436
      beq 0, .LBB0_13
    BB#6:
      cmplwi 3, 65526
      beq 0, .LBB0_15
    BB#7:
      cmplwi 3, 65535
      bne 0, .LBB0_12
    BB#8:
      li 3, 4
      blr
    .LBB0_9:
      rlwinm 3, 4, 0, 16, 31   <--- 32-bit mask/extend
      cmplwi 3, 100
      beq 0, .LBB0_14
    ...

  After:
    BB#0:
      rlwinm 4, 3, 0, 16, 31   <--- mask/extend to 32-bit and then use that for comparisons
      cmpwi 4, 999
      ble 0, .LBB0_5
    BB#1:
      lis 3, 0
      ori 3, 3, 65525
      cmpw 4, 3
      bgt 0, .LBB0_9
    BB#2:
      cmplwi 4, 1000
      beq 0, .LBB0_14
    BB#3:
      cmplwi 4, 65436
      bne 0, .LBB0_13
    BB#4:
      li 3, 6
      blr
    .LBB0_5:
      li 3, 0
      cmplwi 4, 1
      beqlr 0
    BB#6:
      cmplwi 4, 10
      beq 0, .LBB0_12
    BB#7:
      cmplwi 4, 100
      bne 0, .LBB0_13
    BB#8:
      li 3, 2
      blr
    .LBB0_9:
      cmplwi 4, 65526
      beq 0, .LBB0_15
    BB#10:
      cmplwi 4, 65535
      bne 0, .LBB0_13
    ...

  Differential Revision: http://reviews.llvm.org/D13532

  llvm-svn: 251849

* Scalarizer for masked.gather and masked.scatter intrinsics.
  Elena Demikhovsky, 2015-10-25 (1 file changed, -1/+261)

  When the target does not support these intrinsics, they should be converted
  to a chain of scalar load or store operations. If the mask is not constant,
  the scalarizer will build a chain of conditional basic blocks.

  I added isLegalMaskedGather() and isLegalMaskedScatter() APIs.

  Differential Revision: http://reviews.llvm.org/D13722

  llvm-svn: 251237

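  For context, a masked gather call of the kind being scalarized (a
  hand-written sketch, assuming the non-overloaded intrinsic naming of this
  era; not a test from the patch). When isLegalMaskedGather() is false, each
  lane becomes an extractelement of the pointer vector, a branch on its mask
  bit, a scalar load, and an insertelement into the result:

    declare <4 x i32> @llvm.masked.gather.v4i32(<4 x i32*>, i32, <4 x i1>, <4 x i32>)

    define <4 x i32> @gather(<4 x i32*> %ptrs, <4 x i1> %mask, <4 x i32> %passthru) {
      %res = call <4 x i32> @llvm.masked.gather.v4i32(<4 x i32*> %ptrs, i32 4, <4 x i1> %mask, <4 x i32> %passthru)
      ret <4 x i32> %res
    }
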