path: root/llvm/lib/Transforms/InstCombine
Commits, listed as: commit message (author, date; files changed, lines removed/added)
* make instcombine produce calls to llvm.donothing instead of a random intrinsic (Nuno Lopes, 2012-06-28; 1 file, -7/+4)
  llvm-svn: 159384
* Remove an instcombine transform that (no longer?) makes sense (Evan Cheng, 2012-06-26; 1 file, -5/+0)
    // C - zext(bool) -> bool ? C - 1 : C
    if (ZExtInst *ZI = dyn_cast<ZExtInst>(Op1))
      if (ZI->getSrcTy()->isIntegerTy(1))
        return SelectInst::Create(ZI->getOperand(0), SubOne(C), C);
  This ends up forming sext i1 instructions that codegen to terrible code, e.g.
    int blah(_Bool x, _Bool y) { return (x - y) + 1; }
  =>
    movzbl %dil, %eax
    movzbl %sil, %ecx
    shll $31, %ecx
    sarl $31, %ecx
    leal 1(%rax,%rcx), %eax
    ret
  Without the rule, llvm now generates:
    movzbl %sil, %ecx
    movzbl %dil, %eax
    incl %eax
    subl %ecx, %eax
    ret
  It also helps with ARM (and pretty much any target that doesn't have a sext i1 :-).
  The transformation was done as part of Eli's r75531. He has given the OK to remove it.
  rdar://11748024
  llvm-svn: 159230
* Replacing zero-sized allocas with a null pointer is too aggressive; instead, merge all zero-sized allocas into one (Duncan Sands, 2012-06-26; 1 file, -8/+40)
  This fixes c43204g from the Ada ACATS conformance test suite. What happened there was that a
  variable-sized object was being allocated on the stack, "alloca i8, i32 %size". It was then being
  passed to another function, which tested that the address was not null (raising an exception if it
  was) and then manipulated %size bytes in it (load and/or store). The optimizers cleverly managed
  to deduce that %size was zero (congratulations to them, as it isn't at all obvious), which made
  the alloca zero-sized, causing the optimizers to replace it with null, which then caused the check
  mentioned above to fail, and the exception to be raised, wrongly. Note that no loads and stores
  were actually being done to the alloca (the loop that does them is executed %size times, i.e. is
  not executed), only the not-null address check.
  llvm-svn: 159202
* improve optimization of invoke instructions (Nuno Lopes, 2012-06-25; 1 file, -1/+8)
  - simplifycfg: invoke undef/null -> unreachable
  - instcombine: invoke new -> invoke expect(0, 0) (an arbitrary NOOP intrinsic; only done if the
    allocated memory is unused, of course)
  - verifier: allow invoke of intrinsics (to make the previous step work)
  llvm-svn: 159146
* llvm/lib: [CMake] Add explicit dependency on intrinsics_gen (NAKAMURA Takumi, 2012-06-24; 1 file, -0/+2)
  llvm-svn: 159112
* Revert remaining part of r93200: "Disable folding sext(trunc(x)) -> x" (Jakob Stoklund Olesen, 2012-06-22; 1 file, -9/+4)
  This fixes PR5997. These transforms were disabled because codegen couldn't deal with other uses
  of trunc(x). This is now handled by the peephole pass. This causes no regressions on x86-64.
  llvm-svn: 159003
* instcombine: disable optimization of 'invoke null/undef'; I'll move this functionality to SimplifyCFG, since we cannot make changes to the CFG here (Nuno Lopes, 2012-06-21; 1 file, -11/+11)
  Fixes the crashes with the attached test case.
  llvm-svn: 158951
* Look past zext to strength-reduce a udiv. Patch by David Majnemer (Evan Cheng, 2012-06-21; 1 file, -1/+4)
  rdar://11721329
  llvm-svn: 158946
* Add support for invoke to the MemoryBuiltin analysis (Nuno Lopes, 2012-06-21; 2 files, -2/+7)
  Update comments accordingly. Make instcombine remove useless invokes to C++'s 'new' allocation
  function (test attached).
  llvm-svn: 158937
* refactor the MemoryBuiltin analysis (Nuno Lopes, 2012-06-21; 2 files, -81/+7)
  - provide a more extensive set of functions to detect library allocation functions (e.g., malloc,
    calloc, strdup, etc.)
  - provide an API to compute the size and offset of an object pointed to by a pointer
  Move a few clients (GVN, AA, instcombine, ...) to the new API. This implementation is a lot more
  aggressive than each of the custom implementations being replaced.
  Patch reviewed by Nick Lewycky and Chandler Carruth, thanks.
  llvm-svn: 158919
* replace usage of EmitGEPOffset() with TargetData::getIndexedOffset() when the GEP offset is known to be constant (Nuno Lopes, 2012-06-20; 2 files, -8/+6)
  With this change, we avoid relying on the IR Builder to constant fold the operations.
  No functionality change intended.
  llvm-svn: 158829
* InstCombine: fix a bug when combining (fcmp cc0 x, y) && (fcmp cc1 x, y) (Manman Ren, 2012-06-14; 1 file, -2/+4)
  uno && ueq was converted to ueq; it should be converted to uno.
  llvm-svn: 158441
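  For illustration (a sketch that is not part of the commit; the value names and the double type
  are made up), the corrected fold looks roughly like this in IR:
    ; before
    %u   = fcmp uno double %x, %y
    %e   = fcmp ueq double %x, %y
    %and = and i1 %u, %e
    ; after: ueq is "unordered or equal", so anding it with uno leaves just the uno test
    %and.folded = fcmp uno double %x, %y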
* InstCombine: factor code better (Benjamin Kramer, 2012-06-11; 1 file, -14/+7)
  No functionality change.
  llvm-svn: 158301
* InstCombine: Turn (zext A) == (B & (1<<X)-1) into A == (trunc B), narrowing the compare (Benjamin Kramer, 2012-06-10; 1 file, -1/+23)
  This saves a cast, and zext is more expensive on platforms with subreg support than trunc is.
  This occurs in the BSD implementation of memchr(3), see PR12750. On the synthetic benchmark from
  that bug, stupid_memchr and bsd_memchr have the same performance now when not inlining either
  function:
    stupid_memchr: 323.0us
    bsd_memchr:    321.0us
    memchr:        479.0us
  where memchr is the llvm-gcc compiled bsd_memchr from OS X Lion's libc. When inlining is enabled,
  bsd_memchr still regresses down to llvm-gcc memchr time; I haven't fully understood the issue yet,
  but something is grossly mangling the loop after inlining.
  llvm-svn: 158297
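  As a rough illustration of the new fold (not taken from the commit; the i8/i32 types and value
  names are made up):
    ; before: compare in the wide type
    %za  = zext i8 %a to i32
    %lo  = and i32 %b, 255            ; (1 << 8) - 1
    %cmp = icmp eq i32 %za, %lo
    ; after: compare in the narrow type
    %tb   = trunc i32 %b to i8
    %cmp2 = icmp eq i8 %a, %tb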
* canonicalize -%a + 42 into 42 - %a (Nuno Lopes, 2012-06-08; 1 file, -4/+5)
  Previously we were emitting -(%a + 42). This fixes the infinite loop in PR12338. The generated
  code is still not perfect, though; will work on that next.
  llvm-svn: 158237
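  A minimal sketch of the canonicalization (illustrative only; the value names are made up):
    ; before
    %neg = sub i32 0, %a
    %r   = add i32 %neg, 42
    ; after
    %r   = sub i32 42, %a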
* Fix a bug in FoldSelectOpOp (Nadav Rotem, 2012-06-07; 1 file, -0/+6)
  Bitcast ops may change the number of vector elements, which may disagree with the select
  condition type.
  llvm-svn: 158166
* Fix combine of uno && ord -> false so that the ordering of the fcmps doesn't matter (Chad Rosier, 2012-06-06; 1 file, -1/+3)
  rdar://11579835
  llvm-svn: 158084
* Fix suspicious hasOneUse() check, found by PVS-Studio (PR12357) (Benjamin Kramer, 2012-05-28; 1 file, -1/+1)
  llvm-svn: 157592
* InstCombine: Fix infinite loop when encountering switch on trivial icmp (Benjamin Kramer, 2012-05-28; 1 file, -1/+1)
  The test case feeds the following into InstCombine's visitSelect:
    %tobool8 = icmp ne i32 0, 0
    %phitmp = select i1 %tobool8, i32 3, i32 0
  Then instcombine replaces the right side of the switch with 0, doesn't notice that nothing
  changes, and tries again indefinitely. This fixes PR12897.
  llvm-svn: 157587
* switch AttrListPtr::get to take an ArrayRef, simplifying a lot of clients (Chris Lattner, 2012-05-28; 1 file, -4/+2)
  llvm-svn: 157556
* PR12967: Don't crash when trying to fold a shift that's larger than the type's size (Benjamin Kramer, 2012-05-27; 1 file, -1/+1)
  llvm-svn: 157548
* add a new pass to instrument loads and stores for run-time bounds checking (Nuno Lopes, 2012-05-22; 3 files, -62/+5)
  Move EmitGEPOffset from InstCombine to Transforms/Utils/Local.h.
  A draft of this patch was reviewed by Andrew, thanks.
  llvm-svn: 157261
* revert my previous patches that introduced an additional parameter to the objectsize intrinsic (Nuno Lopes, 2012-05-22; 1 file, -106/+60)
  After a lot of discussion, we realized it's not the best option for run-time bounds checking.
  llvm-svn: 157255
* objectsize: add a few more tests and fix a bug (Nuno Lopes, 2012-05-11; 1 file, -1/+1)
  llvm-svn: 156625
* Fix a minor logic mistake transforming compares in instcombine. PR12514 (Eli Friedman, 2012-05-11; 1 file, -1/+1)
  llvm-svn: 156600
* objectsize: add support for GEPs with non-constant indexes (Nuno Lopes, 2012-05-10; 3 files, -34/+34)
  Add an additional parameter to InstCombiner::EmitGEPOffset() to force it to *not* emit operations
  with the NUW flag.
  llvm-svn: 156585
* objectsize (Nuno Lopes, 2012-05-09; 1 file, -55/+96)
  - refactor code a bit to enable future changes to support run-time information
  - add support to compute allocation sizes at run-time if penalty > 1 (e.g., malloc(x),
    calloc(x, y), and VLAs)
  llvm-svn: 156515
* Remove trailing spaces (Jakub Staszak, 2012-05-06; 1 file, -60/+60)
  llvm-svn: 156257
* Small fix in InstCombineCasts.cpp: restored "alloca + bitcast" reducing for the case when the alloca's size is calculated within an "add/sub/... nsw" (Stepan Dyatkovskiy, 2012-05-05; 1 file, -1/+1)
  Also added a fix to the 2011-06-13-nsw-alloca.ll test.
  llvm-svn: 156231
* remove calls to calloc if the allocated memory is not used (it was already being done for malloc) (Nuno Lopes, 2012-05-03; 1 file, -1/+1)
  Fix a few typos found by Chad in my previous commit.
  llvm-svn: 156110
* add support for calloc to objectsize lowering (Nuno Lopes, 2012-05-03; 1 file, -5/+17)
  llvm-svn: 156102
* replace 'break's with 'return 0' in visitCallInst code for objectsize, since there is no need to fall back to visitCallSite (Nuno Lopes, 2012-05-03; 1 file, -5/+5)
  This gives a 0.9% improvement in a test case.
  llvm-svn: 156069
* Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. Fixes <rdar://problem/11291436> (Lang Hames, 2012-05-01; 1 file, -0/+51)
  This is a second attempt at a fix for this; the first was r155468. Thanks to Chandler, Bob, and
  others for the feedback that helped me improve this.
  llvm-svn: 155866
* Add instcombine patterns for the following transformations (Chad Rosier, 2012-04-26; 2 files, -0/+19)
    (x & y) | (x ^ y) -> x | y
    (x & y) + (x ^ y) -> x | y
  Patch by Manman Ren. rdar://10770603
  llvm-svn: 155674
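  In IR terms (an illustrative sketch that is not part of the commit; the value names are made up),
  both patterns collapse to a single or:
    %and = and i32 %x, %y
    %xor = xor i32 %x, %y
    %a   = or i32 %and, %xor     ; folds to: or i32 %x, %y
    %b   = add i32 %and, %xor    ; the and/xor results have disjoint bits, so this also folds to: or i32 %x, %y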
* Reverting r155468. Chris and Chandler have convinced me that it's dangerous and in poor taste (Lang Hames, 2012-04-25; 1 file, -35/+0)
  Talking through some alternate solutions with Chandler.
  llvm-svn: 155530
* Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. This fixes <rdar://problem/11291436> (Lang Hames, 2012-04-24; 1 file, -0/+35)
  llvm-svn: 155468
* Reapply r155136 after fixing PR12599 (Jakob Stoklund Olesen, 2012-04-23; 1 file, -39/+35)
  Original commit message:
  Defer some shl transforms to DAGCombine.
  The shl instruction is used to represent multiplication by a constant power of two as well as
  bitwise left shifts. Some InstCombine transformations would turn an shl instruction into a bit
  mask operation, making it difficult for later analysis passes to recognize the constant
  multiplication.
  Disable those shl transformations, deferring them to DAGCombine time. An 'shl X, C' instruction
  is now treated mostly the same way as 'mul X, C'.
  These transformations are deferred:
    (X >>? C) << C   --> X & (-1 << C)                (when X >> C has multiple uses)
    (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)    (when C2 > C1)
    (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)   (when C1 > C2)
  The corresponding exact transformations are preserved, just like div-exact + mul:
    (X >>?,exact C) << C   --> X
    (X >>?,exact C1) << C2 --> X << (C2-C1)
    (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)
  The disabled transformations could also prevent the instruction selector from recognizing rotate
  patterns in hash functions and cryptographic primitives. I have a test case for that, but it is
  too fragile.
  llvm-svn: 155362
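  As a concrete instance of the first deferred pattern (a sketch with made-up value names, assuming
  the inner shift has other uses), InstCombine used to rewrite the shift pair as a mask:
    %t = lshr i32 %x, 4
    %r = shl i32 %t, 4
    ; previously folded by InstCombine to:
    ;   %r = and i32 %x, -16        ; i.e. -1 << 4
    ; now left alone so later passes still see the shift/multiply structure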
* Revert r155136, "Defer some shl transforms to DAGCombine." (Jakob Stoklund Olesen, 2012-04-20; 1 file, -35/+39)
  While the patch was perfect and defect-free, it exposed a really nasty bug in X86 SelectionDAG
  that caused an llc crash when compiling lencod. I'll put the patch back in after fixing the
  SelectionDAG problem.
  llvm-svn: 155181
* Defer some shl transforms to DAGCombine (Jakob Stoklund Olesen, 2012-04-19; 1 file, -39/+35)
  The shl instruction is used to represent multiplication by a constant power of two as well as
  bitwise left shifts. Some InstCombine transformations would turn an shl instruction into a bit
  mask operation, making it difficult for later analysis passes to recognize the constant
  multiplication.
  Disable those shl transformations, deferring them to DAGCombine time. An 'shl X, C' instruction
  is now treated mostly the same way as 'mul X, C'.
  These transformations are deferred:
    (X >>? C) << C   --> X & (-1 << C)                (when X >> C has multiple uses)
    (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)    (when C2 > C1)
    (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)   (when C1 > C2)
  The corresponding exact transformations are preserved, just like div-exact + mul:
    (X >>?,exact C) << C   --> X
    (X >>?,exact C1) << C2 --> X << (C2-C1)
    (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)
  The disabled transformations could also prevent the instruction selector from recognizing rotate
  patterns in hash functions and cryptographic primitives. I have a test case for that, but it is
  too fragile.
  llvm-svn: 155136
* Teach InstCombine to nuke a common alloca pattern -- an alloca which has GEPs, bit casts, and stores reaching it but no other instructions (Chandler Carruth, 2012-04-08; 1 file, -1/+70)
  These often show up during the iterative processing of the inliner, SROA, and DCE. Once we hit
  this point, we can completely remove the alloca. These were actually showing up in the final,
  fully optimized code in a bunch of inliner tests I've been working on, and notably they show up
  after LLVM finishes optimizing away all function calls involved in hash_combine(a, b).
  llvm-svn: 154285
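  A sketch of the pattern being removed (illustrative only; the names, types, and sizes are made
  up), where the alloca is only reached by GEPs, bitcasts, and stores and is never loaded from or
  escaped:
    %buf = alloca [16 x i8]
    %p   = getelementptr inbounds [16 x i8]* %buf, i32 0, i32 0
    %q   = bitcast i8* %p to i32*
    store i32 0, i32* %q
    ; nothing reads %buf, so the store, bitcast, GEP, and alloca can all be deleted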
* Always compute all the bits in ComputeMaskedBits (Rafael Espindola, 2012-04-04; 6 files, -42/+26)
  This allows us to keep passing reduced masks to SimplifyDemandedBits, but know about all the bits
  if SimplifyDemandedBits fails. This allows instcombine to simplify cases like the one in the
  included testcase.
  llvm-svn: 154011
* 153465 was incorrect: in this code we wanted to check that the pointer operand is of pointer type (and not vector type) (Nadav Rotem, 2012-03-26; 1 file, -4/+3)
  llvm-svn: 153468
* PR12357: The pointer was used before it was checked (Nadav Rotem, 2012-03-26; 1 file, -1/+3)
  llvm-svn: 153465
* eliminate an unneeded branch, part of PR12357 (Chris Lattner, 2012-03-26; 1 file, -7/+2)
  llvm-svn: 153458
* Revert r152907 (Bill Wendling, 2012-03-16; 1 file, -15/+3)
  llvm-svn: 152935
* The pointer operand of a store instruction may have an alignment of its own (Bill Wendling, 2012-03-16; 1 file, -3/+15)
  If that's the case, then we want to make sure that we don't increase the alignment of the store
  instruction, because if we increase it to be "more aligned" than the pointer, code-gen may use
  instructions which require a greater alignment than the pointer guarantees.
  <rdar://problem/11043589>
  llvm-svn: 152907
* In InstCombiner::visitOr, make sure we reverse the operand swap used for checking for or-of-xor operations after those checks; a later check expects that any constant will be in Op1. PR12234 (Eli Friedman, 2012-03-16; 1 file, -1/+7)
  llvm-svn: 152884
* Use an iterator instead of calling .size() on the worklist every time, which is wasteful (Bill Wendling, 2012-03-15; 1 file, -2/+2)
  llvm-svn: 152794
* llvm::SwitchInst (Stepan Dyatkovskiy, 2012-03-11; 1 file, -2/+2)
  Renamed the methods caseBegin, caseEnd, and caseDefault to case_begin, case_end, and case_default.
  Added some notes about case iterators.
  llvm-svn: 152532
* Took into account Duncan's comments on r149481, dated 2nd Feb 2012 (Stepan Dyatkovskiy, 2012-03-08; 1 file, -7/+8)
  http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120130/136146.html
  Implemented CaseIterator, and it solves almost all of the described issues: we don't need to mix
  operand/case/successor indexing anymore. The base iterator class is implemented as a template
  since it may be initialized either from "const SwitchInst*" or from "SwitchInst*". ConstCaseIt is
  just a read-only iterator. CaseIt is a read-write iterator; it allows changing the case successor
  and case value. Using iterators lets us remove the resolveXXXX methods entirely. All indexing
  conversions are done automatically inside the iterator's getters.
  The main way of using the iterators looks like this:
    SwitchInst *SI = ... // initialize it somehow
    for (SwitchInst::CaseIt i = SI->caseBegin(), e = SI->caseEnd(); i != e; ++i) {
      BasicBlock *BB = i.getCaseSuccessor();
      ConstantInt *V = i.getCaseValue();
      // Do something.
    }
  If you want to convert a case number to a TerminatorInst successor index, just use the iterator's
  getSuccessorIndex method. If you want to initialize an iterator from a TerminatorInst successor
  index, use the CaseIt::fromSuccessorIndex(...) method.
  There are also related changes in llvm clients: klee and clang.
  llvm-svn: 152297