path: root/llvm/lib/Transforms/Scalar
Commit message | Author | Age | Files | Lines
* Do not let MaskedValueIsZero consider undef to be zero, for reasonsChris Lattner2005-07-201-2/+8
| | | | | | | | explained in the comment. This fixes UnitTests/2003-09-18-BitFieldTest on darwin llvm-svn: 22483
* When transforming &A[i] < &A[j] -> i < j, make sure to perform the comparisonChris Lattner2005-07-181-4/+11
| | | | | | as a signed compare. This patch may fix PR597, but is correct in any case. llvm-svn: 22465
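To see why the signed compare matters, here is a small standalone C++ check of my own (not code from the patch): with a pointer into the middle of an array and a negative index, the pointer comparison and the signed index comparison agree, while an unsigned comparison of the indices gives the opposite answer.

    #include <cstdio>

    int main() {
        int A[10] = {0};
        int *P = &A[5];          // point into the middle so a negative offset stays in bounds
        int i = -1, j = 1;

        bool ptrCmp      = &P[i] < &P[j];                  // true: P - 1 is below P + 1
        bool signedCmp   = i < j;                          // true: matches the pointer compare
        bool unsignedCmp = (unsigned)i < (unsigned)j;      // false: 0xFFFFFFFF is not < 1

        std::printf("ptr=%d signed=%d unsigned=%d\n", ptrCmp, signedCmp, unsignedCmp);
        return 0;
    }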
* Fix a problem that instcombine would hit when dealing with unreachable code.Chris Lattner2005-07-071-5/+29
| | | | | | | | | Because the instcombine has to scan the entire function when it starts up to begin with, we might as well do it in DFO so we can nuke unreachable code. This fixes: Transforms/InstCombine/2005-07-07-DeadPHILoop.ll llvm-svn: 22348
* prevent va_arg from being hoisted from a loopAndrew Lenharth2005-06-201-1/+1
| | | | llvm-svn: 22265
* core changes for varargsAndrew Lenharth2005-06-181-1/+1
| | | | llvm-svn: 22254
* Clean up some uninitialized variables and missing return statements thatReid Spencer2005-06-181-3/+3
| | | | | | the GCC 4.0.0 compiler (sometimes incorrectly) warns about in release builds. llvm-svn: 22249
* This is not true: (X != 13 | X < 15) -> X < 15Chris Lattner2005-06-171-2/+1
| | | | | | | It is actually always true. This fixes PR586 and Transforms/InstCombine/2005-06-16-SetCCOrSetCCMiscompile.ll llvm-svn: 22236
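The reasoning is easy to check mechanically; a throwaway C++ loop of my own (not part of the change): whenever X < 15 is false, X is at least 15 and therefore not 13, so the OR can never be false.

    #include <cassert>

    int main() {
        // Exhaustive check over a small range; the argument extends to all ints:
        // whenever X < 15 is false, X >= 15 holds, so X != 13 is automatically true.
        for (int x = -1000; x <= 1000; ++x)
            assert((x != 13) || (x < 15));
        return 0;
    }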
* Don't crash when dealing with INTMIN. This fixes PR585 andChris Lattner2005-06-171-0/+2
| | | | | | Transforms/InstCombine/2005-06-16-RangeCrash.ll llvm-svn: 22234
* avoid constructing out of range shift amounts.Chris Lattner2005-06-171-2/+4
| | | | llvm-svn: 22230
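For context, a minimal C++ sketch with my own naming (buildShl is a hypothetical helper, not an LLVM API): a transform that synthesizes a shift has to verify the amount is below the bit width, because a wider shift is undefined behaviour.

    #include <cstdint>
    #include <cstdio>
    #include <optional>

    // Hypothetical helper (not from the patch): only build "value << amount" when
    // the amount is strictly less than the bit width; a wider shift is undefined
    // in C and C++.
    std::optional<uint32_t> buildShl(uint32_t value, uint32_t amount) {
        if (amount >= 32)
            return std::nullopt;     // refuse to construct an out-of-range shift
        return value << amount;
    }

    int main() {
        std::printf("%u\n", buildShl(1, 4).value_or(0));   // 16
        std::printf("%u\n", buildShl(1, 40).value_or(0));  // 0: rejected, not computed
        return 0;
    }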
* Fix PR583 and testcase Transforms/InstCombine/2005-06-15-DivSelectCrash.llChris Lattner2005-06-161-1/+1
| | | | llvm-svn: 22227
* Fix PR571, removing code that does just the WRONG thing :)Chris Lattner2005-06-161-27/+1
| | | | llvm-svn: 22225
* Fix a bug in my previous patch. Do not get the shift amount type (whichChris Lattner2005-06-161-1/+1
| | | | | | is always ubyte); get the type being shifted instead. This unbreaks espresso. llvm-svn: 22224
* Fix PR582. The rewriter can move casts around, which invalidated theChris Lattner2005-06-151-1/+11
| | | | | | BB iterator. This fixes Transforms/IndVarsSimplify/2005-06-15-InstMoveCrash.ll llvm-svn: 22221
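The underlying issue is generic iterator hygiene rather than anything LLVM-specific; a hedged, self-contained C++ sketch with invented names (Inst, rewriteMayMutate) showing the usual shape of such a fix: record where to resume before invoking anything that may insert or move elements.

    #include <cstdio>
    #include <iterator>
    #include <list>

    struct Inst { int id; };

    // Stand-in for a rewriter step that may insert new "instructions" around 'at'.
    void rewriteMayMutate(std::list<Inst> &bb, std::list<Inst>::iterator at) {
        if (at->id % 2 == 0)
            bb.insert(at, Inst{at->id + 100});   // materialize something new before 'at'
    }

    int main() {
        std::list<Inst> bb = {{0}, {1}, {2}, {3}};
        for (auto it = bb.begin(); it != bb.end(); ) {
            auto next = std::next(it);   // remember where to resume *before* mutating
            rewriteMayMutate(bb, it);
            it = next;                   // never re-derive the position after the mutation
        }
        for (const Inst &i : bb)
            std::printf("%d ", i.id);
        std::printf("\n");
        return 0;
    }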
* Fix PR577 and testcase InstCombine/2005-06-15-ShiftSetCCCrash.ll.Chris Lattner2005-06-151-2/+16
| | | | | | Do not perform undefined out of range shifts. llvm-svn: 22217
* Put the hack back in that removes features, causes regressions to fail, butReid Spencer2005-06-151-0/+2
| | | | | | allows test programs to succeed. Actual fix for this is forthcoming. llvm-svn: 22213
* Unbreak several InstCombine regression checks introduced by a hack toReid Spencer2005-06-131-2/+0
| | | | | | fix the bzip2 test. A better hack is needed. llvm-svn: 22209
* Fix a 64-bit problem, passing (int)0 through ... instead of (void*)0Chris Lattner2005-06-091-4/+4
| | | | llvm-svn: 22206
* hack to fix bzip2 (bug 571)Andrew Lenharth2005-06-041-0/+2
| | | | llvm-svn: 22192
* preserve calling conventions when hacking on codeChris Lattner2005-05-142-1/+4
| | | | llvm-svn: 22024
* calling a function with the wrong CC is undefined, turn it into an unreachableChris Lattner2005-05-131-0/+14
| | | | | | | instruction. This is useful for catching optimizers that don't preserve calling conventions llvm-svn: 21928
* When lowering invokes to calls, make sure to preserve the calling conv. ThisChris Lattner2005-05-131-7/+9
| | | | | | fixes Ptrdist/anagram with x86 llcbeta llvm-svn: 21925
* Prefer int 0 instead of long 0 for GEP arguments.Chris Lattner2005-05-131-3/+3
| | | | llvm-svn: 21924
* Fix Reassociate/shifttest.llChris Lattner2005-05-101-6/+7
| | | | llvm-svn: 21839
* If a function contains no allocas, all of the calls in it are triviallyChris Lattner2005-05-091-3/+45
| | | | | | suitable for tail calls. llvm-svn: 21836
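A hedged C++ illustration of that reasoning (my framing, not the pass's code): a call can only observe the caller's frame if the address of a stack object escapes into it, so a function with no stack objects at all has nothing a callee could reference.

    #include <cstdio>

    static void consume(int *p) { std::printf("%d\n", *p); }
    static int  compute(int x)  { return x * 2; }

    // Not safely a tail call: 'tmp' lives in this frame and its address escapes into
    // the callee, so the frame has to stay alive for the duration of the call.
    static void usesFrame(int x) {
        int tmp = x;
        consume(&tmp);
    }

    // Trivially tail-callable: no local ever has its address taken (no alloca is
    // needed), so the callee cannot possibly reference this function's frame.
    static int noFrameEscape(int x) {
        return compute(x + 1);
    }

    int main() {
        usesFrame(3);
        return noFrameEscape(4) == 10 ? 0 : 1;
    }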
* implement and.ll:test33Chris Lattner2005-05-091-2/+18
| | | | llvm-svn: 21809
* Implement Reassociate/mul-neg-add.llChris Lattner2005-05-081-0/+12
| | | | llvm-svn: 21788
* Bail out earlierChris Lattner2005-05-081-4/+4
| | | | llvm-svn: 21786
* Teach reassociate that 0-X === X*-1Chris Lattner2005-05-081-4/+46
| | | | llvm-svn: 21785
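A quick check of the identity, assuming the wrapping two's-complement arithmetic that LLVM integers use; the loop below is my own sanity test, not code from the pass.

    #include <cassert>
    #include <cstdint>

    int main() {
        // In wrapping (two's complement) arithmetic, 0 - X and X * -1 are the same
        // bit pattern for every X, including the most negative value.
        for (uint64_t i = 0; i <= 0xFFFF; ++i) {
            uint32_t x = static_cast<uint32_t>(i * 0x10001u);  // sample the 32-bit range
            assert(uint32_t(0) - x == x * uint32_t(-1));
        }
        return 0;
    }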
* Fix PR557 and basictest[34].ll.Chris Lattner2005-05-081-12/+27
| | | | | | | | This makes reassociate realize that loads should be treated as unmovable, and gives distinct ranks to distinct values defined in the same basic block, allowing reassociate to do its thing. llvm-svn: 21783
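Why loads have to be treated as unmovable is easiest to see with a small example of my own (not from the commit): two loads of the same address are not interchangeable values.

    #include <cassert>

    // An intervening store can change what the second load sees, so reassociate
    // must not move or merge loads the way it can with ordinary SSA values.
    static int sumAroundStore(int *p, int k) {
        int a = *p;     // first load
        *p = k;         // intervening store
        int b = *p;     // second load; may differ from 'a'
        return a + b;   // rewriting this as 2 * (*p) would be a miscompile
    }

    int main() {
        int x = 1;
        assert(sumAroundStore(&x, 5) == 6);   // 1 + 5, not 2 or 10
        return 0;
    }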
* Add debugging informationChris Lattner2005-05-081-0/+18
| | | | llvm-svn: 21781
* eliminate gotosChris Lattner2005-05-081-3/+4
| | | | llvm-svn: 21780
* Improve reassociation handling of inverses, implementing inverses.ll.Chris Lattner2005-05-081-2/+104
| | | | llvm-svn: 21778
* clean up and modernize this pass.Chris Lattner2005-05-081-24/+18
| | | | llvm-svn: 21776
* Strength reduce SAR into SHR if there is no way sign bits could be shiftedChris Lattner2005-05-081-0/+10
|   in. This tends to get cases like this:
|
|     X = cast ubyte to int
|     Y = shr int X, ...
|
|   Tested by: shift.ll:test24
|
|   llvm-svn: 21775
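A hedged C++ check of the claim (mine, not the pass's code): when the value being shifted is known non-negative, for instance because it was zero-extended from a byte, arithmetic and logical right shifts coincide.

    #include <cassert>
    #include <cstdint>

    int main() {
        // If X is known non-negative (here: it came from a zero-extended byte),
        // arithmetic and logical right shifts agree, so SAR can become SHR.
        for (uint32_t v = 0; v <= 0xFF; ++v) {
            int32_t  x = static_cast<int32_t>(v);
            uint32_t u = v;
            for (uint32_t s = 0; s < 32; ++s)
                assert(static_cast<uint32_t>(x >> s) == (u >> s));
        }
        return 0;
    }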
* Refactor some codeChris Lattner2005-05-081-45/+55
| | | | llvm-svn: 21772
* Handle some simple cases where we can see that values get annihilated.Chris Lattner2005-05-081-7/+42
| | | | llvm-svn: 21771
* Fix a miscompilation of crafty by clobbering the "A" variable.Chris Lattner2005-05-071-9/+10
| | | | llvm-svn: 21770
* Rewrite the guts of the reassociate pass to be more efficient and logical. ↵Chris Lattner2005-05-071-103/+185
| | | | | | | | | Instead of trying to do local reassociation tweaks at each level, only process an expression tree once (at its root). This does not improve the reassociation pass in any real way. llvm-svn: 21768
* Convert shifts to muls to assist reassociation. This implementsChris Lattner2005-05-071-2/+27
| | | | | | Reassociate/shifttest.ll llvm-svn: 21761
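To make the motivation concrete, a tiny worked example of my own: once the shift is expressed as a multiply, the reassociator can combine it with neighbouring constant multiplies.

    #include <cassert>

    int main() {
        int x = 7;
        // Mixing a shift and a multiply hides the constant product from the
        // reassociator; once the shift is rewritten as x * 8, the constants fold.
        int mixed  = (x << 3) * 5;
        int folded = x * 40;
        assert(mixed == folded);
        return 0;
    }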
* Simplify the code and rearrange it. No major functionality changes here.Chris Lattner2005-05-071-62/+82
| | | | llvm-svn: 21759
* Preserve tail markerChris Lattner2005-05-063-4/+7
| | | | llvm-svn: 21737
* Teach instcombine to propagate zeroness through shl instructions, implementingChris Lattner2005-05-061-8/+4
| | | | | | and.ll:test31 llvm-svn: 21717
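A worked example of the known-zero propagation, in my own numbers (the commit's actual test is and.ll:test31): if the low bits of the operand are known zero, shifting left only adds more known-zero bits at the bottom, so a following mask of those bits folds away.

    #include <cassert>
    #include <cstdint>

    int main() {
        // If the low 4 bits of V are known zero, then after V << 3 the low 7 bits
        // are known zero, so a subsequent mask of those bits simplifies to zero.
        for (uint32_t x = 0; x < 0x10000; ++x) {
            uint32_t v = x & ~0xFu;
            uint32_t s = v << 3;
            assert((s & 0x7Fu) == 0);
        }
        return 0;
    }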
* Implement shift.ll:test23. If we are shifting right then immediately truncatingChris Lattner2005-05-061-3/+19
|   the result, turn signed shift rights into unsigned shift rights if possible.
|   This leads to later simplification and happens *often* in 176.gcc. For example,
|   this testcase:
|
|     struct xxx { unsigned int code : 8; };
|     enum codes { A, B, C, D, E, F };
|     int foo(struct xxx *P) { if ((enum codes)P->code == A) bar(); }
|
|   used to be compiled to:
|
|     int %foo(%struct.xxx* %P) {
|       %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0   ; <uint*> [#uses=1]
|       %tmp.2 = load uint* %tmp.1                              ; <uint> [#uses=1]
|       %tmp.3 = cast uint %tmp.2 to int                        ; <int> [#uses=1]
|       %tmp.4 = shl int %tmp.3, ubyte 24                       ; <int> [#uses=1]
|       %tmp.5 = shr int %tmp.4, ubyte 24                       ; <int> [#uses=1]
|       %tmp.6 = cast int %tmp.5 to sbyte                       ; <sbyte> [#uses=1]
|       %tmp.8 = seteq sbyte %tmp.6, 0                          ; <bool> [#uses=1]
|       br bool %tmp.8, label %then, label %UnifiedReturnBlock
|
|   Now it is compiled to:
|
|       %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0   ; <uint*> [#uses=1]
|       %tmp.2 = load uint* %tmp.1                              ; <uint> [#uses=1]
|       %tmp.2 = cast uint %tmp.2 to sbyte                      ; <sbyte> [#uses=1]
|       %tmp.8 = seteq sbyte %tmp.2, 0                          ; <bool> [#uses=1]
|       br bool %tmp.8, label %then, label %UnifiedReturnBlock
|
|   which is the difference between this:
|
|     foo:
|       subl $4, %esp
|       movl 8(%esp), %eax
|       movl (%eax), %eax
|       shll $24, %eax
|       sarl $24, %eax
|       testb %al, %al
|       jne .LBBfoo_2
|
|   and this:
|
|     foo:
|       subl $4, %esp
|       movl 8(%esp), %eax
|       movl (%eax), %eax
|       testb %al, %al
|       jne .LBBfoo_2
|
|   This occurs 3243 times total in the External tests, 215x in povray, 6x in each
|   f2c'd program, 1451x in 176.gcc, 7x in crafty, 20x in perl, 25x in gap,
|   3x in m88ksim, 25x in ijpeg.
|
|   Maybe this will cause a little jump on gcc tomorrow :)
|
|   llvm-svn: 21715
* Implement xor.ll:test22Chris Lattner2005-05-061-0/+9
| | | | llvm-svn: 21713
* implement and.ll:test30 and set.ll:test21Chris Lattner2005-05-061-18/+60
| | | | llvm-svn: 21712
* implement or.ll:test20Chris Lattner2005-05-061-0/+7
| | | | llvm-svn: 21709
* Instcombine: cast (X != 0) to int, cast (X == 1) to int -> X iff X has only ↵Chris Lattner2005-05-041-3/+25
|   the low bit set. This implements set.ll:test20.
|
|   This triggers 2x on povray, 9x on mesa, 11x on gcc, 2x on crafty, 1x on eon,
|   6x on perlbmk and 11x on m88ksim.
|
|   It allows us to compile these two functions into the same code:
|
|     struct s { unsigned int bit : 1; };
|     unsigned foo(struct s *p) {
|       if (p->bit) return 1;
|       else return 0;
|     }
|     unsigned bar(struct s *p) { return p->bit; }
|
|   llvm-svn: 21690
* Fixed a comment.John Criswell2005-05-021-3/+3
| | | | llvm-svn: 21653
* Implement getelementptr.ll:test11Chris Lattner2005-05-011-0/+16
| | | | llvm-svn: 21647
* Check for volatile loads only once.Chris Lattner2005-05-011-9/+35
| | | | | | Implement load.ll:test7 llvm-svn: 21645