path: root/llvm/lib/Transforms
Commit message (Author, Date, Files, Lines -/+)

...
* Wrap long lines (Chris Lattner, 2005-05-06, 1 file, -6/+10)
  llvm-svn: 21720

* DCE intrinsic instructions without side effects. (Chris Lattner, 2005-05-06, 1 file, -1/+20)
  llvm-svn: 21719

* Teach instcombine to propagate zeroness through shl instructions, implementing and.ll:test31 (Chris Lattner, 2005-05-06, 1 file, -8/+4)
  llvm-svn: 21717

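A rough illustration of the underlying idea (my reading of the summary, not code from the commit): bits known to be zero in a value stay recoverable after a left shift, since the vacated low bits are zero and the original known-zero bits simply move up.

    #include <cstdint>
    #include <cstdio>

    // Hypothetical helper: given a mask of bits known to be zero in X, compute
    // the bits known to be zero in (X << Shift).  The vacated low bits are
    // always zero, and the original known-zero bits shift left with the value.
    uint32_t knownZeroAfterShl(uint32_t KnownZeroOfX, unsigned Shift) {
      return (KnownZeroOfX << Shift) | ((1u << Shift) - 1u);
    }

    int main() {
      // If the top 24 bits of X are known zero, then after X << 4 the top 20
      // bits and the low 4 bits are known zero.
      std::printf("%08x\n", knownZeroAfterShl(0xFFFFFF00u, 4)); // prints fffff00f
    }
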
* Implement shift.ll:test23 (Chris Lattner, 2005-05-06, 1 file, -3/+19)
  If we are shifting right then immediately truncating the result, turn signed
  shift rights into unsigned shift rights if possible. This leads to later
  simplification and happens *often* in 176.gcc. For example, this testcase:

    struct xxx { unsigned int code : 8; };
    enum codes { A, B, C, D, E, F };
    int foo(struct xxx *P) {
      if ((enum codes)P->code == A)
        bar();
    }

  used to be compiled to:

    int %foo(%struct.xxx* %P) {
      %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0  ; <uint*> [#uses=1]
      %tmp.2 = load uint* %tmp.1                              ; <uint> [#uses=1]
      %tmp.3 = cast uint %tmp.2 to int                        ; <int> [#uses=1]
      %tmp.4 = shl int %tmp.3, ubyte 24                       ; <int> [#uses=1]
      %tmp.5 = shr int %tmp.4, ubyte 24                       ; <int> [#uses=1]
      %tmp.6 = cast int %tmp.5 to sbyte                       ; <sbyte> [#uses=1]
      %tmp.8 = seteq sbyte %tmp.6, 0                          ; <bool> [#uses=1]
      br bool %tmp.8, label %then, label %UnifiedReturnBlock

  Now it is compiled to:

      %tmp.1 = getelementptr %struct.xxx* %P, int 0, uint 0  ; <uint*> [#uses=1]
      %tmp.2 = load uint* %tmp.1                              ; <uint> [#uses=1]
      %tmp.2 = cast uint %tmp.2 to sbyte                      ; <sbyte> [#uses=1]
      %tmp.8 = seteq sbyte %tmp.2, 0                          ; <bool> [#uses=1]
      br bool %tmp.8, label %then, label %UnifiedReturnBlock

  which is the difference between this:

    foo:
      subl $4, %esp
      movl 8(%esp), %eax
      movl (%eax), %eax
      shll $24, %eax
      sarl $24, %eax
      testb %al, %al
      jne .LBBfoo_2

  and this:

    foo:
      subl $4, %esp
      movl 8(%esp), %eax
      movl (%eax), %eax
      testb %al, %al
      jne .LBBfoo_2

  This occurs 3243 times total in the External tests, 215x in povray, 6x in
  each f2c'd program, 1451x in 176.gcc, 7x in crafty, 20x in perl, 25x in gap,
  3x in m88ksim, 25x in ijpeg.

  Maybe this will cause a little jump on gcc tomorrow :)

  llvm-svn: 21715

* Implement xor.ll:test22 (Chris Lattner, 2005-05-06, 1 file, -0/+9)
  llvm-svn: 21713

* implement and.ll:test30 and set.ll:test21 (Chris Lattner, 2005-05-06, 1 file, -18/+60)
  llvm-svn: 21712

* implement or.ll:test20 (Chris Lattner, 2005-05-06, 1 file, -0/+7)
  llvm-svn: 21709

* Fix a bug compiling Ruby, fixing this testcase: (Chris Lattner, 2005-05-05, 1 file, -3/+11)
  LowerSetJmp/2005-05-05-OldUses.ll
  llvm-svn: 21696

* Instcombine: cast (X != 0) to int, cast (X == 1) to int -> X iff X has only the low bit set (Chris Lattner, 2005-05-04, 1 file, -3/+25)
  This implements set.ll:test20.

  This triggers 2x on povray, 9x on mesa, 11x on gcc, 2x on crafty, 1x on eon,
  6x on perlbmk and 11x on m88ksim.

  It allows us to compile these two functions into the same code:

    struct s { unsigned int bit : 1; };
    unsigned foo(struct s *p) {
      if (p->bit)
        return 1;
      else
        return 0;
    }
    unsigned bar(struct s *p) { return p->bit; }

  llvm-svn: 21690

* Implement the IsDigitOptimization for simplifying calls to the isdigit library function (Reid Spencer, 2005-05-04, 1 file, -6/+54)
    isdigit(chr) -> 0 or 1            if chr is constant
    isdigit(chr) -> chr - '0' <= 9    otherwise

  Although there are many calls to isdigit in llvm-test, most of them are
  compiled away by macros leaving only this:

    2 MultiSource/Applications/hexxagon

  llvm-svn: 21688

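A small source-level sketch of the non-constant case (illustrative only, not the pass's code): doing the subtraction in an unsigned type folds the two range checks into a single comparison.

    #include <cassert>
    #include <cctype>

    // isdigit(chr) -> (unsigned)(chr - '0') <= 9: for chr below '0' the
    // subtraction wraps to a huge unsigned value, so one compare covers both
    // ends of the '0'..'9' range.
    static bool isdigit_lowered(int chr) {
      return static_cast<unsigned>(chr - '0') <= 9u;
    }

    int main() {
      for (int c = 0; c < 128; ++c)
        assert((std::isdigit(c) != 0) == isdigit_lowered(c));
    }
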
* * Correct the function prototypes for some of the functions to match the actual spec (int -> uint) (Reid Spencer, 2005-05-04, 1 file, -9/+172)
  * Add the ability to get/cache the strlen function prototype.
  * Make sure generated values are appropriately named for debugging purposes
  * Add the SPrintFOptimization for 4 cases of sprintf optimization:
      sprintf(str,cstr)     -> llvm.memcpy(str,cstr)   (if cstr has no %)
      sprintf(str,"")       -> store sbyte 0, str
      sprintf(str,"%s",src) -> llvm.memcpy(str,src)    (if src is constant)
      sprintf(str,"%c",chr) -> store chr, str ; store sbyte 0, str+1

  The sprintf optimization didn't fire as much as I had hoped:

     2 MultiSource/Applications/SPASS
     5 MultiSource/Benchmarks/McCat/18-imp
    22 MultiSource/Benchmarks/Prolangs-C/TimberWolfMC
     1 MultiSource/Benchmarks/Prolangs-C/assembler
     6 MultiSource/Benchmarks/Prolangs-C/unix-smail
     2 MultiSource/Benchmarks/mediabench/mpeg2/mpeg2dec

  llvm-svn: 21679

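A rough source-level rendering of two of those rewrites (illustrative only; the pass itself rewrites LLVM IR, and the copy length assumes the string contents are known at compile time):

    #include <cstring>

    // sprintf(str, "hello") with no '%' in the format is just a copy of the
    // format bytes, including the terminating NUL.
    void emit_banner(char *str) {
      std::memcpy(str, "hello", sizeof("hello"));   // was: sprintf(str, "hello")
    }

    // sprintf(str, "%c", chr) becomes two one-byte stores.
    void emit_char(char *str, char chr) {
      str[0] = chr;                                 // was: sprintf(str, "%c", chr)
      str[1] = '\0';
    }
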
* Implement optimizations for the strchr and llvm.memset library calls. (Reid Spencer, 2005-05-03, 1 file, -21/+232)
  Neither of these activated as many times as was hoped:

  strchr:
    9 MultiSource/Applications/siod
    1 MultiSource/Applications/d
    2 MultiSource/Prolangs-C/archie-client
    1 External/SPEC/CINT2000/176.gcc/176.gcc

  llvm.memset: no hits

  llvm-svn: 21669

* Avoid garbage output in the statistics display by ensuring that the strings (Reid Spencer, 2005-05-03, 1 file, -18/+34)
  passed to Statistic's constructor are not destructible. The stats are
  printed during static destruction, and the SimplifyLibCalls module was
  getting destructed before the statistics.
  llvm-svn: 21661

* Add the StrNCmpOptimization which is similar to strcmp. (Reid Spencer, 2005-05-03, 1 file, -13/+101)
  Unfortunately, this optimization didn't trigger on any llvm-test tests.
  llvm-svn: 21660

* Implement the fprintf optimization which converts calls like this: (Reid Spencer, 2005-05-02, 1 file, -7/+126)
    fprintf(F,"hello")      -> fwrite("hello",strlen("hello"),1,F)
    fprintf(F,"%s","hello") -> fwrite("hello",strlen("hello"),1,F)
    fprintf(F,"%c",'x')     -> fputc('x',F)

  This optimization fires several times in llvm-test:

    313 MultiSource/Applications/Burg
    302 MultiSource/Benchmarks/Prolangs-C/TimberWolfMC
    189 MultiSource/Benchmarks/Prolangs-C/mybison
    175 MultiSource/Benchmarks/Prolangs-C/football
    130 MultiSource/Benchmarks/Prolangs-C/unix-tbl

  llvm-svn: 21657

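A source-level sketch of the same rewrites (illustrative; the real transformation is done on the IR): a constant format with no '%' carries no formatting work, so the call can become fwrite of the known bytes, and "%c" becomes fputc.

    #include <cstdio>
    #include <cstring>

    void log_banner(std::FILE *F, char c) {
      std::fwrite("hello", std::strlen("hello"), 1, F);  // was: fprintf(F, "hello")
      std::fputc(c, F);                                  // was: fprintf(F, "%c", c)
    }
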
* Fixed a comment. (John Criswell, 2005-05-02, 1 file, -3/+3)
  llvm-svn: 21653

* Implement getelementptr.ll:test11 (Chris Lattner, 2005-05-01, 1 file, -0/+16)
  llvm-svn: 21647

* Check for volatile loads only once. (Chris Lattner, 2005-05-01, 1 file, -9/+35)
  Implement load.ll:test7
  llvm-svn: 21645

* Fix a comment that stated the wrong thing. (Reid Spencer, 2005-04-30, 1 file, -5/+2)
  llvm-svn: 21638

* * Don't depend on "guessing" what a FILE* is, just require that the actual type (Reid Spencer, 2005-04-30, 1 file, -21/+132)
    be obtained from a CallInst we're optimizing.
  * Make it possible for getConstantStringLength to return the ConstantArray
    that it extracts in case the content is needed by an Optimization.
  * Implement the strcmp optimization
  * Implement the toascii optimization

  This pass is now firing several to many times in the following MultiSource
  tests:

    Applications/Burg      -   7 (strcat,strcpy)
    Applications/siod      -  13 (strcat,strcpy,strlen)
    Applications/spiff     - 120 (exit,fputs,strcat,strcpy,strlen)
    Applications/treecc    -  66 (exit,fputs,strcat,strcpy)
    Applications/kimwitu++ -  34 (strcmp,strcpy,strlen)
    Applications/SPASS     - 588 (exit,fputs,strcat,strcpy,strlen)

  llvm-svn: 21626

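The commit doesn't spell out the rewrites; the classic toascii simplification, presumably what is meant here, just masks to the low seven bits. A hedged sketch:

    #include <cassert>

    // toascii(c) is defined as (c & 0x7f), so the library call can be replaced
    // by a single bitwise AND.
    static int toascii_lowered(int c) {
      return c & 0x7f;   // was: toascii(c)
    }

    int main() {
      assert(toascii_lowered(0x1C1) == 0x41);  // 'A' with a stray high bit set
    }
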
* Implement the optimizations for "pow" and "fputs" library calls. (Reid Spencer, 2005-04-29, 1 file, -17/+217)
  llvm-svn: 21618

* Remove optimizations that don't require both operands to be constant. (Reid Spencer, 2005-04-29, 1 file, -10/+0)
  These are moved to the simplify-libcalls pass.
  llvm-svn: 21614

* Consistently use 'class' to silence VC++ (Jeff Cohen, 2005-04-29, 1 file, -2/+4)
  llvm-svn: 21612

* * Add constant folding for additional floating point library calls such as (Reid Spencer, 2005-04-28, 1 file, -26/+90)
    sinh, cosh, etc.
  * Make the name comparisons for the fp libcalls a little more efficient by
    switching on the first character of the name before doing comparisons.
  llvm-svn: 21611

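A minimal sketch of that dispatch idea (hypothetical helper, not the pass's actual code): switching on the first character rejects most names without ever reaching a full string compare.

    #include <cmath>
    #include <cstring>

    // Hypothetical constant folder for a few unary fp libcalls.
    static bool foldUnaryFPCall(const char *Name, double Arg, double &Result) {
      switch (Name[0]) {
      case 'c':
        if (std::strcmp(Name, "cos") == 0)  { Result = std::cos(Arg);  return true; }
        if (std::strcmp(Name, "cosh") == 0) { Result = std::cosh(Arg); return true; }
        break;
      case 's':
        if (std::strcmp(Name, "sin") == 0)  { Result = std::sin(Arg);  return true; }
        if (std::strcmp(Name, "sinh") == 0) { Result = std::sinh(Arg); return true; }
        break;
      }
      return false;  // not a call we know how to fold
    }

    int main() {
      double R = 0.0;
      return foldUnaryFPCall("cos", 0.0, R) && R == 1.0 ? 0 : 1;
    }
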
* Remove from the TODO list those optimizations that are already handled by (Reid Spencer, 2005-04-28, 1 file, -29/+1)
  constant folding implemented in lib/Transforms/Utils/Local.cpp.
  llvm-svn: 21604

* Document additional libcall transformations that need to be written. (Reid Spencer, 2005-04-28, 1 file, -2/+183)
  Help Wanted! There's a lot of them to write.
  llvm-svn: 21603

* Doxygenate. (Reid Spencer, 2005-04-27, 1 file, -54/+71)
  llvm-svn: 21602

* remove 'statement with no effect' warning (Chris Lattner, 2005-04-27, 1 file, -1/+1)
  llvm-svn: 21600

* More Cleanup: (Reid Spencer, 2005-04-27, 1 file, -28/+26)
  * Name the instructions by appending to the name of the original
  * Factor the common part out of a switch statement.
  llvm-svn: 21597

* This is a cleanup commit: (Reid Spencer, 2005-04-27, 1 file, -305/+410)
  * Correct stale documentation in a few places
  * Re-order the file to better associate things and reduce line count
  * Make the pass thread safe by caching the Function* objects needed by the
    optimizers in the pass object instead of globally.
  * Provide the SimplifyLibCalls pass object to the optimizer classes so they
    can access cached Function* objects and TargetData info
  * Make sure the pass resets its cache if the Module passed to runOnModule
    changes
  * Rename CallOptimizer to LibCallOptimization. All the classes are named
    *Optimization while the objects are *Optimizer.
  * Don't cache Function* in the optimizer objects because they could be used
    by multiple PassManagers running in multiple threads
  * Add an optimization for strcpy which is similar to strcat
  * Add a "TODO" list at the end of the file for ideas on additional libcall
    optimizations that could be added (get ideas from other compilers).

  Sorry for the huge diff. It's mostly reorganization of code. That won't
  happen again, as I believe the design and infrastructure for this pass is
  now done or close to it.
  llvm-svn: 21589

* detect functions that never return, and turn the instruction following a (Chris Lattner, 2005-04-27, 1 file, -50/+143)
  call to them into an 'unreachable' instruction.

  This triggers a bunch of times, particularly on gcc:
    gzip:  36
    gcc:  601
    eon:   12
    bzip:  38
  llvm-svn: 21587

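A source-level picture of why this is safe (illustrative; exit is just a stand-in for any function the analysis proves can never return): nothing in the block after such a call can execute, so the block can be terminated with 'unreachable' right after it.

    #include <cstdlib>

    int lookup(const int *table, int i) {
      if (i < 0) {
        std::exit(1);
        // exit never returns, so nothing after this call in the block can run;
        // the pass would end the block with 'unreachable' here.
      }
      return table[i];
    }

    int main() {
      static const int table[3] = {10, 20, 30};
      return lookup(table, 1) == 20 ? 0 : 1;
    }
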
* Prefix the debug statistics so they group together. (Reid Spencer, 2005-04-27, 1 file, -1/+3)
  llvm-svn: 21583

* In debug builds, make a statistic for each kind of call optimization. This (Reid Spencer, 2005-04-27, 1 file, -21/+35)
  helps track down what gets triggered in the pass, so it's easier to identify
  good test cases.
  llvm-svn: 21582

* This analysis doesn't take 'throwing' into consideration, it looks at (Chris Lattner, 2005-04-26, 1 file, -13/+13)
  'unwinding'
  llvm-svn: 21581

* Fix up the debug statement to actually use a newline .. radical concept. (Reid Spencer, 2005-04-26, 1 file, -1/+1)
  llvm-svn: 21580

* Uh, this isn't argpromotion. (Reid Spencer, 2005-04-26, 1 file, -1/+1)
  llvm-svn: 21579

* Add some debugging output so we can tell which calls are getting triggered (Reid Spencer, 2005-04-26, 1 file, -7/+9)
  llvm-svn: 21578

* No, seriously folks, memcpy really does return void. (Reid Spencer, 2005-04-26, 1 file, -1/+1)
  llvm-svn: 21575

* memcpy returns void!!!!! (Reid Spencer, 2005-04-26, 1 file, -8/+2)
  llvm-svn: 21574

* Fix some bugs found by running on llvm-test: (Reid Spencer, 2005-04-26, 1 file, -9/+17)
  * MemCpyOptimization can only optimize a call when the 3rd and 4th arguments
    are constants, and we weren't checking for that.
  * The result of llvm.memcpy (and llvm.memmove) is void*, not sbyte*; put in
    a cast.
  llvm-svn: 21570

* Changes From Review Feedback: (Reid Spencer, 2005-04-26, 1 file, -97/+122)
  * Have the SimplifyLibCalls pass acquire the TargetData and pass it down to
    the optimization classes so they can use it to make better choices for
    the signatures of functions, etc.
  * Rearrange the code a little so the utility functions are closer to their
    usage and keep the core of the pass near the top of the files.
  * Adjust the StrLen pass to get/use the correct prototype depending on the
    TargetData::getIntPtrType() result. The result of strlen is size_t which
    could be either uint or ulong depending on the platform.
  * Clean up some coding nits (cast vs. dyn_cast, remove redundant items from
    a switch, etc.)
  * Implement the MemMoveOptimization as a twin of MemCpyOptimization (they
    only differ in name).
  llvm-svn: 21569

* Fix the compile failures from last night. (Chris Lattner, 2005-04-26, 1 file, -0/+2)
  llvm-svn: 21565

* * Merge get_GVInitializer and getCharArrayLength into a single function (Reid Spencer, 2005-04-26, 1 file, -136/+151)
    named getConstantStringLength. This is the common part of the StrCpy and
    StrLen optimizations and probably several others yet to be written. It
    performs all the validity checks for looking at constant arrays that are
    supposed to be null-terminated strings and then computes the actual
    length of the string.
  * Implement the MemCpyOptimization class. This just turns memcpy of 1, 2, 4
    and 8 byte data blocks that are properly aligned on those boundaries into
    a load and a store. Much more could be done here, but alignment
    restrictions and lack of knowledge of the target instruction set prevent
    us from doing significantly more. That will have to be delegated to the
    code generators as they lower llvm.memcpy calls.
  llvm-svn: 21562

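A source-level picture of the MemCpy rewrite for the 4-byte case (illustrative; the pass works on llvm.memcpy calls in the IR):

    #include <cstdint>

    // A memcpy of exactly 4 bytes between suitably aligned locations becomes
    // one 4-byte load and one 4-byte store; the same applies to 1, 2 and 8.
    void copy4(uint32_t *dst, const uint32_t *src) {
      // was: std::memcpy(dst, src, 4);
      uint32_t tmp = *src;   // single 4-byte load
      *dst = tmp;            // single 4-byte store
    }

    int main() {
      uint32_t a = 0xDEADBEEF, b = 0;
      copy4(&b, &a);
      return b == a ? 0 : 1;
    }
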
* * Implement StrLenOptimization (Reid Spencer, 2005-04-26, 1 file, -52/+137)
  * Factor out commonalities between StrLenOptimization and StrCatOptimization
  * Make sure that signatures return sbyte* not void*
  llvm-svn: 21559

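The commit doesn't show the rewrite, but the standard StrLen simplification folds strlen of a constant, NUL-terminated string into a plain integer constant; a hedged source-level sketch:

    #include <cassert>
    #include <cstring>

    // strlen of a string whose bytes are known at compile time needs no call:
    // the pass walks the constant initializer and substitutes its length.
    int banner_length() {
      // was: return (int)std::strlen("simplify-libcalls");
      return 17;  // length of the initializer, not counting the trailing NUL
    }

    int main() {
      assert(banner_length() == (int)std::strlen("simplify-libcalls"));
    }
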
* Incorporate feedback from Chris: (Reid Spencer, 2005-04-26, 1 file, -102/+184)
  * Change signatures of OptimizeCall and ValidateCalledFunction so they are
    non-const, allowing the optimization object to be modified. This is in
    support of caching things used across multiple calls.
  * Provide two functions for constructing and caching function types
  * Modify the StrCatOptimization to cache Function objects for strlen and
    llvm.memcpy so it doesn't regenerate them on each call site. Make sure
    these are invalidated each time we start the pass.
  * Handle both a GEP Instruction and a GEP ConstantExpr
  * Add additional checks to make sure we really are dealing with an array of
    sbyte and that all the element initializers are ConstantInt or
    ConstantExpr that reduce to ConstantInt.
  * Make sure the GlobalVariable is constant!
  * Don't use ConstantArray::getString as it can fail and it doesn't give us
    the right thing. We must check for null bytes in the middle of the array.
  * Use llvm.memcpy instead of memcpy so we can factor alignment into it.
  * Don't use void* types in signatures, replace with sbyte* instead.
  llvm-svn: 21555

* Changes due to code review and new implementation: (Reid Spencer, 2005-04-25, 1 file, -4/+1)
  * Don't use std::string for the function names, const char* will suffice
  * Allow each CallOptimizer to validate the function signature before doing
    anything
  * Repeatedly loop over the functions until an iteration produces no more
    optimizations. This allows one optimization to insert a call that is
    optimized by another optimization.
  * Implement the ConstantArray portion of the StrCatOptimization
  * Provide a template for the MemCpyOptimization
  * Make ExitInMainOptimization split the block, not delete everything after
    the return instruction.

  (This covers revision 1.3 and 1.4, as the 1.3 comments were botched)
  llvm-svn: 21548

* Lots of changes based on review and new functionality: (Reid Spencer, 2005-04-25, 1 file, -46/+264)
  * Use a
  llvm-svn: 21546

* implement getelementptr.ll:test10 (Chris Lattner, 2005-04-25, 1 file, -1/+19)
  llvm-svn: 21541

* Post-Review Cleanup: (Reid Spencer, 2005-04-25, 1 file, -51/+68)
  * Fix comments at top of file
  * Change the algorithm for running the call optimizations from n*n to
    something closer to n.
  * Use a hash_map to store and look up the optimizations since there will
    eventually (or potentially) be a large number of them. This gets lookup
    based on the name of the function to O(1). Each CallOptimizer now has a
    std::string member named func_name that tracks the name of the function
    that it applies to. It is this string that is entered into the hash_map
    for fast comparison against the function names encountered in the module.
  * Clean up some style issues pertaining to iterator invalidation
  * Don't pass the Function pointer to the OptimizeCall function because if
    the optimization needs it, it can get it from the CallInst passed in.
  * Add the skeleton for a new CallOptimizer, StrCatOptimizer, which will
    eventually replace strcat's of constant strings with direct copies.
  llvm-svn: 21526

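A minimal sketch of that dispatch scheme, using the modern std::unordered_map in place of the old SGI hash_map (the names here are hypothetical stand-ins, not the pass's actual classes):

    #include <string>
    #include <unordered_map>

    // Hypothetical stand-in for a CallOptimizer: it records the libcall name
    // it handles (the func_name member mentioned above); the rewrite logic is
    // elided.
    struct CallOptimizerStub {
      std::string func_name;
    };

    int main() {
      // Register each optimizer under its function name; per-call-site lookup
      // by callee name is then O(1) average instead of a scan over all
      // optimizers.
      std::unordered_map<std::string, CallOptimizerStub> Optimizers;
      for (CallOptimizerStub Opt : {CallOptimizerStub{"exit"},
                                    CallOptimizerStub{"strcat"}})
        Optimizers[Opt.func_name] = Opt;

      return Optimizers.count("strcat") == 1 ? 0 : 1;
    }
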
* A new pass to provide specific optimizations for certain well-known library (Reid Spencer, 2005-04-25, 1 file, -0/+167)
  calls. The pass visits all external functions in the module and determines
  if such function calls can be optimized. The optimizations are specific to
  the library calls involved. This initial version only optimizes calls to
  exit(3) when they occur in main(): it changes them to ret instructions.
  llvm-svn: 21522

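A source-level view of that initial rewrite (illustrative; the pass operates on the IR): in C, calling exit(N) inside main behaves like returning N from main, so the call site can become a ret.

    #include <cstdlib>

    int main(int argc, char **argv) {
      if (argc < 2)
        return 2;   // was: std::exit(2); returning from main exits with this value
      return 0;
    }
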