path: root/llvm/lib
Each entry below shows the commit message, followed by (Author, Date, Files changed, -lines/+lines, llvm-svn revision).
* Implement the optimizations for "pow" and "fputs" library calls.
  (Reid Spencer, 2005-04-29, 1 file, -17/+217, llvm-svn: 21618)
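For illustration, a minimal C++ sketch of the kinds of rewrites such libcall simplifications perform, shown at the source level; the actual pass rewrites LLVM IR call sites, and the specific cases below are illustrative assumptions, not the commit's exact list:

    #include <cstdio>

    // pow with a known constant exponent can be replaced by cheaper arithmetic.
    double pow_examples(double x) {
        double a = 1.0;   // pow(x, 0.0) -> 1.0
        double b = x;     // pow(x, 1.0) -> x
        double c = x * x; // pow(x, 2.0) -> x * x
        return a + b + c;
    }

    // fputs of a string whose length is known at compile time can become fwrite,
    // which avoids the implicit strlen that fputs performs.
    void fputs_example(std::FILE *f) {
        static const char msg[] = "hello\n";
        // fputs(msg, f)  ->  fwrite(msg, sizeof(msg) - 1, 1, f)
        std::fwrite(msg, sizeof(msg) - 1, 1, f);
    }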
* Remove optimizations that don't require both operands to be constant. These
  are moved to the simplify-libcalls pass.
  (Reid Spencer, 2005-04-29, 1 file, -10/+0, llvm-svn: 21614)
* Consistently use 'class' to silence VC++.
  (Jeff Cohen, 2005-04-29, 1 file, -2/+4, llvm-svn: 21612)
* Add constant folding for additional floating point library calls such as
  sinh, cosh, etc.
  - Make the name comparisons for the fp libcalls a little more efficient by
    switching on the first character of the name before doing comparisons.
  (Reid Spencer, 2005-04-28, 1 file, -26/+90, llvm-svn: 21611)
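A minimal sketch of the first-character dispatch described above, switching before doing full string comparisons; the function and the set of names handled here are hypothetical stand-ins, not the pass's actual code:

    #include <cmath>
    #include <cstring>

    // Dispatch on the first character so most names are rejected with a single
    // comparison instead of a string compare against every known libcall.
    bool foldUnaryFPCall(const char *Name, double Arg, double &Result) {
        switch (Name[0]) {
        case 's':
            if (!std::strcmp(Name, "sin"))  { Result = std::sin(Arg);  return true; }
            if (!std::strcmp(Name, "sinh")) { Result = std::sinh(Arg); return true; }
            break;
        case 'c':
            if (!std::strcmp(Name, "cos"))  { Result = std::cos(Arg);  return true; }
            if (!std::strcmp(Name, "cosh")) { Result = std::cosh(Arg); return true; }
            break;
        default:
            break;
        }
        return false; // not a call we know how to constant fold
    }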
* Add support for FSQRT node, patch contributed by Morten Ofstad.
  (Chris Lattner, 2005-04-28, 1 file, -4/+8, llvm-svn: 21610)
* These functions can set errno!
  (Chris Lattner, 2005-04-28, 1 file, -2/+2, llvm-svn: 21609)
* Add some new X86 instrs, patch contributed by Morten Ofstad.
  (Chris Lattner, 2005-04-28, 1 file, -3/+6, llvm-svn: 21608)
* Codegen fabs/fabsf as FABS. Patch contributed by Morten Ofstad.
  (Chris Lattner, 2005-04-28, 1 file, -0/+9, llvm-svn: 21607)
* Legalize FSQRT, FSIN, FCOS nodes, patch contributed by Morten Ofstad.
  (Chris Lattner, 2005-04-28, 1 file, -0/+13, llvm-svn: 21606)
* Add FSQRT, FSIN, FCOS nodes, patch contributed by Morten Ofstad.
  (Chris Lattner, 2005-04-28, 1 file, -1/+4, llvm-svn: 21605)
* Remove from the TODO list those optimizations that are already handled by
  constant folding implemented in lib/Transforms/Utils/Local.cpp.
  (Reid Spencer, 2005-04-28, 1 file, -29/+1, llvm-svn: 21604)
* Document additional libcall transformations that need to be written.
  Help wanted! There's a lot of them to write.
  (Reid Spencer, 2005-04-28, 1 file, -2/+183, llvm-svn: 21603)
* Doxygenate.
  (Reid Spencer, 2005-04-27, 1 file, -54/+71, llvm-svn: 21602)
* Remove a 'statement with no effect' warning.
  (Chris Lattner, 2005-04-27, 1 file, -1/+1, llvm-svn: 21600)
* Implement Value* tracking for loads and stores in the selection DAG. This
  enables one to use alias analysis in the backends. (TRUNC)Stores and
  (EXT|ZEXT|SEXT)Loads have an extra SDOperand which is a SrcValueSDNode
  containing the Value*. Note that if the operation is introduced by the
  backend, it will still have the operand, but the Value* will be null.
  (Andrew Lenharth, 2005-04-27, 8 files, -61/+93, llvm-svn: 21599)
* Unbreak the SPARC backend.
  (Chris Lattner, 2005-04-27, 1 file, -2/+4, llvm-svn: 21598)
* More cleanup:
  - Name the instructions by appending to the name of the original.
  - Factor the common part out of a switch statement.
  (Reid Spencer, 2005-04-27, 1 file, -28/+26, llvm-svn: 21597)
* Clean up some warnings.
  (Duraid Madina, 2005-04-27, 1 file, -15/+15, llvm-svn: 21590)
* This is a cleanup commit:
  - Correct stale documentation in a few places.
  - Re-order the file to better associate things and reduce line count.
  - Make the pass thread safe by caching the Function* objects needed by the
    optimizers in the pass object instead of globally.
  - Provide the SimplifyLibCalls pass object to the optimizer classes so they
    can access cached Function* objects and TargetData info.
  - Make sure the pass resets its cache if the Module passed to runOnModule
    changes.
  - Rename CallOptimizer to LibCallOptimization. All the classes are named
    *Optimization while the objects are *Optimizer.
  - Don't cache Function* in the optimizer objects because they could be used
    by multiple PassManagers running in multiple threads.
  - Add an optimization for strcpy, which is similar to strcat.
  - Add a "TODO" list at the end of the file for ideas on additional libcall
    optimizations that could be added (get ideas from other compilers).
  Sorry for the huge diff. It's mostly reorganization of code. That won't
  happen again, as I believe the design and infrastructure for this pass is
  now done or close to it.
  (Reid Spencer, 2005-04-27, 1 file, -305/+410, llvm-svn: 21589)
* Detect functions that never return, and turn the instruction following a
  call to them into an 'unreachable' instruction. This triggers a bunch of
  times, particularly on gcc:
    gzip:  36
    gcc:  601
    eon:   12
    bzip:  38
  (Chris Lattner, 2005-04-27, 1 file, -50/+143, llvm-svn: 21587)
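At the source level the situation looks like the sketch below: once a called function is known never to return, everything after the call site is dead, so the optimizer can replace the instruction that follows the call with 'unreachable'. This is a C-level illustration only; the pass itself operates on LLVM IR:

    #include <cstdio>
    #include <cstdlib>

    // abort() never returns, so the code after the call can never execute.
    int checked_div(int a, int b) {
        if (b == 0) {
            std::abort();
            // The optimizer can mark this point unreachable: the printf below
            // is dead once abort() is known to be a function that never returns.
            std::printf("never printed\n");
        }
        return a / b;
    }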
* Prefix the debug statistics so they group together.
  (Reid Spencer, 2005-04-27, 1 file, -1/+3, llvm-svn: 21583)
* In debug builds, make a statistic for each kind of call optimization. This
  helps track down what gets triggered in the pass, so it's easier to identify
  good test cases.
  (Reid Spencer, 2005-04-27, 1 file, -21/+35, llvm-svn: 21582)
* This analysis doesn't take 'throwing' into consideration; it looks at
  'unwinding'.
  (Chris Lattner, 2005-04-26, 1 file, -13/+13, llvm-svn: 21581)
* Fix up the debug statement to actually use a newline... radical concept.
  (Reid Spencer, 2005-04-26, 1 file, -1/+1, llvm-svn: 21580)
* Uh, this isn't argpromotion.
  (Reid Spencer, 2005-04-26, 1 file, -1/+1, llvm-svn: 21579)
* Add some debugging output so we can tell which calls are getting triggered.
  (Reid Spencer, 2005-04-26, 1 file, -7/+9, llvm-svn: 21578)
* No, seriously folks, memcpy really does return void.
  (Reid Spencer, 2005-04-26, 1 file, -1/+1, llvm-svn: 21575)
* memcpy returns void!!!!!
  (Reid Spencer, 2005-04-26, 1 file, -8/+2, llvm-svn: 21574)
* Don't let Reid build void*'s :)
  (Chris Lattner, 2005-04-26, 1 file, -0/+2, llvm-svn: 21571)
* Fix some bugs found by running on llvm-test:
  - The memcpy optimization can only be applied if the 3rd and 4th arguments
    are constants, and we weren't checking for that.
  - The result of llvm.memcpy (and llvm.memmove) is void*, not sbyte*; put in
    a cast.
  (Reid Spencer, 2005-04-26, 1 file, -9/+17, llvm-svn: 21570)
* Changes From Review Feedback:
  - Have the SimplifyLibCalls pass acquire the TargetData and pass it down to
    the optimization classes so they can use it to make better choices for the
    signatures of functions, etc.
  - Rearrange the code a little so the utility functions are closer to their
    usage and the core of the pass stays near the top of the file.
  - Adjust the StrLen pass to get/use the correct prototype depending on the
    TargetData::getIntPtrType() result. The result of strlen is size_t, which
    could be either uint or ulong depending on the platform.
  - Clean up some coding nits (cast vs. dyn_cast, remove redundant items from
    a switch, etc.).
  - Implement the MemMoveOptimization as a twin of MemCpyOptimization (they
    differ only in name).
  (Reid Spencer, 2005-04-26, 1 file, -97/+122, llvm-svn: 21569)
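A sketch of the prototype-selection idea from the StrLen item above: strlen returns size_t, which is 32 or 64 bits wide depending on the target, so the pass asks the target for its pointer-sized integer type rather than hard-coding one. The types and helper below are hypothetical stand-ins, not the pass's actual TargetData query:

    // Hypothetical stand-in for TargetData: reports the pointer width in bytes.
    struct TargetInfo {
        unsigned PointerSizeInBytes;
    };

    enum class IntTy { UInt32, UInt64 };

    // Pick the integer type that matches size_t on the target, i.e. the type a
    // correct strlen prototype should use for its return value.
    IntTy getIntPtrType(const TargetInfo &TI) {
        return TI.PointerSizeInBytes == 8 ? IntTy::UInt64 : IntTy::UInt32;
    }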
* Make interval partition print correctly, patch contributed by Vladimir Prus!
  (Chris Lattner, 2005-04-26, 1 file, -2/+2, llvm-svn: 21566)
* Fix the compile failures from last night.
  (Chris Lattner, 2005-04-26, 1 file, -0/+2, llvm-svn: 21565)
* constmul bugfix: multiply by 27611 was broken.
  (Duraid Madina, 2005-04-26, 1 file, -11/+10, llvm-svn: 21564)
* Clean up the code! (oops) Lots more cleaning left, however.
  (Duraid Madina, 2005-04-26, 1 file, -22/+0, llvm-svn: 21563)
* Merge get_GVInitializer and getCharArrayLength into a single function named
  getConstantStringLength. This is the common part of the StrCpy and StrLen
  optimizations, and probably several others yet to be written. It performs
  all the validity checks for looking at constant arrays that are supposed to
  be null-terminated strings and then computes the actual length of the
  string.
  - Implement the MemCpyOptimization class. This just turns memcpy of 1-, 2-,
    4-, and 8-byte data blocks that are properly aligned on those boundaries
    into a load and a store. Much more could be done here, but alignment
    restrictions and lack of knowledge of the target instruction set prevent
    us from doing significantly more. That will have to be delegated to the
    code generators as they lower llvm.memcpy calls.
  (Reid Spencer, 2005-04-26, 1 file, -136/+151, llvm-svn: 21562)
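The memcpy rewrite described above has a simple source-level analogue: a properly aligned copy of 1, 2, 4, or 8 bytes is just one load and one store of the matching integer width. A sketch under that assumption; the pass performs the equivalent rewrite on llvm.memcpy calls rather than C source:

    #include <cstdint>
    #include <cstring>

    // Before: a 4-byte memcpy between suitably aligned locations.
    void copy4_with_memcpy(std::uint32_t *dst, const std::uint32_t *src) {
        std::memcpy(dst, src, 4);
    }

    // After: the same operation as a single 32-bit load and store.
    void copy4_as_load_store(std::uint32_t *dst, const std::uint32_t *src) {
        *dst = *src;
    }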
* Add code to reduce multiplies by constant integers to shifts, adds and
  subtracts. This is a very rough and nasty implementation of Lefevre's
  "pattern finding" algorithm. With a few small changes, though, it should end
  up beating most other methods in common use, regardless of the size of the
  constant (currently, it's often one or two shifts worse).
  TODO:
  - rewrite it so it's not hideously ugly (this is a translation from perl,
    which doesn't help ;)
  - bypass most of it for multiplies by 2^n+1 (eventually)
  - teach it that some combinations of shift+add are cheaper than others
    (e.g. shladd on ia64, scaled adds on alpha)
  - get it to try multiple booth encodings in search of the cheapest routine
  - make it work for negative constants
  This is hacked up as a DAG->DAG transform, so once I clean it up I hope
  it'll be pulled out of here and put somewhere else. The only thing backends
  should really have to worry about for now is where to draw the line between
  using this code vs. going ahead and doing an integer multiply anyway.
  (Duraid Madina, 2005-04-26, 1 file, -15/+439, llvm-svn: 21560)
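A worked example of the shift/add decomposition described above, for a couple of small constants. The constants and the exact operation sequences are illustrative only; the pass searches for such sequences automatically and handles much larger multipliers:

    #include <cassert>

    // x * 10 == ((x << 2) + x) << 1, i.e. (4x + x) * 2: two shifts and an add.
    unsigned mul10(unsigned x) { return ((x << 2) + x) << 1; }

    // x * 7 == (x << 3) - x, i.e. 8x - x: one shift and one subtract.
    unsigned mul7(unsigned x) { return (x << 3) - x; }

    int main() {
        for (unsigned x = 0; x < 1000; ++x) {
            assert(mul10(x) == x * 10);
            assert(mul7(x) == x * 7);
        }
        return 0;
    }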
* Implement StrLenOptimization.
  - Factor out commonalities between StrLenOptimization and StrCatOptimization.
  - Make sure that signatures return sbyte*, not void*.
  (Reid Spencer, 2005-04-26, 1 file, -52/+137, llvm-svn: 21559)
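The core of a strlen optimization like the one above: when the argument is a constant, null-terminated string, the call folds to a constant length. A C++ sketch of the check-and-fold logic with a hypothetical helper, not the pass's code:

    #include <cstddef>

    // If Str is a compile-time-known, null-terminated array, return its length;
    // this mirrors folding strlen("hello") to the constant 5.
    constexpr std::size_t constantStrLen(const char *Str) {
        std::size_t Len = 0;
        while (Str[Len] != '\0')
            ++Len;
        return Len;
    }

    static_assert(constantStrLen("hello") == 5, "strlen of a constant folds");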
* Incorporate feedback from Chris:
  - Change the signatures of OptimizeCall and ValidateCalledFunction so they
    are non-const, allowing the optimization object to be modified. This is in
    support of caching things used across multiple calls.
  - Provide two functions for constructing and caching function types.
  - Modify the StrCatOptimization to cache Function objects for strlen and
    llvm.memcpy so it doesn't regenerate them on each call site. Make sure
    these are invalidated each time we start the pass.
  - Handle both a GEP Instruction and a GEP ConstantExpr.
  - Add additional checks to make sure we really are dealing with an array of
    sbyte and that all the element initializers are ConstantInt or
    ConstantExpr that reduce to ConstantInt.
  - Make sure the GlobalVariable is constant!
  - Don't use ConstantArray::getString, as it can fail and doesn't give us the
    right thing. We must check for null bytes in the middle of the array.
  - Use llvm.memcpy instead of memcpy so we can factor alignment into it.
  - Don't use void* types in signatures; replace them with sbyte* instead.
  (Reid Spencer, 2005-04-26, 1 file, -102/+184, llvm-svn: 21555)
* Fold (X > -1) | (Y > -1) --> (X&Y > -1).
  (Chris Lattner, 2005-04-26, 1 file, -1/+3, llvm-svn: 21552)
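Why the fold above is valid: for signed integers, X > -1 just says X's sign bit is clear, and the sign bit of X & Y is clear exactly when at least one of the two sign bits is clear. A small illustrative check, not the transform itself:

    #include <cassert>

    bool original(int x, int y) { return (x > -1) | (y > -1); }
    bool folded(int x, int y)   { return (x & y) > -1; }

    int main() {
        const int samples[] = { -2000000000, -1, 0, 1, 7, 2000000000 };
        for (int x : samples)
            for (int y : samples)
                assert(original(x, y) == folded(x, y));
        return 0;
    }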
* Changes due to code review and new implementation:
  - Don't use std::string for the function names; const char* will suffice.
  - Allow each CallOptimizer to validate the function signature before doing
    anything.
  - Repeatedly loop over the functions until an iteration produces no more
    optimizations. This allows one optimization to insert a call that is
    optimized by another optimization.
  - Implement the ConstantArray portion of the StrCatOptimization.
  - Provide a template for the MemCpyOptimization.
  - Make ExitInMainOptimization split the block, not delete everything after
    the return instruction.
  (This covers revisions 1.3 and 1.4, as the 1.3 comments were botched.)
  (Reid Spencer, 2005-04-25, 1 file, -4/+1, llvm-svn: 21548)
* Implement some more logical compares with constants, so that:

      int foo1(int x, int y) {
        int t1 = x >= 0;
        int t2 = y >= 0;
        return t1 & t2;
      }
      int foo2(int x, int y) {
        int t1 = x == -1;
        int t2 = y == -1;
        return t1 & t2;
      }

  produces:

      _foo1:
              or r2, r4, r3
              srwi r2, r2, 31
              xori r3, r2, 1
              blr
      _foo2:
              and r2, r4, r3
              addic r2, r2, 1
              li r2, 0
              addze r3, r2
              blr

  instead of:

      _foo1:
              srwi r2, r4, 31
              xori r2, r2, 1
              srwi r3, r3, 31
              xori r3, r3, 1
              and r3, r2, r3
              blr
      _foo2:
              addic r2, r4, 1
              li r2, 0
              addze r2, r2
              addic r3, r3, 1
              li r3, 0
              addze r3, r3
              and r3, r2, r3
              blr

  (Chris Lattner, 2005-04-25, 1 file, -7/+20, llvm-svn: 21547)
* Lots of changes based on review and new functionality:
  - Use a …
  (Reid Spencer, 2005-04-25, 1 file, -46/+264, llvm-svn: 21546)
* Codegen x < 0 | y < 0 as (x|y) < 0. This allows us to compile this to:

      _foo:
              or r2, r4, r3
              srwi r3, r2, 31
              blr

  instead of:

      _foo:
              srwi r2, r4, 31
              srwi r3, r3, 31
              or r3, r2, r3
              blr

  (Chris Lattner, 2005-04-25, 1 file, -1/+4, llvm-svn: 21544)
* Make dominates(A,B) work with post dominators. Patch contributed by Naveen
  Neelakantam, thanks!
  (Chris Lattner, 2005-04-25, 1 file, -2/+7, llvm-svn: 21543)
* Implement getelementptr.ll:test10.
  (Chris Lattner, 2005-04-25, 1 file, -1/+19, llvm-svn: 21541)
* Correctly handle global-argument aliases induced in main.
  (Chris Lattner, 2005-04-25, 1 file, -2/+30, llvm-svn: 21537)
* Don't mess up SCC traversal when a node has null edges out of it.
  (Chris Lattner, 2005-04-25, 1 file, -5/+6, llvm-svn: 21536)
* Post-Review Cleanup:
  - Fix the comments at the top of the file.
  - Change the algorithm for running the call optimizations from n*n to
    something closer to n.
  - Use a hash_map to store and look up the optimizations, since there will
    eventually (or potentially) be a large number of them. This gets lookup
    based on the name of the function to O(1). Each CallOptimizer now has a
    std::string member named func_name that tracks the name of the function
    it applies to. It is this string that is entered into the hash_map for
    fast comparison against the function names encountered in the module.
  - Clean up some style issues pertaining to iterator invalidation.
  - Don't pass the Function pointer to the OptimizeCall function; if the
    optimization needs it, it can get it from the CallInst passed in.
  - Add the skeleton for a new CallOptimizer, StrCatOptimizer, which will
    eventually replace strcat's of constant strings with direct copies.
  (Reid Spencer, 2005-04-25, 1 file, -51/+68, llvm-svn: 21526)
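A sketch of the name-keyed registry structure described above, giving O(1) lookup per call site. It uses std::unordered_map in place of the old SGI hash_map, and the class and member names are illustrative stand-ins for the pass's own types:

    #include <string>
    #include <unordered_map>
    #include <utility>

    // Base class for one libcall optimization; each knows the name it applies to.
    struct LibCallOptimization {
        explicit LibCallOptimization(std::string Name) : FuncName(std::move(Name)) {}
        virtual ~LibCallOptimization() = default;
        virtual bool optimizeCall(/* CallInst &CI */) = 0;
        std::string FuncName;
    };

    // Registry: function name -> optimization, so each call site is matched
    // against the table with a single hash lookup instead of a linear scan.
    using OptimizerMap = std::unordered_map<std::string, LibCallOptimization *>;

    LibCallOptimization *lookup(const OptimizerMap &Map, const std::string &Callee) {
        auto It = Map.find(Callee);
        return It == Map.end() ? nullptr : It->second;
    }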
* Shut GCC 4.0 up about classes that have virtual functions but a non-virtual
  destructor. Just add the do-nothing virtual destructor.
  (Reid Spencer, 2005-04-25, 1 file, -0/+4, llvm-svn: 21524)