path: root/llvm/lib/VMCore/AutoUpgrade.cpp
Commit message | Author | Date | Files | Lines (-deleted/+added)
* Add auto-upgrade support for the x86 pcmpgt/pcmpeq intrinsics removed in r149367.
  Craig Topper, 2012-02-03 (1 file changed, -3/+40)
  llvm-svn: 149678
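As a rough illustration (not taken from the commit; the function and value names are made up), the upgrade turns a call to one of the removed compare intrinsics into a native vector compare whose i1 results are sign-extended back to the original lane width:

    define <4 x i32> @upgraded_pcmpeqd(<4 x i32> %a, <4 x i32> %b) {
      ; Old form, removed in r149367:
      ;   %r = call <4 x i32> @llvm.x86.sse2.pcmpeq.d(<4 x i32> %a, <4 x i32> %b)
      ; Upgraded form: compare, then sign-extend the i1 mask back to i32 lanes
      ; (all-ones for true, zero for false), matching pcmpeqd's result.
      %cmp = icmp eq <4 x i32> %a, %b
      %r   = sext <4 x i1> %cmp to <4 x i32>
      ret <4 x i32> %r
    }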
* Fix unused value warning for value used only in assert.
  Nick Lewycky, 2011-12-12 (1 file changed, -5/+2)
  llvm-svn: 146440
* Don't rely on there being one argument before we've actually identified a
  function to upgrade. Also, simplify the code a bit at the expense of one line.
  Chandler Carruth, 2011-12-12 (1 file changed, -3/+4)
  llvm-svn: 146368
* Switch llvm.cttz and llvm.ctlz to accept a second i1 parameter which
  indicates whether the intrinsic has a defined result for a first argument
  equal to zero. This will eventually allow these intrinsics to accurately
  model the semantics of GCC's __builtin_ctz and __builtin_clz and the X86
  instructions (prior to AVX) which implement them.
  This patch merely sets the stage by extending the signature of these
  intrinsics and establishing auto-upgrade logic so that the old spelling
  still works both in IR and in bitcode. The upgrade logic preserves the
  existing (inefficient) semantics. This patch should not change any behavior.
  CodeGen isn't updated because it can use the existing semantics regardless
  of the flag's value.
  Note that this will be followed by API updates to Clang and DragonEgg.
  Reviewed by Nick Lewycky!
  Chandler Carruth, 2011-12-12 (1 file changed, -6/+33)
  llvm-svn: 146357
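A minimal sketch of the old and new spellings, with made-up names; the upgrader supplies false for the new flag so a zero input keeps its previously defined result:

    ; Old (pre-3.1) declaration:
    ;   declare i32 @llvm.cttz.i32(i32)
    ; New declaration and an upgraded call site:
    declare i32 @llvm.cttz.i32(i32, i1)

    define i32 @count_trailing_zeros(i32 %x) {
      %r = call i32 @llvm.cttz.i32(i32 %x, i1 false)  ; false = result defined at zero
      ret i32 %r
    }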
* Eli managed to kill off llvm.membarrier in LLVM 3.0 as well, which means
  that mainline needs no autoupgrade logic for intrinsics yet. Woohoo!
  Chris Lattner, 2011-11-27 (1 file changed, -34/+8)
  llvm-svn: 145178
* The llvm.atomic intrinsics *were* removed in LLVM 3.0 (in r141333), so
  remove the autoupgrade logic for 2.9 and before.
  Chris Lattner, 2011-11-27 (1 file changed, -68/+1)
  llvm-svn: 145176
* remove autoupgrade support for old forms of llvm.prefetch and the old
  trampoline forms. Both of these were correct in LLVM 3.0, and we don't need
  to support LLVM 2.9 and earlier in mainline.
  Chris Lattner, 2011-11-27 (1 file changed, -101/+1)
  llvm-svn: 145174
* remove autoupgrade support for really old-style debug info intrinsics.
  I think this is the last of autoupgrade that can be removed in 3.1.
  Can the atomic upgrade stuff also go?
  Chris Lattner, 2011-11-27 (1 file changed, -42/+0)
  llvm-svn: 145169
* remove some old autoupgrade logic
  Chris Lattner, 2011-11-27 (1 file changed, -80/+1)
  llvm-svn: 145167
* remove autoupgrade support for LLVM 2.9 exception stuff. Mainline supports
  LLVM 3.0 and later.
  Chris Lattner, 2011-11-27 (1 file changed, -246/+0)
  llvm-svn: 145165
* Remove the old atomic intrinsics. Autoupgrade functionality is included
  with this patch.
  Eli Friedman, 2011-10-06 (1 file changed, -0/+91)
  llvm-svn: 141333
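A hedged sketch of the kind of translation involved; the intrinsic name in the comment is the old 2.x spelling as best recalled, and the memory ordering shown is illustrative rather than a claim about exactly what AutoUpgrade emits:

    define i32 @upgraded_atomic_add(i32* %p, i32 %v) {
      ; Old (2.x): %old = call i32 @llvm.atomic.load.add.i32.p0i32(i32* %p, i32 %v)
      ; New (3.0): a first-class instruction; the ordering below is illustrative.
      %old = atomicrmw add i32* %p, i32 %v seq_cst
      ret i32 %old
    }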
* Split the init.trampoline intrinsic, which currently combines GCC's
  init.trampoline and adjust.trampoline intrinsics, into two intrinsics like
  in GCC. While having one combined intrinsic is tempting, it is not natural
  because typically the trampoline initialization needs to be done in one
  function, and the result of adjust trampoline is needed in a different
  (nested) function. To get around this llvm-gcc hacks the nested function
  lowering code to insert an additional parent variable holding the
  adjust.trampoline result that can be accessed from the child function.
  Dragonegg doesn't have the luxury of tweaking GCC code, so it stored the
  result of adjust.trampoline in the memory GCC set aside for the trampoline
  itself (this is always available in the child function), and set up some
  new memory (using an alloca) to hold the trampoline. Unfortunately this
  breaks Go which allocates trampoline memory on the heap and wants to use it
  even after the parent has exited (!). Rather than doing even more hacks to
  get Go working, it seemed best to just use two intrinsics like in GCC.
  Patch mostly by Sanjoy Das.
  Duncan Sands, 2011-09-06 (1 file changed, -0/+46)
  llvm-svn: 139140
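A sketch of the split, with hypothetical function and value names; the old combined intrinsic returned the callable pointer directly, while the new pair separates initialization from the adjustment that produces that pointer:

    declare void @llvm.init.trampoline(i8*, i8*, i8*)
    declare i8* @llvm.adjust.trampoline(i8*)

    define i8* @make_trampoline(i8* %tramp, i8* %func, i8* %nval) {
      ; Old combined form returned the callable pointer directly:
      ;   %fp = call i8* @llvm.init.trampoline(i8* %tramp, i8* %func, i8* %nval)
      ; New split form, mirroring GCC:
      call void @llvm.init.trampoline(i8* %tramp, i8* %func, i8* %nval)
      %fp = call i8* @llvm.adjust.trampoline(i8* %tramp)
      ret i8* %fp
    }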
* The insertion point for the loads is right before the llvm.eh.exception
  call. The call may be in the same BB as the landingpad instruction. If
  that's the case, then inserting the loads after the landingpad inst, but
  before the extractvalues, causes undefined behavior.
  Bill Wendling, 2011-09-04 (1 file changed, -1/+1)
  llvm-svn: 139088
* Don't reload the values that are already there. The llvm.eh.resume uses the
  same values that the resume instruction uses. PR10850
  Bill Wendling, 2011-09-03 (1 file changed, -7/+4)
  llvm-svn: 139076
* No need to get fancy inserting a PHI node when the values are stored in
  stack slots. This fixes a bug where the number of nodes coming into the PHI
  node may not equal the number of predecessors. E.g., two or more landingpad
  instructions may require a PHI before reaching the eh.exception and
  eh.selector instructions.
  Bill Wendling, 2011-09-02 (1 file changed, -43/+15)
  llvm-svn: 139035
* Perform the upgrading of the old EH to the new EH in a more sane manner.
  Perform the upgrading in steps.
    * First, create a map of the invokes to the EH intrinsics.
    * Next, take that mapping and determine if the invoke's unwind destination
      has a single predecessor. If not, then create a new empty block to hold
      the new landingpad instruction.
    * Create a landingpad instruction in the unwind destination. Fill it with
      the values from the old selector. Map the old intrinsic calls to the new
      landingpad values (there may be multiple landingpad instructions per
      intrinsic call pair).
    * Go through the old intrinsic calls, create a PHI node when necessary,
      and then replace their values with the new values from the landingpad
      instructions.
    * Delete all dead instructions.
    * ???
    * Profit!
  Bill Wendling, 2011-09-02 (1 file changed, -34/+113)
  llvm-svn: 138990
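To make the before/after concrete, here is an illustrative sketch (not the commit's own code); the callee, personality, and value names are invented, and the landingpad syntax shown is the 3.0-era form with the personality attached to the instruction:

    declare i32 @__gxx_personality_v0(...)
    declare void @may_throw()

    define void @demo() {
    entry:
      invoke void @may_throw() to label %cont unwind label %lpad
    cont:
      ret void
    lpad:
      ; Old (2.x) scheme: this block would instead call the intrinsics
      ;   %exn = call i8* @llvm.eh.exception()
      ;   %sel = call i32 (i8*, i8*, ...)* @llvm.eh.selector(i8* %exn,
      ;                                      i8* <personality>, i8* null)
      ; New (3.0) scheme: one landingpad instruction produces both values.
      %lp  = landingpad { i8*, i32 }
               personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
               catch i8* null
      %exn = extractvalue { i8*, i32 } %lp, 0
      %sel = extractvalue { i8*, i32 } %lp, 1
      ret void
    }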
* Only delete instructions once.
  Bill Wendling, 2011-08-27 (1 file changed, -5/+6)
  llvm-svn: 138700
* Initial check in that will auto-upgrade the old EH scheme to the new EH
  scheme. This upgrade suffers from the problems of the old EH scheme - i.e.,
  that the calls to llvm.eh.exception() and llvm.eh.selector() can wander off
  and get lost. It makes a valiant effort to reclaim these little lost lambs.
  This is a first draft, so it hasn't yet been hooked up to the parser.
  Bill Wendling, 2011-08-25 (1 file changed, -0/+201)
  llvm-svn: 138602
* land David Blaikie's patch to de-constify Type, with a few tweaks.
  Chris Lattner, 2011-07-18 (1 file changed, -4/+4)
  llvm-svn: 135375
* Convert CallInst and InvokeInst APIs to use ArrayRef.
  Jay Foad, 2011-07-15 (1 file changed, -1/+1)
  llvm-svn: 135265
* rework the remaining autoupgrade logic to use a StringRef instead of
  creating a temporary std::string for every function being checked.
  Chris Lattner, 2011-06-18 (1 file changed, -49/+34)
  llvm-svn: 133355
* rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which
  is for pre-2.9 bitcode files. We keep x86 unaligned loads, movnt, crc32,
  and the target indep prefetch change. As usual, updating the testsuite is
  a PITA.
  Chris Lattner, 2011-06-18 (1 file changed, -1197/+23)
  llvm-svn: 133337
* Add one more argument to the prefetch intrinsic to indicate whether it's a
  data or instruction cache access. Update the targets to match it and also
  teach autoupgrade.
  Bruno Cardoso Lopes, 2011-06-14 (1 file changed, -0/+47)
  llvm-svn: 132976
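A small illustrative sketch (operand values chosen arbitrarily): the new trailing operand selects the cache, 1 for data and 0 for instruction, and upgraded calls from old bitcode receive the data-cache value:

    declare void @llvm.prefetch(i8*, i32, i32, i32)

    define void @warm(i8* %p) {
      ; Old three-operand form:  call void @llvm.prefetch(i8* %p, i32 0, i32 3)
      ; New form: read access, maximum locality, data cache.
      call void @llvm.prefetch(i8* %p, i32 0, i32 3, i32 1)
      ret void
    }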
* CRC32 intrinsics were renamed at revision 132163. This submission fixes
  aliasing issues with the old and new names as well as adds test cases for
  the auto-upgrader. Fixes rdar 9472944.
  Chad Rosier, 2011-05-27 (1 file changed, -5/+5)
  llvm-svn: 132207
* Renamed llvm.x86.sse42.crc32 intrinsics; crc64 doesn't exist.
  crc32.[8|16|32] have been renamed to .crc32.32.[8|16|32] and crc64.[8|64]
  have been renamed to .crc32.64.[8|64].
  Chad Rosier, 2011-05-26 (1 file changed, -1/+27)
  llvm-svn: 132163
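In other words, the rename just encodes the result width in the intrinsic name; the operations are unchanged. Two representative declarations (a sketch, not an exhaustive list):

    ; llvm.x86.sse42.crc32.8   ->  llvm.x86.sse42.crc32.32.8
    ; llvm.x86.sse42.crc64.64  ->  llvm.x86.sse42.crc32.64.64
    declare i32 @llvm.x86.sse42.crc32.32.8(i32, i8)
    declare i64 @llvm.x86.sse42.crc32.64.64(i64, i64)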
* Replace the "movnt" intrinsics with a native store + nontemporal metadata bit.Bill Wendling2011-05-031-0/+32
| | | | | | <rdar://problem/8460511> llvm-svn: 130791
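Roughly, a former movnt intrinsic call becomes an ordinary store tagged with nontemporal metadata; the sketch below uses 3.x-era metadata syntax and invented names:

    define void @stream_store(<2 x i64>* %p, <2 x i64> %v) {
      ; Previously: a call to one of the llvm.x86.sse*.movnt.* intrinsics.
      ; Now: an ordinary store carrying the nontemporal hint.
      store <2 x i64> %v, <2 x i64>* %p, align 16, !nontemporal !0
      ret void
    }
    !0 = metadata !{i32 1}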
* Reapply r129401 with patch for clang.
  Bill Wendling, 2011-04-13 (1 file changed, -1/+29)
  llvm-svn: 129419
* Revert r129401 for now. Clang is using the old way of doing things.
  Bill Wendling, 2011-04-12 (1 file changed, -29/+1)
  llvm-svn: 129403
* Remove the unaligned load intrinsics in favor of using native unaligned
  loads. Now that we have a first-class way to represent unaligned loads, the
  unaligned load intrinsics are superfluous. First part of
  <rdar://problem/8460511>.
  Bill Wendling, 2011-04-12 (1 file changed, -1/+29)
  llvm-svn: 129401
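Illustratively (invented names, pre-3.7 load syntax), a call to one of the removed unaligned-load intrinsics becomes a normal load with alignment 1:

    define <16 x i8> @load_unaligned(i8* %p) {
      ; Previously: %v = call <16 x i8> @llvm.x86.sse2.loadu.dq(i8* %p)
      ; Now: a plain load whose alignment of 1 expresses "possibly unaligned".
      %ptr = bitcast i8* %p to <16 x i8>*
      %v   = load <16 x i8>* %ptr, align 1
      ret <16 x i8> %v
    }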
* Remove dead code.
  Bill Wendling, 2011-03-30 (1 file changed, -68/+0)
  llvm-svn: 128519
* Add intrinsics @llvm.arm.neon.vmulls and @llvm.arm.neon.vmullu.* back.
  Frontends were lowering them to sext / uxt + mul instructions.
  Unfortunately the optimization passes may hoist the extensions out of the
  loop and separate them. When that happens, the long multiplication
  instructions can be broken into several scalar instructions, causing a
  significant performance issue.
  Note the vmla and vmls intrinsics are not added back. Frontends will
  codegen them as intrinsics vmull* + add / sub. Also note the isel
  optimizations for catching mul + sext / zext are not changed either.
  First part of rdar://8832507, rdar://9203134
  Evan Cheng, 2011-03-29 (1 file changed, -1/+0)
  llvm-svn: 128502
* convert ConstantVector::get to use ArrayRef.
  Chris Lattner, 2011-02-15 (1 file changed, -2/+2)
  llvm-svn: 125537
* revert my ConstantVector patch, it seems to have made the llvm-gcc builders
  unhappy.
  Chris Lattner, 2011-02-14 (1 file changed, -2/+2)
  llvm-svn: 125504
* Switch ConstantVector::get to use ArrayRef instead of a pointer+size idiom.
  Change various clients to simplify their code.
  Chris Lattner, 2011-02-14 (1 file changed, -2/+2)
  llvm-svn: 125487
* The pshufw instruction came about in MMX2 when SSE was introduced. Don't
  place it in with the SSSE3 instructions. Steward! Could you place this
  chair by the aft sun deck? I'm trying to get away from the Astors. They are
  such boors!
  Bill Wendling, 2010-10-04 (1 file changed, -7/+37)
  llvm-svn: 115552
* Massive rewrite of MMX:
  The x86_mmx type is used for MMX intrinsics, parameters and return values
  where these use MMX registers, and is also supported in load, store, and
  bitcast. Only the above operations generate MMX instructions, and
  optimizations do not operate on or produce MMX intrinsics.
  MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into smaller
  pieces. Optimizations may occur on these forms and the result casted back
  to x86_mmx, provided the result feeds into a previously existing x86_mmx
  operation.
  The point of all this is to prevent optimizations from introducing MMX
  operations, which is unsafe due to the EMMS problem.
  Dale Johannesen, 2010-09-30 (1 file changed, -55/+480)
  llvm-svn: 115243
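A small sketch of what operating on x86_mmx looks like in IR; the particular intrinsic and function name are just examples:

    declare x86_mmx @llvm.x86.mmx.padd.d(x86_mmx, x86_mmx)

    define x86_mmx @mmx_add(x86_mmx %a, x86_mmx %b) {
      ; Operands and result use the opaque x86_mmx type rather than <2 x i32>,
      ; so generic vector optimizations cannot touch them.
      %r = call x86_mmx @llvm.x86.mmx.padd.d(x86_mmx %a, x86_mmx %b)
      ret x86_mmx %r
    }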
* Use StringRef which performs the "early exit" when compared against a
  constant string.
  Bill Wendling, 2010-09-10 (1 file changed, -6/+1)
  llvm-svn: 113615
* Early exit with simple checks.
  Bill Wendling, 2010-09-10 (1 file changed, -3/+6)
  llvm-svn: 113603
* Auto-upgrade the magic ".llvm.eh.catch.all.value" global to
  "llvm.eh.catch.all.value". Only the name needs to be changed.
  Bill Wendling, 2010-09-10 (1 file changed, -0/+14)
  llvm-svn: 113600
* Replace NEON vabdl, vaba, and vabal intrinsics with combinations of the
  vabd intrinsic and add and/or zext operations. In the case of vaba, this
  also avoids the need for a DAG combine pattern to combine vabd with add.
  Update tests. Auto-upgrade the old intrinsics.
  Bob Wilson, 2010-09-03 (1 file changed, -12/+50)
  llvm-svn: 112941
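For instance (a hedged sketch with invented names, not the commit's own test), a vabal-style accumulate can be expressed as the plain vabd intrinsic followed by a zero-extend and an add:

    declare <8 x i8> @llvm.arm.neon.vabds.v8i8(<8 x i8>, <8 x i8>)

    define <8 x i16> @vabal_expansion(<8 x i16> %acc, <8 x i8> %a, <8 x i8> %b) {
      %d  = call <8 x i8> @llvm.arm.neon.vabds.v8i8(<8 x i8> %a, <8 x i8> %b)
      %dl = zext <8 x i8> %d to <8 x i16>   ; the absolute difference is non-negative
      %r  = add <8 x i16> %acc, %dl
      ret <8 x i16> %r
    }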
* Remove NEON vmull, vmlal, and vmlsl intrinsics, replacing them with
  multiply, add, and subtract operations with zero-extended or sign-extended
  vectors. Update tests. Add auto-upgrade support for the old intrinsics.
  Bob Wilson, 2010-09-01 (1 file changed, -21/+52)
  llvm-svn: 112773
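For example (invented names), a signed vmull on <4 x i16> operands becomes two sign-extensions feeding an ordinary mul:

    define <4 x i32> @vmull_s16(<4 x i16> %a, <4 x i16> %b) {
      %as = sext <4 x i16> %a to <4 x i32>
      %bs = sext <4 x i16> %b to <4 x i32>
      %r  = mul <4 x i32> %as, %bs
      ret <4 x i32> %r
    }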
* Remove NEON vmovn intrinsic, replacing it with vector truncate operations.
  Auto-upgrade the old intrinsic and update tests.
  Bob Wilson, 2010-08-30 (1 file changed, -1/+6)
  llvm-svn: 112507
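That is, the narrowing move is just a vector truncate, e.g. (invented names):

    define <4 x i16> @vmovn_i32(<4 x i32> %wide) {
      %narrow = trunc <4 x i32> %wide to <4 x i16>
      ret <4 x i16> %narrow
    }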
* Remove NEON vaddl, vaddw, vsubl, and vsubw intrinsics. Instead, use llvm
  IR add/sub operations with one or both operands sign- or zero-extended.
  Auto-upgrade the old intrinsics.
  Bob Wilson, 2010-08-29 (1 file changed, -2/+32)
  llvm-svn: 112416
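For example, an unsigned widening add (invented names) becomes two zero-extensions feeding an ordinary add:

    define <4 x i32> @vaddl_u16(<4 x i16> %a, <4 x i16> %b) {
      %ax = zext <4 x i16> %a to <4 x i32>
      %bx = zext <4 x i16> %b to <4 x i32>
      %r  = add <4 x i32> %ax, %bx
      ret <4 x i32> %r
    }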
* Add alignment arguments to all the NEON load/store intrinsics. Update all
  the tests using those intrinsics and add support for auto-upgrading bitcode
  files with the old versions of the intrinsics.
  Bob Wilson, 2010-08-27 (1 file changed, -1/+66)
  llvm-svn: 112271
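A sketch of the new shape (invented names): the alignment operand is appended, and calls upgraded from old bitcode get a conservative value, shown here as 1:

    declare <4 x i32> @llvm.arm.neon.vld1.v4i32(i8*, i32)

    define <4 x i32> @load_v4i32(i8* %p) {
      ; Old: %v = call <4 x i32> @llvm.arm.neon.vld1.v4i32(i8* %p)
      ; New: the trailing operand carries the alignment.
      %v = call <4 x i32> @llvm.arm.neon.vld1.v4i32(i8* %p, i32 1)
      ret <4 x i32> %v
    }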
* Replace the arm.neon.vmovls and vmovlu intrinsics with vector sign-extend
  and zero-extend operations.
  Bob Wilson, 2010-08-20 (1 file changed, -0/+29)
  llvm-svn: 111614
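That is, the widening moves become plain vector extensions, e.g. (invented names):

    define <4 x i32> @vmovl_s16(<4 x i16> %x) {
      %w = sext <4 x i16> %x to <4 x i32>   ; the unsigned vmovlu form uses zext
      ret <4 x i32> %w
    }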
* undo 80 column trespassing I caused
  Gabor Greif, 2010-07-22 (1 file changed, -1/+2)
  llvm-svn: 109092
* use ArgOperand API
  Gabor Greif, 2010-06-29 (1 file changed, -6/+6)
  llvm-svn: 107145
* use helper to neatly access arguments
  Gabor Greif, 2010-06-23 (1 file changed, -5/+6)
  llvm-svn: 106622
* use high-level accessors
  Gabor Greif, 2010-06-22 (1 file changed, -12/+13)
  llvm-svn: 106573
* Remove the palignr intrinsics now that we lower them to vector shuffles,
  shifts and null vectors. Autoupgrade these to what we'd lower them to. Add
  a testcase to exercise this.
  Eric Christopher, 2010-04-20 (1 file changed, -1/+119)
  llvm-svn: 101851