path: root/llvm/test/Assembler/AutoUpgradeIntrinsics.ll
Commit message | Author | Age | Files | Lines
* remove autoupgrade support for old forms of llvm.prefetch and the old | Chris Lattner | 2011-11-27 | 1 | -24/+0
  trampoline forms. Both of these were correct in LLVM 3.0, and we don't need to
  support LLVM 2.9 and earlier in mainline.
  llvm-svn: 145174
* remove some old autoupgrade logic | Chris Lattner | 2011-11-27 | 1 | -31/+0
  llvm-svn: 145167
* Split the init.trampoline intrinsic, which currently combines GCC's | Duncan Sands | 2011-09-06 | 1 | -0/+13
  init.trampoline and adjust.trampoline intrinsics, into two intrinsics like in
  GCC. While having one combined intrinsic is tempting, it is not natural because
  typically the trampoline initialization needs to be done in one function, and
  the result of adjust.trampoline is needed in a different (nested) function. To
  get around this llvm-gcc hacks the nested function lowering code to insert an
  additional parent variable holding the adjust.trampoline result that can be
  accessed from the child function. Dragonegg doesn't have the luxury of tweaking
  GCC code, so it stored the result of adjust.trampoline in the memory GCC set
  aside for the trampoline itself (this is always available in the child
  function), and set up some new memory (using an alloca) to hold the trampoline.
  Unfortunately this breaks Go, which allocates trampoline memory on the heap and
  wants to use it even after the parent has exited (!). Rather than doing even
  more hacks to get Go working, it seemed best to just use two intrinsics like in
  GCC. Patch mostly by Sanjoy Das.
  llvm-svn: 139140
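  For illustration, the split leaves IR shaped roughly like the sketch below
  (typed-pointer syntax of that era; the function and value names are
  hypothetical): the parent fills in the trampoline with llvm.init.trampoline,
  and llvm.adjust.trampoline produces the callable pointer, possibly in a
  different function.

    declare void @llvm.init.trampoline(i8*, i8*, i8*)
    declare i8* @llvm.adjust.trampoline(i8*)
    declare i32 @nested(i8* nest, i32)

    define i8* @make_thunk(i8* %tramp_mem, i8* %nest_val) {
      ; Write the trampoline code into %tramp_mem, binding %nest_val to the
      ; 'nest' parameter of @nested.
      call void @llvm.init.trampoline(i8* %tramp_mem,
                                      i8* bitcast (i32 (i8*, i32)* @nested to i8*),
                                      i8* %nest_val)
      ; Get the pointer that may actually be called; this step can now live in
      ; a different (nested) function, which is why the intrinsic was split.
      %fp = call i8* @llvm.adjust.trampoline(i8* %tramp_mem)
      ret i8* %fp
    }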
* rip out a ton of intrinsic modernization logic from AutoUpgrade.cpp, which is | Chris Lattner | 2011-06-18 | 1 | -81/+4
  for pre-2.9 bitcode files. We keep x86 unaligned loads, movnt, crc32, and the
  target indep prefetch change.
  As usual, updating the testsuite is a PITA.
  llvm-svn: 133337
* Add one more argument to the prefetch intrinsic to indicate whether it's a data | Bruno Cardoso Lopes | 2011-06-14 | 1 | -0/+8
  or instruction cache access. Update the targets to match it and also teach
  autoupgrade.
  llvm-svn: 132976
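  A minimal sketch of the extended form (hypothetical function name,
  typed-pointer syntax of that era), with the new fourth operand selecting the
  data (1) versus instruction (0) cache:

    declare void @llvm.prefetch(i8* nocapture, i32, i32, i32)

    define void @warm(i8* %p) {
      ; operands: address, rw (0 = read, 1 = write), locality (0..3),
      ; cache type (1 = data, 0 = instruction)
      call void @llvm.prefetch(i8* %p, i32 0, i32 3, i32 1)
      ret void
    }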
* Replace the "movnt" intrinsics with a native store + nontemporal metadata bit. | Bill Wendling | 2011-05-03 | 1 | -0/+18
  <rdar://problem/8460511>
  llvm-svn: 130791
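  The replacement is an ordinary store tagged with !nontemporal metadata rather
  than a target-specific movnt intrinsic; a minimal sketch (hypothetical names,
  metadata node spelled in the syntax of that period):

    define void @stream(<4 x float>* %dst, <4 x float> %v) {
      ; The backend lowers this to a non-temporal (MOVNT-style) store on x86.
      store <4 x float> %v, <4 x float>* %dst, align 16, !nontemporal !0
      ret void
    }

    !0 = metadata !{i32 1}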
* Reapply r129401 with patch for clang. | Bill Wendling | 2011-04-13 | 1 | -0/+12
  llvm-svn: 129419
* Revert r129401 for now. Clang is using the old way of doing things. | Bill Wendling | 2011-04-12 | 1 | -12/+0
  llvm-svn: 129403
* Remove the unaligned load intrinsics in favor of using native unaligned loads. | Bill Wendling | 2011-04-12 | 1 | -0/+12
  Now that we have a first-class way to represent unaligned loads, the unaligned
  load intrinsics are superfluous.
  First part of <rdar://problem/8460511>.
  llvm-svn: 129401
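  With first-class unaligned loads, a call to an unaligned-load intrinsic
  becomes a plain load with align 1; a minimal sketch (hypothetical name, load
  syntax of that era):

    define <4 x float> @loadu(<4 x float>* %p) {
      ; An explicit alignment of 1 says "may be unaligned" directly in IR;
      ; on x86 this selects an unaligned vector load such as MOVUPS.
      %v = load <4 x float>* %p, align 1
      ret <4 x float> %v
    }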
* Massive rewrite of MMX: | Dale Johannesen | 2010-09-30 | 1 | -1/+1
  The x86_mmx type is used for MMX intrinsics, parameters and return values where
  these use MMX registers, and is also supported in load, store, and bitcast.
  Only the above operations generate MMX instructions, and optimizations do not
  operate on or produce MMX intrinsics.
  MMX-sized vectors <2 x i32> etc. are lowered to XMM or split into smaller
  pieces. Optimizations may occur on these forms and the result cast back to
  x86_mmx, provided the result feeds into a previously existing x86_mmx
  operation.
  The point of all this is to prevent optimizations from introducing MMX
  operations, which is unsafe due to the EMMS problem.
  llvm-svn: 115243
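  The resulting IR shape, roughly (hypothetical function, era syntax): generic
  vectors only become x86_mmx through explicit bitcasts, and only x86_mmx
  values feed the MMX intrinsics.

    declare x86_mmx @llvm.x86.mmx.padd.w(x86_mmx, x86_mmx)

    define <4 x i16> @add_mmx(<4 x i16> %a, <4 x i16> %b) {
      ; Optimizers treat x86_mmx as opaque, so MMX instructions are only
      ; produced where the frontend explicitly asked for them.
      %ma = bitcast <4 x i16> %a to x86_mmx
      %mb = bitcast <4 x i16> %b to x86_mmx
      %ms = call x86_mmx @llvm.x86.mmx.padd.w(x86_mmx %ma, x86_mmx %mb)
      %s  = bitcast x86_mmx %ms to <4 x i16>
      ret <4 x i16> %s
    }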
* Fix some escaping and quoting in RUN lines, mainly involving { and <. In two | Matthijs Kooijman | 2008-06-10 | 1 | -1/+1
  cases quoting of <{ didn't work out, so I changed the grep to check for }>
  instead. This fixes 7 testcases that were not properly running before.
  llvm-svn: 52182
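  As an illustration of the workaround (a hypothetical RUN line, not one quoted
  from this test), grepping for the closing }> of a packed struct avoids having
  to escape <{ for the Tcl-based test runner used at the time:

    ; RUN: llvm-as < %s | llvm-dis | grep "}>"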
* All MMX shift instructions took a <2 x i32> vector as the shift amount | Anders Carlsson | 2007-12-14 | 1 | -0/+29
  parameter. Change this to be <1 x i64> instead, which matches the assembler
  instruction.
  llvm-svn: 45027
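  A rough sketch of the post-change shape (this predates the x86_mmx type, so
  the declaration below is an approximation of that era's signatures rather
  than a quote):

    ; the shift amount is now <1 x i64>, matching the 64-bit count the
    ; hardware instruction takes
    declare <4 x i16> @llvm.x86.mmx.psll.w(<4 x i16>, <1 x i64>)

    define <4 x i16> @shl_words(<4 x i16> %v, <1 x i64> %amt) {
      %r = call <4 x i16> @llvm.x86.mmx.psll.w(<4 x i16> %v, <1 x i64> %amt)
      ret <4 x i16> %r
    }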
* This is the patch to provide clean intrinsic function overloading support in | Chandler Carruth | 2007-08-04 | 1 | -0/+52
  LLVM. It cleans up the intrinsic definitions and generally smooths the process
  for more complicated intrinsic writing. It will be used by the upcoming atomic
  intrinsics as well as vector and float intrinsics in the future.
  This also changes the syntax for llvm.bswap, llvm.part.set, llvm.part.select,
  and llvm.ct* intrinsics. They are automatically upgraded by both the LLVM ASM
  reader and the bitcode reader. The test cases have been updated, with special
  tests added to ensure the automatic upgrading is supported.
  llvm-svn: 40807
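  After this change, an overloaded intrinsic encodes its concrete type in the
  name suffix and its return type follows the operand type, and the ASM and
  bitcode readers upgrade calls that use the older spellings. A minimal sketch
  of the post-change form (hypothetical function name):

    declare i16 @llvm.bswap.i16(i16)
    declare i32 @llvm.bswap.i32(i32)
    declare i32 @llvm.ctpop.i32(i32)

    define i32 @swap_then_count(i32 %x) {
      ; each call names the width it operates on in the intrinsic suffix
      %s = call i32 @llvm.bswap.i32(i32 %x)
      %c = call i32 @llvm.ctpop.i32(i32 %s)
      ret i32 %c
    }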