path: root/llvm
...
* I understand now. Shoot. (Andrew Lenharth, 2006-04-18; 1 file, -21/+3)
    llvm-svn: 27819

* - PEXTRW cannot take a memory location as its first source operand. (Evan Cheng, 2006-04-18; 2 files, -9/+1)
    - PINSRWrmi encoding bug.

    llvm-svn: 27818

* SHUFP{S|D}, PSHUF* encoding bugs. Left out the mask immediate operand. (Evan Cheng, 2006-04-18; 1 file, -5/+5)
    llvm-svn: 27817

* Name change for clarity sake (Evan Cheng, 2006-04-18; 1 file, -9/+9)
    llvm-svn: 27816

* Encoding bug: CMPPSrmi, CMPPDrmi dropped operand 2 (condition immediate). (Evan Cheng, 2006-04-18; 1 file, -2/+2)
    llvm-svn: 27815

* Name change for clarity sake (Evan Cheng, 2006-04-18; 1 file, -4/+4)
    llvm-svn: 27814

* Left a pattern out (Evan Cheng, 2006-04-18; 1 file, -0/+4)
    llvm-svn: 27813

* llvm.memc* improvements. helps PA a lot in some specmarks (Andrew Lenharth, 2006-04-18; 1 file, -2/+7)
    llvm-svn: 27812

* llvm.memc* improvements. helps PA a lot in some specmarks (Andrew Lenharth, 2006-04-18; 1 file, -4/+11)
    llvm-svn: 27811

* These are correctly encoded by the JIT. I checked :) (Chris Lattner, 2006-04-18; 1 file, -2/+0)
    llvm-svn: 27810

* add a note (Chris Lattner, 2006-04-18; 1 file, -0/+23)
    llvm-svn: 27809

* Fix a crash on: (Chris Lattner, 2006-04-18; 1 file, -2/+24)
    void foo2(vector float *A, vector float *B) {
      vector float C = (vector float)vec_cmpeq(*A, *B);
      if (!vec_any_eq(*A, *B))
        *B = (vector float){0,0,0,0};
      *A = C;
    }

    llvm-svn: 27808

* Fixed an encoding bug: movd from XMM to R32. (Evan Cheng, 2006-04-18; 1 file, -1/+1)
    llvm-svn: 27807

* pretty print node name (Chris Lattner, 2006-04-18; 1 file, -0/+1)
    llvm-svn: 27806

* Implement an important entry from README_ALTIVEC: (Chris Lattner, 2006-04-18; 4 files, -24/+106)
    If an altivec predicate compare is used immediately by a branch, don't
    use a (serializing) MFCR instruction to read the CR6 register, which
    requires a compare to get it back to CR's. Instead, just branch on CR6
    directly. :)

    For example, for:

    void foo2(vector float *A, vector float *B) {
      if (!vec_any_eq(*A, *B))
        *B = (vector float){0,0,0,0};
    }

    We now generate:

    _foo2:
            mfspr r2, 256
            oris r5, r2, 12288
            mtspr 256, r5
            lvx v2, 0, r4
            lvx v3, 0, r3
            vcmpeqfp. v2, v3, v2
            bne cr6, LBB1_2 ; UnifiedReturnBlock
    LBB1_1: ; cond_true
            vxor v2, v2, v2
            stvx v2, 0, r4
            mtspr 256, r2
            blr
    LBB1_2: ; UnifiedReturnBlock
            mtspr 256, r2
            blr

    instead of:

    _foo2:
            mfspr r2, 256
            oris r5, r2, 12288
            mtspr 256, r5
            lvx v2, 0, r4
            lvx v3, 0, r3
            vcmpeqfp. v2, v3, v2
            mfcr r3, 2
            rlwinm r3, r3, 27, 31, 31
            cmpwi cr0, r3, 0
            beq cr0, LBB1_2 ; UnifiedReturnBlock
    LBB1_1: ; cond_true
            vxor v2, v2, v2
            stvx v2, 0, r4
            mtspr 256, r2
            blr
    LBB1_2: ; UnifiedReturnBlock
            mtspr 256, r2
            blr

    This implements CodeGen/PowerPC/vec_br_cmp.ll.

    llvm-svn: 27804
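For readers without the AltiVec manual at hand, a scalar C model of the predicate being branched on may help. The helper name below is hypothetical and this is only a sketch of the semantics, not LLVM code: vcmpeqfp. compares the four float lanes and records in CR6 whether any lane matched, which is the bit vec_any_eq tests and which the new code branches on directly.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical scalar model (not LLVM source) of what the AltiVec
   sequence above computes: vcmpeqfp. compares all four lanes and
   records "some lane equal" in CR6, which bne cr6 then tests. */
static int vec_any_eq_model(const float a[4], const float b[4]) {
    for (int i = 0; i < 4; i++)
        if (a[i] == b[i])
            return 1;   /* CR6: some lanes true */
    return 0;           /* CR6: all lanes false */
}
```

With this model, the old code's mfcr/rlwinm/cmpwi corresponds to copying the return value into a scalar register and re-comparing it; the new code just branches on the bit where it already lives.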
* new testcase (Chris Lattner, 2006-04-18; 1 file, -0/+22)
    llvm-svn: 27803

* move some stuff around, clean things up (Chris Lattner, 2006-04-18; 1 file, -14/+11)
    llvm-svn: 27802

* Teach the codegen about instructions used for SSE spill code, allowing it (Chris Lattner, 2006-04-18; 1 file, -0/+4)
    to optimize cases where it has to spill a lot

    llvm-svn: 27801

* Fix a copy & paste error from long ago. (Nate Begeman, 2006-04-18; 1 file, -1/+1)
    llvm-svn: 27800

* Add some more notes, many still missing (Chris Lattner, 2006-04-18; 1 file, -1/+30)
    llvm-svn: 27799

* Have the AutoRegen.sh script prompt the user for the LLVM src and obj (Reid Spencer, 2006-04-18; 2 files, -4/+29)
    directories if it can't find them. Then, replace those values into the
    configure.ac script and pass them to the LLVM_CONFIG_PROJECT so that
    the values become the default for llvm_src and llvm_obj variables. In
    this way the user is required to input this exactly once, and the
    scripts take it from there.

    llvm-svn: 27798

* Make it possible to default the llvm_src and llvm_obj variables based on (Reid Spencer, 2006-04-18; 1 file, -2/+2)
    the arguments to the macro. This better supports the AutoRegen.sh
    script in projects/sample/autoconf.

    llvm-svn: 27797

* add a bunch of stuff, pieces still missing (Chris Lattner, 2006-04-18; 1 file, -46/+170)
    llvm-svn: 27796

* Add a warning. (Chris Lattner, 2006-04-18; 1 file, -0/+3)
    llvm-svn: 27795

* Add a warning (Chris Lattner, 2006-04-18; 1 file, -0/+1)
    llvm-svn: 27794

* Use vmladduhm to do v8i16 multiplies which is faster and simpler than doing (Chris Lattner, 2006-04-18; 1 file, -18/+3)
    even/odd halves. Thanks to Nate telling me what's what.

    llvm-svn: 27793
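As a sketch of why vmladduhm suffices: it is a multiply-low-and-add per 16-bit lane, so a zero addend turns it into a plain v8i16 multiply. The model below is a hypothetical scalar rendering of that semantics, not LLVM or ISA source.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical scalar model of vmladduhm (Vector Multiply-Low and Add
   Unsigned Halfword Modulo): out[i] = (a[i] * b[i] + c[i]) mod 2^16.
   With c = {0,...,0} this is exactly a v8i16 multiply, which is why a
   single instruction can replace the even/odd-halves lowering below. */
static void vmladduhm_model(const uint16_t a[8], const uint16_t b[8],
                            const uint16_t c[8], uint16_t out[8]) {
    for (int i = 0; i < 8; i++)
        out[i] = (uint16_t)(a[i] * b[i] + c[i]);  /* truncation = mod 2^16 */
}
```

Because each lane's result is truncated to 16 bits anyway, the "multiply low" half of the fused operation already produces the correct modular product.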
* Implement v16i8 multiply with this code: (Chris Lattner, 2006-04-18; 2 files, -11/+25)
    vmuloub v5, v3, v2
    vmuleub v2, v3, v2
    vperm v2, v2, v5, v4

    This implements CodeGen/PowerPC/vec_mul.ll. With this, v16i8
    multiplies are 6.79x faster than before.

    Overall, UnitTests/Vector/multiplies.c is now 2.45x faster with LLVM
    than with GCC.

    Remove the 'integer multiplies' todo from the README file.

    llvm-svn: 27792
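A scalar model of the even/odd-multiply-plus-permute identity the sequence relies on (hypothetical helper names; a sketch of the semantics, not LLVM code): the 16-bit products of the even and odd byte lanes each carry the i8 result in their low byte, and the permute stitches those low bytes back into lane order.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical scalar model of vmuleub / vmuloub / vperm: even and odd
   byte lanes are widened to 16-bit products, then only the low byte of
   each product is kept, which is exactly i8 multiplication mod 256. */
static void mul_v16i8_model(const uint8_t a[16], const uint8_t b[16],
                            uint8_t out[16]) {
    uint16_t even[8], odd[8];              /* vmuleub / vmuloub */
    for (int i = 0; i < 8; i++) {
        even[i] = (uint16_t)(a[2 * i]     * b[2 * i]);
        odd[i]  = (uint16_t)(a[2 * i + 1] * b[2 * i + 1]);
    }
    for (int i = 0; i < 8; i++) {          /* vperm: keep low bytes */
        out[2 * i]     = (uint8_t)even[i];
        out[2 * i + 1] = (uint8_t)odd[i];
    }
}
```

The permute constant (v4 in the asm) simply encodes which byte of each widened product to keep, so the whole multiply costs two multiplies and one shuffle instead of sixteen scalar operations.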
* Add tests for v8i16 and v16i8 (Chris Lattner, 2006-04-18; 1 file, -2/+16)
    llvm-svn: 27791

* Correct comments (Evan Cheng, 2006-04-18; 1 file, -6/+6)
    llvm-svn: 27790

* Lower v8i16 multiply into this code: (Chris Lattner, 2006-04-18; 1 file, -25/+51)
    li r5, lo16(LCPI1_0)
    lis r6, ha16(LCPI1_0)
    lvx v4, r6, r5
    vmulouh v5, v3, v2
    vmuleuh v2, v3, v2
    vperm v2, v2, v5, v4

    where v4 is:
    LCPI1_0: ; <16 x ubyte>
            .byte 2
            .byte 3
            .byte 18
            .byte 19
            .byte 6
            .byte 7
            .byte 22
            .byte 23
            .byte 10
            .byte 11
            .byte 26
            .byte 27
            .byte 14
            .byte 15
            .byte 30
            .byte 31

    This is 5.07x faster on the G5 (measured) than lowering to scalar
    code + loads/stores.

    llvm-svn: 27789
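The constant vector is the interesting part: on a big-endian machine, bytes 2-3 of each 32-bit product are its low halfword, and mask entries of 16 or more select from the second vperm operand (the odd products). The scalar model below is hypothetical, but it applies the same LCPI1_0 mask as the code above to show it reassembles the per-lane 16-bit products.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical scalar model of the lowering above: vmuleuh/vmulouh form
   32-bit products of the even/odd halfword lanes, and vperm with the
   LCPI1_0 mask keeps the low halfword of each product (big-endian). */
static void vperm_model(const uint8_t x[16], const uint8_t y[16],
                        const uint8_t mask[16], uint8_t out[16]) {
    for (int i = 0; i < 16; i++)
        out[i] = mask[i] < 16 ? x[mask[i]] : y[mask[i] - 16];
}

static void mul_v8i16_model(const uint16_t a[8], const uint16_t b[8],
                            uint16_t out[8]) {
    static const uint8_t mask[16] = { 2, 3, 18, 19, 6, 7, 22, 23,
                                      10, 11, 26, 27, 14, 15, 30, 31 };
    uint8_t even[16], odd[16], res[16];
    for (int i = 0; i < 4; i++) {
        uint32_t e = (uint32_t)a[2 * i]     * b[2 * i];      /* vmuleuh */
        uint32_t o = (uint32_t)a[2 * i + 1] * b[2 * i + 1];  /* vmulouh */
        for (int j = 0; j < 4; j++) {   /* big-endian byte images */
            even[4 * i + j] = (uint8_t)(e >> (8 * (3 - j)));
            odd[4 * i + j]  = (uint8_t)(o >> (8 * (3 - j)));
        }
    }
    vperm_model(even, odd, mask, res);
    for (int i = 0; i < 8; i++)         /* reassemble big-endian u16 */
        out[i] = (uint16_t)((res[2 * i] << 8) | res[2 * i + 1]);
}
```

Tracing lane 0: mask bytes {2, 3} pick the low halfword of the first even product, i.e. a[0]*b[0] mod 2^16, and mask bytes {18, 19} pick the low halfword of the first odd product for lane 1, and so on.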
* Custom lower v4i32 multiplies into a cute sequence, instead of having legalize (Chris Lattner, 2006-04-18; 1 file, -10/+53)
    scalarize the sequence into 4 mullw's and a bunch of load/store
    traffic. This speeds up v4i32 multiplies 4.1x (measured) on a G5.

    This implements PowerPC/vec_mul.ll

    llvm-svn: 27788
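The log doesn't quote the "cute sequence" itself, so the exact instructions are not shown here; what can be sketched is the arithmetic identity any such halfword-based lowering exploits. Working mod 2^32, the high*high partial product vanishes, so a 32-bit product per lane can be assembled from 16x16-bit multiplies (the kind vmuleuh/vmulouh provide). The function name below is hypothetical.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the identity a v4i32 multiply can be built from using only
   16-bit multipliers: mod 2^32, a*b = alo*blo + ((alo*bhi + ahi*blo) << 16).
   The ahi*bhi term contributes only to bits >= 32 and drops out. */
static uint32_t mul32_from_halves(uint32_t a, uint32_t b) {
    uint32_t alo = a & 0xffffu, ahi = a >> 16;
    uint32_t blo = b & 0xffffu, bhi = b >> 16;
    return alo * blo + ((alo * bhi + ahi * blo) << 16);
}
```

Applied to all four lanes at once, this stays entirely in vector registers, which is what avoids the 4 mullw's plus load/store traffic the legalizer would otherwise emit.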
* new testcase (Chris Lattner, 2006-04-18; 1 file, -0/+11)
    llvm-svn: 27787

* Another entry (Evan Cheng, 2006-04-18; 1 file, -0/+35)
    llvm-svn: 27786

* Fix a build failure on Vladimir's tester. (Chris Lattner, 2006-04-18; 1 file, -0/+1)
    llvm-svn: 27785

* Another entry. (Evan Cheng, 2006-04-18; 1 file, -0/+151)
    llvm-svn: 27784

* Use movss to insert_vector_elt(v, s, 0). (Evan Cheng, 2006-04-17; 2 files, -19/+37)
    llvm-svn: 27782

* Turn x86 unaligned load/store intrinsics into aligned load/store instructions (Chris Lattner, 2006-04-17; 1 file, -1/+16)
    if the pointer is known aligned.

    llvm-svn: 27781

* Fix handling of calls in functions that use vectors. This fixes a crash on (Chris Lattner, 2006-04-17; 1 file, -13/+1)
    the code in GCC PR26546.

    llvm-svn: 27780

* Use two pinsrw to insert an element into v4i32 / v4f32 vector. (Evan Cheng, 2006-04-17; 1 file, -3/+30)
    llvm-svn: 27779

* remove done item (Chris Lattner, 2006-04-17; 1 file, -19/+2)
    llvm-svn: 27778

* Don't diddle VRSAVE if no registers need to be added/removed from it. This (Chris Lattner, 2006-04-17; 1 file, -4/+53)
    allows us to codegen functions as:

    _test_rol:
            vspltisw v2, -12
            vrlw v2, v2, v2
            blr

    instead of:

    _test_rol:
            mfvrsave r2, 256
            mr r3, r2
            mtvrsave r3
            vspltisw v2, -12
            vrlw v2, v2, v2
            mtvrsave r2
            blr

    Testcase here: CodeGen/PowerPC/vec_vrsave.ll

    llvm-svn: 27777

* New testcase, shouldn't touch vrsave (Chris Lattner, 2006-04-17; 1 file, -0/+7)
    llvm-svn: 27776

* Add a MachineInstr::eraseFromParent convenience method. (Chris Lattner, 2006-04-17; 1 file, -0/+9)
    llvm-svn: 27775

* Add some convenience methods. (Chris Lattner, 2006-04-17; 1 file, -0/+10)
    llvm-svn: 27774

* Encoding bug (Evan Cheng, 2006-04-17; 1 file, -1/+1)
    llvm-svn: 27773

* Vectors that are known live-in and live-out are clearly already marked in (Chris Lattner, 2006-04-17; 1 file, -0/+16)
    the vrsave register for the caller. This allows us to codegen a
    function as:

    _test_rol:
            mfspr r2, 256
            mr r3, r2
            mtspr 256, r3
            vspltisw v2, -12
            vrlw v2, v2, v2
            mtspr 256, r2
            blr

    instead of:

    _test_rol:
            mfspr r2, 256
            oris r3, r2, 40960
            mtspr 256, r3
            vspltisw v0, -12
            vrlw v2, v0, v0
            mtspr 256, r2
            blr

    llvm-svn: 27772

* Prefer to allocate V2-V5 before V0,V1. This lets us generate code like this: (Chris Lattner, 2006-04-17; 1 file, -1/+1)
    vspltisw v2, -12
    vrlw v2, v2, v2

    instead of:

    vspltisw v0, -12
    vrlw v2, v0, v0

    when a function is returning a value.

    llvm-svn: 27771

* Move some knowledge about registers out of the code emitter into the register info. (Chris Lattner, 2006-04-17; 3 files, -41/+47)
    llvm-svn: 27770

* Use a small table instead of macros to do this conversion. (Chris Lattner, 2006-04-17; 1 file, -10/+13)
    llvm-svn: 27769

* Implement v8i16, v16i8 splat using unpckl + pshufd. (Evan Cheng, 2006-04-17; 1 file, -16/+56)
    llvm-svn: 27768