path: root/llvm/lib
Commit message | Author | Date | Files | Lines
* pcmpeq* and pcmpgt* intrinsics.
  Evan Cheng | 2006-04-14 | 1 file | -2/+68 | llvm-svn: 27685
* psll*, psrl*, and psra* intrinsics.
  Evan Cheng | 2006-04-14 | 1 file | -1/+99 | llvm-svn: 27684
* Remove the .cvsignore file so this directory can be pruned.
  Reid Spencer | 2006-04-13 | 1 file | -1/+0 | llvm-svn: 27683
* Remove .cvsignore so that this directory can be pruned.
  Reid Spencer | 2006-04-13 | 1 file | -2/+0 | llvm-svn: 27682
* Handle some kernel code that ends in [0 x sbyte]. I think this is safe.
  Andrew Lenharth | 2006-04-13 | 1 file | -2/+11 | llvm-svn: 27672
* Expand some code with temporary variables to rid ourselves of the warning
  about "dereferencing type-punned pointer will break strict-aliasing rules".
  Reid Spencer | 2006-04-13 | 1 file | -7/+21 | llvm-svn: 27671
* Doh. PANDrm, etc. are not commutable.
  Evan Cheng | 2006-04-13 | 1 file | -9/+7 | llvm-svn: 27668
* Force non-darwin targets to use a static relo model. This fixes PR734,
  tested by CodeGen/Generic/vector.ll.
  Chris Lattner | 2006-04-13 | 1 file | -7/+8 | llvm-svn: 27657
* Add a note, move an altivec todo to the altivec list.
  Chris Lattner | 2006-04-13 | 2 files | -7/+16 | llvm-svn: 27654
* Linear -> constant time.
  Andrew Lenharth | 2006-04-13 | 1 file | -3/+3 | llvm-svn: 27652
* Add the README files to the distribution.
  Reid Spencer | 2006-04-13 | 6 files | -1/+6 | llvm-svn: 27651
* psad, pmax, pmin intrinsics.
  Evan Cheng | 2006-04-13 | 1 file | -1/+54 | llvm-svn: 27647
* Various SSE2 packed integer intrinsics: pmulhuw, pavgw, etc.
  Evan Cheng | 2006-04-13 | 1 file | -4/+71 | llvm-svn: 27645
* X86 SSE2 supports v8i16 multiplication.
  Evan Cheng | 2006-04-13 | 1 file | -0/+1 | llvm-svn: 27644
* Update.
  Evan Cheng | 2006-04-13 | 1 file | -0/+12 | llvm-svn: 27643
* padds{b|w}, paddus{b|w}, psubs{b|w}, psubus{b|w} intrinsics.
  Evan Cheng | 2006-04-13 | 1 file | -8/+78 | llvm-svn: 27639
* Naming inconsistency.
  Evan Cheng | 2006-04-13 | 1 file | -1/+1 | llvm-svn: 27638
* SSE / SSE2 conversion intrinsics.
  Evan Cheng | 2006-04-12 | 2 files | -33/+99 | llvm-svn: 27637
* All "integer" logical ops (pand, por, pxor) are now promoted to v2i64.
  Clean up and fix various logical ops issues.
  Evan Cheng | 2006-04-12 | 3 files | -148/+73 | llvm-svn: 27633
* Promote vector AND, OR, and XOR.
  Evan Cheng | 2006-04-12 | 1 file | -0/+27 | llvm-svn: 27632
* Make sure CVS versions of yacc and lex files get distributed.
  Reid Spencer | 2006-04-12 | 1 file | -0/+2 | llvm-svn: 27630
* Get rid of a signed/unsigned compare warning.
  Reid Spencer | 2006-04-12 | 1 file | -1/+1 | llvm-svn: 27625
* Add a new way to match vector constants, which makes it easier to bang
  bits of different types. Codegen spltw(0x7FFFFFFF) and spltw(0x80000000)
  without a constant pool load, implementing PowerPC/vec_constants.ll:test1.
  This compiles:

      typedef float vf __attribute__ ((vector_size (16)));
      typedef int vi __attribute__ ((vector_size (16)));
      void test(vi *P1, vi *P2, vf *P3) {
        *P1 &= (vi){0x80000000,0x80000000,0x80000000,0x80000000};
        *P2 &= (vi){0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF};
        *P3 = vec_abs((vector float)*P3);
      }

  to:

      _test:
          mfspr r2, 256
          oris r6, r2, 49152
          mtspr 256, r6
          vspltisw v0, -1
          vslw v0, v0, v0
          lvx v1, 0, r3
          vand v1, v1, v0
          stvx v1, 0, r3
          lvx v1, 0, r4
          vandc v1, v1, v0
          stvx v1, 0, r4
          lvx v1, 0, r5
          vandc v0, v1, v0
          stvx v0, 0, r5
          mtspr 256, r2
          blr

  instead of (with two constant pool entries):

      _test:
          mfspr r2, 256
          oris r6, r2, 49152
          mtspr 256, r6
          li r6, lo16(LCPI1_0)
          lis r7, ha16(LCPI1_0)
          li r8, lo16(LCPI1_1)
          lis r9, ha16(LCPI1_1)
          lvx v0, r7, r6
          lvx v1, 0, r3
          vand v0, v1, v0
          stvx v0, 0, r3
          lvx v0, r9, r8
          lvx v1, 0, r4
          vand v1, v1, v0
          stvx v1, 0, r4
          lvx v1, 0, r5
          vand v0, v1, v0
          stvx v0, 0, r5
          mtspr 256, r2
          blr

  GCC produces (with 2 cp entries):

      _test:
          mfspr r0,256
          stw r0,-4(r1)
          oris r0,r0,0xc00c
          mtspr 256,r0
          lis r2,ha16(LC0)
          lis r9,ha16(LC1)
          la r2,lo16(LC0)(r2)
          lvx v0,0,r3
          lvx v1,0,r5
          la r9,lo16(LC1)(r9)
          lwz r12,-4(r1)
          lvx v12,0,r2
          lvx v13,0,r9
          vand v0,v0,v12
          stvx v0,0,r3
          vspltisw v0,-1
          vslw v12,v0,v0
          vandc v1,v1,v12
          stvx v1,0,r5
          lvx v0,0,r4
          vand v0,v0,v13
          stvx v0,0,r4
          mtspr 256,r12
          blr

  Chris Lattner | 2006-04-12 | 2 files | -7/+91 | llvm-svn: 27624
* Turn casts into getelementptr's when possible. This enables SROA to be
  more aggressive in some cases where LLVMGCC 4 is inserting casts for no
  reason. This implements InstCombine/cast.ll:test27/28.
  Chris Lattner | 2006-04-12 | 1 file | -0/+23 | llvm-svn: 27620
* Don't emit useless warning messages.
  Reid Spencer | 2006-04-12 | 1 file | -2/+3 | llvm-svn: 27617
* Rename get_VSPLI_elt -> get_VSPLTI_elt. Canonicalize BUILD_VECTOR's that
  match VSPLTI's into a single type for each form, eliminating a bunch of
  Pat patterns in the .td file and allowing us to CSE stuff more
  aggressively. This implements PowerPC/buildvec_canonicalize.ll:VSPLTI.
  Chris Lattner | 2006-04-12 | 3 files | -32/+40 | llvm-svn: 27614
* Promote v4i32, v8i16, v16i8 load to v2i64 load.
  Evan Cheng | 2006-04-12 | 2 files | -57/+41 | llvm-svn: 27612
* Ensure that zero vectors are always v4i32, which forces them to CSE with
  each other. This implements CodeGen/PowerPC/vxor-canonicalize.ll.
  Chris Lattner | 2006-04-12 | 2 files | -8/+13 | llvm-svn: 27609
* Vector type promotion for ISD::LOAD and ISD::SELECT.
  Evan Cheng | 2006-04-12 | 1 file | -9/+23 | llvm-svn: 27606
* Implement support for the formal_arguments node. To get this, targets
  should custom legalize it and remove their
  XXXTargetLowering::LowerArguments overload.
  Chris Lattner | 2006-04-12 | 3 files | -3/+128 | llvm-svn: 27604
* Various SSE2 conversion intrinsics.
  Evan Cheng | 2006-04-12 | 1 file | -39/+94 | llvm-svn: 27603
* Don't memoize vloads in the load map! Don't memoize them anywhere here;
  let getNode do it. This fixes CodeGen/Generic/2006-04-11-vecload.ll.
  Chris Lattner | 2006-04-12 | 1 file | -2/+0 | llvm-svn: 27602
* Added __builtin_ia32_storelv4si, __builtin_ia32_movqv4si,
  __builtin_ia32_loadlv4si, __builtin_ia32_loaddqu, __builtin_ia32_storedqu.
  Evan Cheng | 2006-04-11 | 1 file | -2/+21 | llvm-svn: 27599
* Fix SingleSource/UnitTests/Vector/sumarray-dbl.
  Nate Begeman | 2006-04-11 | 1 file | -4/+3 | llvm-svn: 27594
* Fix PR727, correctly handling large stack alignments on ppc.
  Nate Begeman | 2006-04-11 | 1 file | -32/+28 | llvm-svn: 27593
* We have a shuffle instr, add an example.
  Chris Lattner | 2006-04-11 | 1 file | -5/+6 | llvm-svn: 27592
* gcc lowers SSE prefetch into the generic prefetch intrinsic. Need to add
  support later.
  Evan Cheng | 2006-04-11 | 1 file | -8/+4 | llvm-svn: 27591
* Misc. intrinsics.
  Evan Cheng | 2006-04-11 | 1 file | -13/+13 | llvm-svn: 27590
* Suppress the debug label when not in debug mode.
  Jim Laskey | 2006-04-11 | 1 file | -1/+1 | llvm-svn: 27588
* movnt* and maskmovdqu intrinsics.
  Evan Cheng | 2006-04-11 | 2 files | -16/+44 | llvm-svn: 27587
* Only get Tmp2 for cases where the number of operands is > 1. Fixed
  return void.
  Evan Cheng | 2006-04-11 | 1 file | -1/+1 | llvm-svn: 27586
* Add some todos.
  Chris Lattner | 2006-04-11 | 1 file | -0/+8 | llvm-svn: 27580
* Vector function results go into V2 according to GCC. The darwin ABI doc
  doesn't say where they go :-/
  Chris Lattner | 2006-04-11 | 1 file | -1/+10 | llvm-svn: 27579
* Add basic support for legalizing returns of vectors.
  Chris Lattner | 2006-04-11 | 1 file | -9/+36 | llvm-svn: 27578
* Move some return-handling code from LowerArguments to the ISD::RET
  handling stuff. No functionality change.
  Chris Lattner | 2006-04-11 | 1 file | -20/+9 | llvm-svn: 27577
* Added support for _mm_move_ss and _mm_move_sd.
  Evan Cheng | 2006-04-11 | 3 files | -2/+46 | llvm-svn: 27575
* Use existing information.
  Jim Laskey | 2006-04-10 | 2 files | -10/+14 | llvm-svn: 27574
* Implement vec_shuffle.ll:test3.
  Chris Lattner | 2006-04-10 | 1 file | -2/+17 | llvm-svn: 27573
* Implement InstCombine/vec_shuffle.ll:test[12].
  Chris Lattner | 2006-04-10 | 1 file | -0/+62 | llvm-svn: 27571
* Remove some bogus patterns; clean up.
  Evan Cheng | 2006-04-10 | 1 file | -53/+20 | llvm-svn: 27569