| Commit message | Author | Date | Files | Lines (-/+) |
|---|---|---|---|---|
| ... | | | | |
| hsubp{s\|d} encoding bug (llvm-svn: 27720) | Evan Cheng | 2006-04-15 | 1 | -4/+4 |
| Silly bug (llvm-svn: 27719) | Evan Cheng | 2006-04-15 | 3 | -18/+11 |
| Do not use movs{h\|l}dup for a shuffle with a single non-undef node. (llvm-svn: 27718) | Evan Cheng | 2006-04-15 | 1 | -2/+14 |
| significant cleanups to code that uses insert/extractelt heavily. This builds maximal shuffles out of them where possible. (llvm-svn: 27717) | Chris Lattner | 2006-04-15 | 1 | -0/+126 |
| Added SSE (and other) entries to foldMemoryOperand(). (llvm-svn: 27716) | Evan Cheng | 2006-04-14 | 1 | -19/+155 |
| Some clean up (llvm-svn: 27715) | Evan Cheng | 2006-04-14 | 1 | -78/+81 |
| Allow undef in a shuffle mask (llvm-svn: 27714) | Chris Lattner | 2006-04-14 | 1 | -0/+1 |
| Move these ctors out of line (llvm-svn: 27713) | Chris Lattner | 2006-04-14 | 1 | -0/+13 |
| Last few SSE3 intrinsics. (llvm-svn: 27711) | Evan Cheng | 2006-04-14 | 3 | -32/+189 |
| Teach scalarrepl to promote unions of vectors and floats, producing insert/extractelement operations. This implements Transforms/ScalarRepl/vector_promote.ll. (llvm-svn: 27710) | Chris Lattner | 2006-04-14 | 1 | -46/+101 |
| Misc. SSE2 intrinsics: clflush, lfence, mfence (llvm-svn: 27699) | Evan Cheng | 2006-04-14 | 1 | -2/+11 |
| We were not adjusting the frame size to ensure proper alignment when alloca / vla are present in the function. This causes a crash when a leaf function allocates space on the stack used to store / load with 128-bit SSE instructions. (llvm-svn: 27698) | Evan Cheng | 2006-04-14 | 1 | -30/+23 |
| New entry (llvm-svn: 27697) | Evan Cheng | 2006-04-14 | 1 | -0/+5 |
| Don't print out the install command for Intrinsics.gen unless in VERBOSE mode. (llvm-svn: 27696) | Reid Spencer | 2006-04-14 | 1 | -1/+2 |
| Make this assertion better (llvm-svn: 27695) | Chris Lattner | 2006-04-14 | 1 | -1/+1 |
| Move the rest of the PPCTargetLowering::LowerOperation cases out into separate functions, for simplicity and code clarity. (llvm-svn: 27693) | Chris Lattner | 2006-04-14 | 1 | -468/+529 |
| Pull the VECTOR_SHUFFLE and BUILD_VECTOR lowering code out into separate functions, which makes the code much cleaner :) (llvm-svn: 27692) | Chris Lattner | 2006-04-14 | 1 | -147/+155 |
| Implement value numbering for vector operations, implementing Regression/Transforms/GCSE/vectorops.ll. (llvm-svn: 27691) | Chris Lattner | 2006-04-14 | 1 | -32/+38 |
| pcmpeq* and pcmpgt* intrinsics. (llvm-svn: 27685) | Evan Cheng | 2006-04-14 | 1 | -2/+68 |
| psll*, psrl*, and psra* intrinsics. (llvm-svn: 27684) | Evan Cheng | 2006-04-14 | 1 | -1/+99 |
| Remove the .cvsignore file so this directory can be pruned. (llvm-svn: 27683) | Reid Spencer | 2006-04-13 | 1 | -1/+0 |
| Remove .cvsignore so that this directory can be pruned. (llvm-svn: 27682) | Reid Spencer | 2006-04-13 | 1 | -2/+0 |
| Handle some kernel code that ends in [0 x sbyte]. I think this is safe. (llvm-svn: 27672) | Andrew Lenharth | 2006-04-13 | 1 | -2/+11 |
| Expand some code with temporary variables to rid ourselves of the warning about "dereferencing type-punned pointer will break strict-aliasing rules". (llvm-svn: 27671) | Reid Spencer | 2006-04-13 | 1 | -7/+21 |
| Doh. PANDrm, etc. are not commutable. (llvm-svn: 27668) | Evan Cheng | 2006-04-13 | 1 | -9/+7 |
| Force non-darwin targets to use a static relocation model. This fixes PR734, tested by CodeGen/Generic/vector.ll. (llvm-svn: 27657) | Chris Lattner | 2006-04-13 | 1 | -7/+8 |
| add a note, move an altivec todo to the altivec list. (llvm-svn: 27654) | Chris Lattner | 2006-04-13 | 2 | -7/+16 |
| linear -> constant time (llvm-svn: 27652) | Andrew Lenharth | 2006-04-13 | 1 | -3/+3 |
| Add the README files to the distribution. (llvm-svn: 27651) | Reid Spencer | 2006-04-13 | 6 | -1/+6 |
| psad, pmax, pmin intrinsics. (llvm-svn: 27647) | Evan Cheng | 2006-04-13 | 1 | -1/+54 |
| Various SSE2 packed integer intrinsics: pmulhuw, pavgw, etc. (llvm-svn: 27645) | Evan Cheng | 2006-04-13 | 1 | -4/+71 |
| X86 SSE2 supports v8i16 multiplication (llvm-svn: 27644) | Evan Cheng | 2006-04-13 | 1 | -0/+1 |
| Update (llvm-svn: 27643) | Evan Cheng | 2006-04-13 | 1 | -0/+12 |
| padds{b\|w}, paddus{b\|w}, psubs{b\|w}, psubus{b\|w} intrinsics. (llvm-svn: 27639) | Evan Cheng | 2006-04-13 | 1 | -8/+78 |
| Fix a naming inconsistency. (llvm-svn: 27638) | Evan Cheng | 2006-04-13 | 1 | -1/+1 |
| SSE / SSE2 conversion intrinsics. (llvm-svn: 27637) | Evan Cheng | 2006-04-12 | 2 | -33/+99 |
| All "integer" logical ops (pand, por, pxor) are now promoted to v2i64. Clean up and fix various logical ops issues. (llvm-svn: 27633) | Evan Cheng | 2006-04-12 | 3 | -148/+73 |
| Promote vector AND, OR, and XOR (llvm-svn: 27632) | Evan Cheng | 2006-04-12 | 1 | -0/+27 |
| Make sure CVS versions of yacc and lex files get distributed. (llvm-svn: 27630) | Reid Spencer | 2006-04-12 | 1 | -0/+2 |
| Get rid of a signed/unsigned compare warning. (llvm-svn: 27625) | Reid Spencer | 2006-04-12 | 1 | -1/+1 |
| Add a new way to match vector constants, which makes it easier to bang bits of different types. Codegen spltw(0x7FFFFFFF) and spltw(0x80000000) without a constant pool load, implementing PowerPC/vec_constants.ll:test1. This compiles: `typedef float vf __attribute__ ((vector_size (16))); typedef int vi __attribute__ ((vector_size (16))); void test(vi *P1, vi *P2, vf *P3) { *P1 &= (vi){0x80000000,0x80000000,0x80000000,0x80000000}; *P2 &= (vi){0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF}; *P3 = vec_abs((vector float)*P3); }` to: `_test: mfspr r2, 256 oris r6, r2, 49152 mtspr 256, r6 vspltisw v0, -1 vslw v0, v0, v0 lvx v1, 0, r3 vand v1, v1, v0 stvx v1, 0, r3 lvx v1, 0, r4 vandc v1, v1, v0 stvx v1, 0, r4 lvx v1, 0, r5 vandc v0, v1, v0 stvx v0, 0, r5 mtspr 256, r2 blr` instead of (with two constant pool entries): `_test: mfspr r2, 256 oris r6, r2, 49152 mtspr 256, r6 li r6, lo16(LCPI1_0) lis r7, ha16(LCPI1_0) li r8, lo16(LCPI1_1) lis r9, ha16(LCPI1_1) lvx v0, r7, r6 lvx v1, 0, r3 vand v0, v1, v0 stvx v0, 0, r3 lvx v0, r9, r8 lvx v1, 0, r4 vand v1, v1, v0 stvx v1, 0, r4 lvx v1, 0, r5 vand v0, v1, v0 stvx v0, 0, r5 mtspr 256, r2 blr`. GCC produces (with 2 cp entries): `_test: mfspr r0,256 stw r0,-4(r1) oris r0,r0,0xc00c mtspr 256,r0 lis r2,ha16(LC0) lis r9,ha16(LC1) la r2,lo16(LC0)(r2) lvx v0,0,r3 lvx v1,0,r5 la r9,lo16(LC1)(r9) lwz r12,-4(r1) lvx v12,0,r2 lvx v13,0,r9 vand v0,v0,v12 stvx v0,0,r3 vspltisw v0,-1 vslw v12,v0,v0 vandc v1,v1,v12 stvx v1,0,r5 lvx v0,0,r4 vand v0,v0,v13 stvx v0,0,r4 mtspr 256,r12 blr`. (llvm-svn: 27624) | Chris Lattner | 2006-04-12 | 2 | -7/+91 |
| Turn casts into getelementptr's when possible. This enables SROA to be more aggressive in some cases where LLVMGCC 4 is inserting casts for no reason. This implements InstCombine/cast.ll:test27/28. (llvm-svn: 27620) | Chris Lattner | 2006-04-12 | 1 | -0/+23 |
| Don't emit useless warning messages. (llvm-svn: 27617) | Reid Spencer | 2006-04-12 | 1 | -2/+3 |
| Rename get_VSPLI_elt -> get_VSPLTI_elt. Canonicalize BUILD_VECTOR's that match VSPLTI's into a single type for each form, eliminating a bunch of Pat patterns in the .td file and allowing us to CSE stuff more aggressively. This implements PowerPC/buildvec_canonicalize.ll:VSPLTI. (llvm-svn: 27614) | Chris Lattner | 2006-04-12 | 3 | -32/+40 |
| Promote v4i32, v8i16, v16i8 load to v2i64 load. (llvm-svn: 27612) | Evan Cheng | 2006-04-12 | 2 | -57/+41 |
| Ensure that zero vectors are always v4i32, which forces them to CSE with each other. This implements CodeGen/PowerPC/vxor-canonicalize.ll. (llvm-svn: 27609) | Chris Lattner | 2006-04-12 | 2 | -8/+13 |
| Vector type promotion for ISD::LOAD and ISD::SELECT (llvm-svn: 27606) | Evan Cheng | 2006-04-12 | 1 | -9/+23 |
| Implement support for the formal_arguments node. To get this, targets should custom legalize it and remove their XXXTargetLowering::LowerArguments overload. (llvm-svn: 27604) | Chris Lattner | 2006-04-12 | 3 | -3/+128 |
| Various SSE2 conversion intrinsics (llvm-svn: 27603) | Evan Cheng | 2006-04-12 | 1 | -39/+94 |
| Don't memoize vloads in the load map! Don't memoize them anywhere here, let getNode do it. This fixes CodeGen/Generic/2006-04-11-vecload.ll. (llvm-svn: 27602) | Chris Lattner | 2006-04-12 | 1 | -2/+0 |