path: root/llvm/lib/Target/X86/X86Instr64bit.td
Commit message | Author | Date | Files | Lines
...
* Also LXCHG64 -> XCHG64rm.
  Evan Cheng | 2008-04-19 | 1 file | -3/+3
  llvm-svn: 49948
* - Fix atomic operation JIT encoding.
  Evan Cheng | 2008-04-18 | 1 file | -12/+3
  - Remove unused instructions.
  llvm-svn: 49921
* Allow certain lea instructions to be rematerialized.
  Evan Cheng | 2008-03-27 | 1 file | -0/+1
  llvm-svn: 48855
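  A minimal sketch of what this change amounts to, assuming the usual shape of the LEA64r definition in this file (the committed hunk is only the one added flag line; operand class names here are assumptions):

    // Sketch only: isReMaterializable tells the register allocator it may
    // recompute the lea address at a use point instead of spilling and
    // reloading it, since the result depends only on its operands.
    let isReMaterializable = 1 in
    def LEA64r : RI<0x8D, MRMSrcMem, (outs GR64:$dst), (ins lea64mem:$src),
                    "lea{q}\t{$src|$dst}, {$dst|$src}",
                    [(set GR64:$dst, lea64addr:$src)]>;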
* Don't lose incoming argument registers. Fix documentation style.
  Arnold Schwaighofer | 2008-03-19 | 1 file | -2/+2
  llvm-svn: 48545
* Make insert_subreg a two-address instruction, vastly simplifying the LowerSubregs pass.
  Christopher Lamb | 2008-03-16 | 1 file | -14/+14
  Add a new TII, subreg_to_reg, which is like insert_subreg except that it takes an immediate implicit value to insert into rather than a register.
  llvm-svn: 48412
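  An illustration of the distinction (operand spellings and the subreg index name are assumptions, not the committed hunk):

    // Illustrative forms only:
    //
    //   (INSERT_SUBREG GR64:$super, GR32:$src, subidx)
    //     reads $super and inserts $src into the named subregister; making
    //     it two-address ties $super to the result register.
    //
    //   (SUBREG_TO_REG (i64 0), GR32:$src, subidx)
    //     reads no super register; the immediate 0 asserts what the bits
    //     outside the inserted subregister contain.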
* Replace all target specific implicit def instructions with a target independent one: TargetInstrInfo::IMPLICIT_DEF.
  Evan Cheng | 2008-03-15 | 1 file | -5/+0
  llvm-svn: 48380
* Fix a number of encoding bugs. SSE 4.1 instructions MPSADBWrri, PINSRDrr, etc. have an 8-bit immediate field (ImmT == Imm8).
  Evan Cheng | 2008-03-14 | 1 file | -4/+4
  llvm-svn: 48360
* Get rid of a pseudo instruction and replace it with a subreg based operation on real instructions, ridding the asm printers of the hack used to do this previously.
  Christopher Lamb | 2008-03-13 | 1 file | -19/+24
  In the process, update LowerSubregs to be careful about eliminating copies that have side effects. Note: the coalescer will have to be careful about this too, when it starts coalescing insert_subreg nodes.
  llvm-svn: 48329
* Revert 48125, 48126, and 48130 for now to unbreak some x86-64 tests.
  Evan Cheng | 2008-03-10 | 1 file | -5/+6
  llvm-svn: 48167
* Allow insert_subreg into implicit, target-specific values.
  Christopher Lamb | 2008-03-10 | 1 file | -6/+5
  Change insert/extract subreg instructions to be able to be used in TableGen patterns. Use the above features to reimplement an x86-64 pseudo instruction as a pattern.
  llvm-svn: 48130
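  A sketch of the kind of pattern this enables (the subreg index name and exact operands are assumptions, not the committed text): the zero-extension pseudo can become a plain pattern, because a 32-bit move already zeros bits 63..32 on x86-64.

    // Sketch only: i64 zero-extension written with subreg_to_reg. The
    // immediate 0 records that the upper 32 bits are already zero after
    // the 32-bit move, so no masking instruction is needed.
    def : Pat<(i64 (zext GR32:$src)),
              (SUBREG_TO_REG (i64 0), (MOV32rr GR32:$src), x86_subreg_32bit)>;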
* x86-64 atomics.
  Andrew Lenharth | 2008-03-04 | 1 file | -0/+31
  llvm-svn: 47903
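  As a hedged sketch of what the added definitions look like (class, operand, and pattern-node names are assumptions, not the committed 31 lines), one 64-bit atomic read-modify-write instruction:

    // Sketch only: a lock-prefixed 64-bit exchange-and-add (xadd), which
    // returns the previous memory value in $dst and clobbers EFLAGS.
    let Defs = [EFLAGS] in
    def LXADD64 : RI<0xC1, MRMSrcMem, (outs GR64:$dst),
                     (ins i64mem:$ptr, GR64:$val),
                     "lock\n\txadd{q}\t{$val, $ptr|$ptr, $val}",
                     [(set GR64:$dst,
                           (atomic_load_add_64 addr:$ptr, GR64:$val))]>,
                   TB, LOCK;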
* Compile x86-64-and-mask.ll into:
  Chris Lattner | 2008-02-27 | 1 file | -0/+13

  _test:
          movl %edi, %eax
          ret

  instead of:

  _test:
          movl $4294967295, %ecx
          movq %rdi, %rax
          andq %rcx, %rax
          ret

  It would be great to write this as a Pat pattern that used subregs instead of a 'pseudo' instruction, but I don't know how to do that in td files.
  llvm-svn: 47658
* SSE4.1 64b integer insert/extract pattern support.
  Nate Begeman | 2008-02-12 | 1 file | -5/+38
  Move formats into the formats file.
  llvm-svn: 47035
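  A rough sketch of the sort of patterns involved (instruction mnemonics from the subject; def names and operand details are assumptions): mapping i64 vector-element insert/extract onto SSE4.1's pinsrq/pextrq.

    // Sketch only: tie 64-bit element insert/extract on v2i64 vectors to
    // the SSE4.1 pinsrq/pextrq register forms.
    def : Pat<(v2i64 (vector_insert VR128:$vec, GR64:$elt, imm:$idx)),
              (PINSRQrr VR128:$vec, GR64:$elt, imm:$idx)>;
    def : Pat<(i64 (vector_extract (v2i64 VR128:$vec), imm:$idx)),
              (PEXTRQrr VR128:$vec, imm:$idx)>;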
* Fix an x86-64 codegen deficiency. Allow gv + offset when using rip addressing mode.
  Evan Cheng | 2008-02-07 | 1 file | -4/+4

  Before:
  _main:
          subq $8, %rsp
          leaq _X(%rip), %rax
          movsd 8(%rax), %xmm1
          movss _X(%rip), %xmm0
          call _t
          xorl %ecx, %ecx
          movl %ecx, %eax
          addq $8, %rsp
          ret

  Now:
  _main:
          subq $8, %rsp
          movsd _X+8(%rip), %xmm1
          movss _X(%rip), %xmm0
          call _t
          xorl %ecx, %ecx
          movl %ecx, %eax
          addq $8, %rsp
          ret

  Notice there is another idiotic codegen issue that needs to be fixed asap:
          xorl %ecx, %ecx
          movl %ecx, %eax
  llvm-svn: 46850
* SSE 4.1 intrinsics and detection.
  Nate Begeman | 2008-02-03 | 1 file | -0/+10
  llvm-svn: 46681
* Work in progress. This patch *fixes* x86-64 calls which are modelled as StructRet but really should be returned in registers, e.g. _Complex long double and some 128-bit aggregates.
  Evan Cheng | 2008-01-29 | 1 file | -1/+1
  This is a short term solution that is necessary only because llvm, for now, cannot model i128 nor calls with multiple results.
  Status: This only works for direct calls, and only the caller side is done. Disabled for now.
  llvm-svn: 46527
* The last pieces needed for loading arbitrary precision integers.
  Duncan Sands | 2008-01-23 | 1 file | -3/+1
  This won't actually work (and most of the code is dead) unless the new legalization machinery is turned on. While there, I rationalized the handling of i1, and removed some bogus (and unused) sextload patterns. For i1, this could result in microscopically better code for some architectures (not X86). It might also result in worse code if annotating with AssertZExt nodes turns out to be more harmful than helpful.
  llvm-svn: 46280
* Remove xchg and shift-reg-by-1 instructions, which are dead.
  Chris Lattner | 2008-01-11 | 1 file | -14/+2
  llvm-svn: 45870
* Rename Int_CVTSI642SSr* to Int_CVTSI2SS64r* for naming consistency and remove unused instructions.
  Evan Cheng | 2008-01-11 | 1 file | -26/+16
  llvm-svn: 45861
* More flags set right.
  Chris Lattner | 2008-01-11 | 1 file | -0/+1
  llvm-svn: 45860
* Start inferring side effect information more aggressively, and fix many bugs in the x86 backend where instructions were not marked mayStore/mayLoad, and perf issues where instructions were not marked neverHasSideEffects.
  Chris Lattner | 2008-01-10 | 1 file | -15/+30
  It would be really nice if we could write patterns for copy instructions. I have audited all the x86 instructions down to MOVDQAmr. The flags on other instructions, and on other targets, are probably not right in all cases, but no clients that are enabled by default currently use this info.
  llvm-svn: 45829
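  A hedged illustration of the three flag annotations involved (definitions abridged and assumed, not the committed hunk):

    // Sketch only: loads are marked mayLoad, stores mayStore, and pure
    // register-to-register moves neverHasSideEffects.
    let mayLoad = 1 in
    def MOV64rm : RI<0x8B, MRMSrcMem, (outs GR64:$dst), (ins i64mem:$src),
                     "mov{q}\t{$src, $dst|$dst, $src}",
                     [(set GR64:$dst, (load addr:$src))]>;
    let mayStore = 1 in
    def MOV64mr : RI<0x89, MRMDestMem, (outs), (ins i64mem:$dst, GR64:$src),
                     "mov{q}\t{$src, $dst|$dst, $src}",
                     [(store GR64:$src, addr:$dst)]>;
    let neverHasSideEffects = 1 in
    def MOV64rr : RI<0x89, MRMDestReg, (outs GR64:$dst), (ins GR64:$src),
                     "mov{q}\t{$src, $dst|$dst, $src}", []>;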
* Rename X86InstrX86-64.td -> X86Instr64bit.td.
  Chris Lattner | 2008-01-10 | 1 file | -0/+1276
  llvm-svn: 45826