path: root/llvm/lib/Target/X86/X86InstrCompiler.td
Commit message (Author, Date, Files, Lines changed)
...
* Revert "Mark vastart_save_xmm_regs as changing EFLAGS"Duncan P. N. Exon Smith2013-12-171-1/+1
| | | | | | | | | This reverts commit r197469. The sanitizer and dragonegg buildbots are failing, I think because of this change. Reverting until I figure out why. llvm-svn: 197481
* Mark vastart_save_xmm_regs as changing EFLAGS [Duncan P. N. Exon Smith, 2013-12-17, 1 file, -1/+1]
  The vastart_save_xmm_regs pseudo-instruction expands to a test and a branch, so it modifies EFLAGS. Mark it so, or else the scheduler might place it in the middle of another test+branch.
  This fixes a bug exposed by r192750, which turned on the MI Scheduler for X86.
  <rdar://problem/15627766>
  llvm-svn: 197469
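  For illustration, a minimal TableGen sketch of such a change (operand list simplified; the exact definition in X86InstrCompiler.td may differ):

      // Declare EFLAGS as defined by the pseudo, so the scheduler will not
      // place it between another compare and its conditional branch.
      let usesCustomInserter = 1, Defs = [EFLAGS] in
      def VASTART_SAVE_XMM_REGS : I<0, Pseudo,
          (outs), (ins GR8:$al, i64imm:$regsavefi, i64imm:$offset, variable_ops),
          "#VASTART_SAVE_XMM_REGS $al, $regsavefi, $offset",
          [(X86vastart_save_xmm_regs GR8:$al, imm:$regsavefi, imm:$offset)]>;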
* AVX-512: Implemented CMOV for 512-bit vectors [Elena Demikhovsky, 2013-10-31, 1 file, -0/+18]
  llvm-svn: 193747
* Revert part of a fix from 2010; changes since then: [Eric Christopher, 2013-10-14, 1 file, -1/+1]
  a) x86-64 TLS has been documented
  b) the code path should use movq for the correct relocation to be generated.
  I've also added a FIXME for the test case: the generated code should be improved, and should look something like what is documented in the TLS ABI document.
  llvm-svn: 192631
* Remove some extraneous whitespace. [Eric Christopher, 2013-10-14, 1 file, -4/+0]
  llvm-svn: 192629
* Mark that the _ftol2 function used by Windows on x86 to handle fptoui modifies ECX. [Craig Topper, 2013-07-21, 1 file, -3/+4]
  llvm-svn: 186787
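  A plausible sketch of what such a fix looks like, modeled on the WIN_FTOL_* pseudos introduced in r151382 (see the entry further down this log); the surrounding attributes here are assumptions:

      // Add ECX to the clobber list, since the _ftol2 runtime routine
      // modifies it in addition to the registers already listed.
      let Defs = [EAX, EDX, ECX, ST0, ST1, EFLAGS] in
      def WIN_FTOL_32 : I<0, Pseudo, (outs), (ins RFP32:$src),
                          "# win32 fptoui", [(X86WinFTOL RFP32:$src)]>,
                        Requires<[In32BitMode]>;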
* X86: change MOV64ri64i32 into MOV32ri64 [Tim Northover, 2013-06-01, 1 file, -18/+12]
  The MOV64ri64i32 instruction required hacky MCInst lowering because it was allocated as setting a GR64, but the eventual instruction ("movl") only set a GR32. This converts it into a so-called "MOV32ri64", which still accepts an (appropriate) 64-bit immediate but defines a GR32. This is then converted to the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.
  This re-commit fixes a typo in the opcode field of the original patch, which should make the legacy JIT work again (and adds a test for that problem).
  llvm-svn: 183068
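  A sketch of the resulting definitions (the immediate predicate name and encoding classes are assumptions, not the verbatim patch):

      // A "movl"-style instruction: takes a 64-bit immediate known to fit in
      // 32 bits, but only defines a GR32.
      def MOV32ri64 : Ii32<0xB8, AddRegFrm, (outs GR32:$dst),
                           (ins i64i32imm:$src), "", []>;

      // Widen to the full 64-bit register with SUBREG_TO_REG, relying on the
      // hardware's implicit zeroing of bits 63:32 on a 32-bit write.
      def : Pat<(i64 i64immZExt32:$src),
                (SUBREG_TO_REG (i64 0), (MOV32ri64 i64immZExt32:$src),
                               sub_32bit)>;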
* Temporarily Revert "X86: change MOV64ri64i32 into MOV32ri64" [Eric Christopher, 2013-05-31, 1 file, -12/+18]
  Reverted as it seems to have caused PR16192 and other JIT-related failures.
  llvm-svn: 183059
* X86: change MOV64ri64i32 into MOV32ri64 [Tim Northover, 2013-05-31, 1 file, -18/+12]
  The MOV64ri64i32 instruction required hacky MCInst lowering because it was allocated as setting a GR64, but the eventual instruction ("movl") only set a GR32. This converts it into a so-called "MOV32ri64", which still accepts an (appropriate) 64-bit immediate but defines a GR32. This is then converted to the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.
  llvm-svn: 182991
* X86: use sub-register sequences for MOV*r0 operations [Tim Northover, 2013-05-30, 1 file, -27/+9]
  Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions, it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg") and obtain the other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is smaller and partial register updates can sometimes be avoided.
  Until recently, this sequence was a barrier to rematerialization; that should now be fixed, so it's an appropriate time to make the change.
  llvm-svn: 182928
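  A sketch of the scheme described (pattern shapes assumed, close to but not verbatim from the patch):

      // One dependency-breaking zeroing pseudo, expanded to "xorl %reg, %reg".
      let Defs = [EFLAGS], isReMaterializable = 1, isAsCheapAsAMove = 1 in
      def MOV32r0 : I<0, Pseudo, (outs GR32:$dst), (ins), "",
                      [(set GR32:$dst, 0)]>;

      // The other widths are derived via sub-register operations.
      def : Pat<(i8 0),  (EXTRACT_SUBREG (MOV32r0), sub_8bit)>;
      def : Pat<(i16 0), (EXTRACT_SUBREG (MOV32r0), sub_16bit)>;
      def : Pat<(i64 0), (SUBREG_TO_REG (i64 0), (MOV32r0), sub_32bit)>;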
* X86: change zext moves to use sub-register infrastructure. [Tim Northover, 2013-05-30, 1 file, -11/+22]
  32-bit writes on amd64 zero out the high bits of the corresponding 64-bit register. LLVM makes use of this for zero-extension, but until now relied on custom MCLowering and other code to fix up instructions. Now that we have proper handling of sub-registers, this can be done by creating SUBREG_TO_REG instructions at selection time.
  Should be no change in functionality.
  llvm-svn: 182921
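  A sketch of the selection-time pattern this enables (assumed shape):

      // (zext i32 -> i64) becomes a plain 32-bit register move wrapped in
      // SUBREG_TO_REG; the hardware already zeroes bits 63:32.
      def : Pat<(i64 (zext GR32:$src)),
                (SUBREG_TO_REG (i64 0), (MOV32rr GR32:$src), sub_32bit)>;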
* Annotate X86InstrCompiler.td with SchedRW lists. [Jakob Stoklund Olesen, 2013-03-25, 1 file, -10/+20]
  llvm-svn: 177936
* Annotate X86InstrCompiler.td with SchedRW lists. [Jakob Stoklund Olesen, 2013-03-19, 1 file, -8/+9]
  Add a new WriteZero SchedWrite type for the common dependency-breaking instructions that clear a register.
  llvm-svn: 177442
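  A sketch of what the annotation looks like (the SchedWrite type itself lives in the X86 scheduling model files; the placement here is illustrative):

      // Declare a SchedWrite class for dependency-breaking zero idioms ...
      def WriteZero : SchedWrite;

      // ... and attach it to a register-clearing pseudo via Sched<[...]>.
      def MOV32r0 : I<0, Pseudo, (outs GR32:$dst), (ins), "",
                      [(set GR32:$dst, 0)]>, Sched<[WriteZero]>;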
* Remove an invalid and unnecessary Pat pattern from the X86 backend: [Ulrich Weigand, 2013-03-19, 1 file, -3/+0]

      def : Pat<(load (i64 (X86Wrapper tglobaltlsaddr :$dst))),
                (MOV64rm tglobaltlsaddr :$dst)>;

  This pattern is invalid because the MOV64rm instruction expects a source operand of type "i64mem", which is a subclass of X86MemOperand and thus actually consists of five MI operands, but the Pat provides only a single MI operand ("tglobaltlsaddr" matches an SDNode of type ISD::TargetGlobalTLSAddress and provides a single output). Thus, if the pattern were ever matched, subsequent uses of the MOV64rm instruction pattern would access uninitialized memory. In addition, with the TableGen patch I'm about to check in, this would actually be reported as a build-time error.
  Fortunately, the pattern does in fact never match, for at least two independent reasons.
  First, the code generator never actually generates a pattern of the form (load (X86Wrapper (tglobaltlsaddr))). For most combinations of TLS and code models, (tglobaltlsaddr) represents just an offset that needs to be added to some base register, so it is never directly dereferenced. The only exception is the initial-exec model, where (tglobaltlsaddr) refers to the (pc-relative) address of a GOT slot, which *is* in fact directly dereferenced: but in that case, the X86WrapperRIP node is used, not X86Wrapper, so the Pat doesn't match.
  Second, even if some pattern along those lines *were* ever generated, we should not need an extra Pat to match it. Instead, the original MOV64rm instruction pattern ought to match directly, since it uses an "addr" operand, which is implemented via the SelectAddr C++ routine; this routine is supposed to accept the full range of input DAGs that may be implemented by a single mov instruction, including those cases involving ISD::TargetGlobalTLSAddress (and actually does so, e.g., in the initial-exec case as above).
  To avoid build breaks (due to the above-mentioned error) after the TableGen patch is checked in, I'm removing this Pat here.
  llvm-svn: 177426
* X86: Disable cmov-memory patterns on subtargets without cmov. [Benjamin Kramer, 2013-02-23, 1 file, -6/+8]
  Fixes PR15115.
  llvm-svn: 175962
* Fix an issue of pseudo atomic instruction DAG scheduling [Michael Liao, 2013-01-22, 1 file, -1/+6]
  - Add the list of physical registers clobbered by pseudo atomic instructions. Physical registers are clobbered when pseudo atomic instructions are expanded; add them to the clobber list to prevent the DAG scheduler from mis-scheduling them, since these instructions are declared side-effect free.
  - Add test case from Michael Kuperstein <michael.m.kuperstein@intel.com>
  llvm-svn: 173200
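  A sketch of the shape of the fix (register list assumed from the CMPXCHG spin-loop expansion described in the r164281 entry further down):

      // The post-RA expansion is a CMPXCHG loop that uses EAX and EFLAGS, so
      // declare them clobbered; otherwise the scheduler may move other users
      // of these registers across the "side-effect free" pseudo.
      let usesCustomInserter = 1, mayLoad = 1, mayStore = 1,
          Defs = [EFLAGS, EAX] in
      def ATOMAND32 : I<0, Pseudo, (outs GR32:$dst),
                        (ins i32mem:$ptr, GR32:$val), "#ATOMAND32 PSEUDO!",
                        [(set GR32:$dst,
                              (atomic_load_and_32 addr:$ptr, GR32:$val))]>;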
* Remove # from the beginning and end of def names. [Craig Topper, 2013-01-07, 1 file, -128/+128]
  llvm-svn: 171696
* Add hasSideEffects=0 to some atomic instructions. [Craig Topper, 2012-12-26, 1 file, -1/+1]
  llvm-svn: 171122
* Add __builtin_setjmp/_longjmp support in X86 backend [Michael Liao, 2012-10-15, 1 file, -0/+27]
  - Besides its use in SjLj exception handling, __builtin_setjmp/__builtin_longjmp is also used as a light-weight replacement for setjmp/longjmp to implement continuations, user-level threading, and so on. The support added in this patch ONLY addresses this usage and is NOT intended to support SjLj exception handling, since zero-cost DWARF exception handling is used by default on X86.
  llvm-svn: 165989
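  A sketch of the pseudo-instruction shapes involved (attribute sets and operand lists are assumptions; see the r165989 patch for the real definitions):

      let hasSideEffects = 1, isBarrier = 1, usesCustomInserter = 1 in {
        // Returns twice: 0 on the direct path, nonzero via the longjmp path.
        def EH_SjLj_SetJmp32  : I<0, Pseudo, (outs GR32:$dst),
                                  (ins i32mem:$buf), "#EH_SjLj_SETJMP32",
                                  [(set GR32:$dst,
                                        (X86eh_sjlj_setjmp addr:$buf))]>,
                                Requires<[In32BitMode]>;
        // Never falls through; control resumes at the saved context.
        let isTerminator = 1 in
        def EH_SjLj_LongJmp32 : I<0, Pseudo, (outs), (ins i32mem:$buf),
                                  "#EH_SjLj_LONGJMP32",
                                  [(X86eh_sjlj_longjmp addr:$buf)]>,
                                Requires<[In32BitMode]>;
      }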
* X86: fcmov doesn't handle all possible EFLAGS; fall back to a branch for the others. [Benjamin Kramer, 2012-10-07, 1 file, -1/+8]
  Otherwise it will try to use SSE patterns and fail horribly if SSE is disabled.
  Fixes PR14035.
  llvm-svn: 165377
* Remove some encoding bits I forgot to remove from SETB_C16r and SETB_C64r in r165302. [Craig Topper, 2012-10-05, 1 file, -3/+2]
  llvm-svn: 165303
* Move expansion of SETB_C(8/16/32/64)r from MCInstLower to ExpandPostRAPseudos and mark them as pseudos in the td file. [Craig Topper, 2012-10-05, 1 file, -15/+9]
  llvm-svn: 165302
* Add 'lock' prefix output support in assembly printer [Michael Liao, 2012-09-26, 1 file, -33/+24]
  - Instead of embedding 'lock' into each mnemonic of the atomic instructions (except 'xchg'), teach the X86 assembly printer to output the 'lock' prefix, consistent with the code emitter.
  llvm-svn: 164659
* Fix 16-bit atomic inst encoding and keep pseudo-inst starting with '#' [Michael Liao, 2012-09-22, 1 file, -14/+14]
  llvm-svn: 164453
* Fix typo in r164357 [Michael Liao, 2012-09-22, 1 file, -1/+1]
  llvm-svn: 164452
* Fix a typo in r164357 [Michael Liao, 2012-09-21, 1 file, -8/+8]
  llvm-svn: 164372
* Revise td of X86 atomic instructions [Michael Liao, 2012-09-21, 1 file, -199/+166]
  - Rewrite most atomic instructions in templates, for both better maintenance and future extensions, such as HLE in TSX.
  llvm-svn: 164357
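  A sketch of the template style described (multiclass name and operand shapes are assumptions, not the verbatim r164357 code):

      multiclass PSEUDO_ATOMIC_LOAD_BINOP<string mnemonic> {
        let usesCustomInserter = 1, mayLoad = 1, mayStore = 1 in {
          def NAME#32 : I<0, Pseudo, (outs GR32:$dst),
                          (ins i32mem:$ptr, GR32:$val),
                          !strconcat(mnemonic, "32 PSEUDO!"), []>;
          def NAME#64 : I<0, Pseudo, (outs GR64:$dst),
                          (ins i64mem:$ptr, GR64:$val),
                          !strconcat(mnemonic, "64 PSEUDO!"), []>;
        }
      }

      // Each operation then becomes a one-line instantiation:
      defm ATOMAND : PSEUDO_ATOMIC_LOAD_BINOP<"#ATOMAND">;
      defm ATOMOR  : PSEUDO_ATOMIC_LOAD_BINOP<"#ATOMOR">;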
* Re-work X86 code generation of atomic ops with spin-loop [Michael Liao, 2012-09-20, 1 file, -7/+2]
  - Rewrite/merge pseudo-atomic instruction emitters to address the following issue: remove one unnecessary load in the spin-loop. Previously, the spin-loop looked like:

      thisMBB:
      newMBB:
        ld  t1 = [bitinstr.addr]
        op  t2 = t1, [bitinstr.val]
        not t3 = t2  (if Invert)
        mov EAX = t1
        lcs dest = [bitinstr.addr], t3  [EAX is implicit]
        bz  newMBB
        fallthrough --> nextMBB

    The 'ld' at the beginning of newMBB should be hoisted out of the loop, as lcs (CMPXCHG on x86) will load the current memory value into EAX. The loop is refined as:

      thisMBB:
        EAX = LOAD [MI.addr]
      mainMBB:
        t1 = OP [MI.val], EAX
        LCMPXCHG [MI.addr], t1  [EAX is implicitly used & defined]
        JNE mainMBB
      sinkMBB:

  - Remove immopc as, so far, all pseudo-atomic instructions have register-only forms; there is no immediate operand.
  - Remove unnecessary attributes/modifiers in pseudo-atomic instruction td
  - Fix issues in PR13458
  - Add comprehensive tests on atomic ops on various data types. NOTE: some of them are turned off due to missing functionality.
  - Revise tests due to the new spin-loop generated.
  llvm-svn: 164281
* Fix the TCRETURNmi64 bug differently. [Jakob Stoklund Olesen, 2012-09-13, 1 file, -2/+21]
  Add a PatFrag to match X86tcret using 6 fixed registers or less. This avoids folding loads into TCRETURNmi64 using 7 or more volatile registers.
  <rdar://problem/12282281>
  llvm-svn: 163819
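  A sketch of such a PatFrag (close to the actual r163819 code; the operand-layout comment follows it):

      def X86tcret_6regs : PatFrag<(ops node:$ptr, node:$off),
                                   (X86tcret node:$ptr, node:$off), [{
        // X86tcret args: (*chain, ptr, imm, regs..., glue)
        unsigned NumRegs = 0;
        for (unsigned i = 3, e = N->getNumOperands(); i != e; ++i)
          if (isa<RegisterSDNode>(N->getOperand(i)) && ++NumRegs > 6)
            return false;
        return true;
      }]>;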
* Revert r163761 "Don't fold indexed loads into TCRETURNmi64." [Jakob Stoklund Olesen, 2012-09-13, 1 file, -7/+1]
  The patch caused "Wrong topological sorting" assertions.
  llvm-svn: 163810
* Don't fold indexed loads into TCRETURNmi64. [Jakob Stoklund Olesen, 2012-09-13, 1 file, -1/+7]
  We don't have enough GR64_TC registers when calling a varargs function with 6 arguments. Since %al holds the number of vector registers used, only %r11 is available as a scratch register. This means that addressing modes using both base and index registers can't be folded into TCRETURNmi64.
  <rdar://problem/12282281>
  llvm-svn: 163761
* Implement the local-dynamic TLS model for x86 (PR3985) [Hans Wennborg, 2012-06-01, 1 file, -2/+12]
  This implements codegen support for accesses to thread-local variables using the local-dynamic model, and adds a clean-up pass so that the base address for the TLS block can be re-used between local-dynamic accesses on an execution path.
  llvm-svn: 157818
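  A sketch of the pseudo shape such a model needs, modeled on the existing TLS_addr pseudos (operands and the abbreviated clobber list are assumptions; the real definition also clobbers FP/SSE state because it expands to a call):

      let usesCustomInserter = 1, Defs = [EAX, ECX, EDX, EFLAGS] in
      def TLS_base_addr32 : I<0, Pseudo, (outs), (ins i32mem:$sym),
                              "# TLS_base_addr32",
                              [(X86tlsbaseaddr tls32baseaddr:$sym)]>,
                            Requires<[In32BitMode]>;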
* Use ptr_rc_tailcall instead of GR32_TC. [Jakob Stoklund Olesen, 2012-05-09, 1 file, -2/+2]
  The getPointerRegClass() hook will return GR32_TC, or whatever is appropriate for the current function.
  Patch by Yiannis Tsiouris!
  llvm-svn: 156459
* X86: optimization for -(x != 0) [Manman Ren, 2012-05-07, 1 file, -0/+6]
  This patch will optimize -(x != 0) on X86 from:

      cmpl $0x01,%edi
      sbbl %eax,%eax
      notl %eax

  to:

      negl %edi
      sbbl %eax,%eax

  In order to generate negl, I added patterns in Target/X86/X86InstrCompiler.td:

      def : Pat<(X86sub_flag 0, GR32:$src), (NEG32r GR32:$src)>;

  rdar: 10961709
  llvm-svn: 156312
* Always compute all the bits in ComputeMaskedBits. [Rafael Espindola, 2012-04-04, 1 file, -4/+2]
  This allows us to keep passing reduced masks to SimplifyDemandedBits, but know about all the bits if SimplifyDemandedBits fails. This allows instcombine to simplify cases like the one in the included testcase.
  llvm-svn: 154011
* Make x86 REP_MOV* and REP_STO instructions use the correct operand sizes in 64-bit mode. [Lang Hames, 2012-03-29, 1 file, -23/+56]
  llvm-svn: 153680
* This patch adds X86 instruction itineraries for non-pseudo opcodes in X86InstrCompiler.td. [Preston Gurd, 2012-03-19, 1 file, -49/+58]
  It also adds -mcpu=generic to the legalize-shift-64.ll test, so the test will pass if run on an Intel Atom CPU, which would otherwise produce an instruction schedule that differs from the one the test expects.
  llvm-svn: 153033
* Add WIN_FTOL_* pseudo-instructions to model the unique calling convention used by the Win32 _ftol2 runtime function. [Michael J. Spencer, 2012-02-24, 1 file, -1/+17]
  Patch by Joe Groff!
  llvm-svn: 151382
* Use the same CALL instructions for Windows as for everything else. [Jakob Stoklund Olesen, 2012-02-16, 1 file, -7/+2]
  The different calling conventions and call-preserved registers are represented with regmask operands that are added dynamically.
  llvm-svn: 150708
* Make sure the non-SSE lowering for fences correctly clobbers EFLAGS. PR11768. [Eli Friedman, 2012-01-16, 1 file, -1/+1]
  llvm-svn: 148240
* Get rid of unused codegen-only instruction. [Eli Friedman, 2012-01-16, 1 file, -9/+0]
  llvm-svn: 148239
* X86: Generalize the x << (y & const) optimization to also catch masks with more set bits than 31 or 63. [Benjamin Kramer, 2012-01-12, 1 file, -21/+25]
  llvm-svn: 148024
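  A sketch of the generalization (helper and pattern shapes assumed): hardware shifts by CL only read the low 5 bits (32-bit) or 6 bits (64-bit), so any mask whose trailing ones cover those bits is a no-op and can be dropped.

      // Accept any mask with at least 5 trailing ones, not just 31 itself.
      def immShift32 : ImmLeaf<i8, [{
        return CountTrailingOnes_32(Imm) >= 5;
      }]>;

      // (shl x, (and CL, mask)) ==> (shl x, CL)
      def : Pat<(shl GR32:$src1, (and CL, immShift32)),
                (SHL32rCL GR32:$src1)>;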
* Switch the lowering of CTLZ_ZERO_UNDEF from a .td pattern back to the X86ISelLowering C++ code. [Chandler Carruth, 2011-12-24, 1 file, -9/+0]
  Because this is lowered via an xor wrapped around a bsr, we want the dagcombine which runs after isel lowering to have a chance to clean things up. In particular, it is very common to see code which looks like:

      (sizeof(x)*8 - 1) ^ __builtin_clz(x)

  which is trying to compute the most significant bit of 'x'. That's actually the value computed directly by the 'bsr' instruction, but if we match it too late, we'll get completely redundant xor instructions.
  The more naive code for the above (subtracting rather than using an xor) still isn't handled correctly due to the dagcombine getting confused.
  Also, while here, fix an issue spotted by inspection: we should have been expanding the zero-undef variants to the normal variants when there is an 'lzcnt' instruction. Do so, and test for this. We don't want to generate unnecessary 'bsr' instructions.
  These two changes fix some regressions in encoding and decoding benchmarks. However, there is still a *lot* to improve on in this type of code.
  llvm-svn: 147244
* Begin teaching the X86 target how to efficiently codegen patterns that use the zero-undefined variants of CTTZ and CTLZ. [Chandler Carruth, 2011-12-20, 1 file, -0/+17]
  These are just simple patterns for now; there is more to be done to make real-world code using these constructs be optimized and codegen'ed properly on X86.
  The existing tests are spiffed up to check that we no longer generate unnecessary cmov instructions, and that we generate the very important 'xor' to transform bsr, which counts the index of the most significant one bit, into the number of leading (most significant) zero bits. They also now check that when the variant with defined zero result is used, the cmov is still produced.
  llvm-svn: 146974
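  A sketch of the kind of patterns described (shapes assumed): the zero-undef variants need no cmov guard, and ctlz maps to bsr xor 31, since 31 - i equals 31 ^ i for 0 <= i <= 31.

      // cttz_zero_undef: bsf's result is exactly the trailing zero count.
      def : Pat<(cttz_zero_undef GR32:$src), (BSF32rr GR32:$src)>;
      // ctlz_zero_undef: bsr yields the index of the MSB; xor with 31
      // converts it to the leading zero count.
      def : Pat<(ctlz_zero_undef GR32:$src),
                (XOR32ri (BSR32rr GR32:$src), 31)>;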
* Fixes an issue reported by -verify-machineinstrs. [Rafael Espindola, 2011-10-26, 1 file, -2/+2]
  Patch by Sanjoy Das.
  llvm-svn: 143064
* This commit introduces two fake instructions, MORESTACK_RET and MORESTACK_RET_RESTORE_R10, which are lowered to a RET, and to a RET followed by a MOV, respectively. [Rafael Espindola, 2011-10-26, 1 file, -0/+18]
  Having a fake instruction prevents the verifier from seeing a MachineBasicBlock end with a non-terminator (MOV). It also prevents the rather eccentric case of a MachineBasicBlock ending with RET but having successors nevertheless.
  Patch by Sanjoy Das.
  llvm-svn: 143062
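  A sketch of the fake-instruction shapes (attributes assumed; the defs carry no pattern and become real instructions only at MCInst lowering):

      let isCodeGenOnly = 1 in {
        // Lowered to a plain RET.
        def MORESTACK_RET : I<0, Pseudo, (outs), (ins), "", []>;
        // Lowered to a RET followed by a MOV that restores %r10.
        def MORESTACK_RET_RESTORE_R10 : I<0, Pseudo, (outs), (ins), "", []>;
      }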
* Fix the assembler strings for a couple of atomic instructions. Doesn't really matter much in practice, but it's a bit cleaner. [Eli Friedman, 2011-09-13, 1 file, -2/+2]
  llvm-svn: 139563
* Fix atomic load and store on x86 to pass -verify-machineinstrs (and possibly fix some subtle bugs involving passes which check mayStore()). [Eli Friedman, 2011-09-07, 1 file, -14/+26]
  This isn't exactly ideal, but it is good enough for the moment.
  llvm-svn: 139245
* Pseudo CMOV instructions don't clobber EFLAGS. [Jakob Stoklund Olesen, 2011-09-02, 1 file, -13/+3]
  The explanation that a 0 argument would be materialized as an xor is no longer valid: rematerialization will check whether EFLAGS is live before clobbering it. The code produced by X86TargetLowering::EmitLoweredSelect does not clobber EFLAGS.
  This causes one less testb instruction to be generated in the cmov.ll test case.
  llvm-svn: 139057
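  A sketch, close to the actual CMOV pseudo definitions, of what remains once EFLAGS is dropped from the Defs list while the Uses entry is kept:

      let usesCustomInserter = 1, Uses = [EFLAGS] in
      def CMOV_GR32 : I<0, Pseudo, (outs GR32:$dst),
                        (ins GR32:$src1, GR32:$src2, i8imm:$cond),
                        "#CMOV_GR32 PSEUDO!",
                        [(set GR32:$dst, (X86cmov GR32:$src1, GR32:$src2,
                                                  imm:$cond, EFLAGS))]>;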
* Adds a SelectionDAG node X86SegAlloca which will be custom lowered from DYNAMIC_STACKALLOC. [Rafael Espindola, 2011-08-30, 1 file, -0/+20]
  Two new pseudo instructions (SEG_ALLOCA_32 and SEG_ALLOCA_64), which will match X86SegAlloca (based on word size), are also added. They will be custom emitted to inject the actual stack handling code.
  Patch by Sanjoy Das.
  llvm-svn: 138814
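  A sketch of the 32-bit pseudo (the 64-bit twin mirrors it with GR64; the clobber list is an assumption):

      let usesCustomInserter = 1, Defs = [ESP, EFLAGS], Uses = [ESP] in
      def SEG_ALLOCA_32 : I<0, Pseudo, (outs GR32:$dst), (ins GR32:$size),
                            "# variable sized alloca for segmented stacks",
                            [(set GR32:$dst, (X86SegAlloca GR32:$size))]>,
                          Requires<[In32BitMode]>;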