path: root/llvm/lib/Target/X86/X86InstrCompiler.td
Commit log (newest first). Each entry: commit message (Author, Date; files changed, -deleted/+added lines; llvm-svn revision)
...
* Add support for generating CMPXCHG16B on x86-64 for the cmpxchg IR instruction. (Eli Friedman, 2011-08-26; 1 file, -2/+10; llvm-svn: 138660)
* Basic x86 code generation for atomic load and store instructions. (Eli Friedman, 2011-08-24; 1 file, -0/+14; llvm-svn: 138478)
* Add 256-bit support for v8i32, v4i64 and v4f64 ISD::SELECT. Fix PR10556. (Bruno Cardoso Lopes, 2011-08-09; 1 file, -0/+18; llvm-svn: 137179)
* Fix a couple ridiculous copy-paste errors. rdar://9914773 (Eli Friedman, 2011-08-09; 1 file, -2/+2; llvm-svn: 137160)
* X86ISD::MEMBARRIER does not require SSE2; it doesn't actually generate any code, and all x86 processors will honor the required semantics. (Eli Friedman, 2011-07-27; 1 file, -1/+1; llvm-svn: 136249)
* Add a comment describing why transforming (shl x, 1) to (add x, x) is to be considered safe enough in this context. (Dan Gohman, 2011-06-16; 1 file, -0/+5; llvm-svn: 133159)
* X86: smulo -> add is now done target-independently in DAGCombiner, remove the patterns. (Benjamin Kramer, 2011-05-21; 1 file, -6/+0; llvm-svn: 131801)
* Re-commit 131641 with fixes; de-pseudoize MOVSX16rr8 and friends. rdar://problem/8614450 (Stuart Hastings, 2011-05-20; 1 file, -9/+22; llvm-svn: 131746)
* Reverting 131641 to investigate 'bot complaint. (Stuart Hastings, 2011-05-19; 1 file, -13/+10; llvm-svn: 131654)
* Revise MOVSX16rr8/MOVZX16rr8 (and rm variants) to no longer be pseudos. rdar://problem/8614450 (Stuart Hastings, 2011-05-19; 1 file, -10/+13; llvm-svn: 131641)
* Support XOR and AND optimization with no return value. Finishes off rdar://8470697. (Eric Christopher, 2011-05-17; 1 file, -0/+2; llvm-svn: 131458)
* Optimize atomic lock or that doesn't use the result value. Next up: xor and and. Part of rdar://8470697. (Eric Christopher, 2011-05-10; 1 file, -1/+2; llvm-svn: 131171)
* Refactor lock versions of binary operators to be a little less cut and paste. (Eric Christopher, 2011-05-10; 1 file, -73/+83; llvm-svn: 131139)
* X86: Add a bunch of peeps for add and sub of SETB. (Benjamin Kramer, 2011-05-08; 1 file, -0/+24; llvm-svn: 131070)
  "b + ((a < b) ? 1 : 0)" compiles into
      cmpl %esi, %edi
      adcl $0, %esi
  instead of
      cmpl %esi, %edi
      sbbl %eax, %eax
      andl $1, %eax
      addl %esi, %eax
  This saves a register, a false dependency on %eax (Intel's CPUs still don't ignore it) and it's shorter.
* The labyrinthine X86 backend no longer appears to require these patterns. (Dan Gohman, 2011-02-17; 1 file, -37/+0; llvm-svn: 125759)
* Target/X86: Tweak win64's tailcall. (NAKAMURA Takumi, 2011-01-26; 1 file, -2/+2; llvm-svn: 124272)
* Fix whitespace. (NAKAMURA Takumi, 2011-01-26; 1 file, -9/+8; llvm-svn: 124270)
* The stub routine that we're calling uses test and so clobbers the flags. (Eric Christopher, 2011-01-18; 1 file, -2/+2; llvm-svn: 123712)
* We lower setb to sbb with the hope that the and will go away; when it doesn't, match it back to setb. (Chris Lattner, 2010-12-20; 1 file, -0/+6; llvm-svn: 122217)
  On a 64-bit version of the testcase, before we'd get:
      movq %rdi, %rax
      addq %rsi, %rax
      sbbb %dl, %dl
      andb $1, %dl
      ret
  now we get:
      movq %rdi, %rax
      addq %rsi, %rax
      setb %dl
      ret
* Improve the setcc -> setcc_carry optimization to happen more consistently by moving it out of lowering into dag combine. Add some missing patterns for matching away extended versions of setcc_c. (Chris Lattner, 2010-12-19; 1 file, -0/+11; llvm-svn: 122201)
* Only rr forms of ADD*_DB are commutable. (Evan Cheng, 2010-12-15; 1 file, -1/+3; llvm-svn: 121908)
* Add rsp to the uses for the same reason as 32-bit. (Eric Christopher, 2010-12-09; 1 file, -1/+1; llvm-svn: 121328)
* Move lowering of TLS_addr32 and TLS_addr64 to X86MCInstLower. (Rafael Espindola, 2010-11-28; 1 file, -4/+2; llvm-svn: 120263)
* Lower TLS_addr32 and TLS_addr64. (Rafael Espindola, 2010-11-27; 1 file, -9/+6; llvm-svn: 120225)
* Reject instructions that contain a \n in their asmstring. Mark various X86 and ARM instructions that are bitten by this as isCodeGenOnly, as they are. (Chris Lattner, 2010-11-01; 1 file, -8/+10; llvm-svn: 117884)
* Two changes: make the asmmatcher generator ignore ARM pseudos properly, and make it a hard error for instructions to not have an asm string. These instructions should be marked isCodeGenOnly. (Chris Lattner, 2010-10-31; 1 file, -3/+3; llvm-svn: 117861)
* X86: Add alloca probing to dynamic alloca on Windows. Fixes PR8424. (Michael J. Spencer, 2010-10-21; 1 file, -8/+8; llvm-svn: 116984)
* Fix whitespace. (Michael J. Spencer, 2010-10-20; 1 file, -64/+64; llvm-svn: 116972)
* Fix another case where we were preferring instructions with large immediates instead of 8-bit ones. (Rafael Espindola, 2010-10-13; 1 file, -14/+18; llvm-svn: 116410)
* Fix PR8365 by adding a more specialized Pat that checks if an 'and' with 8-bit constants can be used. (Rafael Espindola, 2010-10-13; 1 file, -3/+18; llvm-svn: 116403)
* Initial va_arg support for x86-64. Patch by David Meyer! (Dan Gohman, 2010-10-12; 1 file, -0/+11; llvm-svn: 116319)
* Reapply: Use the new TB_NOT_REVERSABLE flag instead of special reapply: reimplement the second half of the or/add optimization. We should now with no changes. Turns out that one missing "Defs = [EFLAGS]" can upset things a bit. (Chris Lattner, 2010-10-08; 1 file, -14/+27; llvm-svn: 116040)
* Reapply the patch reverted in r116033: "Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'" With a critical fix: the add pseudos clobber EFLAGS. (Chris Lattner, 2010-10-08; 1 file, -21/+59; llvm-svn: 116039)
* Revert "Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'", which seems to have broken just about everything. (Daniel Dunbar, 2010-10-08; 1 file, -59/+21; llvm-svn: 116033)
* Revert "reimplement the second half of the or/add optimization. We should now", which depends on r116007, which I am about to revert. (Daniel Dunbar, 2010-10-08; 1 file, -27/+14; llvm-svn: 116031)
* Reimplement the second half of the or/add optimization. We should now only end up emitting LEA instead of OR. If we aren't able to promote something into an LEA, we should never be emitting it as an ADD. Add some testcases that we emit "or" in cases where we used to produce an "add". (Chris Lattner, 2010-10-08; 1 file, -14/+27; llvm-svn: 116026)
* Reimplement (part of) the or -> add optimization. Matching 'or' into 'add' is general goodness because it allows ORs to be converted to LEA to avoid inserting copies. However, this is bad because it makes the generated .s file less obvious and gives valgrind heartburn (tons of false positives in bitfield code). While the general fix should be in valgrind, we can at least try to avoid emitting ADD instructions that *don't* get promoted to LEA. This is more work because it requires introducing pseudo instructions to represent "add that knows the bits are disjoint", but hey, people really love valgrind. This fixes this testcase: https://bugs.kde.org/show_bug.cgi?id=242137#c20 The add r/i cases are coming next. (Chris Lattner, 2010-10-07; 1 file, -21/+59; llvm-svn: 116007)
* Move cmov pseudo instructions to InstrCompiler, convert all the rest of the cmovs to the multiclass, with good results: (Chris Lattner, 2010-10-05; 1 file, -0/+61; llvm-svn: 115707)
      X86InstrCMovSetCC.td | 598 +--------------------------------------------------
      X86InstrCompiler.td  |  61 +++++
      2 files changed, 77 insertions(+), 582 deletions(-)
* Use #NAME# to have the CMOV multiclass define things with the same names as before (e.g. CMOVBE16rr instead of CMOVBErr16). (Chris Lattner, 2010-10-05; 1 file, -1/+1; llvm-svn: 115705)
* Enhance tblgen to support anonymous defm's; use this to simplify the X86 CMOVmr's. (Chris Lattner, 2010-10-05; 1 file, -16/+16; llvm-svn: 115702)
* Convert cmov mr patterns to use a multipattern. Death to redundancy and verbosity. (Chris Lattner, 2010-10-05; 1 file, -97/+25; llvm-svn: 115701)
* Switch CMOVBE to the multipattern: 21 insertions(+), 53 deletions(-). Moar change coming before I switch the rest. (Chris Lattner, 2010-10-05; 1 file, -3/+3; llvm-svn: 115697)
* Move SETB pseudos into the same place in InstrCompiler.td. (Chris Lattner, 2010-10-05; 1 file, -4/+13; llvm-svn: 115686)
* Move some instructions from Instr64Bit -> InstrInfo; bswap32 doesn't read eflags. (Chris Lattner, 2010-10-05; 1 file, -0/+18; llvm-svn: 115604)
* Move CMOV_FR32 and friends to InstrCompiler, since they are pseudo instructions. Move POPCNT to InstrSSE since they are SSE4 instructions. (Chris Lattner, 2010-10-05; 1 file, -0/+38; llvm-svn: 115603)
* Move various pattern matching support goop out of X86Instr64Bit, to live with the 32-bit stuff. (Chris Lattner, 2010-10-05; 1 file, -0/+12; llvm-svn: 115602)
* Split conditional moves and setcc's out to their own file. (Chris Lattner, 2010-10-05; 1 file, -0/+14; llvm-svn: 115601)
* Move string pseudo instructions to InstrCompiler; consolidate 64-bit and 32-bit together. (Chris Lattner, 2010-10-05; 1 file, -0/+31; llvm-svn: 115600)
* Move the atomic pseudo instructions out to X86InstrCompiler.td. (Chris Lattner, 2010-10-05; 1 file, -3/+345; llvm-svn: 115599)
* Move more pseudo instructions out to X86InstrCompiler.td. (Chris Lattner, 2010-10-05; 1 file, -0/+153; llvm-svn: 115598)