path: root/llvm/lib/Target/X86/X86FastISel.cpp
Commit message | Author | Age | Files | Lines
...
* [AVX-512] Punt on fast-isel of truncates to i1 when AVX512 is enabled. (Craig Topper, 2017-03-28, 1 file, -1/+2)
  We should be masking the value and emitting a register copy like we do in non-fast isel. Instead we were just updating the value map and emitting nothing. After r298928 we started seeing cases where we would create a copy from GR8 to GR32 because the source register in a VK1 to GR32 copy was replaced by the GR8 going into a truncate. This fixes PR32451.
  llvm-svn: 298957
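  A minimal sketch of the punt described above, assuming the usual fast-isel convention that returning false from a selection routine hands the instruction back to SelectionDAG; the surrounding handler and variable names are illustrative, not the committed code:

      // Inside a hypothetical X86FastISel truncate handler:
      if (DstVT == MVT::i1 && Subtarget->hasAVX512())
        return false; // let SelectionDAG mask the value and emit the register copy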
* [AVX-512] Fix accidental uses of AH/BH/CH/DH after copies to/from mask registers (Craig Topper, 2017-03-28, 1 file, -13/+45)
  We've had several bugs (PR32256, PR32241) recently that resulted from usages of AH/BH/CH/DH either before or after a copy to/from a mask register. This ultimately occurs because we create COPY_TO_REGCLASS with VK1 and GR8. Then in CopyToFromAsymmetricReg in X86InstrInfo we find a 32-bit super register for the GR8 to emit the KMOV with. But as these tests are demonstrating, it's possible for the GR8 register to be a high register, and we end up doing an accidental extract or insert from bits 15:8.
  I think the best way forward is to stop making copies directly between mask registers and GR8/GR16. Instead I think we should restrict to only copies between mask registers and GR32/GR64 and use EXTRACT_SUBREG/INSERT_SUBREG to handle the conversion from GR32 to GR16/GR8 or vice versa.
  Unfortunately, this complicates fastisel a bit more now that we create subreg extracts where we used to create GR8 copies. We can probably make a helper function to bring down the repetition.
  This does result in KMOVD being used for copies when BWI is available, because we don't know the original mask register size. This caused a lot of deltas on tests because we have to split the checks for KMOVD vs KMOVW based on BWI.
  Differential Revision: https://reviews.llvm.org/D30968
  llvm-svn: 298928
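  A hedged sketch of the subreg-extract pattern the message describes: copy the mask into a GR32 first, then take the low byte with a sub_8bit subregister copy so the register allocator can never hand back AH/BH/CH/DH. This is an illustrative fragment from inside a fast-isel routine; the variable names and surrounding context are assumed, not the exact committed code.

      // VK1 -> GR32 copy, then extract the low 8 bits instead of copying
      // straight into a GR8, which could be a high-byte register.
      unsigned Copy32 = createResultReg(&X86::GR32RegClass);
      BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc,
              TII.get(TargetOpcode::COPY), Copy32).addReg(MaskReg);
      unsigned Result8 = createResultReg(&X86::GR8RegClass);
      BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc,
              TII.get(TargetOpcode::COPY), Result8)
          .addReg(Copy32, /*Flags=*/0, X86::sub_8bit);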
* [AVX-512] Pre-emptively fix more places in fastisel where we might copy a VK1 register into an AH/BH/CH/DH register. (Craig Topper, 2017-03-14, 1 file, -9/+28)
  llvm-svn: 297704
* [AVX-512] Fix another case where we are copying from a mask register using AH/BH/CH/DH with fastisel. (Craig Topper, 2017-03-13, 1 file, -1/+2)
  Fixes PR32256. Still planning to do an audit for other possible cases.
  llvm-svn: 297678
* [AVX-512] Fix a bad use of a high GR8 register after copying from a mask register during fast isel. (Craig Topper, 2017-03-12, 1 file, -0/+11)
  This ends up extracting from bits 15:8 instead of the lower bits of the mask. I'm pretty sure there are more problems lurking here. But I think this fixes PR32241. I've added the test case from that bug and added asserts that will fail if we ever try to copy between high registers and mask registers again.
  llvm-svn: 297574
* [X86] Fix creating vreg def after use. (Ayman Musa, 2017-03-01, 1 file, -5/+10)
  llvm-svn: 296601
* [X86][AVX] Disable VCVTSS2SD & VCVTSD2SS memory folding and fix the register class of their first input when creating node in fast-isel. (Ayman Musa, 2017-02-23, 1 file, -2/+7)
  (Quick fix to buildbot failure after rL295940 commit).
  llvm-svn: 295970
* [X86] Remove scalar logical op alias instructions. Just use COPY_FROM/TO_REGCLASS and the normal packed instructions instead. (Craig Topper, 2016-12-06, 1 file, -6/+10)
  Summary: This patch removes the scalar logical operation alias instructions. We can just use reg class copies and use the normal packed instructions instead. This removes the need for putting these instructions in the execution domain fixing tables as was done recently.
  I removed the loadf64_128 and loadf32_128 patterns as DAG combine creates a narrower load for (extractelt (loadv4f32)) before we ever get to isel.
  I plan to add similar patterns for AVX512DQ in a future commit to allow use of the larger register class when available.
  Reviewers: spatel, delena, zvi, RKSimon
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D27401
  llvm-svn: 288771
* [X86] Remove unnecessary explicit uses of .SimpleTy just to do an equality comparison. (Craig Topper, 2016-12-05, 1 file, -11/+11)
  MVT's operator== already takes care of this. NFCI
  llvm-svn: 288646
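  For illustration, the kind of mechanical change this NFCI describes (the helper functions and types being compared are made up, not taken from the file):

      // Before: explicit use of .SimpleTy just for an equality comparison.
      bool isScalarFPBefore(MVT VT) { return VT.SimpleTy == MVT::f32 || VT.SimpleTy == MVT::f64; }
      // After: MVT's operator== already compares the underlying SimpleTy.
      bool isScalarFPAfter(MVT VT) { return VT == MVT::f32 || VT == MVT::f64; }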
* [AVX-512] Teach fast isel to handle 512-bit vector bitcasts. (Craig Topper, 2016-12-05, 1 file, -2/+8)
  llvm-svn: 288641
* [AVX-512] Teach fast isel to use masked compare and movss for handling scalar cmp and select sequence when AVX-512 is enabled. (Craig Topper, 2016-12-05, 1 file, -4/+69)
  This matches the behavior of normal isel.
  llvm-svn: 288636
* IR: Change the gep_type_iterator API to avoid always exposing the "current" type. (Peter Collingbourne, 2016-12-02, 1 file, -1/+1)
  Instead, expose whether the current type is an array or a struct, if an array what the upper bound is, and if a struct the struct type itself. This is in preparation for a later change which will make PointerType derive from Type rather than SequentialType.
  Differential Revision: https://reviews.llvm.org/D26594
  llvm-svn: 288458
* [X86][FastISel] Assert that we are dealing with arithmetic with overflow intrinsics. NFC (Zvi Rackover, 2016-11-15, 1 file, -0/+3)
  llvm-svn: 286961
* [X86][FastISel] Fix lowering of overflow result on AVX512 targets (Zvi Rackover, 2016-11-15, 1 file, -2/+2)
  Summary: Fix a case where the overflow value of type i1, which is legal on AVX512, was assigned to a VK1 register class. We always want this value to be assigned to a GPR since the overflow return value is lowered to a SETO instruction. Fixes PR30981.
  Reviewers: mkuper, igorb, craig.topper, guyblank, qcolombet
  Subscribers: qcolombet, llvm-commits
  Differential Revision: https://reviews.llvm.org/D26620
  llvm-svn: 286958
* [X86][FastISel] Use a COPY from K register to a GPR instead of a K operation (Guy Blank, 2016-09-28, 1 file, -27/+31)
  The KORTEST was introduced due to a bug where a TEST instruction used a K register. But it turns out that the opposite case, KORTEST using a GPR, is now happening. The change removes the KORTEST flow and adds a COPY instruction from the K reg to a GPR.
  Differential Revision: https://reviews.llvm.org/D24953
  llvm-svn: 282580
* [AVX-512] Teach fastisel load/store handling to use EVEX encoded instructions for 128/256-bit vectors and scalar single/double. (Craig Topper, 2016-09-05, 1 file, -42/+81)
  Still need to fix the register classes to allow the extended range of registers.
  llvm-svn: 280682
* [X86] Make some static arrays of opcodes const and shrink to uint16_t. NFC (Craig Topper, 2016-09-05, 1 file, -6/+6)
  llvm-svn: 280649
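  As an illustration of the shrink (the table contents here are placeholders, not the actual arrays in the file): X86 opcode enumerators fit in 16 bits, so a const uint16_t table is half the size of one declared as unsigned.

      // Before: static unsigned OpcTable[] = { X86::MOVSSmr, X86::MOVSDmr };
      static const uint16_t OpcTable[] = { X86::MOVSSmr, X86::MOVSDmr };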
* [AVX512][FastISel] Do not use K registers in TEST instructions (Guy Blank, 2016-08-21, 1 file, -6/+31)
  In some cases, FastISel was emitting a TEST instruction with a K reg input, which is illegal. Changed to using KORTEST when dealing with K regs.
  Differential Revision: https://reviews.llvm.org/D23163
  llvm-svn: 279393
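  A hedged fragment of what "use KORTEST when dealing with K regs" can look like in fast-isel (the register name and surrounding context are assumed): KORTESTW ORs its two mask operands and sets ZF when the result is zero, so testing a mask against itself replaces the illegal GPR-style TEST.

      // Illustrative only: test a mask register for zero without a GPR TEST.
      BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, DbgLoc, TII.get(X86::KORTESTWrr))
          .addReg(KMaskReg)
          .addReg(KMaskReg);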
* Replace a few more "fall through" comments with LLVM_FALLTHROUGH (Justin Bogner, 2016-08-17, 1 file, -2/+2)
  Follow up to r278902. I had missed "fall through", with a space.
  llvm-svn: 278970
* Replace "fallthrough" comments with LLVM_FALLTHROUGHJustin Bogner2016-08-171-12/+15
| | | | | | | This is a mechanical change of comments in switches like fallthrough, fall-through, or fall-thru to use the LLVM_FALLTHROUGH macro instead. llvm-svn: 278902
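  A self-contained example of the rewrite, assuming only llvm/Support/Compiler.h (which provides LLVM_FALLTHROUGH); the function itself is made up:

      #include "llvm/Support/Compiler.h"

      int classifySketch(int Kind, int &Count) {
        switch (Kind) {
        case 0:
          ++Count;          // do some work for kind 0 ...
          LLVM_FALLTHROUGH; // ... replaces the old "// fall through" comment
        case 1:
          return Count + 1;
        default:
          return 0;
        }
      }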
* MachineFunction: Return reference for getFrameInfo(); NFC (Matthias Braun, 2016-07-28, 1 file, -2/+2)
  getFrameInfo() never returns nullptr so we should use a reference instead of a pointer.
  llvm-svn: 277017
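  The caller-side effect, sketched with a made-up use of MachineFrameInfo (MF is an in-scope MachineFunction):

      // Before: a pointer that could never be null anyway.
      //   MachineFrameInfo *MFI = MF.getFrameInfo();
      //   if (MFI->hasVarSizedObjects()) ...
      // After: a reference makes the non-null guarantee explicit.
      MachineFrameInfo &MFI = MF.getFrameInfo();
      if (MFI.hasVarSizedObjects())
        return false;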
* Teach fast isel about the win64 calling convention. (Nico Weber, 2016-07-15, 1 file, -4/+2)
  This mostly just works. Vectorcall rets are still not supported.
  The win64_eh test change is because fast isel doesn't use rsi for temporary computations, so it doesn't need to be pushed. The test case I'm changing was originally added to test pushes, but by now there are other test cases in that file exercising that code path.
  https://reviews.llvm.org/D22422
  llvm-svn: 275607
* Teach fast isel calls and rets about stdcall. (Nico Weber, 2016-07-14, 1 file, -0/+2)
  stdcall is callee-pop like thiscall, so the thiscall changes already did most of the work for this. This change only opts stdcall in and adds tests.
  llvm-svn: 275414
* Teach fast isel about thiscall (and callee-pop) calls. (Nico Weber, 2016-07-14, 1 file, -9/+8)
  http://reviews.llvm.org/D22315
  llvm-svn: 275360
* Teach FastISel about thiscall (and, hence, about callee-pop). (Nico Weber, 2016-07-12, 1 file, -5/+12)
  http://reviews.llvm.org/D22115
  llvm-svn: 275135
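  For context on the thiscall/stdcall entries above, a hedged sketch of the callee-pop classification they hinge on; the real helper in the X86 backend also considers vectorcall, vararg-ness, and tail-call options, so this only shows the shape of the check:

      #include "llvm/IR/CallingConv.h"

      // Illustrative: callee-pop conventions clean up their own stack arguments,
      // so fast-isel must account for that adjustment after the call.
      static bool isCalleePopSketch(llvm::CallingConv::ID CC, bool IsVarArg) {
        switch (CC) {
        case llvm::CallingConv::X86_StdCall:
        case llvm::CallingConv::X86_FastCall:
        case llvm::CallingConv::X86_ThisCall:
          return !IsVarArg;
        default:
          return false;
        }
      }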
* Re-commit of 274613. (Elena Demikhovsky, 2016-07-06, 1 file, -0/+3)
  The prev commit failed on compilation. A minor change in one pattern in lib/Target/X86/X86InstrAVX512.td fixes the failure.
  llvm-svn: 274626
* Reverted 274613 due to compilation failure. (Elena Demikhovsky, 2016-07-06, 1 file, -3/+0)
  llvm-svn: 274615
* AVX-512: Optimization for patterns with i1 scalar type (Elena Demikhovsky, 2016-07-06, 1 file, -0/+3)
  The patch removes redundant kmov instructions (not all, we still have a lot of work here) and redundant "and" instructions after "setcc". I use an "AssertZero" marker between the X86ISD::SETCC node and "truncate" to eliminate the extra "and $1" instruction.
  I also changed the zext, aext and trunc patterns in the .td file. This allows removing extra "kmov" instructions.
  This patch fixes https://llvm.org/bugs/show_bug.cgi?id=28173.
  Fast ISel mode is not supported correctly for AVX-512. ICMP/FCMP scalar instructions should return the result in a k-reg. It will be fixed in one of the next patches. I redirected handling of "cmp" to the DAG builder mode. (The code looks worse in one specific test case, but without this fix the new patch fails.)
  Differential revision: http://reviews.llvm.org/D21956
  llvm-svn: 274613
* Delete unused includes. NFC. (Rafael Espindola, 2016-06-30, 1 file, -1/+0)
  llvm-svn: 274225
* CodeGen: Use MachineInstr& in TargetInstrInfo, NFCDuncan P. N. Exon Smith2016-06-301-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This is mostly a mechanical change to make TargetInstrInfo API take MachineInstr& (instead of MachineInstr* or MachineBasicBlock::iterator) when the argument is expected to be a valid MachineInstr. This is a general API improvement. Although it would be possible to do this one function at a time, that would demand a quadratic amount of churn since many of these functions call each other. Instead I've done everything as a block and just updated what was necessary. This is mostly mechanical fixes: adding and removing `*` and `&` operators. The only non-mechanical change is to split ARMBaseInstrInfo::getOperandLatencyImpl out from ARMBaseInstrInfo::getOperandLatency. Previously, the latter took a `MachineInstr*` which it updated to the instruction bundle leader; now, the latter calls the former either with the same `MachineInstr&` or the bundle leader. As a side effect, this removes a bunch of MachineInstr* to MachineBasicBlock::iterator implicit conversions, a necessary step toward fixing PR26753. Note: I updated WebAssembly, Lanai, and AVR (despite being off-by-default) since it turned out to be easy. I couldn't run tests for AVR since llc doesn't link with it turned on. llvm-svn: 274189
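  The caller-side pattern of that mechanical change, shown with an assumed call to TargetInstrInfo::isLoadFromStackSlot (variable names are illustrative):

      // Before: the API took MachineInstr* even though null was never meaningful.
      //   if (TII->isLoadFromStackSlot(MI, FrameIndex)) ...   // MI: MachineInstr*
      // After: pass a reference and dereference at the call site.
      int FrameIndex;
      if (TII->isLoadFromStackSlot(*MI, FrameIndex))
        FoldedLoad = true;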
* Switch more loops to be range-based (David Majnemer, 2016-06-24, 1 file, -3/+2)
  This makes the code a little more concise; no functional change is intended.
  llvm-svn: 273644
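  For example (an illustrative loop, not taken from the file):

      // Before: index-based iteration.
      //   for (unsigned i = 0, e = Args.size(); i != e; ++i)
      //     visit(Args[i]);
      // After: range-based, same behavior, less boilerplate.
      for (const auto &Arg : Args)
        visit(Arg);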
* Pass DebugLoc and SDLoc by const ref. (Benjamin Kramer, 2016-06-12, 1 file, -3/+4)
  This used to be free; copying and moving DebugLocs became expensive after the metadata rewrite. Passing by reference eliminates a ton of track/untrack operations. No functionality change intended.
  llvm-svn: 272512
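  The signature change looks like this (the function name is made up):

      // Before: SDLoc passed by value, which now copies tracked debug metadata.
      //   SDValue emitThing(SDLoc DL, SelectionDAG &DAG);
      // After: a const reference avoids the DebugLoc track/untrack churn.
      SDValue emitThing(const SDLoc &DL, SelectionDAG &DAG);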
* [X86][SSE] Add general lowering of nontemporal vector loads (fixed bad merge) (Simon Pilgrim, 2016-06-07, 1 file, -9/+37)
  Currently the only way to use the (V)MOVNTDQA nontemporal vector load instructions is through the int_x86_sse41_movntdqa style builtins. This patch adds support for lowering nontemporal loads from general IR, allowing us to remove the movntdqa builtins in a future patch.
  We currently still fold nontemporal loads into suitable instructions; we should probably look at removing this (and nontemporal stores as well) or at least make the target's folding implementation aware that it's dealing with a nontemporal memory transaction.
  There is also an issue that VMOVNTDQA only acts on 128-bit vectors on pre-AVX2 hardware, so currently a normal ymm load is still used on AVX1 targets.
  Differential Review: http://reviews.llvm.org/D20965
  llvm-svn: 272011
* [AVX512] Fix load opcode for fast isel. (Igor Breger, 2016-06-07, 1 file, -1/+1)
  Differential Revision: http://reviews.llvm.org/D21067
  llvm-svn: 272006
* [AVX512] Add 512-bit load/stores to fast isel. (Craig Topper, 2016-06-02, 1 file, -0/+46)
  llvm-svn: 271486
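  A hedged sketch of the kind of opcode selection such a change adds for 512-bit loads; the exact cases, variables, and structure in the commit may differ:

      // Illustrative: pick an EVEX load opcode for a 512-bit vector type.
      unsigned Opc = 0;
      switch (VT.SimpleTy) {
      default:
        return false; // not a 512-bit type handled here
      case MVT::v16f32:
        Opc = IsAligned ? X86::VMOVAPSZrm : X86::VMOVUPSZrm;
        break;
      case MVT::v8f64:
        Opc = IsAligned ? X86::VMOVAPDZrm : X86::VMOVUPDZrm;
        break;
      case MVT::v16i32:
      case MVT::v8i64:
        Opc = IsAligned ? X86::VMOVDQA64Zrm : X86::VMOVDQU64Zrm;
        break;
      }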
* [X86] Add AVX 256-bit load and stores to fast isel. (Craig Topper, 2016-06-02, 1 file, -9/+52)
  I'm not sure why this was missing for so long.
  This also exposed that we were picking floating point 256-bit VMOVNTPS for some integer types in normal isel for AVX1 even though VMOVNTDQ is available. In practice it doesn't matter due to the execution dependency fix pass, but it required extra isel patterns. Fixing that in a follow up commit.
  llvm-svn: 271481
* [X86] Use uint16_t for a couple arrays of instruction opcodes. NFC (Craig Topper, 2016-06-02, 1 file, -2/+2)
  llvm-svn: 271480
* Refactor X86 symbol access classification. (Rafael Espindola, 2016-05-20, 1 file, -12/+6)
  This refactors the logic in X86 to avoid code duplication. It also splits it into two steps: it first decides if a symbol is local to the DSO and then uses that information to decide how to access it.
  The first part is implemented by shouldAssumeDSOLocal. It is not in any way specific to X86. In a followup patch I intend to move it to somewhere common and reuse it in other backends.
  llvm-svn: 270209
* Record a TargetMachine instead of a Reloc::Model. (Rafael Espindola, 2016-05-19, 1 file, -1/+1)
  Addresses r270095's code review.
  llvm-svn: 270147
* Remember the relocation model. NFC. (Rafael Espindola, 2016-05-19, 1 file, -1/+1)
  This avoids passing a TargetMachine in a few places.
  llvm-svn: 270095
* Style fixes. NFC. (Rafael Espindola, 2016-05-19, 1 file, -1/+1)
  llvm-svn: 270093
* [X86] Lower zext i1 arguments (David Majnemer, 2016-05-04, 1 file, -0/+15)
  i1 is now a legal type for X86 with AVX512. There were some paths in X86FastISel which were not quite ready to see an i1 value: they were not quite sure how to deal with sign/zero extends for call arguments.
  DTRT by extending to i8 for zeroext and bailing out of FastISel for signext.
  This fixes PR27591.
  llvm-svn: 268470
* [X86][FastISel] Make sure we use the right register class when we select stores. (Quentin Colombet, 2016-04-27, 1 file, -1/+9)
  llvm-svn: 267806
* Swift Calling Convention: use %RAX for sret. (Manman Ren, 2016-04-26, 1 file, -1/+4)
  We don't need to copy the sret argument into %rax upon return.
  rdar://25671494
  llvm-svn: 267579
* [X86] Enable PIE for functions (Asaf Badouh, 2016-04-20, 1 file, -19/+4)
  Call locally defined functions directly for PIE/fPIE.
  Differential Revision: http://reviews.llvm.org/D19226
  llvm-svn: 266863
* Sink DI metadata usage out of MachineInstr.h and MachineInstrBuilder.h (Reid Kleckner, 2016-04-14, 1 file, -0/+1)
  MachineInstr.h and MachineInstrBuilder.h are very popular headers, widely included across all LLVM backends. It turns out that there are only a handful of TUs that actually care about DI operands on MachineInstrs.
  After this change, touching DebugInfoMetadata.h and rebuilding llc only needs 112 actions instead of 542.
  llvm-svn: 266351
* Swift Calling Convention: swifterror target support. (Manman Ren, 2016-04-11, 1 file, -0/+39)
  Differential Revision: http://reviews.llvm.org/D18716
  llvm-svn: 265997
* Swift Calling Convention: add swiftcc. (Manman Ren, 2016-04-05, 1 file, -0/+1)
  Differential Revision: http://reviews.llvm.org/D17863
  llvm-svn: 265480
* Swift Calling Convention: add swiftself attribute. (Manman Ren, 2016-03-29, 1 file, -0/+1)
  Differential Revision: http://reviews.llvm.org/D17866
  llvm-svn: 264754
* [X86] Use MCPhysReg and uint16_t for static arrays of registers and opcodes, respectively; should reduce size a tiny bit. NFC (Craig Topper, 2016-03-02, 1 file, -4/+4)
  llvm-svn: 262458