path: root/llvm/lib/Target
...
* This adds a new field isAdd to MCInstrDesc. (Sjoerd Meijer, 2016-09-13, 5 files, -49/+52)
  The ARM and Hexagon instruction descriptions now tag add instructions, and the Hexagon backend is using this to identify loop induction statements.
  Patch by Sam Parker and Sjoerd Meijer.
  Differential Revision: https://reviews.llvm.org/D23601
  llvm-svn: 281304
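  A minimal sketch of how a backend might consume the new field, assuming it is exposed through an MCInstrDesc accessor named isAdd() (the helper below is hypothetical, not part of the patch):

      #include "llvm/MC/MCInstrDesc.h"
      #include "llvm/MC/MCInstrInfo.h"

      // Hypothetical helper: ask the instruction description whether an
      // opcode is tagged as an add, e.g. when scanning for loop induction
      // updates as the Hexagon backend does.
      static bool isAddOpcode(const llvm::MCInstrInfo &MCII, unsigned Opcode) {
        return MCII.get(Opcode).isAdd(); // assumed accessor for the new field
      }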
* AVX-512: Fix for PR28175 - Scalar code optimization. (Elena Demikhovsky, 2016-09-13, 2 files, -7/+17)
  Optimized (truncate (assertzext x) to i1) and anyext i1 to i8/16/32. Optimizing these patterns is one more step towards i1 optimization on AVX-512.
  Differential Revision: https://reviews.llvm.org/D24456
  llvm-svn: 281302
* [AArch64] Support stackmap/patchpoint in getInstSizeInBytes (Diana Picus, 2016-09-13, 1 file, -4/+21)
  We currently return 4 for stackmaps and patchpoints, which is very optimistic and can in rare cases cause the branch relaxation pass to fail to relax certain branches. This patch causes getInstSizeInBytes to return a pessimistic estimate of the size as the number of bytes requested in the stackmap/patchpoint. In the future, we could provide a more accurate estimate by sharing some of the logic in AArch64::LowerSTACKMAP/PATCHPOINT.
  Fixes part of https://llvm.org/bugs/show_bug.cgi?id=28750
  Differential Revision: https://reviews.llvm.org/D24073
  llvm-svn: 281301
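  A rough sketch of the pessimistic estimate described above (the operand positions are assumptions based on the STACKMAP/PATCHPOINT operand layouts, not a quote of the patch):

      // Inside a getInstSizeInBytes-style switch: both pseudos carry the
      // number of bytes they may expand to as an immediate operand, so
      // use that as a safe upper bound instead of the optimistic 4.
      case llvm::TargetOpcode::STACKMAP:
        // Operands: <id>, <numShadowBytes>, live args...
        return MI.getOperand(1).getImm();
      case llvm::TargetOpcode::PATCHPOINT:
        // Operands: <id>, <numBytes>, <target>, ...
        return MI.getOperand(1).getImm();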
* [X86] Remove masked shufpd/shufps intrinsics and autoupgrade to native vector shuffles. (Craig Topper, 2016-09-13, 1 file, -12/+0)
  They were removed from clang previously but accidentally left in the backend.
  llvm-svn: 281300
* X86: Conditional tail calls should not have isBarrier = 1 (Hans Wennborg, 2016-09-13, 1 file, -18/+31)
  That confuses e.g. machine basic block placement, which then doesn't realize that control can fall through a block that ends with a conditional tail call. Instead, isBranch=1 should be set. Also, mark EFLAGS as used by these instructions.
  llvm-svn: 281281
* Temporarily Revert "[MC] Defer asm errors to post-statement failure" as it's causing errors on the sanitizer bots. (Eric Christopher, 2016-09-13, 7 files, -106/+223)
  This reverts commit r281249.
  llvm-svn: 281280
* Revert r281215, it caused PR30358. (Nico Weber, 2016-09-12, 2 files, -139/+5)
  llvm-svn: 281263
* [MC] Defer asm errors to post-statement failure (Nirav Dave, 2016-09-12, 7 files, -223/+106)
  Allow errors to be deferred and emitted as part of clean up to simplify and shorten Assembly parser code. This will allow error messages to be emitted in helper functions and be modified by the caller, which has better context.
  As part of this, many minor cleanups to the Parser:
  * Unify parser cleanup on error
  * Add workaround for incorrect return values in ParseDirective instances
  * Tighten checks on error-signifying return values for parser functions and fix in-tree TargetParsers to be more consistent with the changes
  * Fix AArch64 test cases checking for spurious error messages that are now fixed
  These changes should be backwards compatible with current Target Parsers so long as the error statuses are correctly returned in appropriate functions.
  Reviewers: rnk, majnemer
  Subscribers: aemerson, jyknight, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24047
  llvm-svn: 281249
* AMDGPU: Do not clobber SCC in SIWholeQuadMode (Nicolai Haehnle, 2016-09-12, 2 files, -75/+210)
  Reviewers: arsenm, tstellarAMD, mareko
  Subscribers: arsenm, llvm-commits, kzhuravl
  Differential Revision: http://reviews.llvm.org/D22198
  llvm-svn: 281230
* Revert "[ARM] Promote small global constants to constant pools"James Molloy2016-09-124-123/+1
| | | | | | This reverts commit r281213. It made a bot go bang: http://lab.llvm.org:8011/builders/clang-cmake-armv7-a15-full/builds/14625 llvm-svn: 281228
* [X86] Copy imp-uses when folding tailcall into conditional branch. (Ahmed Bougacha, 2016-09-12, 1 file, -1/+1)
  r280832 added 32-bit support for emitting conditional tail-calls, but dropped imp-used parameter registers. This went unnoticed until r281113, which added 64-bit support, as this is only exposed with parameter passing via registers. Don't drop the imp-used parameters.
  llvm-svn: 281223
* [AMDGPU] Assembler: Move disabled SDWA and DPP instructions into the Disable asm variant (Sam Kolton, 2016-09-12, 2 files, -0/+12)
  Summary: This removes disabled instructions from match tables so we will not match them at all.
  Reviewers: tstellarAMD, vpykhtin, artem.tamazov
  Subscribers: wdng, nhaehnle, arsenm
  Differential Revision: https://reviews.llvm.org/D24452
  llvm-svn: 281216
* [Thumb] Teach ISel how to lower compares of AND bitmasks efficiently (James Molloy, 2016-09-12, 2 files, -5/+139)
  For the common pattern (CMPZ (AND x, #bitmask), #0), we can do some more efficient instruction selection if the bitmask is one consecutive sequence of set bits (32 - clz(bm) - ctz(bm) == popcount(bm)).
  1) If the bitmask touches the LSB, then we can remove all the upper bits and set the flags by doing one LSLS.
  2) If the bitmask touches the MSB, then we can remove all the lower bits and set the flags with one LSRS.
  3) If the bitmask has popcount == 1 (only one set bit), we can shift that bit into the sign bit with one LSLS and change the condition query from NE/EQ to MI/PL (we could also implement this by shifting into the carry bit and branching on BCC/BCS).
  4) Otherwise, we can emit a sequence of LSLS+LSRS to remove the upper and lower zero bits of the mask.
  1-3 require only one 16-bit instruction and can elide the CMP. 4 requires two 16-bit instructions but can elide the CMP and doesn't require materializing a complex immediate, so is also a win.
  llvm-svn: 281215
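  The contiguity test from the message, restated as a standalone C++ helper (a sketch for illustration; the in-tree code presumably operates on SDNode immediates rather than raw integers):

      #include <cstdint>

      // A bitmask is one consecutive run of set bits exactly when
      // 32 - clz(bm) - ctz(bm) == popcount(bm).
      static bool isContiguousMask(uint32_t BM) {
        if (BM == 0)
          return false; // clz/ctz are undefined for 0
        return 32 - __builtin_clz(BM) - __builtin_ctz(BM) ==
               __builtin_popcount(BM);
      }
      // e.g. 0x0ff0 -> true (handled by cases 1-4); 0x0f0f -> false.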
* [ARM] Promote small global constants to constant pools (James Molloy, 2016-09-12, 4 files, -1/+123)
  If a constant is unnamed_addr and is only used within one function, we can save on the code size and runtime cost of an indirection by changing the global's storage to inside the constant pool. For example, instead of:

          ldr r0, .CPI0
          bl printf
          bx lr
      .CPI0: &format_string
      format_string: .asciz "hello, world!\n"

  We can emit:

          adr r0, .CPI0
          bl printf
          bx lr
      .CPI0: .asciz "hello, world!\n"

  This can cause significant code size savings when many small strings are used in one function (4 bytes per string).
  llvm-svn: 281213
* Fix WebAssembly broken build related to interface change in r281172. (Eric Liu, 2016-09-12, 1 file, -2/+1)
  Reviewers: bkramer
  Subscribers: jfb, llvm-commits, dschuff
  Differential Revision: https://reviews.llvm.org/D24449
  llvm-svn: 281201
* CodeGen: Give MachineBasicBlock::reverse_iterator a handle to the current MI (Duncan P. N. Exon Smith, 2016-09-11, 8 files, -39/+26)
  Now that MachineBasicBlock::reverse_instr_iterator knows when it's at the end (since r281168 and r281170), implement MachineBasicBlock::reverse_iterator directly on top of an ilist::reverse_iterator by adding an IsReverse template parameter to MachineInstrBundleIterator. This replaces another hard-to-reason-about use of std::reverse_iterator on list iterators, matching the changes for ilist::reverse_iterator from r280032 (see the "out of scope" section at the end of that commit message).
  MachineBasicBlock::reverse_iterator now has a handle to the current node and has obvious invalidation semantics. r280032 has a more detailed explanation of how list-style reverse iterators (invalidated when the pointed-at node is deleted) are different from vector-style reverse iterators like std::reverse_iterator (invalidated on every operation). A great motivating example is this commit's changes to lib/CodeGen/DeadMachineInstructionElim.cpp.
  Note: If your out-of-tree backend deletes instructions while iterating on a MachineBasicBlock::reverse_iterator or converts between MachineBasicBlock::iterator and MachineBasicBlock::reverse_iterator, you'll need to update your code in similar ways to r280032. The following table might help:

      [Old]                           ==>  [New]
      delete &*RI, RE = end()              delete &*RI++
      RI->erase(), RE = end()              RI++->erase()
      reverse_iterator(I)                  std::prev(I).getReverse()
      reverse_iterator(I)                  ++I.getReverse()
      --reverse_iterator(I)                I.getReverse()
      reverse_iterator(std::next(I))       I.getReverse()
      RI.base()                            std::prev(RI).getReverse()
      RI.base()                            ++RI.getReverse()
      --RI.base()                          RI.getReverse()
      std::next(RI).base()                 RI.getReverse()

  (For more details, have a look at r280032.)
  llvm-svn: 281172
* [AVX512] Fix pattern for vgetmantsd and all other instructions that use the same class. (Igor Breger, 2016-09-11, 1 file, -8/+1)
  Fix memory operand size, remove unnecessary pattern.
  Differential Revision: http://reviews.llvm.org/D24443
  llvm-svn: 281164
* [AVX-512] Add VPTERNLOG to load folding tables. (Craig Topper, 2016-09-11, 1 file, -0/+18)
  llvm-svn: 281156
* [X86] Make a helper method into a static function local to the cpp file. (Craig Topper, 2016-09-11, 2 files, -11/+10)
  llvm-svn: 281154
* [NVPTX] Use ldg for explicitly invariant loads. (Justin Lebar, 2016-09-11, 1 file, -13/+22)
  Summary: With this change (plus some changes to prevent !invariant from being clobbered within llvm), clang will be able to model the __ldg CUDA builtin as an invariant load, rather than as a target-specific llvm intrinsic. This will let the optimizer play with these loads -- specifically, we should be able to vectorize them in the load-store vectorizer.
  Reviewers: tra
  Subscribers: jholewinski, hfinkel, llvm-commits, chandlerc
  Differential Revision: https://reviews.llvm.org/D23477
  llvm-svn: 281152
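  For context, a hedged CUDA sketch of the kind of source this affects (illustrative only; __ldg is the CUDA read-only-cache load builtin):

      __global__ void scale(float *Out, const float *__restrict__ In, float K) {
        int I = blockIdx.x * blockDim.x + threadIdx.x;
        // With this change, clang can model the __ldg below as an ordinary
        // invariant load, letting e.g. the load-store vectorizer combine
        // it with neighboring loads.
        Out[I] = K * __ldg(&In[I]);
      }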
* [CodeGen] Split out the notions of MI invariance and MI dereferenceability. (Justin Lebar, 2016-09-11, 11 files, -41/+67)
  Summary: An IR load can be invariant, dereferenceable, neither, or both. But currently, MI's notion of invariance is IR-invariant && IR-dereferenceable. This patch splits up the notions of invariance and dereferenceability at the MI level. It's NFC, so it adds some probably-unnecessary "is-dereferenceable" checks, which we can remove later if desired.
  Reviewers: chandlerc, tstellarAMD
  Subscribers: jholewinski, arsenm, nemanjai, llvm-commits
  Differential Revision: https://reviews.llvm.org/D23371
  llvm-svn: 281151
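  A small sketch of what the split means for clients (accessor names are assumed to mirror the IR-level concepts; this is not code from the patch):

      #include "llvm/CodeGen/MachineMemOperand.h"

      // After the split, the two properties are independent: a load may be
      // invariant (memory never changes) without being dereferenceable
      // (guaranteed not to trap), and vice versa.
      static bool isSpeculatableLoad(const llvm::MachineMemOperand &MMO) {
        return MMO.isInvariant() && MMO.isDereferenceable();
      }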
* We also need to pass swifterror in R12 under swiftcc, not only under ccc (Arnold Schwaighofer, 2016-09-10, 1 file, -0/+3)
  rdar://28190687
  llvm-svn: 281138
* [AMDGPU] Refactor MUBUF/MTBUF instructions (Valery Pykhtin, 2016-09-10, 6 files, -1168/+1306)
  Differential Revision: https://reviews.llvm.org/D24295
  llvm-svn: 281137
* [WebAssembly] Fix typos in comments (Heejin Ahn, 2016-09-10, 1 file, -11/+14)
  llvm-svn: 281131
* AMDGPU: Implement is{LoadFrom|StoreTo}FrameIndex (Matt Arsenault, 2016-09-10, 6 files, -21/+90)
  llvm-svn: 281128
* AMDGPU: Fix scheduling info for spill pseudos (Matt Arsenault, 2016-09-10, 1 file, -2/+3)
  These defaulted to Write32Bit. I don't think this actually matters since these don't exist during scheduling.
  llvm-svn: 281127
* [CodeGen] Rename MachineInstr::isInvariantLoad to isDereferenceableInvariantLoad. NFC (Justin Lebar, 2016-09-10, 2 files, -2/+2)
  Summary: I want to separate out the notions of invariance and dereferenceability at the MI level, so that they correspond to the equivalent concepts at the IR level. (Currently an MI load is MI-invariant iff it's IR-invariant and IR-dereferenceable.) First step is renaming this function.
  Reviewers: chandlerc
  Subscribers: MatzeB, jfb, llvm-commits
  Differential Revision: https://reviews.llvm.org/D23370
  llvm-svn: 281125
* AMDGPU: Fix immediate folding logic when shrinking instructions (Matt Arsenault, 2016-09-09, 3 files, -16/+10)
  If the literal is being folded into src0, it doesn't matter if it's an SGPR because it's being replaced with the literal. Also fixes initially selecting 32-bit versions of some instructions, which also confused commuting.
  llvm-svn: 281117
* X86: Fold tail calls into conditional branches also for 64-bit (PR26302) (Hans Wennborg, 2016-09-09, 4 files, -12/+40)
  This extends the optimization in r280832 to also work for 64-bit. The only quirk is that we can't do this for 64-bit Windows (yet).
  Differential Revision: https://reviews.llvm.org/D24423
  llvm-svn: 281113
* AMDGPU: Run LoadStoreVectorizer pass by default (Matt Arsenault, 2016-09-09, 2 files, -1/+4)
  llvm-svn: 281112
* [X86][XOP] Fix VPERMIL2PD mask creation on 32-bit targets (Simon Pilgrim, 2016-09-09, 1 file, -5/+5)
  Use the getConstVector helper to correctly create v2i64/v4i64 constants on 32-bit targets.
  llvm-svn: 281105
* [Hexagon] Fix disassembler crash after r279255 (Krzysztof Parzyszek, 2016-09-09, 1 file, -0/+3)
  When p0 was added as an explicit operand to the duplex subinstructions, the disassembler was not updated to reflect this.
  llvm-svn: 281104
* [NVPTX] Implement llvm.fabs.f32, llvm.max.f32, etc. (Justin Lebar, 2016-09-09, 2 files, -16/+132)
  Summary: Previously these only worked via NVPTX-specific intrinsics. This change will allow us to convert these target-specific intrinsics into the general LLVM versions, allowing existing LLVM passes to reason about their behavior. It also gets us some minor codegen improvements as-is, from situations where we canonicalize code into one of these llvm intrinsics.
  Reviewers: majnemer
  Subscribers: llvm-commits, jholewinski, tra
  Differential Revision: https://reviews.llvm.org/D24300
  llvm-svn: 281092
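  A hedged CUDA illustration of the codegen path this opens up (the exact set of generic intrinsics the math calls lower to is an assumption):

      #include <cmath>

      __device__ float clampMagnitude(float X, float Limit) {
        // fabsf/fminf can now canonicalize to generic LLVM intrinsics
        // (llvm.fabs.f32 and friends) instead of NVPTX-specific ones,
        // so target-independent passes can reason about them.
        return fminf(fabsf(X), Limit);
      }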
* ARM: move the builtins libcall CC setup (Saleem Abdulrasool, 2016-09-09, 2 files, -0/+169)
  Move the target-specific setup into the target-specific lowering setup. As pointed out by Anton, the initial change was moving this too high up the stack, resulting in a violation of the layering (the target-generic code path set up target-specific bits). Sink this into the ARM-specific setup. NFC.
  llvm-svn: 281088
* AMDGPU: Fix incorrect data type for the mqsad_u32_u8 instruction. (Wei Ding, 2016-09-09, 3 files, -9/+17)
  Differential Revision: http://reviews.llvm.org/D23700
  llvm-svn: 281081
* AMDGPU/SI: Make sure llvm.amdgcn.implicitarg.ptr() is 8-byte aligned for HSA (Tom Stellard, 2016-09-09, 2 files, -1/+6)
  Reviewers: arsenm
  Subscribers: arsenm, wdng, nhaehnle, llvm-commits
  Differential Revision: https://reviews.llvm.org/D24405
  llvm-svn: 281080
* [AMDGPU] Assembler: better support for immediate literals in assembler. (Sam Kolton, 2016-09-09, 14 files, -351/+708)
  Summary: Previously the assembler parsed all literals as either 32-bit integers or 32-bit floating-point values. Because of this we couldn't support f64 literals. E.g. in the instruction "v_fract_f64 v[0:1], 0.5", the literal 0.5 was encoded as the 32-bit literal 0x3f000000, which is incorrect and will be interpreted as 3.0517578125E-5 instead of 0.5. The correct encoding is inline constant 240 (optimal) or at least the 32-bit literal 0x3FE00000.
  With this change the way immediate literals are parsed is changed. All literals are always parsed as 64-bit values, either integer or floating-point. Then we convert parsed literals to the correct form based on information about the type of operand parsed (was the literal floating or binary) and the type of expected instruction operands (is this an f32/64 or b32/64 instruction).
  Here are the rules for how we convert literals:
  - We parsed an fp literal:
    - Instruction expects a 64-bit operand:
      - If the parsed literal is inlinable (e.g. v_fract_f64_e32 v[0:1], 0.5), then we do nothing with this literal
      - Else if the literal is not inlinable but the instruction requires inlining it (e.g. this is an e64 encoding, v_fract_f64_e64 v[0:1], 1.5), report an error
      - Else the literal is not inlinable but we can encode it as an additional 32-bit literal constant:
        - If the instruction expects an fp operand type (f64): check if the low 32 bits of the literal are zeroes (e.g. v_fract_f64 v[0:1], 1.5). If so, do nothing. Else (e.g. v_fract_f64 v[0:1], 3.1415), report a warning that the low 32 bits will be set to zeroes and precision will be lost, and set the low 32 bits of the literal to zeroes
        - If the instruction expects an integer operand type (e.g. s_mov_b64_e32 s[0:1], 1.5), report an error, as it is unclear how to encode this literal
    - Instruction expects a 32-bit operand:
      - Convert the parsed 64-bit fp literal to 32-bit fp. Allow loss of precision but not overflow or underflow
      - If this literal is inlinable and we are required to inline the literal (e.g. v_trunc_f32_e64 v0, 0.5), do nothing; else report an error
      - Otherwise do nothing. We can encode any other 32-bit fp literal (e.g. v_trunc_f32 v0, 10000000.0)
  - We parsed a binary literal:
    - If this literal is inlinable (e.g. v_trunc_f32_e32 v0, 35), do nothing
    - Else, if we are required to inline this literal (e.g. v_trunc_f32_e64 v0, 35), report an error
    - Else, the literal is not inlinable and we are not required to inline it:
      - If the high 32 bits of the literal are zeroes or the same as the sign bit (32-bit), do nothing (e.g. v_trunc_f32 v0, 0xdeadbeef)
      - Else, report an error (e.g. v_trunc_f32 v0, 0x123456789abcdef0)
  For this change it is required that we know the operand types of an instruction (are they f32/64 or b32/64). I added several new register operands (they extend previous register operands) and set operand types to corresponding types:

      enum OperandType {
        OPERAND_REG_IMM32_INT,
        OPERAND_REG_IMM32_FP,
        OPERAND_REG_INLINE_C_INT,
        OPERAND_REG_INLINE_C_FP,
      }

  This is not working yet:
  - Several tests are failing
  - Problems with predicate methods for inline immediates
  - LLVM-generated assembler parts try to select the e64 encoding before e32. More changes are required for several AsmOperands.
  Reviewers: vpykhtin, tstellarAMD
  Subscribers: arsenm, kzhuravl, artem.tamazov
  Differential Revision: https://reviews.llvm.org/D22922
  llvm-svn: 281050
* [Sparc][LEON] Removed the parts of the errata fixes implemented using inline assembly, as this is not the desired behaviour for end-users. (Chris Dewhurst, 2016-09-09, 1 file, -76/+0)
  Small change to a unit test to implement this without requiring the inline assembly.
  llvm-svn: 281047
* [ARM] ADD with a negative offset can become SUB for free (James Molloy, 2016-09-09, 1 file, -0/+4)
  So model that directly in TTI::getIntImmCost().
  llvm-svn: 281044
* [ARM] icmp %x, -C can be lowered to a simple ADDS or CMN (James Molloy, 2016-09-09, 1 file, -0/+11)
  Tell TargetTransformInfo about this so ConstantHoisting is informed.
  llvm-svn: 281043
* [Thumb] Select (CMPZ X, -C) -> (CMPZ (ADDS X, C), 0) (James Molloy, 2016-09-09, 1 file, -0/+42)
  The CMPZ #0 disappears during peepholing, leaving just a tADDi3, tADDi8 or t2ADDri. This avoids having to materialize the expensive negative constant in Thumb-1, and allows a shrinking from a 32-bit CMN to a 16-bit ADDS in Thumb-2.
  llvm-svn: 281040
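  A sketch of the effect at the source level (the exact instructions chosen are illustrative assumptions):

      int isMinus42(int X) {
        return X == -42; // compare against a negative immediate
      }
      // Before: materialize -42 in a register (expensive in Thumb-1),
      //         then compare against it.
      // After:  adds r1, r0, #42  ; X + 42 sets Z exactly when X == -42,
      //                           ; and the CMPZ #0 is peepholed away.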
* GlobalISel: remove G_TYPE and G_PHI (Tim Northover, 2016-09-09, 1 file, -10/+0)
  These instructions were only necessary when type information was stored in the MachineInstr (because only generic MachineInstrs possessed a type). Now that it's in MachineRegisterInfo, COPY and PHI work fine.
  llvm-svn: 281037
* GlobalISel: move type information to MachineRegisterInfo. (Tim Northover, 2016-09-09, 3 files, -18/+16)
  We want each register to have a canonical type, which means the best place to store this is in MachineRegisterInfo rather than on every MachineInstr that happens to use or define that register.
  Most changes following from this are pretty simple (you need an MRI anyway if you're going to be doing any transformations, so just check the type there). But legalization doesn't really want to check redundant operands (when, for example, a G_ADD only ever has one type) so I've made use of MCInstrDesc's operand type field to encode these constraints and limit legalization's work.
  As an added bonus, more validation is possible, both in MachineVerifier and MachineIRBuilder (coming soon).
  llvm-svn: 281035
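  A minimal sketch of the new lookup, assuming the MachineRegisterInfo accessor is named getType() (illustrative, not quoted from the patch):

      #include "llvm/CodeGen/MachineRegisterInfo.h"

      // Each virtual register now has one canonical low-level type stored
      // in MRI, instead of a type attached to every generic MachineInstr.
      static llvm::LLT regType(const llvm::MachineRegisterInfo &MRI,
                               unsigned Reg) {
        return MRI.getType(Reg);
      }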
* Revert "[mips] Fix c.<cc>.<fmt> instruction definition."Simon Dardis2016-09-0915-539/+209
| | | | | | | This reverts commit r281022. Mips buildbot broke, due to unhandled register class FCC. llvm-svn: 281033
* [AMDGPU] Assembler: rename amd_kernel_code_t asm names according to spec (Sam Kolton, 2016-09-09, 3 files, -242/+85)
  Summary: Also removed duplicate code from AMDGPUTargetAsmStreamer. This change only changes how amd_kernel_code_t is parsed and printed. No variable names are changed.
  Reviewers: vpykhtin, tstellarAMD
  Subscribers: arsenm, wdng, nhaehnle
  Differential Revision: https://reviews.llvm.org/D24296
  llvm-svn: 281028
* [Thumb1] Teach optimizeCompareInstr about thumb1 compares (James Molloy, 2016-09-09, 1 file, -4/+21)
  This avoids us doing a completely unneeded "cmp r0, #0" after a flag-setting instruction if we only care about the Z or C flags. Add LSL/LSR to the whitelist while we're here and add testing. This code could really do with a spring clean.
  llvm-svn: 281027
* [AMDGPU] Assembler: match e32 VOP instructions before e64. (Sam Kolton, 2016-09-09, 7 files, -32/+126)
  Summary: Split the assembler match table into 4 tables, one per assembler variant:
  - Default: all instructions except VOP3, SDWA and DPP
  - VOP3
  - SDWA
  - DPP
  First match the Default table, then VOP3, SDWA and DPP.
  Reviewers: tstellarAMD, artem.tamazov, vpykhtin
  Subscribers: arsenm, wdng, nhaehnle, AMDGPU
  Differential Revision: https://reviews.llvm.org/D24252
  llvm-svn: 281023
* [mips] Fix c.<cc>.<fmt> instruction definition. (Simon Dardis, 2016-09-09, 15 files, -209/+539)
  As part of this effort, remove MipsFCmp nodes and use tablegen patterns rather than custom lowering through C++. Unexpectedly, this improves codesize for microMIPS, as previous floating point setcc expansions would materialize 0 and 1 into GPRs before using the relevant mov[tf].[sd] instruction. Now $zero is used directly.
  Reviewers: dsanders, vkalintiris, zoran.jovanovic
  Differential Revision: https://reviews.llvm.org/D23118
  llvm-svn: 281022
* [AVX-512] Add VPCMP instructions to the load folding tables and make them commutable. (Craig Topper, 2016-09-09, 2 files, -1/+57)
  llvm-svn: 281013
* [X86] Tighten up a comment which confused x64 ABI terminology. (David Majnemer, 2016-09-09, 1 file, -3/+3)
  The x64 ABI has two major function types:
  - frame functions
  - leaf functions
  A frame function is one which requires a stack frame. A leaf function is one which does not. A frame function may or may not have a frame pointer.
  A leaf function does not require a stack frame and may never modify SP except via a return (RET, tail call via JMP).
  A frame function which has a frame pointer is permitted to use the LEA instruction in the epilogue; a frame function which doesn't establish a frame pointer must use ADD to adjust the stack pointer in the epilogue.
  Fun fact: Leaf functions don't require a function table entry (associated PDATA/XDATA).
  llvm-svn: 281006
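  A minimal C++ illustration of the two function types (hypothetical example, not from the commit):

      // Leaf function: needs no stack frame, never modifies SP except via
      // RET, and requires no function table entry (PDATA/XDATA).
      int leaf(int X) { return X + 1; }

      // Frame function: requires a stack frame (and unwind data); it may
      // or may not also establish a frame pointer.
      int frame(int X) {
        int Buf[64] = {};
        Buf[X & 63] = X;
        return Buf[0];
      }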