path: root/llvm/lib/Target
* [X86] Do not use AND8ri8 in AVX512 pattern (Michael Kuperstein, 2016-07-21, 1 file, -1/+1)
  This variant is (as documented in the TD) for disassembler use only, and should not be used in patterns - it is longer, and is broken on 64-bit.
  llvm-svn: 276347
* [AArch64][Inline-Asm] Return the 32-bit floating point register class (Akira Hatanaka, 2016-07-21, 1 file, -1/+1)
  when constraint "w" is used on a 32-bit operand. This enables compiling the following code, which used to error out in the backend:
    void foo1(int a) { asm volatile ("sqxtn h0, %s0\n" : : "w"(a):); }
  Fixes PR28633.
  llvm-svn: 276344
* [AMDGPU] Emit read-only data to .rodata for hsa (Konstantin Zhuravlyov, 2016-07-21, 1 file, -1/+2)
  Differential Revision: https://reviews.llvm.org/D22538
  llvm-svn: 276298
* AMDGPU/SI: Add support for R_AMDGPU_ABS32 (Konstantin Zhuravlyov, 2016-07-21, 1 file, -0/+1)
  Differential Revision: https://reviews.llvm.org/D21646
  llvm-svn: 276294
* [AArch64] Load/store opt: Don't count transient instructions towards search limits. (Geoff Berry, 2016-07-21, 1 file, -15/+14)
  Summary: This change also changes findMatchingInsn and findMatchingUpdateInsnForward to take DBG_VALUE opcodes into account when tracking register defs and uses, which could potentially inhibit these optimizations in the presence of debug information.
  Reviewers: mcrosier
  Subscribers: aemerson, rengolin, mcrosier, llvm-commits
  Differential Revision: https://reviews.llvm.org/D22582
  llvm-svn: 276293
* [X86][SSE] Allow folding of store/zext with PEXTRW of 0'th element (Simon Pilgrim, 2016-07-21, 1 file, -6/+15)
  Under normal circumstances we prefer the higher performance MOVD to extract the 0'th element of a v8i16 vector instead of PEXTRW. But as detailed on PR27265, this prevents the SSE41 implementation of PEXTRW from folding the store of the 0'th element. Additionally it prevents us from making use of the fact that the (SSE2) reg-reg version of PEXTRW implicitly zero-extends the i16 element to the i32/i64 destination register.
  This patch only preferentially lowers to MOVD if we will not be zero-extending the extracted i16, nor prevent a store from being folded (on SSE41).
  Fix for PR27265.
  Differential Revision: https://reviews.llvm.org/D22509
  llvm-svn: 276289
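  Illustrative sketch (not part of the commit; the function names are made up) of the two source-level cases the patch distinguishes:
    #include <emmintrin.h>
    #include <stdint.h>

    // Store of the 0'th element: with SSE4.1 the store can now fold into PEXTRW.
    void store_elt0(__m128i v, uint16_t *p) {
      *p = static_cast<uint16_t>(_mm_extract_epi16(v, 0));
    }

    // Zero-extended extract: the reg-reg (SSE2) PEXTRW already zero-extends the
    // i16 into the i32 destination register, so preferring MOVD here loses that.
    uint32_t zext_elt0(__m128i v) {
      return static_cast<uint32_t>(_mm_extract_epi16(v, 0));
    }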
* [X86][SSE] Pull out duplicate EXTRW lowering code. NFCI. (Simon Pilgrim, 2016-07-21, 1 file, -26/+16)
  As requested on D22509, I've pulled out the v8i16 extraction lowering as the SSE41 and pre-SSE41 implementations are effectively the same.
  llvm-svn: 276285
* [X86][AVX] Added support for lowering to VBROADCASTF128/VBROADCASTI128 (Simon Pilgrim, 2016-07-21, 3 files, -5/+58)
  As reported on PR26235, we don't currently make use of the VBROADCASTF128/VBROADCASTI128 instructions (or the AVX512 equivalents) to load+splat a 128-bit vector to both lanes of a 256-bit vector.
  This patch enables lowering from subvector insertion/concatenation patterns and auto-upgrades the llvm.x86.avx.vbroadcastf128.pd.256 / llvm.x86.avx.vbroadcastf128.ps.256 intrinsics to match.
  We could possibly investigate using VBROADCASTF128/VBROADCASTI128 to load repeated constants as well (similar to how we already do for scalar broadcasts).
  Differential Revision: https://reviews.llvm.org/D22460
  llvm-svn: 276281
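  A minimal sketch, assuming intrinsic-level C++ (not taken from the patch), of the load+splat pattern that can now lower to VBROADCASTF128:
    #include <immintrin.h>

    // Load a 128-bit vector and splat it into both 128-bit lanes of a 256-bit
    // vector; the insert/concat pattern is what the new lowering recognizes.
    __m256 splat_128(const float *p) {
      __m128 lo = _mm_loadu_ps(p);
      return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), lo, 1);
    }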
* [AMDGPU] Some code cleaning in SIRegisterInfo.td (Sam Kolton, 2016-07-21, 1 file, -33/+23)
  Reviewers: tstellarAMD, vpykhtin
  Subscribers: arsenm, kzhuravl
  Differential Revision: https://reviews.llvm.org/D22620
  llvm-svn: 276274
* AMDGPU: Fix phis from blocks split due to register indexing (Matt Arsenault, 2016-07-21, 1 file, -15/+22)
  llvm-svn: 276257
* X86InstrInfo: No need for liveness analysis in classifyLEAReg() (Matthias Braun, 2016-07-21, 1 file, -18/+2)
  classifyLEAReg() deals with switching operands from 32bit to 64bit in order to use a LEA64_32 instruction (for three address code goodness). It currently performs a liveness analysis to determine the kill/undef flag for the newly added operand. This should not be necessary:
  - If the previous operand had a kill flag, then the 32bit part of the register gets killed; this kills the super register as well.
  - If the previous operand had an undef flag, then we didn't care what value we read; just use the same flag on the new operand. (An operand with an undef flag won't affect liveness in any case.)
  This makes the code independent of the presence of kill flags because it avoids a call to MachineBasicBlock::computeRegisterLiveness().
  Differential Revision: http://reviews.llvm.org/D22283
  llvm-svn: 276222
* [NVPTX] Enable the load-store vectorizer on nvptx. (Justin Lebar, 2016-07-20, 2 files, -1/+11)
  Reviewers: tra
  Subscribers: jholewinski, arsenm, asbirlea
  Differential Revision: https://reviews.llvm.org/D22592
  llvm-svn: 276196
* [AArch64] Register AArch64LoadStoreOptimizer so it can be run by llc -run-pass. NFCI. (Geoff Berry, 2016-07-20, 4 files, -8/+2)
  llvm-svn: 276193
* [NVPTX] Renamed NVPTXLowerKernelArgs -> NVPTXLowerArgs. NFC. (Artem Belevich, 2016-07-20, 4 files, -21/+21)
  After r276153 the pass applies to both kernels and regular functions.
  Differential Revision: https://reviews.llvm.org/D22583
  llvm-svn: 276189
* [AArch64][FastISel] Select -O0 legal cmpxchg. (Ahmed Bougacha, 2016-07-20, 1 file, -0/+55)
  At -O0, cmpxchg survives AtomicExpand: it's mostly straightforward to select it in fast-isel, and let the pseudo be expanded later. extractvalues on the result are the tricky part: the generic logic only works for legal types (and it would be painful to make it support illegal types), so we can only support i32/i64 cmpxchg.
  llvm-svn: 276183
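  A hedged C++ sketch of the kind of source that now goes through FastISel at -O0 (only 32- and 64-bit widths are supported, since the extractvalue handling needs legal types):
    #include <atomic>

    // Compiles to a cmpxchg on the i32 value plus extractvalues of the
    // result pair (old value, success flag).
    bool try_update(std::atomic<int> &a, int expected, int desired) {
      return a.compare_exchange_strong(expected, desired);
    }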
* [AArch64][FastISel] Select atomic stores into STLR. (Ahmed Bougacha, 2016-07-20, 1 file, -3/+40)
  llvm-svn: 276182
* GlobalISel: implement low-level type with just size & vector lanes. (Tim Northover, 2016-07-20, 2 files, -0/+27)
  This should be all the low-level instruction selection needs to determine how to implement an operation, with the remaining context taken from the opcode (e.g. G_ADD vs G_FADD) or other flags not based on type (e.g. fast-math).
  llvm-svn: 276158
* [NVPTX] deal with all aggregate return types. (Artem Belevich, 2016-07-20, 1 file, -6/+6)
  Fixes a crash in llvm_unreachable when a function has an array return type.
  Differential Revision: https://reviews.llvm.org/D22524
  llvm-svn: 276154
* [NVPTX] Improve lowering of byval args of device functions. (Artem Belevich, 2016-07-20, 3 files, -32/+45)
  Avoid unnecessary spills of byval arguments of device functions to local space at the SASS level, and the subsequent pointer conversion to the generic address space. Instead, make a local copy in IR, provide a way to access arguments directly, and let LLVM optimize the copy away when possible.
  Differential Revision: https://reviews.llvm.org/D21421
  llvm-svn: 276153
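  Rough illustration (plain C++ standing in for device-side code; the names are hypothetical) of the affected shape: a by-value aggregate argument whose address is taken:
    struct Pair { int a; int b; };

    int sum_ptr(const Pair *p) { return p->a + p->b; }

    // Taking the address of the by-value argument used to force a spill to
    // local space plus a conversion to the generic address space; the patch
    // makes a local copy in IR instead, which LLVM can often optimize away.
    int use_byval(Pair p) {
      return sum_ptr(&p);
    }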
* AMDGPU: Fix bug causing crash due to invalid opencl version metadata. (Yaxun Liu, 2016-07-20, 1 file, -9/+13)
  Differential Revision: https://reviews.llvm.org/D22526
  llvm-svn: 276119
* [X86][SSE] Add cost model values for CTPOP of vectors (Simon Pilgrim, 2016-07-20, 2 files, -4/+29)
  This patch adds costs for the vectorized implementations of CTPOP; the default values were seriously underestimating the cost of these and were encouraging vectorization on targets where serialized use of POPCNT would be much better.
  Differential Revision: https://reviews.llvm.org/D22456
  llvm-svn: 276104
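  Hedged sketch of the sort of loop whose vectorization the old costs over-encouraged; on targets with fast scalar POPCNT, the serialized scalar form can beat the vectorized CTPOP expansion:
    #include <stdint.h>

    // Candidate loop for the vectorizer: each iteration is a population count.
    uint32_t popcount_sum(const uint32_t *v, int n) {
      uint32_t sum = 0;
      for (int i = 0; i < n; ++i)
        sum += __builtin_popcount(v[i]);
      return sum;
    }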
* [ARM] Skip inline asm memory operands in DAGToDAGISel (Diana Picus, 2016-07-20, 1 file, -0/+11)
  Retry r275776 (no changes, we suspect the issue was with another commit).
  The current logic for handling inline asm operands in DAGToDAGISel interprets the operands by looking for constants, which should represent the flags describing the kind of operand we're dealing with (immediate, memory, register def etc). The operands representing actual data are skipped only if they are non-const, with the exception of immediate operands which are skipped explicitly when a flag describing an immediate is found.
  The oversight is that memory operands may be const too (e.g. for device drivers reading a fixed address), so we should explicitly skip the operand following a flag describing a memory operand. If we don't, we risk interpreting that constant as a flag, which is definitely not intended.
  Fixes PR26038
  Differential Revision: https://reviews.llvm.org/D22103
  llvm-svn: 276101
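  Hedged sketch of the PR26038 pattern for an ARM target (the address and names are made up): an inline asm memory operand whose address is a compile-time constant, which the selector previously misread as an operand flag:
    // Hypothetical fixed device register address, for illustration only.
    #define MMIO_REG (*(volatile unsigned *)0x40021000)

    unsigned read_fixed_mmio() {
      unsigned value;
      asm volatile("ldr %0, %1" : "=r"(value) : "m"(MMIO_REG));
      return value;
    }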
* [AVX512] Add a missing NoVLX to give priority to the AVX512 version of the pattern. (Craig Topper, 2016-07-20, 1 file, -1/+1)
  llvm-svn: 276088
* [X86] Use 'HasAVX1Only' to properly give priority to the AVX2 version without relying on file ordering. (Craig Topper, 2016-07-20, 1 file, -1/+1)
  llvm-svn: 276087
* [X86] Create some multiclasses to reduce the repeated patterns for VEXTRACT(F/I)128/VINSERT(I/F)128. NFC (Craig Topper, 2016-07-20, 1 file, -171/+43)
  llvm-svn: 276086
* [X86] Create some wrapper multiclasses to create AVX and SSE shift instructions with less repeated code. NFC (Craig Topper, 2016-07-20, 1 file, -172/+90)
  llvm-svn: 276085
* Revert "Disable this-return argument forwarding on ARM/AArch64"David Majnemer2016-07-202-16/+2
| | | | | | | | | Inference of the 'returned' attribute was fixed in r276008, lets try turning the backend support back on. This reverts commit r275677. llvm-svn: 276081
* AMDGPU: Change fdiv lowering based on !fpmath metadata (Matt Arsenault, 2016-07-19, 8 files, -49/+227)
  If 2.5 ulp is acceptable, denormals are not required, and the operation isn't a reciprocal (which is already handled), replace it with a faster fdiv.
  Simplify the lowering tests by using per-function subtarget features.
  llvm-svn: 276051
* [AArch64] Properly validate the reciprocal estimation. (Evandro Menezes, 2016-07-19, 1 file, -0/+6)
  Add check for legal data types when expanding into a Newton series.
  Differential Revision: https://reviews.llvm.org/D22267
  llvm-svn: 276041
* [AMDGPU] Remove spurious line (should've been removed in r276029). (Davide Italiano, 2016-07-19, 1 file, -3/+0)
  llvm-svn: 276030
* [AMDGPU] Remove dead code. (Davide Italiano, 2016-07-19, 1 file, -25/+0)
  LGTM'd by Matt Arsenault.
  llvm-svn: 276029
* ARM: move feature for Thumb2 pkhbt/pkhtb onto architectures. (Tim Northover, 2016-07-19, 1 file, -32/+14)
  There's not much functional change, but it really is an architectural feature (on v6T2, v7A, v7R and v7EM) rather than something each CPU implements individually. The main functional change is the default behaviour you get when specifying only "-triple".
  llvm-svn: 276013
* AMDGPU: Only use legal inline immediates with kill pseudo (Matt Arsenault, 2016-07-19, 5 files, -3/+15)
  Only the sign of the value matters (negative or positive), so use a constant that doesn't require an instruction to materialize. These should really just emit the write to exec directly, but for now stick with the kill pseudo-terminator.
  llvm-svn: 275988
* [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR (Simon Pilgrim, 2016-07-19, 1 file, -8/+23)
  D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead. It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NaN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding, which converts them to either zero or UNDEF. This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).
  This patch changes both scalar and packed versions back to using x86-specific builtins. It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.
  A companion clang patch is at D22105.
  Differential Revision: https://reviews.llvm.org/D22106
  llvm-svn: 275981
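  Small hedged example of the behaviour the generic IR form cannot model: out-of-range, INF and NaN inputs to CVTTPS2DQ are defined to produce 0x80000000, whereas the generic IR form folds them to zero or UNDEF:
    #include <emmintrin.h>
    #include <cmath>
    #include <cstdio>

    int main() {
      __m128 v = _mm_set_ps(NAN, INFINITY, 3.0e10f, 1.5f);  // args are lanes 3..0
      __m128i r = _mm_cvttps_epi32(v);                      // CVTTPS2DQ
      alignas(16) int out[4];
      _mm_store_si128(reinterpret_cast<__m128i *>(out), r);
      // Expected, lane 0 to 3: 1, 0x80000000, 0x80000000, 0x80000000
      for (int i = 0; i < 4; ++i)
        std::printf("0x%08x\n", static_cast<unsigned>(out[i]));
      return 0;
    }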
* [ARM] Refactor Thumb2 Mul and Mla instr descs (Sam Parker, 2016-07-19, 1 file, -327/+143)
  Recommitting after r274347 was reverted. This patch introduces some classes to refactor the 3 and 4 register Thumb2 multiplication instruction descriptions, plus improved tests for some of those instructions.
  Differential Revision: https://reviews.llvm.org/D21929
  llvm-svn: 275979
* [AArch64] PredictableSelectIsExpensive for Vulcan. (Pankaj Gode, 2016-07-19, 1 file, -0/+1)
  Adding PredictableSelectIsExpensive for Vulcan.
  Differential Revision: https://reviews.llvm.org/D22448
  llvm-svn: 275978
* Add support for tlsldm assembler operator to ARM target (Peter Smith, 2016-07-19, 1 file, -0/+3)
  The standard local dynamic model for TLS on ARM systems needs two relocations:
  - R_ARM_TLS_LDM32 (module idx)
  - R_ARM_TLS_LDO32 (offset of object from origin of module TLS block)
  In GNU style assembler we use symbol(tlsldm) and symbol(tlsldo) to produce these relocations. llvm-mc for ARM supports symbol(tlsldo) but does not support symbol(tlsldm). This patch wires up the existing symbol(tlsldm) to R_ARM_TLS_LDM32.
  TLS for ARM is defined in Addenda to, and Errata in, the ABI for the ARM Architecture.
  Differential Revision: https://reviews.llvm.org/D22461
  llvm-svn: 275977
* [mips][ias] R_MIPS_GOT_(PAGE|OFST) do not need symbols (Daniel Sanders, 2016-07-19, 1 file, -4/+4)
  Reviewers: sdardis
  Subscribers: dsanders, llvm-commits, sdardis
  Differential Revision: https://reviews.llvm.org/D22458
  llvm-svn: 275968
* [mips] Correct label prefixes for N32 and N64. (Daniel Sanders, 2016-07-19, 2 files, -3/+13)
  Summary: N32 and N64 follow the standard ELF conventions (.L) whereas O32 uses its own ($). This fixes the majority of object differences between -fintegrated-as and -fno-integrated-as.
  Reviewers: sdardis
  Subscribers: dsanders, sdardis, llvm-commits
  Differential Revision: https://reviews.llvm.org/D22412
  llvm-svn: 275967
* AVX-512: Fixed BT instruction selection. (Elena Demikhovsky, 2016-07-19, 1 file, -25/+55)
  The condition expression (a >> n) & 1 is converted to a "bt a, n" instruction. It works on all Intel targets, but on AVX-512 it was broken because the expression is modified to (truncate (a >> n) to i1). I added the new sequence (truncate (a >> n) to i1) to the BT pattern.
  Differential Revision: https://reviews.llvm.org/D22354
  llvm-svn: 275950
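  For reference, a minimal sketch of the source idiom involved (not from the patch); on x86-64 this typically selects to a bt plus setc:
    #include <stdint.h>

    // (a >> n) & 1 is the expression that should match the BT pattern.
    bool test_bit(uint64_t a, unsigned n) {
      return (a >> n) & 1;
    }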
* [AVX512] Give priority to EVEX encoded PSHUFB over the VEX versions. (Craig Topper, 2016-07-19, 1 file, -6/+16)
  llvm-svn: 275942
* [X86] Remove superfluous parameter from a multiclass. All instantiations passed the same value. (Craig Topper, 2016-07-19, 1 file, -9/+8)
  llvm-svn: 275941
* [X86] Rename VINSERTzrr to use a capital Z to match other instructions. NFC (Craig Topper, 2016-07-19, 2 files, -4/+4)
  llvm-svn: 275939
* AMDGPU/SI: Fix SI scheduler refcount issue (Matt Arsenault, 2016-07-19, 1 file, -0/+3)
  Without this fix, releaseSuccessors when InOrOutBlock is false could release SUs outside the schedule BasicBlock.
  Patch by Axel Davy
  llvm-svn: 275935
* AMDGPU: Expand register indexing pseudos in custom inserter (Matt Arsenault, 2016-07-19, 8 files, -300/+451)
  This is to help move SILowerControlFlow to before regalloc. There are a couple of tradeoffs with this. The complete CFG is visible to more passes, the loop body avoids an extra copy of m0, vcc isn't required, and immediate offsets can be shrunk into s_movk_i32.
  The disadvantage is that the register allocator doesn't understand that the single lane's vector is dead within the loop body, so an extra register is used to outlive the loop block when expanding the VGPR -> m0 loop. This also now results in worse waitcnt insertion before the loop instead of after for pending operations at the point of the indexing, but that should be fixed by future improvements to cross-block waitcnt insertion.
  v_movreld_b32's operands are now modeled more correctly since vdst is not a true output. This is kind of a hack to treat vdst as a use operand. Extra checking is required in the verifier since I can't seem to get tablegen to emit an implicit operand for a virtual register.
  llvm-svn: 275934
* [NVPTX] Make sure we adjust alignment at all call sites (Artem Belevich, 2016-07-18, 1 file, -2/+1)
  ... including calls from kernel functions that were ignored by mistake before.
  llvm-svn: 275920
* [NVPTX] Force minimum alignment of 4 for byval arguments of device-side functions. (Artem Belevich, 2016-07-18, 2 files, -6/+23)
  Taking the address of a byval variable in PTX is legal, but currently runs into miscompilation by ptxas on sm_50+ (NVIDIA issue 1789042). Work around the issue by enforcing minimum alignment on byval arguments of device functions.
  The change is a no-op at the SASS level for sm_3x, where ptxas already aligns the local copy by at least 4.
  Differential Revision: https://reviews.llvm.org/D22428
  llvm-svn: 275893
* Revert "[ARM] Skip inline asm memory operands in DAGToDAGISel"Vitaly Buka2016-07-181-11/+0
| | | | | | | | Breaks asan, see https://reviews.llvm.org/D22103 This reverts commit r275776. llvm-svn: 275890
* AMDGPU: Remove pointless dyn_cast_or_null (Matt Arsenault, 2016-07-18, 1 file, -4/+3)
  This is already cast above, so it is non-null.
  llvm-svn: 275881
* AMDGPU: Fix missing switch case warning (Matt Arsenault, 2016-07-18, 1 file, -0/+1)
  llvm-svn: 275873