path: root/llvm/lib/Target
Commit message (Author, Age, Files, Lines)
* [ARM] Refactor the prologue/epilogue emission to be more robust. (Quentin Colombet, 2015-07-20, 4 files, -118/+219)
  This is the first step toward supporting shrink-wrapping for this target.
  The changes can be summarized by these items:
  - Expand the tail-call return as part of the expand-pseudo pass.
  - Get rid of the assumptions that the epilogue is the exit block:
    * Do not assume which registers are free in the epilogue. (This
      indirectly improves the lowering of the code for segmented stacks;
      see the test cases.)
    * Take into account that the basic block can be empty.
  Related to <rdar://problem/20821730>
  llvm-svn: 242714
* [NVPTX] Make loads from read-only global memory use ldg (Jingyue Wu, 2015-07-20, 1 file, -0/+36)
  Summary:
  As described in [1], ld.global.nc may be used by nvcc to load memory when
  __restrict__ is used and the compiler can detect that the read-only data
  cache is safe to use.

  This patch checks whether ldg is safe to use and uses it to replace
  ld.global when possible. This change improves performance by 18-29% on
  the affected kernels (ratt*_kernel and rwdot*_kernel) in the S3D
  benchmark of SHOC [2].

  Patched by Xuetian Weng.

  [1] http://docs.nvidia.com/cuda/kepler-tuning-guide/#read-only-data-cache
  [2] https://github.com/vetter/shoc

  Test Plan: test/CodeGen/NVPTX/load-with-non-coherent-cache.ll
  Reviewers: jholewinski, jingyue
  Subscribers: jholewinski, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11314
  llvm-svn: 242713
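  A minimal sketch of the kind of safety check this describes, using
  2015-era SelectionDAG APIs; the helper name and exact conditions are
  illustrative assumptions, not the actual patch:

    // Hypothetical helper: ld.global.nc is only safe for loads from the
    // global address space that are known not to be written during the
    // kernel's execution.
    static bool canUseLdg(const llvm::LoadSDNode *LD) {
      if (LD->getAddressSpace() != llvm::ADDRESS_SPACE_GLOBAL)
        return false;            // not global memory
      return LD->isInvariant();  // e.g. load tagged !invariant.load
    }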
* [Hexagon] Generate MUX from conditional transfers when dot-new not possible (Krzysztof Parzyszek, 2015-07-20, 3 files, -0/+328)
  llvm-svn: 242711
* [ImplicitNullChecks] Work with implicit defs. (Sanjoy Das, 2015-07-20, 1 file, -1/+4)
  Summary:
  This change generalizes the implicit null checks pass to work with
  instructions that don't have any explicit register defs. This lets us use
  X86's `cmp` against memory as faulting load instructions.
  Reviewers: reames, JosephTremoulet
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11286
  llvm-svn: 242703
* [AArch64] Change EON pattern to match more often. (Chad Rosier, 2015-07-20, 1 file, -1/+1)
  Phabricator: http://reviews.llvm.org/D11359
  Patch by Geoff Berry <gberry@codeaurora.org>
  llvm-svn: 242694
* AMDGPU/SI: Add VI patterns to select FLAT instructions for global memory ops (Tom Stellard, 2015-07-20, 4 files, -6/+70)
  Summary:
  The MUBUF addr64 bit has been removed on VI, so we must use FLAT
  instructions when the pointer is stored in VGPRs.
  Reviewers: arsenm
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11067
  llvm-svn: 242673
* [mips] Added support for the ERETNC instruction. (Vasileios Kalintiris, 2015-07-20, 3 files, -6/+10)
  Summary: This required adding the instruction predicate HasMips32r5.
  Patch by Scott Egerton.
  Reviewers: dsanders, vkalintiris
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11136
  llvm-svn: 242666
* [X86][SSE] Reordered cast vectorization costs. NFCI. (Simon Pilgrim, 2015-07-19, 1 file, -47/+48)
  Reordered the data tables at the top and placed the lookups after. This is
  the first stage in the yak shaving necessary to get more accurate costs
  for a variety of targets, given the recent improvements to
  SINT_TO_FP/UINT_TO_FP/SIGN_EXTEND vector lowering.
  llvm-svn: 242643
* [X86] Add support for tbyte memory operand size for Intel-syntax x86 assembly (Michael Kuperstein, 2015-07-19, 1 file, -0/+1)
  Differential Revision: http://reviews.llvm.org/D11257
  Patch by: marina.yatsina@intel.com
  llvm-svn: 242639
* Remove TargetInstrInfo::canFoldMemoryOperand (Simon Pilgrim, 2015-07-19, 4 files, -68/+0)
  canFoldMemoryOperand is not actually used anywhere in the codebase - all
  existing users instead call foldMemoryOperand directly when they wish to
  fold, and can correctly deduce what they need from the return value.

  This patch removes the canFoldMemoryOperand base function and the target
  implementations; only x86 had a real (bit-rotted) implementation, although
  AMDGPU had a preparatory stub that had never needed to be completed.

  Differential Revision: http://reviews.llvm.org/D11331
  llvm-svn: 242638
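  A sketch of the call-site pattern the message alludes to, assuming the
  2015-era foldMemoryOperand signature (which returns null on failure);
  variable names are illustrative:

    // Attempt the fold directly and test the result; a null return means
    // folding was not possible, so no separate "can fold?" query is needed.
    if (MachineInstr *FoldedMI = TII->foldMemoryOperand(MI, Ops, FrameIndex)) {
      // Folding succeeded; FoldedMI has replaced the original instruction.
    }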
* AVX-512: Floating point conversions for SKX - DAG Lowering. (Elena Demikhovsky, 2015-07-19, 2 files, -7/+36)
  SKX supports conversion for all FP types. Integer types include
  doublewords and quadwords. I added "Legal" status for these nodes and a
  bunch of tests. I added "NoVLX" for AVX DAG selection to force VLX
  instruction selection when VLX is supported.
  Differential Revision: http://reviews.llvm.org/D11255
  llvm-svn: 242637
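  A hedged illustration of what marking such conversions Legal looks like in
  a target's lowering constructor; the node/type pairs below are examples,
  not the patch's full list:

    // In the X86TargetLowering constructor (sketch): quadword conversions
    // become Legal when the relevant AVX-512 feature (DQ here) is present.
    if (Subtarget->hasDQI()) {
      setOperationAction(ISD::FP_TO_SINT, MVT::v8i64, Legal);
      setOperationAction(ISD::SINT_TO_FP, MVT::v8i64, Legal);
    }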
* [X86][SSE] Updated SHL/LSHR i64 vectorization costs. (Simon Pilgrim, 2015-07-18, 1 file, -3/+3)
  This was missed in D8416.
  llvm-svn: 242621
* [Hexagon] Use composition instead of inheritance from STL types (Benjamin Kramer, 2015-07-18, 4 files, -50/+27)
  The standard containers are not designed to be inherited from, as
  illustrated by the MSVC hacks for NodeOrdering. No functional change
  intended.
  llvm-svn: 242616
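  The general shape of such a cleanup, sketched with hypothetical names (the
  actual Hexagon classes differ):

    #include <vector>
    struct Node;

    // Before (problematic): struct NodeList : std::vector<Node *> {};
    // std::vector has no virtual destructor, so deleting through a base
    // pointer is undefined behavior. Composition sidesteps this entirely:
    class NodeList {
    public:
      unsigned size() const { return List.size(); }
      void push_back(Node *N) { List.push_back(N); }
    private:
      std::vector<Node *> List; // owned member, not a base class
    };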
* ARM: Enable MachineScheduler and disable PostRAScheduler for swift. (Matthias Braun, 2015-07-17, 3 files, -1038/+14)
  Reapply r242500 now that the swift schedmodel includes LDRLIT.

  This is mostly done to disable the PostRAScheduler, which optimizes for
  instruction latencies and isn't a good fit for out-of-order architectures.
  This also allows leaving out the itinerary table in swift in favor of the
  SchedModel ones.

  This change leads to performance improvements/regressions of as much as
  10% in some benchmarks; in fact we lose 0.4% performance over the
  llvm-testsuite for reasons that appear to be unknown or out of the
  compiler's control. rdar://20803802 documents the investigation of these
  effects.

  While it is probably a good idea to perform the same switch for the other
  ARM out-of-order CPUs, I limited this change to swift as I cannot perform
  the benchmark verification on the other CPUs.

  Differential Revision: http://reviews.llvm.org/D10513
  llvm-svn: 242588
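  Roughly how a subtarget opts in and out of these schedulers via the
  TargetSubtargetInfo hooks; a sketch, not the patch's exact predicates:

    // ARMSubtarget sketch: run the pre-RA MachineScheduler on swift and
    // skip the latency-oriented post-RA scheduler there.
    bool ARMSubtarget::enableMachineScheduler() const { return isSwift(); }
    bool ARMSubtarget::enablePostRAScheduler() const { return !isSwift(); }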
* ARM: Add scheduling information for LDRLIT instructions to swift scheduling model (Matthias Braun, 2015-07-17, 1 file, -0/+7)
  These pseudo instructions are only lowered after register allocation and
  are therefore still present when the machine scheduler runs.

  Add a run: line to a testcase that uses the uncommon flags necessary to
  actually produce a LDRLIT instruction on swift.
  llvm-svn: 242587
* Revert "ARM: Enable MachineScheduler and disable PostRAScheduler for swift."Adam Nemet2015-07-173-14/+1038
| | | | | | | | | This reverts commit r242500. It broke some internal tests and Matthias asked me to revert it while he is investigating. llvm-svn: 242553
* [ARM] Use [SU]ABSDIFF nodes instead of intrinsics for VABD/VABA (James Molloy, 2015-07-17, 2 files, -8/+22)
  No functional change, but it preps codegen for the future when SABSDIFF
  will start getting generated in anger.
  llvm-svn: 242546
* [AArch64] Use [SU]ABSDIFF nodes instead of intrinsics for ABD/ABA (James Molloy, 2015-07-17, 2 files, -26/+37)
  No functional change, but it preps codegen for the future when SABSDIFF
  will start getting generated in anger.
  llvm-svn: 242545
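  A hedged sketch of the intrinsic-to-node mapping such a change typically
  makes; the combine context and the era-specific ISD::SABSDIFF node are
  assumptions based on the message, not verified against the patch:

    // Inside a DAG combine over ISD::INTRINSIC_WO_CHAIN nodes (sketch),
    // where N is the intrinsic node:
    case Intrinsic::aarch64_neon_sabd:
      return DAG.getNode(ISD::SABSDIFF, SDLoc(N), N->getValueType(0),
                         N->getOperand(1), N->getOperand(2));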
* Use inbounds GEPs for memcpy and memset lowering (Eli Bendersky, 2015-07-17, 1 file, -8/+10)
  Follow-up on discussion in http://reviews.llvm.org/D11220
  llvm-svn: 242542
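  In IRBuilder terms the change is roughly the following swap; a sketch
  under assumed variable names, not the patch verbatim:

    // Before: Value *Addr = Builder.CreateGEP(SrcAddr, Index);
    // After: mark the pointer arithmetic inbounds, which is sound here
    // because the expanded copy never steps outside the accessed object.
    Value *Addr = Builder.CreateInBoundsGEP(SrcAddr, Index);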
* AArch64: add comment missed out from earlier patch. (Tim Northover, 2015-07-17, 1 file, -0/+4)
  Helps explain some of the background behind this bit of code.
  llvm-svn: 242503
* ARM: Enable MachineScheduler and disable PostRAScheduler for swift. (Matthias Braun, 2015-07-17, 3 files, -1038/+14)
  This is mostly done to disable the PostRAScheduler, which optimizes for
  instruction latencies and isn't a good fit for out-of-order architectures.
  This also allows leaving out the itinerary table in swift in favor of the
  SchedModel ones.

  This change leads to performance improvements/regressions of as much as
  10% in some benchmarks; in fact we lose 0.4% performance over the
  llvm-testsuite for reasons that appear to be unknown or out of the
  compiler's control. rdar://20803802 documents the investigation of these
  effects.

  While it is probably a good idea to perform the same switch for the other
  ARM out-of-order CPUs, I limited this change to swift as I cannot perform
  the benchmark verification on the other CPUs.

  Differential Revision: http://reviews.llvm.org/D10513
  llvm-svn: 242500
* Use small encodings for constants when possible. (Rafael Espindola, 2015-07-17, 1 file, -3/+3)
  llvm-svn: 242493
* Arm: Don't define a label twice with two setjmps in a function. (Matthias Braun, 2015-07-16, 2 files, -14/+3)
  Constructing a name based on the function name didn't give us a unique
  symbol if we had more than one setjmp in a function. Using
  MCContext::createTempSymbol() always gives us a unique name.
  Differential Revision: http://reviews.llvm.org/D9314
  llvm-svn: 242482
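  The essence of the fix in one line, sketched; the surrounding lowering
  context and variable name are assumed:

    // A fresh temp symbol per call site, instead of a label name derived
    // from the function name (which collides with two setjmps):
    MCSymbol *Label = Context.createTempSymbol();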
* Fix __builtin_setjmp in combination with sjlj exception handling. (Matthias Braun, 2015-07-16, 3 files, -5/+27)
  llvm.eh.sjlj.setjmp was used as part of the SjLj exception handling style,
  but is also used in clang to implement __builtin_setjmp. The ARM backend
  needs to output additional dispatch tables for the SjLj exception handling
  style; these tables however can't be emitted if llvm.eh.sjlj.setjmp is
  simply used for __builtin_setjmp and no actual landing pad blocks exist.

  To solve this issue a new llvm.eh.sjlj.setup_dispatch intrinsic is
  introduced, which is used instead of llvm.eh.sjlj.setjmp in the SjLj
  exception handling lowering, so we can differentiate between the case
  where we actually need to set up a dispatch table and the case where we
  just need the __builtin_setjmp semantics.

  Differential Revision: http://reviews.llvm.org/D9313
  llvm-svn: 242481
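  For context, a minimal example of the user-facing construct that must
  keep working without any dispatch table, using the documented clang/GCC
  builtins (the helper split is only to keep the longjmp out of the
  setjmp's own frame):

    // __builtin_setjmp uses a 5-word buffer; __builtin_longjmp only
    // supports a value of 1. No landing pads or EH dispatch are involved.
    static void *buf[5];
    static void bail(void) { __builtin_longjmp(buf, 1); }
    int demo(void) {
      if (__builtin_setjmp(buf) == 0)
        bail();   // jumps back to the setjmp point with a nonzero result
      return 1;   // reached after the longjmp
    }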
* Fix spelling. NFCI. (Simon Pilgrim, 2015-07-16, 1 file, -3/+3)
  llvm-svn: 242448
* AArch64: make inexact signalling on round Darwin-specific (Tim Northover, 2015-07-16, 1 file, -1/+1)
  C11 leaves it implementation-defined whether round-to-integer operations
  set the inexact flag. Darwin does expect it to be set, but this seems to
  be against the intent of the IEEE document and slower to implement anyway.
  So it should be opt-in.
  llvm-svn: 242446
* [PowerPC] v4i32 is a VSRCRegClass (Bill Schmidt, 2015-07-16, 1 file, -0/+1)
  I was looking at some vector code generation and kept seeing unnecessary
  vector copies into the Altivec half of the VSX registers. I discovered
  that we overlooked v4i32 when adding the register classes for VSX; we
  only added v4f32 and v2f64. This means that anything that canonicalizes
  into v4i32 (which is a LOT of stuff) ends up being forced into VRRC on
  its way to VSRC.

  The fix is one line. The rest of the patch is fixing up some test cases
  whose code generation has changed as a result.

  This seems like it would be a good candidate for backport to 3.7.
  llvm-svn: 242442
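  Given the existing v4f32/v2f64 registrations the message mentions, the
  one-line fix presumably mirrors them; a sketch of the likely form in
  PPCISelLowering.cpp:

    // Associate v4i32 with the unified VSX register class, alongside the
    // v4f32 and v2f64 registrations that were already present:
    addRegisterClass(MVT::v4i32, &PPC::VSRCRegClass);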
* Streamline the coding style in NVPTXLowerAggrCopies (Eli Bendersky, 2015-07-16, 1 file, -111/+127)
  Make the style consistent with LLVM style throughout and clang-format.
  llvm-svn: 242439
* [NVPTX] enable SpeculativeExecution in NVPTX (Jingyue Wu, 2015-07-16, 1 file, -0/+1)
  Summary:
  SpeculativeExecution enables a series of straight-line optimizations
  (such as SLSR and NaryReassociate) on conditional code. For example,

    if (...)
      ... b * s ...
    if (...)
      ... (b + 1) * s ...

  speculative execution can hoist b * s and (b + 1) * s from the
  then-blocks, so that we have

    ... b * s ...
    if (...)
      ...
    ... (b + 1) * s ...
    if (...)
      ...

  Then, SLSR can rewrite (b + 1) * s to (b * s + s) because, after
  speculative execution, b * s dominates (b + 1) * s.

  The performance impact of this change is significant. It speeds up the
  benchmarks running EigenFloatContractionKernelInternal16x16
  (https://bitbucket.org/eigen/eigen/src/ba68f42fa69e4f43417fe1e52669d4dd5d2b3bee/unsupported/Eigen/CXX11/src/Tensor/TensorContractionCuda.h?at=default#cl-526)
  by roughly 2%. Some internal benchmarks that have the above code pattern
  are improved by up to 40%. No significant slowdowns are observed on Eigen
  CUDA microbenchmarks.

  Reviewers: jholewinski, broune, eliben
  Subscribers: llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11201
  llvm-svn: 242437
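  Given the +1-line diffstat, the enablement is presumably a single pass
  registration in NVPTXTargetMachine's IR pass setup; a sketch using the
  pass's public factory function:

    // In NVPTXPassConfig::addIRPasses() (assumed location): schedule the
    // existing IR-level SpeculativeExecution pass for this target.
    addPass(createSpeculativeExecutionPass());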
* AArch64: Implement conditional compare sequence matching. (Matthias Braun, 2015-07-16, 4 files, -76/+347)
  This is a new iteration of the reverted r238793 /
  http://reviews.llvm.org/D8232, which wrongly assumed that any and/or tree
  can be represented by conditional compare sequences; however, there are
  some restrictions to that. This version fixes this and adds comments that
  explain exactly what types of and/or trees can actually be implemented as
  conditional compare sequences.

  Related to http://llvm.org/PR20927, rdar://18326194
  Differential Revision: http://reviews.llvm.org/D10579
  llvm-svn: 242436
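  An illustrative case of the kind of or-tree this matches, with a
  hand-written sketch of the expected AArch64 sequence (not verified
  compiler output):

    bool f(int a, int b) { return a == 5 || b == 17; }
    // Plausible selection as a conditional-compare chain:
    //   cmp  w0, #5
    //   ccmp w1, #17, #4, ne   // if a != 5, compare b; else force Z (eq)
    //   cset w0, eq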
* AMDGPU/SI: Negative offsets aren't allowed in MUBUF's vaddr operand (Tom Stellard, 2015-07-16, 1 file, -6/+9)
  Reviewers: arsenm
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11226
  llvm-svn: 242434
* AMDGPU/SI: Use AssertZext node to mask high bit for scratch offsets (Tom Stellard, 2015-07-16, 4 files, -3/+39)
  Summary:
  We can safely assume that the high bit of scratch offsets will never be
  set, because this would require at least 128 GB of GPU memory.
  Reviewers: arsenm
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11225
  llvm-svn: 242433
* Revert "Add missing load/store flags to thumb2 instructions."Pete Cooper2015-07-161-4/+1
| | | | | | | | | | This reverts commit r242300. This is causing buildbot failures which we are investigating. I'll reapply once we know whats going on, but for now want to get the bots green. llvm-svn: 242428
* [NVPTX] Don't leak dead instructions after unlinking them from the BasicBlock (Benjamin Kramer, 2015-07-16, 1 file, -2/+2)
  llvm-svn: 242417
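  The API distinction behind this kind of leak, sketched with the two
  Instruction methods involved:

    // removeFromParent() only unlinks the instruction from its BasicBlock
    // and leaves ownership with the caller; eraseFromParent() unlinks and
    // deletes it.
    I->eraseFromParent(); // not I->removeFromParent(), which would leak I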
* Correct lowering of memmove in NVPTX (Eli Bendersky, 2015-07-16, 2 files, -61/+168)
  This fixes https://llvm.org/bugs/show_bug.cgi?id=24056

  Also a bit of refactoring along the way.

  Differential Revision: http://reviews.llvm.org/D11220
  llvm-svn: 242413
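  For reference, a scalar model of what any correct memmove expansion must
  do (overlap-aware copy direction); a generic sketch, not the patch's
  IR-level loop:

    #include <cstddef>
    // Copy backward when the destination overlaps the tail of the source,
    // forward otherwise, so overlapping ranges are handled correctly.
    void memmove_model(char *dst, const char *src, size_t n) {
      if (src < dst)
        for (size_t i = n; i-- > 0;) dst[i] = src[i];
      else
        for (size_t i = 0; i < n; ++i) dst[i] = src[i];
    }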
* AMDGPU/R600: Remove unused variable (Tom Stellard, 2015-07-16, 1 file, -1/+0)
  This fixes a warning introduced by r242410.
  llvm-svn: 242412
* AMDGPU/R600: Replace llvm_unreachable() call with LLVMContext::emitError() (Tom Stellard, 2015-07-16, 1 file, -12/+3)
  Summary:
  This fixes an issue on MIPS where the infinite-loop-evergreen.ll test was
  failing to terminate. Fixes PR24147.
  Reviewers: arsenm, dsanders
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11260
  llvm-svn: 242410
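  The general pattern of such a swap, sketched; the message string is
  illustrative, not the patch's:

    // Before: llvm_unreachable("...") aborts (or is UB in release builds)
    // on paths that unsupported input can actually reach. After: report a
    // diagnostic through the context and recover gracefully.
    LLVMContext &Ctx = *DAG.getContext();
    Ctx.emitError("unsupported operation in R600 lowering");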
* [X86] Reapply r240257: "Allow more call sequences to use push instructions for argument passing" (Michael Kuperstein, 2015-07-16, 1 file, -26/+91)
  This allows more call sequences to use pushes instead of movs when
  optimizing for size. In particular, calling conventions that pass some
  parameters in registers (e.g. thiscall) are now supported.

  This should no longer cause miscompiles, now that a bug in emitPrologue
  was fixed in r242395.
  llvm-svn: 242398
* [X86] Fix emitPrologue() to make fewer assumptions about pushes (Michael Kuperstein, 2015-07-16, 1 file, -0/+1)
  When X86FrameLowering::emitPrologue() looks for where to insert the %esp
  subtraction to allocate stack space for local allocations, it assumes
  that any sequence of push instructions that starts at function entry
  consists purely of spills of callee-save registers. This may be false,
  since from some point forward, the pushes may be pushing arguments to a
  subsequent function call.

  This caused a miscompile that was exposed by r240257, and is not easily
  testable since r240257 was reverted. A test will be committed separately
  after r240257 is reapplied.
  llvm-svn: 242395
* [Mips] Make helper function static, NFC. (Benjamin Kramer, 2015-07-16, 1 file, -1/+2)
  llvm-svn: 242393
* Add missing break in switch case in R600ISelLowering (Mehdi Amini, 2015-07-16, 1 file, -0/+1)
  Summary: Caught by Coverity.
  Reviewers: arsenm
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11120
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 242388
* Move most users of TargetMachine::getDataLayout to the Module one (Mehdi Amini, 2015-07-16, 25 files, -128/+125)
  Summary:
  This change is part of a series of commits dedicated to having a single
  DataLayout during compilation, by always using the one owned by the
  module.

  This patch is quite boring overall, except for some ugliness in
  AsmPrinter, which has a getDataLayout function but has some clients that
  use it without a Module (llvm-dsymutil, llvm-dwarfdump), so some methods
  are taking a DataLayout as parameter.

  Reviewers: echristo
  Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11090
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 242386
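  The mechanical substitution this series applies, sketched for a typical
  backend call site (variable names assumed):

    // Before: const DataLayout *DL = TM.getDataLayout();
    // After: go through the module, which owns the canonical DataLayout.
    const DataLayout &DL = MF.getFunction()->getParent()->getDataLayout();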
* Remove DataLayout from TargetLoweringObjectFile, redirect to Module (Mehdi Amini, 2015-07-16, 10 files, -27/+26)
  Summary:
  This change is part of a series of commits dedicated to having a single
  DataLayout during compilation, by always using the one owned by the
  module.
  Reviewers: echristo
  Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11079
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 242385
* Revert "[X86] Allow more call sequences to use push instructions for ↵Reid Kleckner2015-07-161-91/+26
| | | | | | | | | | | argument passing" It miscompiles some code and a reduced test case has been sent to the author. This reverts commit r240257. llvm-svn: 242373
* [ARM] Define a subtarget feature that is used to avoid using movt/movw pairs for 32-bit immediates (Akira Hatanaka, 2015-07-16, 3 files, -11/+11)
  This change is needed to avoid emitting movt/movw pairs when doing LTO,
  and to do so on a per-function basis.

  Out-of-tree projects currently using the cl::opt option -arm-use-movt=0
  or false to avoid emitting movt/movw pairs should instead add the
  subtarget feature "+no-movt" (see the changes made to clang in r242368).

  rdar://problem/21529937
  Differential Revision: http://reviews.llvm.org/D11026
  llvm-svn: 242369
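  How an embedder might request the feature when constructing a target
  machine; a sketch against the 2015-era createTargetMachine signature,
  with illustrative triple/CPU values:

    // Feature strings are comma-separated; '+' enables a feature.
    TargetMachine *TM = TheTarget->createTargetMachine(
        "thumbv7-apple-ios", "swift", "+no-movt", Options,
        Reloc::PIC_, CodeModel::Default, CodeGenOpt::Default);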
* Clear kill flags in ARMLoadStoreOptimizer. (Pete Cooper, 2015-07-16, 1 file, -1/+3)
  The pass here was clearing kill flags on instructions which had their
  sources killed in the instruction being combined. But given that the new
  instruction is inserted after the existing ones, any existing
  instructions with kill flags will lead to the verifier complaining that
  we are reading an undefined physreg.

  For example, what we had prior to this optimization is

    t2STRi12 %R1, %SP, 12
    t2STRi12 %R1<kill>, %SP, 16
    t2STRi12 %R0<kill>, %SP, 8

  and prior to this fix that would generate

    t2STRi12 %R1<kill>, %SP, 16
    t2STRDi8 %R0<kill>, %R1, %SP, 8

  This is clearly incorrect, as it didn't clear the kill flag on R1 used
  with offset 16, because there was no kill flag on the instruction with
  offset 12. After this change we clear the kill flag on the offset 16
  instruction because we know it will be used afterwards in the new
  instruction.

  I haven't provided a test case. I have a small test, but even it is very
  sensitive to register allocation order, which isn't ideal.
  llvm-svn: 242359
* TargetRegisterInfo: Provide a way to check assigned registers in getRegAllocationHints() (Matthias Braun, 2015-07-15, 2 files, -2/+4)
  Pass a const reference to LiveRegMatrix to getRegAllocationHints(),
  because some targets can provide better hints if they can test whether a
  physreg has been used for register allocation yet.
  llvm-svn: 242340
* Revert "Look through PHIs to find additional register sources"Bruno Cardoso Lopes2015-07-151-2/+1
| | | | | | | | | | Likely broke compilation on ARM: http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/13054 This reverts commit 131ce4a838c081516cbfed039fc986b33e3979d6. llvm-svn: 242310
* Add missing load/store flags to thumb2 instructions. (Pete Cooper, 2015-07-15, 1 file, -1/+4)
  These were the cause of a verifier error when building 7zip with
  -verify-machineinstrs.

  Running 'make check' with the verifier triggered the same error on the
  test here, so I've updated the test to run the verifier on one of its
  runs instead of adding a new one.

  While looking at this code, there was a stale comment that these
  instructions were only used for disassembly. This probably used to be
  the case, but they are now used in the 'ARM load / store optimization'
  pass too.
  llvm-svn: 242300
* [PPC64LE] Fix vec_sld semantics for little endian (Bill Schmidt, 2015-07-15, 1 file, -4/+7)
  The vec_sld interface provides access to the vsldoi instruction. Unlike
  most of the vec_* interfaces, we do not attempt to change the generated
  code for vec_sld based on the endian mode. It is too difficult to
  correctly infer the desired semantics because of different element
  types, and the corrected instruction sequence is expensive, involving
  loading a permute control vector and performing a generalized permute.

  For GCC, this was implemented as "Don't touch the vec_sld"
  implementation. When it came time for the LLVM implementation, I did the
  same thing. However, this was hasty and incorrect. In LLVM's version of
  altivec.h, vec_sld was previously defined in terms of the vec_perm
  interface. Because vec_perm semantics are adjusted for little endian,
  this means that leaving vec_sld untouched causes it to generate
  something different for LE than for BE. Not good.

  This back-end patch accompanies the changes to altivec.h that change
  vec_sld's behavior for little endian. Those changes mean that we see
  slightly different code in the back end when trying to recognize a
  VSLDOI instruction in isVSLDOIShuffleMask. In particular, a ShuffleKind
  of 1 (where the two inputs are identical) must now be treated the same
  way as a ShuffleKind of 2 (little endian with different inputs) when
  little endian mode is in force. This is because ShuffleKind of 1 is
  defined using big-endian numbering.

  This has a ripple effect on LowerBUILD_VECTOR, where we create our own
  internal VSLDOI instructions. Because these are a ShuffleKind of 1, they
  will now have their shift amounts subtracted from 16 when recognizing
  the shuffle mask. To avoid problems we have to subtract them from 16
  again before creating the VSLDOI instructions. There are a couple of
  other uses of BuildVSLDOI, but these do not need to be modified because
  the shift amount is 8, which is unchanged when subtracted from 16.
  llvm-svn: 242296