path: root/llvm/lib/Target
Commit message | Author | Age | Files | Lines
* Add missing break in switch case in R600ISelLowering | Mehdi Amini | 2015-07-16 | 1 | -0/+1
  Summary: Caught by Coverity.
  Reviewers: arsenm
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11120
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 242388
* Move most users of TargetMachine::getDataLayout to the Module one | Mehdi Amini | 2015-07-16 | 25 | -128/+125
  Summary: This change is part of a series of commits dedicated to having a
  single DataLayout during compilation by always using the one owned by the
  module. This patch is quite boring overall, except for some ugliness in
  AsmPrinter, which has a getDataLayout function but has some clients that use
  it without a Module (llvm-dsymutil, llvm-dwarfdump), so some methods now take
  a DataLayout as a parameter.
  Reviewers: echristo
  Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11090
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 242386
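  The migration pattern is mechanical; a minimal sketch, assuming a Module M
  is in scope (the queries themselves are unchanged):
    // Before: const DataLayout *DL = TM.getDataLayout();
    // After: the Module owns the single DataLayout.
    const DataLayout &DL = M.getDataLayout();
    unsigned PtrSize = DL.getPointerSize();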
* Remove DataLayout from TargetLoweringObjectFile, redirect to Module | Mehdi Amini | 2015-07-16 | 10 | -27/+26
  Summary: This change is part of a series of commits dedicated to having a
  single DataLayout during compilation by always using the one owned by the
  module.
  Reviewers: echristo
  Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11079
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 242385
* Revert "[X86] Allow more call sequences to use push instructions for ↵Reid Kleckner2015-07-161-91/+26
| | | | | | | | | | | argument passing" It miscompiles some code and a reduced test case has been sent to the author. This reverts commit r240257. llvm-svn: 242373
* [ARM] Define a subtarget feature that is used to avoid using movt/movw pairs for 32-bit immediates | Akira Hatanaka | 2015-07-16 | 3 | -11/+11
  This change is needed to avoid emitting movt/movw pairs when doing LTO, and
  to do so on a per-function basis.
  Out-of-tree projects currently using the cl::opt option -arm-use-movt=0 (or
  false) to avoid emitting movt/movw pairs should instead add the subtarget
  feature "+no-movt" (see the changes made to clang in r242368).
  rdar://problem/21529937
  Differential Revision: http://reviews.llvm.org/D11026
  llvm-svn: 242369
* Clear kill flags in ARMLoadStoreOptimizer. | Pete Cooper | 2015-07-16 | 1 | -1/+3
  The pass here was clearing kill flags on instructions which had their sources
  killed in the instruction being combined. But given that the new instruction
  is inserted after the existing ones, any existing instructions with kill
  flags will lead to the verifier complaining that we are reading an undefined
  physreg. For example, what we had prior to this optimization is:
    t2STRi12 %R1, %SP, 12
    t2STRi12 %R1<kill>, %SP, 16
    t2STRi12 %R0<kill>, %SP, 8
  and prior to this fix that would generate:
    t2STRi12 %R1<kill>, %SP, 16
    t2STRDi8 %R0<kill>, %R1, %SP, 8
  This is clearly incorrect, as it didn't clear the kill flag on the R1 used
  with offset 16 because there was no kill flag on the instruction with offset
  12. After this change we clear the kill flag on the offset-16 instruction
  because we know it will be used afterwards in the new instruction.
  I haven't provided a test case. I have a small test, but even it is very
  sensitive to register allocation order, which isn't ideal.
  llvm-svn: 242359
* TargetRegisterInfo: Provide a way to check assigned registers in getRegAllocationHints() | Matthias Braun | 2015-07-15 | 2 | -2/+4
  Pass a const reference to LiveRegMatrix to getRegAllocationHints(), because
  some targets can provide better hints if they can test whether a physreg has
  been used for register allocation yet.
  llvm-svn: 242340
* Revert "Look through PHIs to find additional register sources"Bruno Cardoso Lopes2015-07-151-2/+1
| | | | | | | | | | Likely broke compilation on ARM: http://lab.llvm.org:8011/builders/clang-native-arm-lnt/builds/13054 This reverts commit 131ce4a838c081516cbfed039fc986b33e3979d6. llvm-svn: 242310
* Add missing load/store flags to thumb2 instructions. | Pete Cooper | 2015-07-15 | 1 | -1/+4
  These were the cause of a verifier error when building 7zip with
  -verify-machineinstrs. Running 'make check' with the verifier triggered the
  same error on the test here, so I've updated the test to run the verifier on
  one of its runs instead of adding a new one.
  While looking at this code, there was a stale comment that these instructions
  were only used for disassembly. This probably used to be the case, but they
  are now used in the 'ARM load / store optimization pass' too.
  llvm-svn: 242300
* [PPC64LE] Fix vec_sld semantics for little endian | Bill Schmidt | 2015-07-15 | 1 | -4/+7
  The vec_sld interface provides access to the vsldoi instruction. Unlike most
  of the vec_* interfaces, we do not attempt to change the generated code for
  vec_sld based on the endian mode. It is too difficult to correctly infer the
  desired semantics because of different element types, and the corrected
  instruction sequence is expensive, involving loading a permute control vector
  and performing a generalized permute.
  For GCC, this was implemented as a "don't touch vec_sld" policy. When it came
  time for the LLVM implementation, I did the same thing. However, this was
  hasty and incorrect. In LLVM's version of altivec.h, vec_sld was previously
  defined in terms of the vec_perm interface. Because vec_perm semantics are
  adjusted for little endian, this means that leaving vec_sld untouched causes
  it to generate something different for LE than for BE. Not good.
  This back-end patch accompanies the changes to altivec.h that change
  vec_sld's behavior for little endian. Those changes mean that we see slightly
  different code in the back end when trying to recognize a VSLDOI instruction
  in isVSLDOIShuffleMask. In particular, a ShuffleKind of 1 (where the two
  inputs are identical) must now be treated the same way as a ShuffleKind of 2
  (little endian with different inputs) when little endian mode is in force.
  This is because ShuffleKind of 1 is defined using big-endian numbering.
  This has a ripple effect on LowerBUILD_VECTOR, where we create our own
  internal VSLDOI instructions. Because these are a ShuffleKind of 1, they will
  now have their shift amounts subtracted from 16 when recognizing the shuffle
  mask. To avoid problems we have to subtract them from 16 again before
  creating the VSLDOI instructions. There are a couple of other uses of
  BuildVSLDOI, but these do not need to be modified because the shift amount is
  8, which is unchanged when subtracted from 16.
  llvm-svn: 242296
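  A minimal sketch of the compensation described above (illustrative, not the
  exact patch):
    // ShuffleKind 1 is defined with big-endian numbering, so LE code that
    // builds its own VSLDOI nodes must pre-compensate for the 16 - N
    // adjustment that isVSLDOIShuffleMask now applies on little endian.
    unsigned EmitAmt = IsLittleEndian ? 16 - ShiftAmt : ShiftAmt;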
* Look through PHIs to find additional register sources | Bruno Cardoso Lopes | 2015-07-15 | 1 | -1/+2
  - Teach the ValueTracker in the PeepholeOptimizer to look through PHI
    instructions.
  - Add a findNextSourceAndRewritePHI method to look up into the multiple
    sources returned by the ValueTracker and rewrite PHIs with the new sources.
  With these changes we can find more register sources and rewrite more copies
  to allow coalescing of bitcast instructions. Hence, we eliminate unnecessary
  VR64 <-> GR64 copies in x86, but it could be extended to other archs by
  marking "isBitcast" on target-specific instructions. The x86 example follows:
    A:
      psllq %mm1, %mm0
      movd  %mm0, %r9
      jmp   C
    B:
      por  %mm1, %mm0
      movd %mm0, %r9
      jmp  C
    C:
      movd   %r9, %mm0
      pshufw $238, %mm0, %mm0
  Becomes:
    A:
      psllq %mm1, %mm0
      jmp   C
    B:
      por %mm1, %mm0
      jmp C
    C:
      pshufw $238, %mm0, %mm0
  Differential Revision: http://reviews.llvm.org/D11197
  rdar://problem/20404526
  llvm-svn: 242295
* [PPC] Disassemble little endian ppc instructions in the right byte order | Benjamin Kramer | 2015-07-15 | 1 | -8/+17
  PR24122. The test is simply a byte-swapped version of ppc64-encoding.txt.
  llvm-svn: 242288
* [PowerPC] Use the MachineCombiner to reassociate fadd/fmul | Hal Finkel | 2015-07-15 | 3 | -0/+303
  This is a direct port of the code from the X86 backend (r239486/r240361),
  which uses the MachineCombiner to reassociate (floating-point) adds/muls to
  increase ILP, to the PowerPC backend. The rationale is the same.
  There is a lot of copy-and-paste here between the X86 code and the PowerPC
  code, and we should extract at least some of this into CodeGen somewhere.
  However, I don't want to do that until this code is enhanced to handle FMAs
  as well. After that, we'll be in a better position to extract the common
  parts.
  llvm-svn: 242279
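  The effect of reassociation on the dependence chain, sketched in C++ (valid
  only under fast-math, since FP addition is not associative):
    // Serial chain, depth 3:
    double t = ((a + b) + c) + d;
    // Reassociated form, depth 2; the two inner adds can issue in parallel:
    double u = (a + b) + (c + d);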
* [PowerPC] Extend physical register live range in PPCVSXFMAMutate | Hal Finkel | 2015-07-15 | 1 | -2/+15
  If the source of the copy that defines the addend is a physical register,
  then its existing live range may not extend to the FMA being mutated. Make
  sure we extend the live range of the register to meet the FMA, because it
  will become its operand in this case.
  I don't have an independent test case, but it will be exposed by a change to
  be committed shortly enabling the use of the machine combiner to do fadd/fmul
  reassociation, and will be covered by one of the associated regression tests.
  llvm-svn: 242278
* [AArch64] Fix problems in decoding generic MSR instructions | Petr Pavlu | 2015-07-15 | 1 | -0/+3
  Bit patterns rejected by the decoder method of `MSR (immediate)` should be
  decoded as the `extended MSR (register)` instruction.
  Differential Revision: http://reviews.llvm.org/D7174
  llvm-svn: 242276
* AVX: Fix ISA disabling with AVX512VL; some instructions should be disabled only if AVX512BW is present | Igor Breger | 2015-07-15 | 2 | -30/+31
  Tests added.
  Differential Revision: http://reviews.llvm.org/D11122
  llvm-svn: 242270
* Change conditional to assert. NFC. | Pete Cooper | 2015-07-15 | 1 | -3/+2
  This code was breaking from the case statement if the getStoreSizeInBits()
  value was not a multiple of 8. Given that the implementation returns
  getStoreSize() * 8, it can only be a multiple of 8.
  llvm-svn: 242255
* Use more foreach loops in SelectionDAG. NFC | Pete Cooper | 2015-07-14 | 1 | -2/+2
  llvm-svn: 242249
* WebAssembly: fix build breakage. | JF Bastien | 2015-07-14 | 5 | -8/+9
  Summary: processFunctionBeforeCalleeSavedScan was renamed to
  determineCalleeSaves and now takes a BitVector parameter as of rL242165,
  reviewed in http://reviews.llvm.org/D10909
  WebAssembly is still marked as experimental and therefore doesn't build by
  default. It does, however, grep by default! I noticed that
  processFunctionBeforeCalleeSavedScan is still mentioned in a few comments
  and error messages, which I also fixed.
  Reviewers: qcolombet, sunfish
  Subscribers: jfb, dsanders, hfinkel, MatzeB, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11199
  llvm-svn: 242242
* [PowerPC] Support symbolic targets in patchpoints | Hal Finkel | 2015-07-14 | 1 | -57/+71
  Follow-up to r235483, with the corresponding support in PPC. We use a regular
  call for symbolic targets (because they're much cheaper than indirect calls).
  llvm-svn: 242239
* [PowerPC] Use the ABI indirect-call protocol for patchpoints | Hal Finkel | 2015-07-14 | 1 | -1/+43
  We used to take the address specified as the direct target of the patchpoint
  and did no TOC-pointer handling. This, however, was not all that useful,
  because MCJIT tends to create a lot of modules, and they have their own TOC
  sections. Thus, to call from the generated code to other generated code, you
  really need to switch TOC pointers. Make this work as expected, and under
  ELFv1, treat the address as the function-descriptor address so that the
  correct TOC pointer can be loaded.
  llvm-svn: 242217
* Add allnodes() iterator range to SelectionDAG. NFC. | Pete Cooper | 2015-07-14 | 2 | -11/+6
  SelectionDAG already had begin/end methods for iterating over all the nodes,
  but didn't define an iterator_range for use in foreach loops. This adds such
  a method and uses it in some of the eligible places throughout the backends.
  llvm-svn: 242212
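  A sketch of the added method and its use (assuming the existing
  allnodes_begin()/allnodes_end() iterators):
    iterator_range<allnodes_iterator> allnodes() {
      return iterator_range<allnodes_iterator>(allnodes_begin(),
                                               allnodes_end());
    }
    // Callers can then write:
    for (SDNode &N : DAG.allnodes())
      N.setNodeId(-1);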
* WebAssembly: add basic int/fp instruction codegen. | JF Bastien | 2015-07-14 | 3 | -28/+63
  Summary: This patch has the most basic instruction codegen for 32- and 64-bit
  int/fp.
  Reviewers: sunfish
  Subscribers: llvm-commits, jfb
  Differential Revision: http://reviews.llvm.org/D11193
  llvm-svn: 242201
* Fix NDEBUG build warning | Krzysztof Parzyszek | 2015-07-14 | 1 | -0/+2
  llvm-svn: 242200
* Fix Windows build: replace __func__ with LLVM_FUNCTION_NAME | Krzysztof Parzyszek | 2015-07-14 | 1 | -4/+5
  llvm-svn: 242192
* [MMX] Use the appropriate instructions for GR64 <-> VR64 copies. | Bruno Cardoso Lopes | 2015-07-14 | 1 | -2/+2
  MOVSDto64rr and MOV64toSDrr are defined to convert between FR64 (%xmm) <->
  GR64 registers, not VR64 (%mm) <-> GR64. This is wrong.
  I found this by inspection and could not find a suitable testcase for it,
  since (1) we don't handle MMX bitcasts in the Peephole optimizer so as to
  generate COPYs that (2) could be expanded back to the appropriate x86
  instruction in ExpandPostRA.
  Switch to the appropriate instructions: MMX_MOVD64from64rr and
  MMX_MOVD64to64rr.
  llvm-svn: 242191
* [PowerPC] Fix the PPCInstrInfo::getInstrLatency implementation | Hal Finkel | 2015-07-14 | 5 | -0/+55
  PowerPC uses itineraries to describe processor pipelines (and dispatch-group
  restrictions for P7/P8 cores). Unfortunately, the target-independent
  implementation of TII.getInstrLatency calls ItinData->getStageLatency, and
  that looks for the largest cycle count in the pipeline for any given
  instruction. This, however, yields the wrong answer for the PPC itineraries,
  because we don't encode the full pipeline. Because the functional units are
  fully pipelined, we only model the initial stages (there are no relevant
  hazards in the later stages to model), and so the technique employed by
  getStageLatency does not really work. Instead, we should take the maximum
  output operand latency, and that's what PPCInstrInfo::getInstrLatency now
  does.
  This caused some test-case churn, including two unfortunate side effects.
  First, the new arrangement of copies we get from function parameters now
  sometimes blocks VSX FMA mutation (a FIXME has been added to the code and
  the test cases), and we have one significant test-suite regression:
    SingleSource/Benchmarks/BenchmarkGame/spectral-norm
    56.4185% +/- 18.9398%
  In this benchmark we have a loop with a vectorized FP divide, and with the
  new scheduling both divides end up in the same dispatch group (which in this
  case seems to cause a problem, although why is not exactly clear). The
  grouping structure is hard to predict from the bottom of the loop, and there
  may not be much we can do to fix this.
  Very few other test-suite performance effects were really significant, but
  almost all weakly favor this change. However, in light of the issues
  highlighted above, I've left the old behavior available via a command-line
  flag.
  llvm-svn: 242188
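  A sketch of the corrected computation (illustrative; the real code also
  handles instructions without itinerary data):
    unsigned Latency = 1;
    unsigned DefClass = MI->getDesc().getSchedClass();
    for (unsigned i = 0, e = MI->getNumOperands(); i != e; ++i) {
      const MachineOperand &MO = MI->getOperand(i);
      if (!MO.isReg() || !MO.isDef() || MO.isImplicit())
        continue;
      // Maximum over the def operands' output latencies, not stage depth.
      int Cycle = ItinData->getOperandCycle(DefClass, i);
      if (Cycle >= 0)
        Latency = std::max(Latency, (unsigned) Cycle);
    }
    return Latency;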
* [Hexagon] Generate instructions for operations on predicate registers | Krzysztof Parzyszek | 2015-07-14 | 3 | -0/+531
  Convert logical operations on general-purpose registers to the corresponding
  operations on predicate registers.
  llvm-svn: 242186
* AMDGPU: Avoid using 64-bit shift for i64 (shl x, 32) | Matt Arsenault | 2015-07-14 | 2 | -0/+35
  This can be done only with moves, which theoretically will optimize better
  later. Although this transform increases the instruction count, it should be
  code size / cycle count neutral in the worst VALU case. It also seems to
  slightly improve a couple of testcases due to other DAG combines this
  exposes.
  This is probably slightly worse for the SALU case, so it might be better to
  handle this during moveToVALU, although then you lose some simplifications
  like the load width reducing in the simple testcase.
  llvm-svn: 242177
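  The combine, sketched at the DAG level (illustrative): after shifting left by
  32, the low word is zero and the high word is the old low word, so two moves
  suffice:
    // (shl i64:x, 32) --> build_pair(0, lo32(x))
    SDValue Hi = DAG.getNode(ISD::TRUNCATE, SL, MVT::i32, X);
    SDValue Lo = DAG.getConstant(0, SL, MVT::i32);
    SDValue Res = DAG.getNode(ISD::BUILD_PAIR, SL, MVT::i64, Lo, Hi);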
* AMDGPU/SI: Fix read2 merging into a super register. | Matt Arsenault | 2015-07-14 | 3 | -13/+35
  If the read2 produced was supposed to be writing into a super register, it
  would use the wrong subregister indices. Fix this by inserting copies, so we
  only ever write to a vreg_64. Run the register coalescer again to clean this
  up, although this isn't ideal and often does result in an extra move.
  Also remove the assert that offset1 > offset0. There isn't a real reason to
  not allow this other than a minor convenience in the compiler, and it
  doesn't seem worth the effort of avoiding it.
  llvm-svn: 242174
* MachineRegisterInfo: Remove UsedPhysReg infrastructure | Matthias Braun | 2015-07-14 | 10 | -22/+15
  We have detailed def/use lists for every physical register in
  MachineRegisterInfo anyway, so there is little use in maintaining an
  additional bitset of which ones are used. Removing it frees us from extra
  bookkeeping. This simplifies VirtRegMap.
  Differential Revision: http://reviews.llvm.org/D10911
  llvm-svn: 242173
* Add missing builtins to the PPC back end for ABI compliance (vol. 4) | Nemanja Ivanovic | 2015-07-14 | 1 | -0/+6
  This patch corresponds to review: http://reviews.llvm.org/D11183
  Back end portion of the fourth round of additions to altivec.h.
  llvm-svn: 242167
* PrologEpilogInserter: Rewrite API to determine callee-saved registers. | Matthias Braun | 2015-07-14 | 23 | -107/+131
  This changes TargetFrameLowering::processFunctionBeforeCalleeSavedScan():
  - Rename the function to determineCalleeSaves().
  - Pass a bitset of callee-saved registers by reference, thus avoiding the
    function-global PhysRegUsed bitset in MachineRegisterInfo.
  - Without PhysRegUsed, the implementation is fine-tuned to not save physical
    registers which are only read but never modified.
  Related to rdar://21539507
  Differential Revision: http://reviews.llvm.org/D10909
  llvm-svn: 242165
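  The shape of the new hook for a target (a sketch; "XYZ" is a placeholder
  target name):
    void XYZFrameLowering::determineCalleeSaves(MachineFunction &MF,
                                                BitVector &SavedRegs,
                                                RegScavenger *RS) const {
      // The base implementation marks modified callee-saved registers.
      TargetFrameLowering::determineCalleeSaves(MF, SavedRegs, RS);
      // A target can then force additional saves, e.g. a link register:
      if (MF.getFrameInfo()->hasCalls())
        SavedRegs.set(XYZ::LR); // illustrative register
    }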
* AArch64: add rev64 alias for 64-bit rev instruction. | Tim Northover | 2015-07-14 | 1 | -0/+2
  It could be useful to assembly programmers and makes the permitted variants
  a little more uniform.
  llvm-svn: 242164
* [Hexagon] Generate "extract" instructions more aggressivelyKrzysztof Parzyszek2015-07-143-13/+278
| | | | | | | Generate extract instructions (via intrinsics) before the DAG combiner folds shifts into unrecognizable forms. llvm-svn: 242163
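  For reference, the bitfield form being targeted, as a generic sketch (not
  the Hexagon implementation; width < 32 assumed):
    // extractu(x, width, off) pulls "width" bits starting at bit "off";
    // once the DAG combiner folds the equivalent shift/mask sequences,
    // they become harder to match.
    uint32_t extractu(uint32_t x, unsigned width, unsigned off) {
      return (x >> off) & ((1u << width) - 1);
    }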
* ARMAsmParser: Take MCInst param by const-ref | Hans Wennborg | 2015-07-14 | 1 | -8/+9
  (Broken out from http://reviews.llvm.org/D11167)
  llvm-svn: 242160
* AMDGPU/SI: Add support for shrinking v_cndmask_b32_e32 instructions | Tom Stellard | 2015-07-14 | 1 | -6/+27
  Reviewers: arsenm
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11061
  llvm-svn: 242146
* Silencing two MSVC warnings: 'argument': truncation from 'unsigned int' to 'int16_t', and truncation of constant value. NFC intended. | Aaron Ballman | 2015-07-14 | 1 | -1/+1
  llvm-svn: 242145
* [mips] Fix li/la differences between IAS and GAS. | Daniel Sanders | 2015-07-14 | 1 | -82/+83
  Summary:
  - Signed 16-bit should have priority over unsigned.
  - For la, unsigned 16-bit must use ori+addu rather than using ori directly.
  - Correct tests on 32-bit immediates with 64-bit predicates by sign-extending
    the immediate beforehand. For example, isInt<16>(0xffff8000) should be true
    and use addiu.
  Also split li/la testing into separate files due to their size.
  Reviewers: vkalintiris
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D10967
  llvm-svn: 242139
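  A sketch of the selection order these rules imply, for an already
  sign-extended immediate Imm (illustrative):
    if (isInt<16>(Imm)) {
      // addiu: signed 16-bit takes priority over unsigned.
    } else if (isUInt<16>(Imm)) {
      // ori for li; la must still go through ori+addu to add the base.
    } else {
      // lui + ori sequence for wider immediates.
    }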
* Generate correct asm info for mingw and cygwin ARM targets. | Yaron Keren | 2015-07-14 | 1 | -2/+2
  http://reviews.llvm.org/D11075
  Patch by Martell Malone
  Reviewed by Reid Kleckner
  llvm-svn: 242123
* Prune trailing whitespace and CRs. | NAKAMURA Takumi | 2015-07-14 | 1 | -23/+23
  llvm-svn: 242117
* [PPC64LE] More improvements to VSX swap optimization | Bill Schmidt | 2015-07-13 | 1 | -21/+188
  This patch allows VSX swap optimization to succeed more frequently.
  Specifically, it is concerned with common code sequences that occur when
  copying a scalar floating-point value to a vector register. This patch
  currently handles cases where the floating-point value is already in a
  register, but does not yet handle loads (such as via an LXSDX scalar
  floating-point VSX load). That will be dealt with later.
  A typical case is when a scalar value comes in as a floating-point parameter.
  The value is copied into a virtual VSFRC register, and then a sequence of
  SUBREG_TO_REG and/or COPY operations will convert it to a full vector
  register of the class required by the context. If this vector register is
  then used as part of a lane-permuted computation, the original scalar value
  will be in the wrong lane. We can fix this by adding a swap operation
  following any widening SUBREG_TO_REG operation. Additional COPY operations
  may be needed around the swap operation in order to keep register assignment
  happy, but these are pro forma operations that will be removed by coalescing.
  If a scalar value is otherwise directly referenced in a computation (such as
  by one of the many XS* vector-scalar operations), we currently disable swap
  optimization. These operations are lane-sensitive by definition. A
  MentionsPartialVR flag is added for use in each swap table entry that
  mentions a scalar floating-point register without having special handling
  defined.
  A common idiom for PPC64LE is to convert a double-precision scalar to a
  vector by performing a splat operation. This ensures that the value can be
  referenced as V[0], as it would be for big endian, whereas just converting
  the scalar to a vector with a SUBREG_TO_REG operation leaves this value only
  in V[1]. A doubleword splat operation is one form of an XXPERMDI instruction,
  which takes one doubleword from a first operand and another doubleword from
  a second operand, with a two-bit selector operand indicating which
  doublewords are chosen. In the general case, an XXPERMDI can be permitted in
  a lane-swapped region provided that it is properly transformed to select the
  corresponding swapped values. This transformation is to reverse the order of
  the two input operands, and to reverse and complement the bits of the
  selector operand (derivation left as an exercise to the reader ;).
  A new test case that exercises the scalar-to-vector and generalized XXPERMDI
  transformations is added as CodeGen/PowerPC/swaps-le-5.ll. The patch also
  requires a change to CodeGen/PowerPC/swaps-le-3.ll to use CHECK-DAG instead
  of CHECK for two independent instructions that now appear in reverse order.
  There are two small unrelated changes that are added with this patch. First,
  the XXSLDWI instruction was incorrectly omitted from the list of
  lane-sensitive instructions; this is now fixed. Second, I observed that the
  same webs were being rejected over and over again for different reasons.
  Since it's sufficient to reject a web only once, I added a check for this to
  speed up the compilation time slightly.
  llvm-svn: 242081
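  The selector rewrite mentioned above, sketched (it implements exactly the
  stated "reverse and complement" rule; names are illustrative):
    // Selector to use for an XXPERMDI once its two source operands have
    // been swapped in a lane-swapped region.
    unsigned swappedSelector(unsigned Sel) {
      unsigned Rev = ((Sel & 1) << 1) | (Sel >> 1); // reverse the two bits
      return ~Rev & 3;                              // and complement them
    }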
* [Hexagon] Move BitTracker into the llvm namespace and remove redundant qualifications | Benjamin Kramer | 2015-07-13 | 4 | -64/+52
  No functional change intended.
  llvm-svn: 242062
* AMDGPU: Minor cleanups to always inline pass | Matt Arsenault | 2015-07-13 | 1 | -7/+4
  llvm-svn: 242053
* Enable partial and runtime loop unrolling for NVPTX. | Mark Heffernan | 2015-07-13 | 2 | -0/+14
  Enable partial and runtime loop unrolling for the NVPTX backend via
  TTI::UnrollingPreferences with a small threshold. This partially unrolls
  small loops which are often unrolled by the PTX-to-SASS compiler, and
  unrolling earlier can be beneficial.
  llvm-svn: 242049
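  The hook involved, sketched (the threshold value is illustrative, not the
  one committed):
    void NVPTXTTIImpl::getUnrollingPreferences(Loop *L,
                                               TTI::UnrollingPreferences &UP) {
      UP.Partial = UP.Runtime = true;
      // Keep the partially-unrolled size small; PTX-to-SASS unrolls too.
      UP.PartialThreshold = UP.Threshold / 4;
    }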
* [WinEH] Strip the \01 character from the __CxxFrameHandler3 thunk name | Reid Kleckner | 2015-07-13 | 1 | -3/+5
  Add another C++ 32-bit EH table test.
  llvm-svn: 242044
* AMDGPU/SI: Select mad patterns to v_mac_f32 | Tom Stellard | 2015-07-13 | 7 | -8/+141
  The two-address instruction pass will convert these back to v_mad_f32 if
  necessary.
  Differential Revision: http://reviews.llvm.org/D11060
  llvm-svn: 242038
* ARM: Fix cttz expansion on vector types. | Logan Chien | 2015-07-13 | 1 | -2/+98
  The 64/128-bit vector types are legal if NEON instructions are available.
  However, there were no matching patterns for @llvm.cttz.*() intrinsics on
  vectors, resulting in a fatal error. This commit fixes the problem by
  lowering cttz to:
    a. ctpop((x & -x) - 1)
    b. width - ctlz(x & -x) - 1
  llvm-svn: 242037
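  Scalar equivalents of the two expansions for a 32-bit lane (a sketch; the
  vector lowering applies the same identities lane-wise):
    unsigned cttz_a(unsigned x) {            // a. ctpop((x & -x) - 1)
      return __builtin_popcount((x & -x) - 1);
    }
    unsigned cttz_b(unsigned x) {            // b. width - ctlz(x & -x) - 1
      return 32 - __builtin_clz(x & -x) - 1; // x != 0 assumed here
    }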
* [ARM] Handle commutativity when converting to tADDhirr in Thumb2 | Scott Douglass | 2015-07-13 | 1 | -3/+11
  Also, run thumb_rewrite.s tests in Thumb2 now that they pass.
  Differential Revision: http://reviews.llvm.org/D11132
  llvm-svn: 242036
* [ARM] Add Thumb2 ADD with SP narrowing from 3 operands to 2 | Scott Douglass | 2015-07-13 | 1 | -5/+14
  Differential Revision: http://reviews.llvm.org/D11131
  llvm-svn: 242035