path: root/llvm/lib/Target/PowerPC
...
* [PowerPC] Compute the MMO offset for an unaligned load with signed arithmetic
  Hal Finkel | 2015-09-03 | 1 file | -1/+2

  If you compute the MMO offset using unsigned arithmetic, you end up with a
  large positive offset instead of a small negative one. In theory, this
  could cause bad instruction-scheduling decisions later. I noticed this by
  inspection from the debug output, and using that for the regression test
  is the best I can do right now.

  llvm-svn: 246805

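  The signedness pitfall is easy to see in isolation. A minimal standalone
  illustration (hypothetical variable names; this is not the actual
  PPCISelLowering code):

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Splitting an unaligned load can address bytes *before* the original
      // pointer, so the MMO offset should be a small negative number.
      uint64_t Base = 16, Addr = 12;
      uint64_t BadOffset = Addr - Base;                   // wraps: huge positive
      int64_t GoodOffset = (int64_t)Addr - (int64_t)Base; // -4, as intended
      std::printf("unsigned: %llu  signed: %lld\n",
                  (unsigned long long)BadOffset, (long long)GoodOffset);
    }
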
* [PowerPC] Cleanup cost model for unaligned vector loads/stores
  Hal Finkel | 2015-09-02 | 1 file | -22/+37

  I'm adding a regression test to better cover code generation for unaligned
  vector loads and stores, but there's no functional change to the code
  generation here. There is an improvement to the cost model for unaligned
  vector loads and stores, mostly for QPX (where we were not previously
  accounting for the permutation-based loads), and the cost model
  implementation is cleaner.

  llvm-svn: 246712

* [PowerPC] Don't always consider P8Altivec-only masks in LowerVECTOR_SHUFFLE
  Hal Finkel | 2015-09-02 | 1 file | -6/+8

  LowerVECTOR_SHUFFLE needs to decide whether to pass a vector shuffle off
  to the TableGen-generated matching code, and it does this by testing the
  same predicates used by the TableGen files. Unfortunately, when we added
  new P8Altivec-only predicates, we started universally testing them in
  LowerVECTOR_SHUFFLE, and if they then matched when targeting a system
  prior to a P8, we'd end up with a selection failure.

  llvm-svn: 246675

* [EH] Handle non-Function personalities like unknown personalities
  Reid Kleckner | 2015-08-31 | 1 file | -7/+7

  Also delete and simplify a lot of MachineModuleInfo code that used to be
  needed to handle personalities on landingpads. Now that the personality is
  on the LLVM Function, we no longer need to track it this way on MMI.
  Certainly it should not live on LandingPadInfo.

  llvm-svn: 246478

* [PowerPC] Fixup SELECT_CC (and SETCC) patterns with i1 comparison operands
  Hal Finkel | 2015-08-30 | 4 files | -5/+168

  There were really two problems here. The first was that we had the truth
  tables for signed i1 comparisons backward. I imagine these are not very
  common, but if you have:

    setcc i1 x, y, LT

  this has the '0 1' and the '1 0' results flipped compared to:

    setcc i1 x, y, ULT

  because, in the signed case, '1 0' is really '-1 0', and the answer is not
  the same as in the unsigned case.

  The second problem was that we did not have patterns (at all) for the
  select_cc nodes with unsigned i1 comparison operands. This was the
  specific cause of PR24552. These had to be added (and a missing Altivec
  promotion added as well) to make sure these function for all types.

  I've added a bunch more test cases for these patterns, and there are a few
  FIXMEs in the test case regarding code-quality.

  Fixes PR24552.

  llvm-svn: 246400

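  The flipped truth-table entries are easy to demonstrate with a small
  standalone model of i1 comparisons (illustrative only):

    #include <cstdio>

    int main() {
      // A 1-bit value is the bit pattern 0 or 1.  Interpreted as signed i1,
      // the pattern 1 means -1; interpreted as unsigned i1, it means 1.
      for (int x = 0; x <= 1; ++x)
        for (int y = 0; y <= 1; ++y) {
          int sx = x ? -1 : 0, sy = y ? -1 : 0;  // signed i1 interpretation
          bool slt = sx < sy;                    // setcc ... LT
          bool ult = (unsigned)x < (unsigned)y;  // setcc ... ULT
          std::printf("x=%d y=%d  slt=%d ult=%d\n", x, y, slt, ult);
        }
      // x=1 y=0: slt=1 (because -1 < 0) but ult=0 -- the '1 0' entry
      // differs, and likewise '0 1' in the other direction.
    }
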
* [MIR Serialization] static -> static const in getSerializable*MachineOperandTargetFlags
  Hal Finkel | 2015-08-30 | 1 file | -2/+2

  Make the arrays 'static const' instead of just 'static'. Post-commit
  review comment from Roman Divacky on IRC. NFC.

  llvm-svn: 246376

* [PowerPC/MIR Serialization] Target flags serialization support
  Hal Finkel | 2015-08-30 | 2 files | -0/+41

  Add support for MIR serialization of PowerPC-specific operand target flags
  (based on the generic infrastructure added in r244185 and r245383).

  I won't even pretend that this is good test coverage, but this includes
  the regression test associated with r246372. Adding an MIR test for that
  fix is far superior to adding an IR-level test because particular
  instruction-scheduling decisions are necessary in order to expose the bug,
  and using an MIR test we can start the pipeline post-scheduling.

  llvm-svn: 246373

* [PowerPC] Don't assume ADDISdtprelHA's source is r3
  Hal Finkel | 2015-08-30 | 1 file | -5/+5

  Even though ADDISdtprelHA generally has r3 as its source register, it is
  possible for the instruction scheduler to move things around such that
  some other register is the source. We need to print the actual source
  register, not always r3.

  Fixes PR24394.

  The test case will come in a follow-up commit because it depends on MIR
  target-flags parsing.

  llvm-svn: 246372

* [PowerPC] Remove unnecessary braces in PPCVSXFMAMutate
  Hal Finkel | 2015-08-26 | 1 file | -3/+3

  Address Eric's post-commit review of r245741. NFC.

  llvm-svn: 246121

* FastISel: Use finishCondBranch() for ARM, Mips, PowerPC FastISel
  Matthias Braun | 2015-08-26 | 1 file | -2/+1

  Note that branch probabilities are now preserved after this change.

  llvm-svn: 245998

* [PowerPC] PPCVSXFMAMutate should ignore trivial-copy addends
  Hal Finkel | 2015-08-24 | 1 file | -5/+8

  We might end up with a trivial copy as the addend, and if so, we should
  ignore the corresponding FMA instruction. The trivial copy can be
  coalesced away later, so there's nothing to do here. We should not,
  however, assert.

  Fixes PR24544.

  llvm-svn: 245907

* [PPC64LE] Fix PR24546 - Swap optimization and debug values
  Bill Schmidt | 2015-08-24 | 1 file | -0/+3

  This patch fixes PR24546, which demonstrates a segfault during the VSX
  swap removal pass. The problem is that debug value instructions were not
  excluded from the list of instructions to be analyzed for webs of related
  computation. I've added the test case from the PR as a crash test in
  test/CodeGen/PowerPC.

  llvm-svn: 245862

* [PowerPC] PPCVSXFMAMutate should not segfault on undef input registers
  Hal Finkel | 2015-08-21 | 1 file | -0/+5

  When PPCVSXFMAMutate would look at the input addend register, it would get
  its input value number. This would fail, however, if the register was
  undef, causing a segfault. Don't segfault (just skip such FMA
  instructions).

  Fixes the test case from PR24542 (although that may have been
  over-reduced).

  llvm-svn: 245741

* [PowerPC] Fix value type on XVCMPEQDP for v2f64 comparisons
  Hal Finkel | 2015-08-20 | 1 file | -3/+4

  XVCMPEQDP is used for VSX v2f64 equality comparisons, but the value type
  needs to be v2i64 (as that's the corresponding SETCC type).

  Fixes PR24225.

  llvm-svn: 245535

* [PowerPC] Fix the int2fp(fp2int(x)) DAGCombine to ignore ppc_fp128
  Hal Finkel | 2015-08-20 | 1 file | -0/+3

  This DAGCombine was creating custom SDAG nodes with an illegal ppc_fp128
  operand type because it was triggering on f64/f32
  int2fp(fp2int(ppc_fp128 x)), but shouldn't (it should only apply to
  f32/f64 types). The result was a crash.

  llvm-svn: 245530

* Temporary fix for the self-host failures introduced by rL244921.
  Nemanja Ivanovic | 2015-08-19 | 1 file | -1/+2

  This revision has introduced an issue that only affects a bootstrapped
  compiler when it is printing the ASM. I am working on resolving the issue,
  but in the meantime, I'm disabling the legalization of the
  scalar_to_vector operation for v2i64 and the associated testing until I
  can get this fixed.

  llvm-svn: 245481

* [PM] Port ScalarEvolution to the new pass manager.
  Chandler Carruth | 2015-08-17 | 3 files | -10/+10

  This change makes ScalarEvolution a stand-alone object and just produces
  one from a pass as needed. Making this work well requires making the
  object movable, using references instead of overwritten pointers in a
  number of places, and other refactorings.

  I've also wired it up to the new pass manager and added a RUN line to a
  test to exercise it under the new pass manager. This includes basic
  printing support much like with other analyses.

  But there is a big and somewhat scary change here. Prior to this patch
  ScalarEvolution was never *actually* invalidated!!! Re-running the pass
  just re-wired up the various other analyses and didn't remove any of the
  existing entries in the SCEV caches or clear out anything at all. This
  might seem OK as everything in SCEV that can do so uses ValueHandles to
  track updates to the values that serve as SCEV keys. However, this still
  means that as we ran SCEV over each function in the module, we kept
  accumulating more and more SCEVs into the cache. At the end, we would have
  a SCEV cache with every value that we ever needed a SCEV for in the entire
  module!!! Yowzers. The releaseMemory routine would dump all of this, but
  that isn't really called during normal runs of the pipeline as far as I
  can see.

  To make matters worse, there *is* actually a key that we don't update with
  value handles -- there is a map keyed off of Loop*s. Because LoopInfo
  *does* release its memory from run to run, it is entirely possible to run
  SCEV over one function, then over another function, and then lookup a
  Loop* from the second function but find an entry inserted for the first
  function! Ouch.

  To make matters still worse, there are plenty of updates that *don't* trip
  a value handle. It seems incredibly unlikely that today GVN or another
  pass that invalidates SCEV can update values in *just* such a way that a
  subsequent run of SCEV will incorrectly find lookups in a cache, but it is
  theoretically possible and would be a nightmare to debug.

  With this refactoring, I've fixed all this by actually destroying and
  recreating the ScalarEvolution object from run to run. Technically, this
  could increase the amount of malloc traffic we see, but then again it is
  also technically correct. ;]

  I don't actually think we're suffering from tons of malloc traffic from
  SCEV because if we were, the fact that we never clear the memory would
  seem more likely to have come up as an actual problem before now. So, I've
  made the simple fix here. If in fact there are serious issues with too
  much allocation and deallocation, I can work on a clever fix that
  preserves the allocations (while clearing the data) between each run, but
  I'd prefer to do that kind of optimization with a test case / benchmark
  that shows why we need such cleverness (and that can test that we actually
  make it faster). It's possible that this will make some things faster by
  making the SCEV caches have higher locality (due to being significantly
  smaller) so until there is a clear benchmark, I think the simple change is
  best.

  Differential Revision: http://reviews.llvm.org/D12063

  llvm-svn: 245193

* PowerPC: remove dead initialization (NFC)
  Saleem Abdulrasool | 2015-08-14 | 1 file | -2/+1

  Identified by the clang static analyzer. No functional change intended.

  llvm-svn: 245022

* Scalar to vector conversions using direct moves
  Nemanja Ivanovic | 2015-08-13 | 3 files | -2/+89

  This patch corresponds to review: http://reviews.llvm.org/D11471

  It improves the code generated for converting a scalar to a vector value.
  With direct moves from GPRs to VSRs, we no longer require expensive stack
  operations for this. Subsequent patches will handle the reverse case and
  more general operations between vectors and their scalar elements.

  llvm-svn: 244921

* PseudoSourceValue: Replace global manager with a manager in a machine function.
  Alex Lorenz | 2015-08-11 | 3 files | -58/+68

  This commit removes the global manager variable which is responsible for
  storing and allocating pseudo source values and instead it introduces a
  new manager class named 'PseudoSourceValueManager'. Machine functions now
  own an instance of the pseudo source value manager class.

  This commit also modifies the 'get...' methods in the 'MachinePointerInfo'
  class to construct pseudo source values using the instance of the pseudo
  source value manager object from the machine function. This commit updates
  calls to the 'get...' methods from the 'MachinePointerInfo' class in a lot
  of different files because those calls now need to pass in a reference to
  a machine function to those methods.

  This change will make it easier to serialize pseudo source values as it
  will enable me to transform the mips specific MipsCallEntry
  PseudoSourceValue subclass into two target independent subclasses.

  Reviewers: Akira Hatanaka

  llvm-svn: 244693

* Explicitly clear the MI operand list when getInstruction() is called. Call MI.clear() within MCD::OPC_Decode case and inside of translateInstruction() for the X86 target. Remove now unnecessary MI.clear() from ARMDisassembler.
  Cameron Esfahani | 2015-08-11 | 1 file | -2/+0

  Summary: Explicitly clear the MI operand list when getInstruction() is
  called.

  Reviewers: hfinkel, t.p.northover, hvarga, kparzysz, jyknight, qcolombet,
  uweigand

  Subscribers: llvm-commits

  Differential Revision: http://reviews.llvm.org/D11665

  llvm-svn: 244557

* Fix some comment typos.
  Benjamin Kramer | 2015-08-08 | 1 file | -1/+1

  llvm-svn: 244402

* Convert a bunch of loops to foreach. NFC.
  Pete Cooper | 2015-08-06 | 1 file | -2/+2

  After r244074, we now have a successors() method to iterate over all the
  successors of a TerminatorInst. This commit changes a bunch of eligible
  loops to use it.

  llvm-svn: 244260

* [TTI] Make the cost APIs in TargetTransformInfo consistently use 'int' rather than 'unsigned' for their costs
  Chandler Carruth | 2015-08-05 | 2 files | -31/+26

  For something like costs in particular there is a natural "negative"
  value, that of savings or saved cost. As a consequence, there is a lot of
  code that subtracts or creates negative values based on cost, all of which
  is prone to awkwardness or bugs when dealing with an unsigned type.
  Similarly, we *never* want these values to wrap, as that would cause Very
  Bad code generation (likely perceived as an infinite loop as we try to
  emit over 2^32 instructions or some such insanity).

  All around 'int' seems a much better fit for these basic metrics. I've
  added asserts to ensure that at least the TTI interface never returns
  negative numbers here. If we ever have a use case for negative numbers, we
  can remove this, but this way a bug where someone used '-1' to produce a
  'very large' cost will be caught by the assert.

  This passes all tests, and is also UBSan clean.

  No functional change intended.

  Differential Revision: http://reviews.llvm.org/D11741

  llvm-svn: 244080

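  A minimal illustration of why unsigned costs are bug-prone (just the
  underlying arithmetic, not TTI code):

    #include <cstdio>

    int main() {
      // With unsigned costs, a "saving" computed by subtraction wraps.
      unsigned CostA = 2, CostB = 3;
      unsigned USaving = CostA - CostB;       // wraps to 4294967295
      int SSaving = (int)CostA - (int)CostB;  // -1: a genuine saving
      std::printf("unsigned: %u  signed: %d\n", USaving, SSaving);
    }
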
* [PPC] Fix PR24216: Don't generate splat for misaligned shuffle mask
  Bill Schmidt | 2015-07-29 | 1 file | -0/+5

  Given certain shuffle-vector masks, LLVM emits splat instructions which
  splat the wrong bytes from the source register. The issue is that the
  function PPC::isSplatShuffleMask() in PPCISelLowering.cpp does not ensure
  that the splat pattern found is requesting bytes that are aligned on an
  EltSize boundary. This patch detects this situation as not a valid splat
  mask, resulting in a permute being generated instead of a splat.

  Patch and test case by Tyler Kenney, cleaned up a bit by me.

  This is a simple bug fix that would be good to incorporate into 3.7.

  llvm-svn: 243519

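  A standalone sketch of the added alignment condition, modeling the shuffle
  mask as 16 byte indices (this captures the idea, not the actual
  PPC::isSplatShuffleMask() implementation):

    #include <cstdio>

    // A splat of element width EltSize (in bytes) must replicate a group of
    // bytes that starts on an EltSize boundary; otherwise it is not a legal
    // splat mask and a permute must be used instead.
    bool isAlignedSplat(const unsigned char Mask[16], unsigned EltSize) {
      unsigned Start = Mask[0];
      if (Start % EltSize != 0) // the new check: reject misaligned groups
        return false;
      for (unsigned i = 0; i != 16; ++i)
        if (Mask[i] != Start + (i % EltSize))
          return false;
      return true;
    }

    int main() {
      unsigned char Aligned[16], Misaligned[16];
      for (unsigned i = 0; i != 16; ++i) {
        Aligned[i] = 4 + (i % 4);    // splat of bytes 4..7: aligned start
        Misaligned[i] = 5 + (i % 4); // splat of bytes 5..8: misaligned start
      }
      std::printf("%d %d\n", isAlignedSplat(Aligned, 4),
                  isAlignedSplat(Misaligned, 4)); // prints "1 0"
    }
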
* fix TLI's combineRepeatedFPDivisors interface to return the minimum user threshold
  Sanjay Patel | 2015-07-28 | 2 files | -4/+4

  This fix was suggested as part of D11345 and is part of fixing PR24141.

  With this change, we can avoid walking the uses of a divisor node if the
  target doesn't want the combineRepeatedFPDivisors transform in the first
  place. Other than that, no functional change is intended.

  Differential Revision: http://reviews.llvm.org/D11531

  llvm-svn: 243498

* Implement target independent TLS compatible with glibc's emutls.c.
  Chih-Hung Hsieh | 2015-07-28 | 1 file | -0/+3

  The 'common' section TLS is not implemented. Current C/C++ TLS variables
  are not placed in a common section. DWARF debug info to get the address of
  TLS variables is not generated yet.

  clang and driver changes are in http://reviews.llvm.org/D10524

  Added -femulated-tls flag to select the emulated TLS model, which will be
  used for old targets like Android that do not support ELF TLS models.

  Added TargetLowering::LowerToTLSEmulatedModel as a target-independent
  function to convert a SDNode of TLS variable address to a function call to
  __emutls_get_address. Added into lib/Target/*/*ISelLowering.cpp to call
  LowerToTLSEmulatedModel for TLSModel::Emulated. Although all targets
  supporting ELF TLS models are enhanced, the emulated TLS model has been
  tested only for Android ELF targets.

  Modified AsmPrinter.cpp to print the emutls_v.* and emutls_t.* variables
  for emulated TLS variables. Modified DwarfCompileUnit.cpp to skip some DIE
  for emulated TLS variables.

  TODO: Add proper DIE for emulated TLS variables. Added new unit tests with
  emulated TLS.

  Differential Revision: http://reviews.llvm.org/D10522

  llvm-svn: 243438

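  Conceptually, every access to an emulated-TLS variable becomes a call to
  the __emutls_get_address runtime helper. A rough C++ model follows, with a
  stub standing in for the real helper; the control-block layout is
  illustrative, and the real emutls_v.* symbol names contain a dot, which is
  not a valid C++ identifier, so an underscore is used here:

    #include <cstddef>
    #include <cstdio>

    // Per-variable control block the compiler emits (illustrative layout).
    struct __emutls_control {
      std::size_t size, align, index;
      void *templ; // initial value: the emutls_t.* object
    };

    // Stub: the real runtime helper lazily allocates per-thread storage and
    // returns the address of the calling thread's copy of the variable.
    extern "C" void *__emutls_get_address(__emutls_control *) {
      static int storage; // toy: a single "thread"
      return &storage;
    }

    int tls_init = 42; // would be the emutls_t.x template
    __emutls_control __emutls_v_x = {sizeof(int), alignof(int), 0, &tls_init};

    int main() {
      // Source-level "x = 7" on an emulated-TLS int x lowers, in effect, to:
      *(int *)__emutls_get_address(&__emutls_v_x) = 7;
      std::printf("%d\n", *(int *)__emutls_get_address(&__emutls_v_x));
    }
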
* [llvm-mc] Pushing plumbing through for --fatal-warnings flag.
  Colin LeMahieu | 2015-07-27 | 1 file | -1/+1

  llvm-svn: 243334

* Revert "Add const to some Type* parameters which didn't need to be mutable. ↵Pete Cooper2015-07-271-5/+5
| | | | | | | | | | NFC." This reverts commit r243146. Feedback from Craig Topper and David Blaikie was that we don't put const on Type as it has no mutable state. llvm-svn: 243282
* Fix PPCMaterializeInt to check the size of the integer based on the extension property we're requesting - zero or sign extended.
  Eric Christopher | 2015-07-25 | 1 file | -9/+14

  This fixes cases where we want to return a zero extended 32-bit -1 and not
  be sign extended for the entire register. Also updated the already out of
  date comment with the current behavior.

  llvm-svn: 243192

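  The zero- versus sign-extension distinction at issue, in isolation:

    #include <cstdint>
    #include <cstdio>

    int main() {
      int32_t Imm = -1;
      uint64_t Zext = (uint32_t)Imm; // 0x00000000ffffffff: zero-extended
      int64_t Sext = Imm;            // 0xffffffffffffffff: sign-extended
      // If the caller asked for a zero-extended i32, materializing the
      // sign-extended form would incorrectly set the high 32 bits.
      std::printf("zext: %#llx\nsext: %#llx\n",
                  (unsigned long long)Zext, (unsigned long long)Sext);
    }
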
* PPCMaterializeInt should only take a ConstantInt so represent this in the prototype and fix up all uses.
  Eric Christopher | 2015-07-25 | 1 file | -12/+9

  llvm-svn: 243191

* Add const to some Type* parameters which didn't need to be mutable. NFC.
  Pete Cooper | 2015-07-24 | 1 file | -5/+5

  We were only getting the size of the type, which doesn't need to modify
  the type.

  llvm-svn: 243146

* Use foreach loops for StructType::elements(). NFC.
  Pete Cooper | 2015-07-24 | 1 file | -2/+2

  We had a few places where we did

    for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {

  but those could instead do

    for (auto *EltTy : STy->elements()) {

  llvm-svn: 243136

* [PPC64LE] More vector swap optimization TLC
  Bill Schmidt | 2015-07-21 | 1 file | -21/+47

  This makes one substantive change and a few stylistic changes to the VSX
  swap optimization pass.

  The substantive change is to permit LXSDX and LXSSPX instructions to
  participate in swap optimization computations. The previous change to
  insert a swap following a SUBREG_TO_REG widening operation makes this
  almost trivial.

  I experimented with also permitting STXSDX and STXSSPX instructions. This
  can be done using similar techniques: we could insert a swap prior to a
  narrowing COPY operation, and then permit these stores to participate. I
  prototyped this, but discovered that the pattern of a narrowing COPY
  followed by an STXSDX does not occur in any of our test-suite code. So
  instead, I added commentary indicating that this could be done.

  Other TLC:
  - I changed SH_COPYSCALAR to SH_COPYWIDEN to more clearly indicate the
    direction of the copy.
  - I factored the insertion of swap instructions into a separate function.

  Finally, I added a new test case to check that the scalar-to-vector loads
  are working properly with swap optimization.

  llvm-svn: 242838

* Targets: commonize some stack realignment code
  JF Bastien | 2015-07-20 | 2 files | -20/+0

  This patch does the following:

  * Fix FIXME on `needsStackRealignment`: it is now shared between multiple
    targets, implemented in `TargetRegisterInfo`, and isn't `virtual`
    anymore. This will break out-of-tree targets, silently if they used
    `virtual` and with a build error if they used `override`.
  * Factor out `canRealignStack` as a `virtual` function on
    `TargetRegisterInfo`, which by default only looks for the
    `no-realign-stack` function attribute.

  Multiple targets duplicated the same `needsStackRealignment` code:
  - AArch64.
  - ARM.
  - Mips almost: had an extra `DEBUG` diagnostic, which the default
    implementation now has.
  - PowerPC.
  - WebAssembly.
  - x86 almost: has an extra `-force-align-stack` option, which the default
    implementation now has.

  The default implementation of `needsStackRealignment` used to just return
  `false`. My current patch changes the behavior by simply using the above
  shared behavior. This affects:
  - AMDGPU
  - BPF
  - CppBackend
  - MSP430
  - NVPTX
  - Sparc
  - SystemZ
  - XCore
  - Out-of-tree targets

  This is a breaking change! `make check` passes.

  The only implementation of the `virtual` function (besides the slight
  difference in x86) was Hexagon (which did
  `MF.getFrameInfo()->getMaxAlignment() > 8`), and potentially some
  out-of-tree targets. Hexagon now uses the default implementation.

  `needsStackRealignment` was being overwritten in
  `<Target>GenRegisterInfo.inc`, to return `false` as the default also did.
  That was odd and is now gone.

  Reviewers: sunfish

  Subscribers: aemerson, llvm-commits, jfb

  Differential Revision: http://reviews.llvm.org/D11160

  llvm-svn: 242727

* [PowerPC] v4i32 is a VSRCRegClass
  Bill Schmidt | 2015-07-16 | 1 file | -0/+1

  I was looking at some vector code generation and kept seeing unnecessary
  vector copies into the Altivec half of the VSX registers. I discovered
  that we overlooked v4i32 when adding the register classes for VSX; we only
  added v4f32 and v2f64. This means that anything that canonicalizes into
  v4i32 (which is a LOT of stuff) ends up being forced into VRRC on its way
  to VSRC.

  The fix is one line. The rest of the patch is fixing up some test cases
  whose code generation has changed as a result.

  This seems like it would be a good candidate for backport to 3.7.

  llvm-svn: 242442

* Move most users of TargetMachine::getDataLayout to the Module one
  Mehdi Amini | 2015-07-16 | 3 files | -12/+12

  Summary:
  This change is part of a series of commits dedicated to have a single
  DataLayout during compilation by using always the one owned by the module.

  This patch is quite boring overall, except for some ugliness in
  ASMPrinter, which has a getDataLayout function but has some clients that
  use it without a Module (llvm-dsymutil, llvm-dwarfdump), so some methods
  are taking a DataLayout as parameter.

  Reviewers: echristo

  Subscribers: yaron.keren, rafael, llvm-commits, jholewinski

  Differential Revision: http://reviews.llvm.org/D11090

  From: Mehdi Amini <mehdi.amini@apple.com>

  llvm-svn: 242386

* [PPC64LE] Fix vec_sld semantics for little endian
  Bill Schmidt | 2015-07-15 | 1 file | -4/+7

  The vec_sld interface provides access to the vsldoi instruction. Unlike
  most of the vec_* interfaces, we do not attempt to change the generated
  code for vec_sld based on the endian mode. It is too difficult to
  correctly infer the desired semantics because of different element types,
  and the corrected instruction sequence is expensive, involving loading a
  permute control vector and performing a generalized permute.

  For GCC, this was implemented as a "don't touch vec_sld" policy. When it
  came time for the LLVM implementation, I did the same thing. However, this
  was hasty and incorrect. In LLVM's version of altivec.h, vec_sld was
  previously defined in terms of the vec_perm interface. Because vec_perm
  semantics are adjusted for little endian, this means that leaving vec_sld
  untouched causes it to generate something different for LE than for BE.
  Not good.

  This back-end patch accompanies the changes to altivec.h that change
  vec_sld's behavior for little endian. Those changes mean that we see
  slightly different code in the back end when trying to recognize a VSLDOI
  instruction in isVSLDOIShuffleMask. In particular, a ShuffleKind of 1
  (where the two inputs are identical) must now be treated the same way as a
  ShuffleKind of 2 (little endian with different inputs) when little endian
  mode is in force. This is because ShuffleKind of 1 is defined using
  big-endian numbering.

  This has a ripple effect on LowerBUILD_VECTOR, where we create our own
  internal VSLDOI instructions. Because these are a ShuffleKind of 1, they
  will now have their shift amounts subtracted from 16 when recognizing the
  shuffle mask. To avoid problems we have to subtract them from 16 again
  before creating the VSLDOI instructions. There are a couple of other uses
  of BuildVSLDOI, but these do not need to be modified because the shift
  amount is 8, which is unchanged when subtracted from 16.

  llvm-svn: 242296

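  The shift-amount bookkeeping described above is self-inverting; a quick
  standalone check (illustrative only):

    #include <cstdio>

    int main() {
      // For little endian, a ShuffleKind-1 VSLDOI's shift amount is now
      // subtracted from 16 during mask recognition, so LowerBUILD_VECTOR
      // pre-subtracts the desired amount from 16 and the two adjustments
      // cancel: 16 - (16 - Amt) == Amt.  A shift of 8 is its own fixed
      // point (16 - 8 == 8), which is why the other BuildVSLDOI call sites
      // needed no change.
      for (int Amt = 4; Amt <= 12; Amt += 4)
        std::printf("want %2d: emit %2d, recognized as %2d\n",
                    Amt, 16 - Amt, 16 - (16 - Amt));
    }
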
* [PPC] Disassemble little endian ppc instructions in the right byte order
  Benjamin Kramer | 2015-07-15 | 1 file | -8/+17

  PR24122. The test is simply a byte swapped version of ppc64-encoding.txt.

  llvm-svn: 242288

* [PowerPC] Use the MachineCombiner to reassociate fadd/fmul
  Hal Finkel | 2015-07-15 | 3 files | -0/+303

  This is a direct port of the code from the X86 backend (r239486/r240361),
  which uses the MachineCombiner to reassociate (floating-point) adds/muls
  to increase ILP, to the PowerPC backend. The rationale is the same.

  There is a lot of copy-and-paste here between the X86 code and the PowerPC
  code, and we should extract at least some of this into CodeGen somewhere.
  However, I don't want to do that until this code is enhanced to handle
  FMAs as well. After that, we'll be in a better position to extract the
  common parts.

  llvm-svn: 242279

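  The reassociation rationale in scalar form (not the MachineCombiner code;
  the transform is only legal under appropriate fast-math flags, since FP
  addition is not associative in general):

    #include <cstdio>

    int main() {
      double a = 1.5, b = 2.5, c = 3.5, d = 4.5;
      // Serial form: every add depends on the previous one, giving a
      // 3-add critical path.
      double serial = ((a + b) + c) + d;
      // Reassociated form: (a+b) and (c+d) can execute in parallel, giving
      // a 2-add critical path and more ILP.
      double balanced = (a + b) + (c + d);
      std::printf("%f %f\n", serial, balanced);
    }
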
* [PowerPC] Extend physical register live range in PPCVSXFMAMutate
  Hal Finkel | 2015-07-15 | 1 file | -2/+15

  If the source of the copy that defines the addend is a physical register,
  then its existing live range may not extend to the FMA being mutated. Make
  sure we extend the live range of the register to meet the FMA because it
  will become its operand in this case.

  I don't have an independent test case, but it will be exposed by the
  change to be committed shortly enabling the use of the machine combiner to
  do fadd/fmul reassociation, and will be covered by one of the associated
  regression tests.

  llvm-svn: 242278

* [PowerPC] Support symbolic targets in patchpoints
  Hal Finkel | 2015-07-14 | 1 file | -57/+71

  Follow-up to r235483, with the corresponding support in PPC. We use a
  regular call for symbolic targets (because they're much cheaper than
  indirect calls).

  llvm-svn: 242239

* [PowerPC] Use the ABI indirect-call protocol for patchpoints
  Hal Finkel | 2015-07-14 | 1 file | -1/+43

  We used to take the address specified as the direct target of the
  patchpoint and did no TOC-pointer handling. This, however, was not all
  that useful, because MCJIT tends to create a lot of modules, and they have
  their own TOC sections. Thus, to call from the generated code to other
  generated code, you really need to switch TOC pointers. Make this work as
  expected, and under ELFv1, treat the address as the function descriptor
  address so that the correct TOC pointer can be loaded.

  llvm-svn: 242217

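  Under ELFv1, a "function address" actually points at a three-doubleword
  function descriptor. A sketch of the layout and the indirect-call protocol
  it implies (the instruction sequence in the comments is illustrative):

    #include <cstddef>
    #include <cstdio>

    // ELFv1 function descriptor: what a function pointer really points to.
    struct FuncDesc {
      void *Entry; // +0:  code entry point
      void *TOC;   // +8:  callee's TOC pointer (loaded into r2)
      void *Env;   // +16: environment pointer (for languages that need it)
    };

    int main() {
      // The indirect-call protocol is, conceptually:
      //   ld    r12, 0(desc)            ; load entry point
      //   ld    r2,  8(desc)            ; switch to callee's TOC
      //   mtctr r12
      //   bctrl
      //   ld    r2, <TOC save slot>(r1) ; restore caller's TOC
      std::printf("Entry at +%zu, TOC at +%zu, Env at +%zu\n",
                  offsetof(FuncDesc, Entry), offsetof(FuncDesc, TOC),
                  offsetof(FuncDesc, Env));
    }
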
* Add allnodes() iterator range to SelectionDAG. NFC.
  Pete Cooper | 2015-07-14 | 1 file | -3/+2

  SelectionDAG already had begin/end methods for iterating over all the
  nodes, but didn't define an iterator_range for use in foreach loops. This
  adds such a method and uses it in some of the eligible places throughout
  the backends.

  llvm-svn: 242212

* [PowerPC] Fix the PPCInstrInfo::getInstrLatency implementation
  Hal Finkel | 2015-07-14 | 5 files | -0/+55

  PowerPC uses itineraries to describe processor pipelines (and
  dispatch-group restrictions for P7/P8 cores). Unfortunately, the
  target-independent implementation of TII.getInstrLatency calls
  ItinData->getStageLatency, and that looks for the largest cycle count in
  the pipeline for any given instruction. This, however, yields the wrong
  answer for the PPC itineraries, because we don't encode the full pipeline.
  Because the functional units are fully pipelined, we only model the
  initial stages (there are no relevant hazards in the later stages to
  model), and so the technique employed by getStageLatency does not really
  work. Instead, we should take the maximum output operand latency, and
  that's what PPCInstrInfo::getInstrLatency now does.

  This caused some test-case churn, including two unfortunate side effects.
  First, the new arrangement of copies we get from function parameters now
  sometimes blocks VSX FMA mutation (a FIXME has been added to the code and
  the test cases), and we have one significant test-suite regression:

    SingleSource/Benchmarks/BenchmarkGame/spectral-norm
      56.4185% +/- 18.9398%

  In this benchmark we have a loop with a vectorized FP divide, and with the
  new scheduling both divides end up in the same dispatch group (which in
  this case seems to cause a problem, although why is not exactly clear).
  The grouping structure is hard to predict from the bottom of the loop, and
  there may not be much we can do to fix this.

  Very few other test-suite performance effects were really significant, but
  almost all weakly favor this change. However, in light of the issues
  highlighted above, I've left the old behavior available via a command-line
  flag.

  llvm-svn: 242188

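  A toy model of the difference between the two latency computations (this
  models the idea; it does not use the real itinerary API):

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
      // Cycle counts for the stages we actually model (initial stages only,
      // since the functional units are fully pipelined)...
      std::vector<int> StageCycles = {1, 1};
      // ...and the cycle at which each def operand becomes available.
      std::vector<int> DefOperandCycles = {5};

      // getStageLatency-style answer: the longest stage cycle count.
      int StageLatency =
          *std::max_element(StageCycles.begin(), StageCycles.end());
      // New PPC answer: the maximum output operand latency.
      int OperandLatency = *std::max_element(DefOperandCycles.begin(),
                                             DefOperandCycles.end());

      // The stage-based answer underestimates the real result latency.
      std::printf("stage-based: %d  operand-based: %d\n", StageLatency,
                  OperandLatency);
    }
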
* MachineRegisterInfo: Remove UsedPhysReg infrastructure
  Matthias Braun | 2015-07-14 | 1 file | -1/+2

  We have detailed def/use lists for every physical register in
  MachineRegisterInfo anyway, so there is little use in maintaining an
  additional bitset of which ones are used. Removing it frees us from extra
  bookkeeping. This simplifies VirtRegMap.

  Differential Revision: http://reviews.llvm.org/D10911

  llvm-svn: 242173

* Add missing builtins to the PPC back end for ABI compliance (vol. 4)
  Nemanja Ivanovic | 2015-07-14 | 1 file | -0/+6

  This patch corresponds to review: http://reviews.llvm.org/D11183

  Back end portion of the fourth round of additions to altivec.h.

  llvm-svn: 242167

* PrologEpilogInserter: Rewrite API to determine callee save registers.
  Matthias Braun | 2015-07-14 | 2 files | -10/+11

  This changes TargetFrameLowering::processFunctionBeforeCalleeSavedScan():

  - Rename the function to determineCalleeSaves()
  - Pass a bitset of callee saved registers by reference, thus avoiding the
    function-global PhysRegUsed bitset in MachineRegisterInfo.
  - Without PhysRegUsed the implementation is fine tuned to not save
    physical registers which are only read but never modified.

  Related to rdar://21539507

  Differential Revision: http://reviews.llvm.org/D10909

  llvm-svn: 242165

* [PPC64LE] More improvements to VSX swap optimization
  Bill Schmidt | 2015-07-13 | 1 file | -21/+188

  This patch allows VSX swap optimization to succeed more frequently.
  Specifically, it is concerned with common code sequences that occur when
  copying a scalar floating-point value to a vector register. This patch
  currently handles cases where the floating-point value is already in a
  register, but does not yet handle loads (such as via an LXSDX scalar
  floating-point VSX load). That will be dealt with later.

  A typical case is when a scalar value comes in as a floating-point
  parameter. The value is copied into a virtual VSFRC register, and then a
  sequence of SUBREG_TO_REG and/or COPY operations will convert it to a full
  vector register of the class required by the context. If this vector
  register is then used as part of a lane-permuted computation, the original
  scalar value will be in the wrong lane. We can fix this by adding a swap
  operation following any widening SUBREG_TO_REG operation. Additional COPY
  operations may be needed around the swap operation in order to keep
  register assignment happy, but these are pro forma operations that will be
  removed by coalescing.

  If a scalar value is otherwise directly referenced in a computation (such
  as by one of the many XS* vector-scalar operations), we currently disable
  swap optimization. These operations are lane-sensitive by definition. A
  MentionsPartialVR flag is added for use in each swap table entry that
  mentions a scalar floating-point register without having special handling
  defined.

  A common idiom for PPC64LE is to convert a double-precision scalar to a
  vector by performing a splat operation. This ensures that the value can be
  referenced as V[0], as it would be for big endian, whereas just converting
  the scalar to a vector with a SUBREG_TO_REG operation leaves this value
  only in V[1]. A doubleword splat operation is one form of an XXPERMDI
  instruction, which takes one doubleword from a first operand and another
  doubleword from a second operand, with a two-bit selector operand
  indicating which doublewords are chosen. In the general case, an XXPERMDI
  can be permitted in a lane-swapped region provided that it is properly
  transformed to select the corresponding swapped values. This
  transformation is to reverse the order of the two input operands, and to
  reverse and complement the bits of the selector operand (derivation left
  as an exercise to the reader ;).

  A new test case that exercises the scalar-to-vector and generalized
  XXPERMDI transformations is added as CodeGen/PowerPC/swaps-le-5.ll. The
  patch also requires a change to CodeGen/PowerPC/swaps-le-3.ll to use
  CHECK-DAG instead of CHECK for two independent instructions that now
  appear in reverse order.

  There are two small unrelated changes that are added with this patch.
  First, the XXSLDWI instruction was incorrectly omitted from the list of
  lane-sensitive instructions; this is now fixed. Second, I observed that
  the same webs were being rejected over and over again for different
  reasons. Since it's sufficient to reject a web only once, I added a check
  for this to speed up the compilation time slightly.

  llvm-svn: 242081

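  The selector transformation (the "exercise to the reader") works out to
  reversing the two selector bits and then complementing them, which
  exchanges selectors 0 and 3 while leaving 1 and 2 fixed. A quick
  standalone check:

    #include <cstdio>

    // Reverse the two bits of an XXPERMDI selector, then complement them.
    unsigned adjustSelector(unsigned Sel) {
      unsigned Reversed = ((Sel & 1u) << 1) | ((Sel >> 1) & 1u);
      return ~Reversed & 3u;
    }

    int main() {
      // Together with exchanging the two doubleword source operands, this
      // keeps an XXPERMDI correct inside a lane-swapped region.
      for (unsigned Sel = 0; Sel != 4; ++Sel)
        std::printf("%u -> %u\n", Sel, adjustSelector(Sel));
      // Prints: 0 -> 3, 1 -> 1, 2 -> 2, 3 -> 0
    }
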
* [PowerPC] Make use of the TargetRecip system
  Hal Finkel | 2015-07-12 | 3 files | -15/+47

  r238842 added the TargetRecip system for controlling use of reciprocal
  estimates for sqrt and division using a set of parameters that can be set
  by the frontend. Clang now supports a sophisticated -mrecip option, and
  this will allow that option to effectively control the relevant
  code-generation functionality of the PPC backend.

  llvm-svn: 241985
