path: root/llvm/lib/CodeGen/TargetLoweringBase.cpp
Commit history (commit message, author, date, files changed, lines -/+):
* PseudoSourceValue: Replace global manager with a manager in a machine function. (Alex Lorenz, 2015-08-11, 1 file: -1/+1)
  This commit removes the global manager variable which is responsible for storing and allocating pseudo source values, and instead introduces a new manager class named 'PseudoSourceValueManager'. Machine functions now own an instance of the pseudo source value manager class. This commit also modifies the 'get...' methods in the 'MachinePointerInfo' class to construct pseudo source values using the instance of the pseudo source value manager object from the machine function. This commit updates calls to the 'get...' methods from the 'MachinePointerInfo' class in a lot of different files because those calls now need to pass in a reference to a machine function to those methods. This change will make it easier to serialize pseudo source values, as it will enable me to transform the MIPS-specific MipsCallEntry PseudoSourceValue subclass into two target-independent subclasses.
  Reviewers: Akira Hatanaka
  llvm-svn: 244693
* Add new ISD nodes: ISD::FMINNAN and ISD::FMAXNAN (James Molloy, 2015-08-11, 1 file: -0/+2)
  The intention of these is to be a corollary to ISD::FMINNUM/FMAXNUM, differing only in how NaNs are treated. FMINNUM returns the non-NaN input (when given one NaN and one non-NaN); FMINNAN returns the NaN input instead. This patch includes support for scalarizing, widening and splitting vectors, but not expansion or softening. The reason is that these should never be needed - FMINNAN nodes are only going to be created in one place (SDAGBuilder::visitSelect) and there we'll check if the node is legal or custom. I could preemptively add expand and soften code, but I'm fairly opposed to adding code I can't test. It's bad enough I can't create tests with this patch, but at least this code will be exercised by the ARM and AArch64 backends fairly shortly.
  llvm-svn: 244581
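  As a rough illustration of how a backend would opt in (not taken from this patch; the VT list and action choices are hypothetical), a target whose hardware min/max propagates NaNs might write, in its TargetLowering constructor:

    // Hypothetical target constructor fragment: the hardware min/max returns
    // the NaN operand, so FMINNAN/FMAXNAN are Legal and the "return the
    // non-NaN operand" variants are left to be expanded.
    for (MVT VT : {MVT::f32, MVT::f64}) {
      setOperationAction(ISD::FMINNAN, VT, Legal);
      setOperationAction(ISD::FMAXNAN, VT, Legal);
      setOperationAction(ISD::FMINNUM, VT, Expand);
      setOperationAction(ISD::FMAXNUM, VT, Expand);
    }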
* [TTI] Make the cost APIs in TargetTransformInfo consistently use 'int' (Chandler Carruth, 2015-08-05, 1 file: -2/+2)
  ...rather than 'unsigned' for their costs. For something like costs in particular there is a natural "negative" value, that of savings or saved cost. As a consequence, there is a lot of code that subtracts or creates negative values based on cost, all of which is prone to awkwardness or bugs when dealing with an unsigned type. Similarly, we *never* want these values to wrap, as that would cause Very Bad code generation (likely perceived as an infinite loop as we try to emit over 2^32 instructions or some such insanity). All around, 'int' seems a much better fit for these basic metrics. I've added asserts to ensure that at least the TTI interface never returns negative numbers here. If we ever have a use case for negative numbers, we can remove this, but this way a bug where someone used '-1' to produce a 'very large' cost will be caught by the assert. This passes all tests, and is also UBSan clean. No functional change intended.
  Differential Revision: http://reviews.llvm.org/D11741
  llvm-svn: 244080
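  The wrap hazard being guarded against is easy to reproduce outside LLVM; a standalone C++ sketch (not code from this patch):

    #include <cstdio>

    int main() {
      unsigned BaseCost = 2, Savings = 5;
      unsigned Wrapped = BaseCost - Savings;   // wraps to 4294967293: a "huge" cost
      int Net = (int)BaseCost - (int)Savings;  // -3: a net saving, as intended
      std::printf("unsigned: %u  int: %d\n", Wrapped, Net);
      return 0;
    }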
* New EH representation for MSVC compatibility (David Majnemer, 2015-07-31, 1 file: -0/+6)
  This introduces new instructions necessary to implement MSVC-compatible exception handling support. Most of the middle-end, and none of the back-end, has been audited or updated to take them into account.
  Differential Revision: http://reviews.llvm.org/D11097
  llvm-svn: 243766
* Move DAGCombiner's allowableAlignment() helper function into the TLI (Sanjay Patel, 2015-07-29, 1 file: -0/+23)
  Making allowableAlignment() more accessible was suggested as a predecessor patch for D10662, so I've pulled it into TargetLowering. This lets us remove 4 instances of duplicate logic in LegalizeDAG. There's a subtle functional change in the implementation: the existing allowableAlignment() code was using getPrefTypeAlignment() when checking alignment with the DataLayout and assumed that was fast. In this implementation, we use getABITypeAlignment() and assume that is fast. See the TODO comment or the discussion in the Phab review for future improvements in this implementation (don't use the data layout at all). There are no regression test changes from this difference, and I'm not sure how to expose it via a test. I think we actually do want to provide the 'Fast' param when checking this from DAGCombiner::MergeConsecutiveStores(). That is, we shouldn't merge stores if the new stores are not going to be fast. But that change will require fixing allowsMisalignedMemoryAccess() overrides as noted in D10662.
  Differential Revision: http://reviews.llvm.org/D10905
  llvm-svn: 243549
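  The shape of the check, paraphrased (an illustrative sketch with an invented helper name, not the exact signature added to TargetLowering): an access is acceptable if it meets the DataLayout's ABI alignment for the type, and otherwise only if the target's misaligned-access hook says so.

    // Sketch only: 'accessIsAllowed' is a made-up name for illustration.
    static bool accessIsAllowed(const TargetLowering &TLI, const DataLayout &DL,
                                LLVMContext &Ctx, EVT VT, unsigned AddrSpace,
                                unsigned Alignment, bool *Fast) {
      // Meeting the ABI alignment is assumed to be fast.
      if (Alignment >= DL.getABITypeAlignment(VT.getTypeForEVT(Ctx))) {
        if (Fast)
          *Fast = true;
        return true;
      }
      // Otherwise, defer to the target's misaligned-access hook.
      return TLI.allowsMisalignedMemoryAccesses(VT, AddrSpace, Alignment, Fast);
    }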
* [Codegen] Add intrinsics 'absdiff' and corresponding SDNodes for absolute difference operation (James Molloy, 2015-07-16, 1 file: -0/+2)
  This adds new intrinsics "*absdiff" for absolute difference ops to facilitate efficient code generation for the "sum of absolute differences" operation. The patch also contains the introduction of corresponding SDNodes and basic legalization support. Sanity of the generated code is tested on X86. This is the 1st of three patches. Patch by Shahid Asghar-ahmad!
  llvm-svn: 242409
* Move most users of TargetMachine::getDataLayout to the Module one (Mehdi Amini, 2015-07-16, 1 file: -1/+1)
  Summary: This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module. This patch is quite boring overall, except for some ugliness in AsmPrinter, which has a getDataLayout function but has some clients that use it without a Module (llvm-dsymutil, llvm-dwarfdump), so some methods are taking a DataLayout as parameter.
  Reviewers: echristo
  Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11090
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 242386
* Revert the new EH instructions (David Majnemer, 2015-07-10, 1 file: -6/+0)
  This reverts commits r241888-r241891, I didn't mean to commit them.
  llvm-svn: 241893
* New EH representation for MSVC compatibility (David Majnemer, 2015-07-10, 1 file: -0/+6)
  Summary: This introduces new instructions necessary to implement MSVC-compatible exception handling support. Most of the middle-end, and none of the back-end, has been audited or updated to take them into account.
  Reviewers: rnk, JosephTremoulet, reames, nlewycky, rjmccall
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D11041
  llvm-svn: 241888
* Re-instate the EVT parameter to getScalarShiftAmountTy() for OOT users (Mehdi Amini, 2015-07-09, 1 file: -2/+3)
  Documentation for this function would be nice, by the way.
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241807
* Make isLegalAddressingMode() take DataLayout as an argument (Mehdi Amini, 2015-07-09, 1 file: -2/+2)
  Summary: This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module.
  Reviewers: echristo
  Subscribers: jholewinski, llvm-commits, rafael, yaron.keren
  Differential Revision: http://reviews.llvm.org/D11040
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241778
* Make getByValTypeAlignment() take DataLayout as an argument (Mehdi Amini, 2015-07-09, 1 file: -2/+3)
  Summary: This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module.
  Reviewers: echristo
  Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
  Differential Revision: http://reviews.llvm.org/D11038
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241777
* Make TargetLowering::getShiftAmountTy() take DataLayout as an argument (Mehdi Amini, 2015-07-09, 1 file: -4/+5)
  Summary: This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module.
  Reviewers: echristo
  Subscribers: jholewinski, llvm-commits, rafael, yaron.keren
  Differential Revision: http://reviews.llvm.org/D11037
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241776
* Make TargetLowering::getPointerTy() take DataLayout as an argument (Mehdi Amini, 2015-07-09, 1 file: -17/+6)
  Summary: This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module.
  Reviewers: echristo
  Subscribers: jholewinski, ted, yaron.keren, rafael, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11028
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241775
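  The typical call-site update looks like this (an illustrative sketch of the pattern, not lines from the patch; the surrounding variable names are hypothetical):

    // Before: the pointer type came from the TargetMachine-owned DataLayout.
    //   EVT PtrVT = TLI.getPointerTy();
    // After: callers hand in the module-owned DataLayout explicitly.
    EVT PtrVT = TLI.getPointerTy(DAG.getDataLayout());            // in SelectionDAG code
    EVT ArgVT = TLI.getPointerTy(F.getParent()->getDataLayout()); // with a Function F at hand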
* Redirect DataLayout from TargetMachine to Module in ComputeValueVTs() (Mehdi Amini, 2015-07-09, 1 file: -3/+3)
  Summary: Avoid using the TargetMachine-owned DataLayout and use the Module-owned one instead. This requires passing the DataLayout up the stack to ComputeValueVTs(). This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module.
  Reviewers: echristo
  Subscribers: jholewinski, yaron.keren, rafael, llvm-commits
  Differential Revision: http://reviews.llvm.org/D11019
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241773
* Remove IsLittleEndian from TargetLowering and redirect to DataLayout (Mehdi Amini, 2015-07-08, 1 file: -1/+0)
  Summary: This change is part of a series of commits dedicated to having a single DataLayout during compilation by always using the one owned by the module.
  Reviewers: echristo
  Subscribers: llvm-commits, rafael, yaron.keren
  Differential Revision: http://reviews.llvm.org/D11017
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 241655
* Add a cl::opt override for TargetLoweringBase's JumpIsExpensive (Sanjay Patel, 2015-07-01, 1 file: -1/+12)
  This patch is not intended to change existing codegen behavior for any target. It just exposes the JumpIsExpensive setting on the command line to allow for easier testing and emergency overrides. Also, change the existing regression test to use FileCheck, explicitly specify the jump-is-expensive option, and use more precise checks.
  Differential Revision: http://reviews.llvm.org/D10846
  llvm-svn: 241179
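  The general shape of such an override (a sketch; the description text and exact default handling in the patch may differ, though the message above does name the jump-is-expensive option):

    #include "llvm/Support/CommandLine.h"
    using namespace llvm;

    static cl::opt<bool> JumpIsExpensiveOverride(
        "jump-is-expensive", cl::init(false), cl::Hidden,
        cl::desc("Do not create extra branches to split comparison logic."));

    // In the TargetLoweringBase constructor, the member is then seeded from
    // the flag instead of a hard-coded 'false':
    //   JumpIsExpensive = JumpIsExpensiveOverride;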
* Eliminate additional redundant copies of Triple objects. NFC. (Daniel Sanders, 2015-06-24, 1 file: -1/+1)
  Subscribers: rafael, llvm-commits, rengolin
  Differential Revision: http://reviews.llvm.org/D10654
  llvm-svn: 240540
* Add address space argument to isLegalAddressingMode (Matt Arsenault, 2015-06-01, 1 file: -1/+2)
  This is important because of different addressing modes depending on the address space for GPU targets. This only adds the argument, and does not update any of the uses to provide the correct address space.
  llvm-svn: 238723
* Add SDNodes for umin, umax, smin and smax. (James Molloy, 2015-05-15, 1 file: -0/+4)
  This adds new SDNodes for signed/unsigned min/max. These nodes are built from select/icmp pairs matched at SDAGBuilder stage. This patch adds the nodes, as well as legalization support, and sets them to be "expand" for all targets. NFC for now; this will be tested when I switch AArch64 to using these new nodes.
  llvm-svn: 237423
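  What "expand" buys here is exactly the inverse of the matching done in SDAGBuilder: a min/max node turns back into the compare-and-select it came from. In scalar C++ terms (illustration only, not LLVM code):

    // smin(a, b) == select(a < b, a, b); the unsigned and max forms differ
    // only in the comparison used.
    static int smin(int A, int B) { return A < B ? A : B; }
    static unsigned umax(unsigned A, unsigned B) { return A > B ? A : B; }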
* [CodeGen] Use standard -not gnueabi- naming for f16 libcalls on Darwin. (Ahmed Bougacha, 2015-05-14, 1 file: -0/+8)
  Other targets probably should as well. Since r237161, compiler-rt has both, but I don't see why anything other than gnueabi would use a gnueabi naming scheme.
  llvm-svn: 237324
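  A sketch of the idea in TargetLoweringBase-style code (the exact predicate and call sites in the patch may differ):

    // On Darwin, prefer the standard compiler-rt names for the f16<->f32
    // conversion helpers over the gnueabi-style ones.
    if (TT.isOSDarwin()) {
      setLibcallName(RTLIB::FPEXT_F16_F32, "__extendhfsf2");  // not __gnu_h2f_ieee
      setLibcallName(RTLIB::FPROUND_F32_F16, "__truncsfhf2"); // not __gnu_f2h_ieee
    }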
* CodeGen: Default overflow operations to expand so we don't have to assume targets are lying (Jan Vesely, 2015-04-29, 1 file: -0/+8)
  Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
  Reviewed-by: ab
  Differential Revision: http://reviews.llvm.org/D9265
  llvm-svn: 236119
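  The semantics being defaulted to "expand" are the arithmetic-with-overflow pairs; in plain C++ the unsigned-add case looks like this (illustration, not LLVM code):

    #include <cstdint>
    #include <utility>

    // uaddo: the wrapped sum plus an overflow flag. ISD::UADDO expands to
    // exactly this add-and-compare when a target has no native support.
    static std::pair<uint32_t, bool> uaddo(uint32_t A, uint32_t B) {
      uint32_t Sum = A + B;   // wraps modulo 2^32
      return {Sum, Sum < A};  // overflow iff the wrapped sum shrank
    }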
* Add support to promote f16 to f32 (Pirama Arumuga Nainar, 2015-04-17, 1 file: -4/+13)
  Summary: This patch adds legalization support to operate on FP16 as a load/store type and do operations on it as floats. Tests for ARM are added to test/CodeGen/ARM/fp16-promote.ll.
  Reviewers: srhines, t.p.northover
  Differential Revision: http://reviews.llvm.org/D8755
  llvm-svn: 235215
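  For a backend, opting into this support is largely a matter of marking f16 operations as Promote so the legalizer extends to f32, operates, and rounds back. A sketch (the op list a target promotes is target-specific and hypothetical here):

    // Do f16 arithmetic in f32: extend, operate, truncate.
    for (unsigned Op : {ISD::FADD, ISD::FSUB, ISD::FMUL, ISD::FDIV})
      setOperationAction(Op, MVT::f16, Promote);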
* [CodeGen] "PromoteInteger" f32 to f64 doesn't make sense.Ahmed Bougacha2015-03-281-13/+6
| | | | | | | | | | | | | | | | | | | | | | The original f32->f64 promotion logic was refactored into roughly the currently shape in r37781. However, starting with r132263, the legalizer has been split into different kinds, and the previous "Promote" (which did the right thing) was search-and-replace'd into "PromoteInteger". The divide gradually deepened, with type legalization ("PromoteInteger") being separated from ops legalization ("Promote", which still works for floating point ops). Fast-forward to today: there's no in-tree target with legal f64 but illegal f32 (rather: no tests were harmed in the making of this patch). With such a target, i.e., if you trick the legalizer into going through the PromoteInteger path for FP, you get the expected brokenness. For instance, there's no PromoteIntRes_FADD (the name itself sounds wrong), so we'll just hit some assert in the PromoteInteger path. Don't pretend we can promote f32 to f64. Instead, always soften. llvm-svn: 233464
* Deduplicate a bunch of setOpActions into an MVT range-for. NFC. (Ahmed Bougacha, 2015-03-26, 1 file: -39/+15)
  llvm-svn: 233330
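  The pattern in question, shown on a few illustrative libm nodes (the specific ops here are examples, not an exhaustive list from the patch):

    // Before: one setOperationAction line per (op, type) pair.
    //   setOperationAction(ISD::FLOG, MVT::f32, Expand);
    //   setOperationAction(ISD::FLOG, MVT::f64, Expand);
    //   setOperationAction(ISD::FLOG, MVT::f128, Expand);
    // After: a single range-for over the MVTs of interest.
    for (MVT VT : {MVT::f32, MVT::f64, MVT::f128}) {
      setOperationAction(ISD::FLOG,   VT, Expand);
      setOperationAction(ISD::FLOG2,  VT, Expand);
      setOperationAction(ISD::FLOG10, VT, Expand);
    }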
* [CodeGen] Don't pretend we can expand f16 libcalls. (Ahmed Bougacha, 2015-03-26, 1 file: -13/+0)
  We used to mark a bunch of libm nodes as Expand for f16. There are no libcalls we can use for those, so we eventually just hit an unhelpful llvm_unreachable in ExpandFPLibCall. Instead, just ignore them altogether. If nothing else changes, we'll then get the more descriptive and pleasant "Cannot select" fatal error. There's an argument to be made for consistency, but f16 is already special in all the good ways, and as long as there's no f16 support in the ops expander (this patch), as well as the Soften/Expand float legalizers (which, when hit, will currently segfault), I think there's no point in even pretending we can legalize any of this. This shouldn't affect anything that's not already broken.
  llvm-svn: 233328
* DataLayout is mandatory, update the API to reflect it with references. (Mehdi Amini, 2015-03-10, 1 file: -6/+5)
  Summary: Now that the DataLayout is a mandatory part of the module, let's start cleaning the codebase. This patch is a first attempt at doing that. This patch is not exactly NFC; for instance, some places were passing a nullptr instead of the DataLayout, possibly just because there was a default value on the DataLayout argument to many functions in the API. Even though it is not purely NFC, there is no change in the validation. I turned as many pointers to DataLayout into references as possible; this helped in figuring out all the places where a nullptr could come up. I initially had a local version of this patch broken into over 30 independent commits, but some later commits were cleaning the API and touching parts of the code modified in the previous commits, so it seemed cleaner without the intermediate state.
  Test Plan:
  Reviewers: echristo
  Subscribers: llvm-commits
  From: Mehdi Amini <mehdi.amini@apple.com>
  llvm-svn: 231740
* SDAG: Merge the meat of two ExpandAtomic implementations. (Benjamin Kramer, 2015-03-05, 1 file: -0/+38)
  The copies already diverged, don't let them become any worse. Reduce redundancy in code with a little macro metaprogramming.
  llvm-svn: 231401
* Add a comment above findRepresentativeClass explaining why it's where it is, so that future generations can understand. (Eric Christopher, 2015-03-03, 1 file: -0/+4)
  llvm-svn: 231111
* Remove an argument-less call to getSubtargetImpl from TargetLoweringBase. (Eric Christopher, 2015-02-26, 1 file: -6/+6)
  This required plumbing a TargetRegisterInfo through computeRegisterProperties and into findRepresentativeClass, which uses it for register class iteration. This required passing a subtarget into a few target-specific initializations of TargetLowering.
  llvm-svn: 230583
* Move TargetLoweringBase::getTypeConversion to the .cpp file from the .h file. (Eric Christopher, 2015-02-25, 1 file: -0/+132)
  It's used in only one place (other than recursively) and there's no need to include it everywhere. Saves almost 900k from the total llvm object file size.
  llvm-svn: 230561
* Add generic fmad DAG node. (Matt Arsenault, 2015-02-20, 1 file: -0/+1)
  This allows FMA-forming combines to be shared with instructions that have the same semantics as a separate multiply and add. It is Expand by default, and only formed post-legalization, so it shouldn't have much impact on targets that do not want it.
  llvm-svn: 230070
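  The distinction from ISD::FMA is the intermediate rounding; in scalar terms (illustration only, and assuming the compiler is not allowed to contract the expression):

    #include <cmath>

    // ISD::FMAD: same result as a separate multiply and add (two roundings).
    static float fmad(float A, float B, float C) { return A * B + C; }
    // ISD::FMA: a true fused multiply-add with a single rounding.
    static float fma_ref(float A, float B, float C) { return std::fma(A, B, C); }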
* Move DataLayout back to the TargetMachine from TargetSubtargetInfo-derived classes. (Eric Christopher, 2015-01-26, 1 file: -3/+2)
  Since global data alignment, layout, and mangling is often based on the DataLayout, move it to the TargetMachine. This ensures that global data is going to be laid out and mangled consistently if the subtarget changes on a per-function basis. Prior to this, all targets (*) have had subtarget-dependent code moved out and onto the TargetMachine. (*) One target hasn't been migrated as part of this change: R600. The R600 port has, as a subtarget feature, the size of pointers, and this affects global data layout. I've currently hacked in a FIXME to enable progress, but the port needs to be updated to either pass the 64-bitness to the TargetMachine, or fix the DataLayout to avoid subtarget-dependent features.
  llvm-svn: 227113
* R600: Implement getRecipEstimate (Matt Arsenault, 2015-01-13, 1 file: -0/+1)
  This requires a new hook to prevent expanding sqrt in terms of rsqrt and reciprocal. v_rcp_f32, v_rsq_f32, and v_sqrt_f32 are all the same rate, so this expansion would just double the number of instructions and cycles.
  llvm-svn: 225828
* [CodeGen] Use MVT iterator_ranges in legality loops. NFC intended. (Ahmed Bougacha, 2015-01-07, 1 file: -19/+14)
  A few loops do trickier things than just iterating on an MVT subset, so I'll leave them be for now. Follow-up of r225387.
  llvm-svn: 225392
* [CodeGenPrepare] Reapply r224351 with a fix for the assertion failure (Quentin Colombet, 2014-12-17, 1 file: -0/+1)
  The type promotion helper does not support vector types, so it does not kick in in such cases.
  Original commit message:
  [CodeGenPrepare] Move sign/zero extensions near loads using type promotion.
  This patch extends the optimization in CodeGenPrepare that moves a sign/zero extension near a load when the target can combine them. The optimization may promote any operations between the extension and the load to make that possible. Although this optimization may be beneficial for all targets, in particular AArch64, this is enabled for X86 only as I have not benchmarked it for other targets yet.
  ** Context **
  Most targets feature extended loads, i.e., loads that perform a zero or sign extension for free. In that context it is interesting to expose such a pattern in CodeGenPrepare so that the instruction selection pass can form such loads. Sometimes, this pattern is blocked because of instructions between the load and the extension. When those instructions are promotable to the extended type, we can expose this pattern.
  ** Motivating Example **
  Let us consider an example:
    define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
      %ld = load i8* %addr1
      %zextld = zext i8 %ld to i32
      %ld2 = load i32* %addr2
      %add = add nsw i32 %ld2, %zextld
      %sextadd = sext i32 %add to i64
      %zexta = zext i8 %a to i32
      %addza = add nsw i32 %zexta, %zextld
      %sextaddza = sext i32 %addza to i64
      %addb = add nsw i32 %b, %zextld
      %sextaddb = sext i32 %addb to i64
      call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
      ret void
    }
  As it is, this IR generates the following assembly on x86_64:
    [...]
    movzbl (%rdi), %eax    # zero-extended load
    movl (%rsi), %esi      # plain load
    addl %eax, %esi        # 32-bit add
    movslq %esi, %rdi      # sign extend the result of add
    movzbl %dl, %edx       # zero extend the first argument
    addl %eax, %edx        # 32-bit add
    movslq %edx, %rsi      # sign extend the result of add
    addl %eax, %ecx        # 32-bit add
    movslq %ecx, %rdx      # sign extend the result of add
    [...]
  The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
  Now, by promoting the additions to form more extended loads we would generate:
    [...]
    movzbl (%rdi), %eax    # zero-extended load
    movslq (%rsi), %rdi    # sign-extended load
    addq %rax, %rdi        # 64-bit add
    movzbl %dl, %esi       # zero extend the first argument
    addq %rax, %rsi        # 64-bit add
    movslq %ecx, %rdx      # sign extend the second argument
    addq %rax, %rdx        # 64-bit add
    [...]
  The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA. This kind of sequence happens a lot in code using 32-bit indexes on 64-bit architectures.
  Note: The throughput numbers are similar on Sandy Bridge and Haswell.
  ** Proposed Solution **
  To avoid the penalty of all these sign/zero extensions, we merge them into the loads at the beginning of the chain of computation by promoting the whole chain of computation to the extended type. The promotion is done if and only if we do not introduce new extensions, i.e., if we do not degrade the code quality. To achieve this, we extend the existing “move ext to load” optimization with the promotion mechanism introduced to match larger patterns for addressing mode (r200947). The idea of this extension is to perform the following transformation:
    ext(promotableInst1(...(promotableInstN(load)))) => promotedInst1(...(promotedInstN(ext(load))))
  The promotion mechanism in that optimization is enabled by a new TargetLowering switch, which is off by default. In other words, by default, the optimization performs the “move ext to load” optimization as it was before this patch.
  ** Performance **
  Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10. Tested Optimization Levels: O3/Os. Tests: llvm-testsuite + externals.
  Results:
  - No regression beside noise.
  - Improvements: CINT2006/473.astar: ~2%, Benchmarks/PAQ8p: ~2%, Misc/perlin: ~3%
  The results are consistent for both O3 and Os.
  <rdar://problem/18310086>
  llvm-svn: 224402
* Revert "[CodeGenPrepare] Move sign/zero extensions near loads using type ↵Reid Kleckner2014-12-171-1/+0
| | | | | | | | | promotion." This reverts commit r224351. It causes assertion failures when building ICU. llvm-svn: 224397
* [CodeGenPrepare] Move sign/zero extensions near loads using type promotion. (Quentin Colombet, 2014-12-16, 1 file: -0/+1)
  This patch extends the optimization in CodeGenPrepare that moves a sign/zero extension near a load when the target can combine them. The optimization may promote any operations between the extension and the load to make that possible. Although this optimization may be beneficial for all targets, in particular AArch64, this is enabled for X86 only as I have not benchmarked it for other targets yet.
  ** Context **
  Most targets feature extended loads, i.e., loads that perform a zero or sign extension for free. In that context it is interesting to expose such a pattern in CodeGenPrepare so that the instruction selection pass can form such loads. Sometimes, this pattern is blocked because of instructions between the load and the extension. When those instructions are promotable to the extended type, we can expose this pattern.
  ** Motivating Example **
  Let us consider an example:
    define void @foo(i8* %addr1, i32* %addr2, i8 %a, i32 %b) {
      %ld = load i8* %addr1
      %zextld = zext i8 %ld to i32
      %ld2 = load i32* %addr2
      %add = add nsw i32 %ld2, %zextld
      %sextadd = sext i32 %add to i64
      %zexta = zext i8 %a to i32
      %addza = add nsw i32 %zexta, %zextld
      %sextaddza = sext i32 %addza to i64
      %addb = add nsw i32 %b, %zextld
      %sextaddb = sext i32 %addb to i64
      call void @dummy(i64 %sextadd, i64 %sextaddza, i64 %sextaddb)
      ret void
    }
  As it is, this IR generates the following assembly on x86_64:
    [...]
    movzbl (%rdi), %eax    # zero-extended load
    movl (%rsi), %esi      # plain load
    addl %eax, %esi        # 32-bit add
    movslq %esi, %rdi      # sign extend the result of add
    movzbl %dl, %edx       # zero extend the first argument
    addl %eax, %edx        # 32-bit add
    movslq %edx, %rsi      # sign extend the result of add
    addl %eax, %ecx        # 32-bit add
    movslq %ecx, %rdx      # sign extend the result of add
    [...]
  The throughput of this sequence is 7.45 cycles on Ivy Bridge according to IACA.
  Now, by promoting the additions to form more extended loads we would generate:
    [...]
    movzbl (%rdi), %eax    # zero-extended load
    movslq (%rsi), %rdi    # sign-extended load
    addq %rax, %rdi        # 64-bit add
    movzbl %dl, %esi       # zero extend the first argument
    addq %rax, %rsi        # 64-bit add
    movslq %ecx, %rdx      # sign extend the second argument
    addq %rax, %rdx        # 64-bit add
    [...]
  The throughput of this sequence is 6.15 cycles on Ivy Bridge according to IACA. This kind of sequence happens a lot in code using 32-bit indexes on 64-bit architectures.
  Note: The throughput numbers are similar on Sandy Bridge and Haswell.
  ** Proposed Solution **
  To avoid the penalty of all these sign/zero extensions, we merge them into the loads at the beginning of the chain of computation by promoting the whole chain of computation to the extended type. The promotion is done if and only if we do not introduce new extensions, i.e., if we do not degrade the code quality. To achieve this, we extend the existing “move ext to load” optimization with the promotion mechanism introduced to match larger patterns for addressing mode (r200947). The idea of this extension is to perform the following transformation:
    ext(promotableInst1(...(promotableInstN(load)))) => promotedInst1(...(promotedInstN(ext(load))))
  The promotion mechanism in that optimization is enabled by a new TargetLowering switch, which is off by default. In other words, by default, the optimization performs the “move ext to load” optimization as it was before this patch.
  ** Performance **
  Configuration: x86_64: Ivy Bridge fixed at 2900MHz running OS X 10.10. Tested Optimization Levels: O3/Os. Tests: llvm-testsuite + externals.
  Results:
  - No regression beside noise.
  - Improvements: CINT2006/473.astar: ~2%, Benchmarks/PAQ8p: ~2%, Misc/perlin: ~3%
  The results are consistent for both O3 and Os.
  <rdar://problem/18310086>
  llvm-svn: 224351
* [Statepoints 2/4] Statepoint infrastructure for garbage collection: MI & x86-64 Backend (Philip Reames, 2014-12-01, 1 file: -1/+7)
  This is the second patch in a small series. This patch contains the MachineInstruction and x86-64 backend pieces required to lower Statepoints. It does not include the code to actually generate the STATEPOINT machine instruction, and as a result the entire patch is currently dead code. I will be submitting the SelectionDAG parts within the next 24-48 hours. Since those pieces are by far the most complicated, I wanted to minimize the size of that patch. That patch will include the tests which exercise the functionality in this patch. The entire series can be seen as one combined whole in http://reviews.llvm.org/D5683. The STATEPOINT pseudo node is generated after all gc values are explicitly spilled to stack slots. The purpose of this node is to wrap an actual call instruction while recording the spill locations of the meta arguments used for garbage collection and other purposes. The STATEPOINT is modeled as modifying all of those locations to prevent backend optimizations from forwarding the value from before the STATEPOINT to after the STATEPOINT. (Doing so would break relocation semantics for collectors which wish to relocate roots.) The implementation of STATEPOINT is closely modeled on PATCHPOINT. Eventually, much of the code in this patch will be removed. The long term plan is to merge the functionality provided by statepoints and patchpoints. Merging their implementations in the backend is likely to be a good starting point.
  Reviewed by: atrick, ributzka
  llvm-svn: 223085
* Target triple OS detection tidyup. NFC (Simon Pilgrim, 2014-11-29, 1 file: -1/+1)
  Use Triple::isOS*() helpers where possible.
  llvm-svn: 222960
* Replace a couple asserts with static_asserts. (Craig Topper, 2014-11-17, 1 file: -2/+2)
  llvm-svn: 222114
* We can get the TLOF from the TargetMachine, so the constructor no longer requires TargetLoweringObjectFile to be passed. (Aditya Nandakumar, 2014-11-13, 1 file: -4/+3)
  llvm-svn: 221926
* This patch changes the ownership of TLOF from TargetLoweringBase to TargetMachine so that different subtargets could share the TLOF effectively. (Aditya Nandakumar, 2014-11-13, 1 file: -4/+0)
  llvm-svn: 221878
* PR20557: Fix a bug where a bogus cpu parameter crashes llc on the AArch64 backend. (Hao Liu, 2014-10-31, 1 file: -1/+5)
  Initial patch by Oleg Ranevskyy.
  llvm-svn: 220945
* Add minnum / maxnum codegen (Matt Arsenault, 2014-10-21, 1 file: -0/+20)
  llvm-svn: 220342
* Reinstate "Nuke the old JIT."Eric Christopher2014-09-021-1/+0
| | | | | | | | Approved by Jim Grosbach, Lang Hames, Rafael Espindola. This reinstates commits r215111, 215115, 215116, 215117, 215136. llvm-svn: 216982
* Name change: isPow2DivCheap -> isPow2SDivCheap (Sanjay Patel, 2014-08-21, 1 file: -1/+1)
  isPow2DivCheap: that name doesn't specify signed or unsigned. Lazy as I am, I eventually read the function and variable comments. It turns out that this is strictly about signed div. But I discovered that the comments are wrong: srl/add/sra is not the general sequence for signed integer division by power-of-2. We need one more 'sra': sra/srl/add/sra. That's the sequence produced in DAGCombiner. The first 'sra' may be removed when dividing by exactly '2', but that's a special case. This patch corrects the comments, changes the name of the flag bit, and changes the name of the accessor methods. No functional change intended.
  Differential Revision: http://reviews.llvm.org/D5010
  llvm-svn: 216237
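  The corrected sequence is easy to verify with a standalone worked example for division by 4 (k = 2), assuming arithmetic right shifts for the sra steps (illustration only):

    #include <cassert>

    static int sdiv_by_4(int X) {
      int Sign = X >> 31;               // sra: 0 for non-negative, -1 for negative
      int Bias = (unsigned)Sign >> 30;  // srl: 0 or 3, i.e. 2^k - 1
      return (X + Bias) >> 2;           // add, then the final sra
    }

    int main() {
      assert(sdiv_by_4(7) == 1 && sdiv_by_4(-7) == -1 && sdiv_by_4(-8) == -2);
      return 0;
    }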
* Added a TLI hook to signal that the target does not have or does not care about floating point exceptions (Pedro Artigas, 2014-08-08, 1 file: -0/+1)
  Added use of the flag to fold potentially exception-raising floating point math in the selection DAG. No functionality change, as targets have to explicitly ask for this behavior and none does today.
  llvm-svn: 215222
* Temporarily Revert "Nuke the old JIT." as it's not quite ready to be deleted. (Eric Christopher, 2014-08-07, 1 file: -0/+1)
  This will be reapplied as soon as possible and before the 3.6 branch date at any rate. Approved by Jim Grosbach, Lang Hames, Rafael Espindola. This reverts commits r215111, 215115, 215116, 215117, 215136.
  llvm-svn: 215154
* Nuke the old JIT. (Rafael Espindola, 2014-08-07, 1 file: -1/+0)
  I am sure we will be finding bits and pieces of dead code for years to come, but this is a good start. Thanks to Lang Hames for making MCJIT a good replacement!
  llvm-svn: 215111