path: root/llvm/lib/Target
Commit message | Author | Age | Files | Lines
...
* [WebAssembly] Added default stack-only instruction mode for MC. | Wouter van Oortmerssen | 2018-07-27 | 5 files | -252/+475
  Summary:
  Moved Explicit Locals pass to last. Made that pass obligatory. Made it convert from register to stack based instructions, and removed the registers. Fixes to related code that was expecting register based instructions. Added the correct testing flag to all tests, depending on what format they were expecting so far. Translated one test to stack format as example: reg-stackify-stack.ll

  tested: llvm-lit -v `find test -name WebAssembly` unittests/MC/*

  Reviewers: dschuff, sunfish
  Subscribers: sbc100, jgravelle-google, eraman, aheejin, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49160
  llvm-svn: 338164
* Recommit "Enable MachineOutliner by default under -Oz for AArch64" | Jessica Paquette | 2018-07-27 | 3 files | -0/+9
  Fixed the ASAN failure from before in r338148, so recommitting.

  This patch enables the MachineOutliner by default in AArch64 under -Oz.

  The MachineOutliner offers around a 4.5% improvement over the current -Oz code size savings.

  We have put work into improving the debuggability of outlined code, so that users of -Oz won't be surprised by the optimization. We have also been executing the LLVM test suite and common external tests such as the SPEC suites continuously with no issue. The outliner has a low compile-time overhead of roughly 1%.

  At this point, the outliner would be a really good addition to the -Oz pass pipeline!

  llvm-svn: 338160
* [MachineOutliner] Exit getOutliningCandidateInfo when we erase all candidates | Jessica Paquette | 2018-07-27 | 1 file | -0/+4
  There was a missing check for whether a candidate list was entirely deleted. This adds that check.

  This fixes an ASan failure caused by running test/CodeGen/AArch64/addsub_ext.ll with the MachineOutliner enabled.

  llvm-svn: 338148
* [ARM] Add new target feature to fuse literal generation | Evandro Menezes | 2018-07-27 | 3 files | -19/+55
  This feature enables the fusion of such operations on Cortex A57 and Cortex A72, as recommended in their Software Optimisation Guides, sections 4.14 and 4.11, respectively.

  Differential revision: https://reviews.llvm.org/D49563
  llvm-svn: 338147
* Revert "Enable MachineOutliner by default under -Oz for AArch64" | Jessica Paquette | 2018-07-27 | 3 files | -9/+0
  It failed an Asan test on a bot:
  http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/21543/steps/check-llvm%20asan/logs/stdio

  Fixing that before recommitting.

  llvm-svn: 338136
* bpf: add missing RegState to notify MachineInstr verifier necessary register usage | Yonghong Song | 2018-07-27 | 1 file | -8/+10
  Errors like the following are reported by http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-win/builds/11261:

    *** Bad machine code: Explicit definition marked as use ***
    - function:    cal_align1
    - basic block: %bb.0 entry (0x47edd98)
    - instruction: LDB $r3, $r2, 0
    - operand 0:   $r3

  This is because RegState info was missing for ScratchReg inside expandMEMCPY. This caused incomplete register usage information to be passed to the MachineInstr verifier, which then would complain, as there could be a potential code-gen issue if the complained-about MachineInstr were used in a place where register usage information matters, even though the memcpy expansion is not in such a case since it happens at the last stage of the IR optimization pipeline.

  We should always specify whatever register usage information the compiler couldn't deduce automatically whenever we add a hardware register manually.

  Reported-by: Builder llvm-clang-x86_64-expensive-checks-win Build #11261
  Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
  Reviewed-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 338134
* Enable MachineOutliner by default under -Oz for AArch64 | Jessica Paquette | 2018-07-27 | 3 files | -0/+9
  This patch enables the MachineOutliner by default in AArch64 under -Oz.

  The MachineOutliner offers around a 4.5% improvement over the current -Oz code size savings.

  We have put work into improving the debuggability of outlined code, so that users of -Oz won't be surprised by the optimization. We have also been executing the LLVM test suite and common external tests such as the SPEC suites continuously with no issue. The outliner has a low compile-time overhead of roughly 1%.

  At this point, the outliner would be a really good addition to the -Oz pass pipeline!

  llvm-svn: 338133
* AMDGPU/R600: Add MOV instructions to BFE patterns | Jan Vesely | 2018-07-27 | 1 file | -5/+5
  R600 can't handle immediates for BFE; these will be eliminated later.

  Fixes powr/pow regressions on r600 since r334817.

  Differential Revision: https://reviews.llvm.org/D49641
  llvm-svn: 338127
* [AArch64][SVE] Asm: Predicated integer reductions. | Sander de Smalen | 2018-07-27 | 2 files | -0/+62
  This patch adds support for various integer reduction operations:

    SADDV  signed add reduction to scalar
    UADDV  unsigned add reduction to scalar
    SMAXV  signed maximum reduction to scalar
    SMINV  signed minimum reduction to scalar
    UMAXV  unsigned maximum reduction to scalar
    UMINV  unsigned minimum reduction to scalar
    ANDV   logical AND reduction to scalar
    ORV    logical OR reduction to scalar
    EORV   logical EOR reduction to scalar

  The reduction is predicated, e.g.

    smaxv s0, p0, z1.s

  performs a signed maximum reduction on active elements in z1, and stores the (signed max value) result in s0.

  llvm-svn: 338126
* [AArch64][SVE] Asm: Predicated floating point reductions. | Sander de Smalen | 2018-07-27 | 2 files | -1/+71
  This patch adds support for various floating-point reduction operations:

    FADDA    strictly-ordered add reduction, accumulating in scalar
    FADDV    recursive add reduction to scalar
    FMAXV    recursive max reduction to scalar
    FMINV    recursive min reduction to scalar
    FMAXNMV  recursive max number reduction to scalar
    FMINNMV  recursive min number reduction to scalar

  The reduction is predicated, e.g.

    fadda d0, p0, d0, z1.d

  performs the add-reduction in strict order on active elements in z1, accumulating into d0.

    faddv d0, p0, z1.d

  performs the add-reduction (not in strict order) on active elements in z1, storing the result in d0.

  llvm-svn: 338123
* [AArch64][SVE] Asm: Support for FEXPA and FTSSEL. | Sander de Smalen | 2018-07-27 | 2 files | -0/+51
  This patch adds support for transcendental acceleration instructions 'FEXPA' (exponential accelerator) and 'FTSSEL' (trigonometric select coefficient).

  llvm-svn: 338121
* [AArch64][SVE] Asm: Support for FRECPE and FRSQRTE. | Sander de Smalen | 2018-07-27 | 2 files | -0/+30
  Support for floating-point instructions for reciprocal estimate (FRECPE) and reciprocal square root estimate (FRSQRTE).

  llvm-svn: 338120
* AMDGPU: Fix code size for return_to_epilog pseudo | Matt Arsenault | 2018-07-27 | 2 files | -3/+4
  llvm-svn: 338113
* AMDGPU/GlobalISel: Fix crash in regbankselect on non-power-of-2 types | Tom Stellard | 2018-07-27 | 1 file | -1/+1
  Reviewers: arsenm
  Reviewed By: arsenm
  Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, llvm-commits, t-tye
  Differential Revision: https://reviews.llvm.org/D49624
  llvm-svn: 338102
* [X86] Remove an unnecessary 'if' that prevented treating INT64_MAX and -INT64_MAX as power of 2 minus 1 in the multiply expansion code. | Craig Topper | 2018-07-27 | 1 file | -38/+36
  Not sure why they were being explicitly excluded, but I believe all the math inside the if works. I changed the absolute value to be uint64_t instead of int64_t so INT64_MIN+1 wouldn't signed wrap.

  llvm-svn: 338101
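  A worked illustration of why INT64_MAX can take this path (editor's sketch in C, not code from the patch): INT64_MAX is 2^63 - 1, so the multiply folds into a shift and a subtract.

    #include <stdint.h>

    /* x * (2^63 - 1) == (x << 63) - x; computing on uint64_t keeps the
       shift well defined and matches the two's complement result of the
       signed multiply. */
    uint64_t mul_int64_max(uint64_t x) {
        return (x << 63) - x;
    }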
* [X86] Add matching for another pattern of PMADDWD. | Craig Topper | 2018-07-27 | 1 file | -0/+123
  Summary:
  This is the pattern you get from the loop vectorizer for something like this:

    int16_t A[1024];
    int16_t B[1024];
    int32_t C[512];

    void pmaddwd() {
      for (int i = 0; i != 512; ++i)
        C[i] = (A[2*i]*B[2*i]) + (A[2*i+1]*B[2*i+1]);
    }

  In this case we will have (add (mul (build_vector), (build_vector)), (mul (build_vector), (build_vector))). This is different than the pattern we currently match, which has the build_vectors between an add and a single multiply. I'm not sure what C code would get you that pattern.

  Reviewers: RKSimon, spatel, zvi
  Reviewed By: zvi
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D49636
  llvm-svn: 338097
* [X86] When removing sign extends from gather/scatter indices, make sure we handle UpdateNodeOperands finding an existing node to CSE with. | Craig Topper | 2018-07-27 | 1 file | -15/+20
  If this happens the operands aren't updated and the existing node is returned. Make sure we pass this existing node up to the DAG combiner so that a proper replacement happens. Otherwise we get stuck in an infinite loop with an unoptimized node.

  llvm-svn: 338090
* [AMDGPU] Fix VGPR spills where offset doesn't fit in 12 bits | Scott Linder | 2018-07-26 | 1 file | -11/+16
  Scale the offset of VGPR spills by the wave size when it cannot fit in the 12-bit offset immediate field and so is added to the soffset SGPR. This accounts for hardware swizzling of scratch memory.

  Differential Revision: https://reviews.llvm.org/D49448
  llvm-svn: 338060
* [RISCV] Add support for _interrupt attribute | Ana Pazos | 2018-07-26 | 7 files | -3/+142
  - Save/restore only registers that are used. This includes Callee saved registers and Caller saved registers (arguments and temporaries) for integer and FP registers.
  - If there is a call in the interrupt handler, save/restore all Caller saved registers (arguments and temporaries) and all FP registers.
  - Emit special return instructions depending on "interrupt" attribute type.

  Based on initial patch by Zhaoshi Zheng.

  Reviewers: asb
  Reviewed By: asb
  Subscribers: rkruppe, the_o, MartinMosbeck, brucehoult, rbar, johnrusso, simoncook, sabuasal, niosHD, kito-cheng, shiva0217, zzheng, edward-jones, mgrang, rogfer01, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48411
  llvm-svn: 338047
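  For context, a handler using the attribute might look roughly like the C sketch below; the exact attribute spelling and the "machine" argument are assumptions based on the commit description, not taken from the patch itself.

    /* Hypothetical RISC-V machine-mode interrupt handler. The backend
       saves/restores only the registers the handler actually uses and
       emits the matching special return instruction on exit. */
    __attribute__((interrupt("machine")))
    void timer_isr(void) {
        /* acknowledge the interrupt, update state, ... */
    }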
* [DEBUGINFO, NVPTX] Emit correct debug information for local variables. | Alexey Bataev | 2018-07-26 | 4 files | -0/+18
  Summary:
  The NVPTX target does not use register-based frame information. Instead it relies on the artificial local_depot that is used instead of the frame, and the data for variables must be emitted relative to this local_depot.

  Reviewers: tra, jlebar, echristo
  Subscribers: jholewinski, aprantl, JDevlieghere, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45963
  llvm-svn: 338039
* Enable some pointer authentication instructions for aarch64 v8a targets | Luke Cheeseman | 2018-07-26 | 1 file | -24/+27
  - Some of the v8.3 pointer authentication instructions inhabit the Hint space
  - These instructions can be assembled to hint instructions which act as NOP instructions prior to v8.3
  - This patch permits using the hint instructions for all v8a targets
  - Also, correct the RETA{A,B} instructions to match the instruction attributes of RET (set isTerminator and isBarrier)

  Differential Revision: https://reviews.llvm.org/D49786
  llvm-svn: 338029
* [mips] Sign extend i32 return values on MIPS64 | Stefan Maksimovic | 2018-07-26 | 4 files | -0/+64
  Override getTypeForExtReturn so that functions returning an i32 typed value have it sign extended on MIPS64. Also provide patterns to get rid of unneeded sign extensions for arithmetic instructions which implicitly sign extend their results.

  Differential Revision: https://reviews.llvm.org/D48374
  llvm-svn: 338019
* [x86/SLH] Extract the logic to trace predicate state through calls to a helper function with a nice overview comment. NFC. | Chandler Carruth | 2018-07-26 | 1 file | -19/+39
  This is a preparatory refactoring toward implementing another component of the mitigation here that was described in the design document but hadn't been implemented yet.

  llvm-svn: 338016
* [AArch64] Armv8.2-A: add the crypto extensions | Sjoerd Meijer | 2018-07-26 | 3 files | -5/+192
  This adds MC support for the crypto instructions that were made optional extensions in Armv8.2-A (AArch64 only).

  Differential Revision: https://reviews.llvm.org/D49370
  llvm-svn: 338010
* [X86] Don't use CombineTo to skip adding new nodes to the DAGCombiner worklist in combineMul. | Craig Topper | 2018-07-26 | 1 file | -5/+1
  I'm not sure if this was trying to avoid optimizing the new nodes further, or maybe to prevent a cycle if something tried to reform the multiply? But I don't think it's a reliable way to do that. If the user of the expanded multiply is visited by the DAGCombiner after this conversion happens, the DAGCombiner will check its operands, see that they haven't been visited by the DAGCombiner before, and it will then add the first node to the worklist. This process will repeat until all the new nodes are visited. So this seems like an unreliable prevention at best.

  So this patch just returns the new nodes like any other combine. If this starts causing problems we can try to add target specific nodes or something to more directly prevent optimizations.

  Now that we handle the combine normally, we can combine any negates the mul expansion creates into their users since those will be visited now.

  llvm-svn: 338007
* [X86] Remove some unnecessary explicit calls to DCI.AddToWorkList. | Craig Topper | 2018-07-26 | 1 file | -10/+0
  These calls were making sure some newly created nodes were added to the worklist, but the DAGCombiner has internal support for ensuring it has visited all nodes. Any time it visits a node it ensures the operands have been queued to be visited as well. This means we only need to return the last new node. The DAGCombiner will take care of adding its inputs, thus walking backwards through all the new nodes.

  llvm-svn: 337996
* CodeGen: Cleanup regmask construction; NFC | Matthias Braun | 2018-07-26 | 1 file | -3/+3
  - Avoid duplication of regmask size calculation.
  - Simplify allocateRegisterMask() call.
  - Rename allocateRegisterMask() to allocateRegMask() to be consistent with naming in MachineOperand.

  llvm-svn: 337986
* bpf: new option -bpf-expand-memcpy-in-order to expand memcpy in order | Yonghong Song | 2018-07-25 | 9 files | -8/+255
  Some BPF JIT backends would want to optimize memcpy in their own architecture specific way. However, at the moment, there is no way for JIT backends to see memcpy semantics in a reliable way. This is because the LLVM BPF backend expands memcpy into load/store sequences and could possibly schedule them apart from each other further. So, BPF JIT backends inside the kernel can't reliably recognize memcpy semantics by peepholing the BPF sequence.

  This patch introduces new intrinsic expansion infrastructure for memcpy. To get a stable in-order load/store sequence from memcpy, we first lower memcpy into a BPF::MEMCPY node, which is then expanded into in-order load/store sequences in the expandPostRAPseudo pass, which happens after instruction scheduling. This way, kernel JIT backends can reliably recognize memcpy by scanning the BPF sequence.

  This new memcpy expansion infrastructure is gated by a new option:

    -bpf-expand-memcpy-in-order

  Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
  Signed-off-by: Yonghong Song <yhs@fb.com>
  llvm-svn: 337977
* Add missing 'override', fixing compilation with some compilers since SVN r337950 | Martin Storsjo | 2018-07-25 | 1 file | -1/+1
  llvm-svn: 337952
* [COFF] Hoist constant pool handling from X86AsmPrinter into AsmPrinter | Martin Storsjo | 2018-07-25 | 5 files | -30/+13
  In SVN r334523, the first half of comdat constant pool handling was hoisted from X86WindowsTargetObjectFile (which despite the name was only used for MSVC targets) into the arch independent TargetLoweringObjectFileCOFF, but the other half of the handling was left behind in X86AsmPrinter::GetCPISymbol.

  With only half of the handling in place, inconsistent comdat sections/symbols are created, causing issues with both GNU binutils (avoided for X86 in SVN r335918) and with the MS linker, which would complain like this:

    fatal error LNK1143: invalid or corrupt file: no symbol for COMDAT section 0x4

  Differential Revision: https://reviews.llvm.org/D49644
  llvm-svn: 337950
* [ARM] Prefer lsls+lsrs over lsls+ands or lsrs+ands in Thumb1. | Eli Friedman | 2018-07-25 | 1 file | -0/+81
  Saves materializing the immediate for the "ands".

  Corresponding patterns exist for lsrs+lsls, but that seems less common in practice.

  Now implemented as a DAGCombine.

  Differential Revision: https://reviews.llvm.org/D49585
  llvm-svn: 337945
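  As an illustration of the kind of masking this targets (editor's example, not from the patch): an AND with a contiguous run of low bits is equivalent to a shift-left/shift-right pair, which avoids loading the mask constant into a register on Thumb1.

    #include <stdint.h>

    /* x & 0x00FFFFFF written as two shifts; on Thumb1 this can be
       selected as lsls+lsrs instead of materializing 0x00FFFFFF for ands. */
    uint32_t mask_low_24(uint32_t x) {
        return (x << 8) >> 8;
    }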
* [AMDGPU] Use AssumptionCacheTracker in the divrem32 expansion | Stanislav Mekhanoshin | 2018-07-25 | 1 file | -13/+21
  Differential Revision: https://reviews.llvm.org/D49761
  llvm-svn: 337938
* [Hexagon] Properly scale bit index when extracting elements from vNi1 | Krzysztof Parzyszek | 2018-07-25 | 1 file | -1/+3
  For example v = <2 x i1> is represented as bbbbaaaa in a predicate register, where b = v[1], a = v[0]. Extracting v[1] is equivalent to extracting bit 4 from the predicate register.

  llvm-svn: 337934
* [MIPS GlobalISel] Lower pointer arguments | Petar Jovanovic | 2018-07-25 | 2 files | -1/+3
  Add support for lowering pointer arguments. Changing type from pointer to integer is already done in MipsTargetLowering::getRegisterTypeForCallingConv.

  Patch by Petar Avramovic.

  Differential Revision: https://reviews.llvm.org/D49419
  llvm-svn: 337912
* [SystemZ] Use tablegen loops in SchedModels | Jonas Paulsson | 2018-07-25 | 5 files | -229/+98
  NFC changes to make scheduler TableGen files more readable, by using loops instead of a lot of similar defs with just e.g. a latency value that changes.

  https://reviews.llvm.org/D49598
  Review: Ulrich Weigand, Javed Abshar

  llvm-svn: 337909
* [x86/SLH] Sink the return hardening into the main block-walk + hardening code. | Chandler Carruth | 2018-07-25 | 1 file | -26/+17
  This consolidates all our hardening calls, and simplifies the code a bit. It seems much more clear to handle all of these together.

  No functionality changed here.

  llvm-svn: 337895
* [x86/SLH] Improve name and comments for the main hardening function. | Chandler Carruth | 2018-07-25 | 1 file | -174/+190
  This function actually does two things: it traces the predicate state through each of the basic blocks in the function (as that isn't directly handled by the SSA updater) *and* it hardens everything necessary in the block as it goes. These need to be done together so that we have the currently active predicate state to use at each point of the hardening.

  However, this also made obvious that the flag to disable actual hardening of loads was flawed -- it also disabled tracing the predicate state across function calls within the body of each block. So this patch sinks this debugging flag test to correctly guard just the hardening of loads.

  Unless load hardening was disabled, no functionality should change with this patch.

  llvm-svn: 337894
* [mips] Replace custom parsing logic for data directives by the `addAliasForDirective` | Simon Atanasyan | 2018-07-25 | 3 files | -42/+12
  The target independent AsmParser doesn't recognise .hword, .word, .dword which are required for Mips. Currently MipsAsmParser recognises these through dispatch to MipsAsmParser::parseDataDirective. This contains equivalent logic to AsmParser::parseDirectiveValue. This patch allows reuse of AsmParser::parseDirectiveValue by making use of addAliasForDirective to support .hword, .word and .dword.

  Original patch provided by Alex Bradbury at D47001 was modified to fix handling of microMIPS symbols. The `AsmParser::parseDirectiveValue` calls either `EmitIntValue` or `EmitValue`. In this patch we override `EmitIntValue` in the `MipsELFStreamer` to clear a pending set of microMIPS symbols.

  Differential revision: https://reviews.llvm.org/D49539
  llvm-svn: 337893
* [X86] Use X86ISD::MUL_IMM instead of ISD::MUL for multiply we intend to be selected to LEA. | Craig Topper | 2018-07-25 | 1 file | -1/+2
  This prevents other combines from possibly disturbing it.

  llvm-svn: 337890
* [x86/SLH] Teach the x86 speculative load hardening pass to harden against v1.2 BCBS attacks directly. | Chandler Carruth | 2018-07-25 | 1 file | -0/+200
  Attacks using spectre v1.2 (a subset of BCBS) are described in the paper here: https://people.csail.mit.edu/vlk/spectre11.pdf

  The core idea is to speculatively store over the address in a vtable, jumptable, or other target of indirect control flow that will be subsequently loaded. Speculative execution after such a store can forward the stored value to subsequent loads, and if called or jumped to, the speculative execution will be steered to this potentially attacker controlled address.

  Up until now, this could be mitigated by enabling retpolines. However, that is a relatively expensive technique to mitigate this particular flavor, especially because in most cases SLH will have already mitigated this. To fully mitigate this with SLH, we need to do two core things:
  1) Unfold loads from calls and jumps, allowing the loads to be post-load hardened.
  2) Force hardening of incoming registers even if we didn't end up needing to harden the load itself.

  The reason we need to do these two things is because hardening calls and jumps from this particular variant is importantly different from hardening against leak of secret data. Because the "bad" data here isn't a secret, but in fact speculatively stored by the attacker, it may be loaded from any address, regardless of whether it is read-only memory, mapped memory, or a "hardened" address. The only 100% effective way to harden these instructions is to harden their operand itself. But to the extent possible, we'd like to take advantage of all the other hardening going on; we just need a fallback in case none of that happened to cover the particular input to the control transfer instruction.

  For users of SLH, currently they are paying 2% to 6% performance overhead for retpolines, but this mechanism is expected to be substantially cheaper. However, it is worth reminding folks that this does not mitigate all of the things retpolines do -- most notably, variant #2 is not in *any way* mitigated by this technique. So users of SLH may still want to enable retpolines, and the implementation is carefully designed to gracefully leverage retpolines to avoid the need for further hardening here when they are enabled.

  Differential Revision: https://reviews.llvm.org/D49663
  llvm-svn: 337878
* [X86] Use a shift plus an lea for multiplying by a constant that is a power of 2 plus 2/4/8. | Craig Topper | 2018-07-25 | 1 file | -0/+18
  The LEA allows us to combine an add and the multiply by 2/4/8 together so we just need a shift for the larger power of 2.

  llvm-svn: 337875
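  A concrete illustration (editor's arithmetic in C, not code from the patch): multiplying by 34 = 32 + 2 takes one shift for the power of 2 and one LEA-shaped base + scaled-index add for the rest.

    #include <stdint.h>

    /* x * 34 with 34 = 32 + 2: the final add and the *2 fold into a
       single lea r, [shifted + 2*x]. */
    uint64_t mul34(uint64_t x) {
        uint64_t shifted = x << 5;   /* 32 * x */
        return shifted + 2 * x;      /* 34 * x */
    }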
* [X86] Expand mul by pow2 + 2 using a shift and two adds similar to what we do for pow2 - 2. | Craig Topper | 2018-07-25 | 1 file | -11/+15
  llvm-svn: 337874
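  For instance (editor's sketch, not from the patch), 18 = 16 + 2 becomes one shift and two adds of the original value.

    #include <stdint.h>

    /* x * 18 with 18 = 16 + 2: shift for 16*x, then add x twice. */
    uint64_t mul18(uint64_t x) {
        uint64_t t = x << 4;   /* 16 * x */
        t += x;                /* 17 * x */
        t += x;                /* 18 * x */
        return t;
    }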
* [X86] Use a two lea sequence for multiply by 37, 41, and 73. | Craig Topper | 2018-07-24 | 1 file | -0/+9
  These fit a pattern used by 11, 21, and 19.

  llvm-svn: 337871
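  To see why two LEAs suffice (editor's worked example): an LEA computes base plus a scaled index (scale 1, 2, 4, or 8) in one instruction, and 37 = 1 + 4*9, 41 = 1 + 8*5, 73 = 1 + 8*9 all factor into two such steps.

    #include <stdint.h>

    /* x * 37 with 37 = 1 + 4*9: each line corresponds to one LEA. */
    uint64_t mul37(uint64_t x) {
        uint64_t t = x + 8 * x;    /* lea t, [x + 8*x]  ->  9 * x */
        return x + 4 * t;          /* lea r, [x + 4*t]  -> 37 * x */
    }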
* [X86] Change multiply by 26 to use two multiplies by 5 and an add instead of multiply by 3 and 9 and a subtract. | Craig Topper | 2018-07-24 | 1 file | -7/+7
  Same number of operations, but ending in an add is friendlier due to it being commutable.

  llvm-svn: 337869
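  The new decomposition, written out (editor's sketch): 26 = 5*5 + 1, so two LEA-style multiplies by 5 followed by a commutable add, versus the old 26 = 3*9 - 1 which ended in a subtract.

    #include <stdint.h>

    /* x * 26 via 26 = 5*5 + 1 (new form, ends in an add). */
    uint64_t mul26(uint64_t x) {
        uint64_t t = x + 4 * x;    /*  5 * x */
        uint64_t u = t + 4 * t;    /* 25 * x */
        return u + x;              /* 26 * x */
    }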
* [X86] When expanding a multiply by a negative of one less than a power of 2, like 31, don't generate a negate of a subtract that we'll never optimize. | Craig Topper | 2018-07-24 | 1 file | -10/+12
  We generated a subtract for the power of 2 minus one then negated the result. The negate can be optimized away by swapping the subtract operands, but DAG combine doesn't know how to do that and we don't add any of the new nodes to the worklist anyway.

  This patch makes us explicitly emit the swapped subtract.

  llvm-svn: 337858
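  Worked example for a multiply by -31 = -(32 - 1) (editor's sketch): swapping the subtract operands folds the negate away entirely.

    #include <stdint.h>

    /* x * -31: instead of negating (32*x - x), emit the swapped subtract.
       uint64_t keeps the shift well defined; the bit pattern matches the
       two's complement result of the signed multiply. */
    uint64_t mul_minus31(uint64_t x) {
        return x - (x << 5);       /* x - 32*x == -31*x (mod 2^64) */
    }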
* [X86] Generalize the multiply by 30 lowering to generic multiply by power of 2 minus 2. | Craig Topper | 2018-07-24 | 1 file | -15/+10
  Use a left shift and 2 subtracts like we do for 30. Move this out from behind the slow lea check since it doesn't even use an LEA.

  Use this for multiply by 14 as well.

  llvm-svn: 337856
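  For example (editor's sketch), 14 = 16 - 2 lowers to one shift and two subtracts, with no LEA involved.

    #include <stdint.h>

    /* x * 14 with 14 = 16 - 2: shift for 16*x, then subtract x twice. */
    uint64_t mul14(uint64_t x) {
        uint64_t t = x << 4;   /* 16 * x */
        t -= x;                /* 15 * x */
        t -= x;                /* 14 * x */
        return t;
    }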
* [X86] Change multiply by 19 to use (9 * X) * 2 + X instead of (5 * X) * 4 - X. | Craig Topper | 2018-07-24 | 1 file | -2/+2
  The new lowering can be done in 2 LEAs. The old code took 1 LEA, 1 shift, and 1 sub.

  llvm-svn: 337851
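  Written out (editor's sketch), (9 * X) * 2 + X fits the LEA base + 2*index form directly, so both steps map onto LEAs.

    #include <stdint.h>

    /* x * 19 via 19 = 2*9 + 1: each line corresponds to one LEA. */
    uint64_t mul19(uint64_t x) {
        uint64_t t = x + 8 * x;    /* lea t, [x + 8*x]  ->  9 * x */
        return x + 2 * t;          /* lea r, [x + 2*t]  -> 19 * x */
    }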
* [MachineOutliner][NFC] Move target frame info into OutlinedFunction | Jessica Paquette | 2018-07-24 | 4 files | -21/+23
  Just some gardening here.

  Similar to how we moved call information into Candidates, this moves outlined frame information into OutlinedFunction. This allows us to remove TargetCostInfo entirely.

  Anywhere where we returned a TargetCostInfo struct, we now return an OutlinedFunction. This establishes OutlinedFunctions as more of a general repeated sequence, and Candidates as occurrences of those repeated sequences.

  llvm-svn: 337848
* Put "built-in" function definitions in global Used list, for LTO. (fix bug 34169) | Peter Collingbourne | 2018-07-24 | 1 file | -1/+1
  When building with LTO, builtin functions that are defined but whose calls have not been inserted yet, get internalized. The Global Dead Code Elimination phase in the new LTO implementation then removes these function definitions. Later optimizations add calls to those functions, and the linker then dies complaining that there are no definitions. This CL fixes the new LTO implementation to check if a function is builtin, and if so, to not internalize (and later DCE) the function.

  As part of this fix I needed to move the RuntimeLibcalls.{def,h} files from the CodeGen subdirectory to the IR subdirectory. I have updated all the files that accessed those two files to access their new location.

  Fixes PR34169

  Patch by Caroline Tice!

  Differential Revision: https://reviews.llvm.org/D49434
  llvm-svn: 337847
* [x86] Teach the x86 backend that it can fold between TCRETURNm* and TCRETURNr* and fix latent bugs with register class updates. | Chandler Carruth | 2018-07-24 | 2 files | -0/+33
  Summary:
  Enabling this fully exposes a latent bug in the instruction folding: we never update the register constraints for the register operands when fusing a load into another operation. The fused form could, in theory, have different register constraints on its operands. And in fact, TCRETURNm* needs its memory operands to use tailcall compatible registers.

  I've updated the folding code to re-constrain all the registers after they are mapped onto their new instruction.

  However, we still can't enable folding in the general case from TCRETURNr* to TCRETURNm* because doing so may require more registers to be available during the tail call. If the call itself uses all but one register, and the folded load would require both a base and index register, there will not be enough registers to allocate the tail call.

  It would be better, IMO, to teach the register allocator to *unfold* TCRETURNm* when it runs out of registers (or specifically check the number of registers available during the TCRETURNr*) but I'm not going to try and solve that for now. Instead, I've just blocked the forward folding from r -> m, leaving LLVM free to unfold from m -> r as that doesn't introduce new register pressure constraints.

  The downside is that I don't have anything that will directly exercise this. Instead, I will be using it immediately in my SLH patch. =/ Still worse, without allowing the TCRETURNr* -> TCRETURNm* fold, I don't have any tests that demonstrate the failure to update the memory operand register constraints. This patch still seems correct, but I'm nervous about the degree of testing due to this. Suggestions?

  Reviewers: craig.topper
  Subscribers: sanjoy, mcrosier, hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49717
  llvm-svn: 337845