path: root/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
Commit message  (Author, Date, Files changed, Lines -/+)
* [SelectionDAG] lower math intrinsics to finite version of libcalls when possible (PR35672)  (Sanjay Patel, 2018-01-09, 1 file changed, -1/+1)

    Ingredients in this patch:
      1. Add HANDLE_LIBCALL defs for finite mathlib functions that correspond to LLVM intrinsics.
      2. Plumbing to send TargetLibraryInfo down to SelectionDAGLegalize.
      3. Relaxed math and library checking in SelectionDAGLegalize::ConvertNodeToLibcall() to choose finite libcalls.

    There was a bug about determining the availability of the finite calls that should be fixed with: rL322010

    Not in this patch:
    This doesn't resolve the question/bug of clang creating the intrinsic IR in the first place.
    There's likely follow-up work needed to support the long double variants better.
    There's room for improvement to reduce the code duplication.
    Create finite calls that don't originate from a corresponding intrinsic or DAG node?

    Differential Revision: https://reviews.llvm.org/D41338

    llvm-svn: 322087
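    For illustration only (the exact enum names and symbols are inferred from the description above, not copied from the patch), a HANDLE_LIBCALL def for a finite math routine looks something like:

        HANDLE_LIBCALL(EXP_FINITE_F32, "__expf_finite")
        HANDLE_LIBCALL(EXP_FINITE_F64, "__exp_finite")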
* Use phi ranges to simplify code. No functionality change intended.  (Benjamin Kramer, 2017-12-30, 1 file changed, -6/+4)

    llvm-svn: 321585
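    A hedged sketch of the phi-range idiom this refers to (assuming BasicBlock::phis(), which yields just the PHI nodes at the top of a block; the exact loop in the patch may differ):

        // No manual isa<PHINode> checks or iterator bookkeeping needed.
        for (const PHINode &PN : BB.phis()) {
          // inspect PN
        }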
* DAG: Tolerate non-MemSDNodes for OPC_RecordMemRef  (Matt Arsenault, 2017-12-20, 1 file changed, -8/+24)

    When intrinsics are allowed to have mem operands, there are two ways this can happen. First is an intrinsic that is marked as having a mem operand, but is not handled by getTgtMemIntrinsic.

    The second way can occur even for intrinsics which do not have a mem operand. It seems the selector table does some kind of sorting based on the opcode, and the mem ref recording can happen in the same scope for intrinsics that both do and do not have mem refs. I haven't been able to figure out exactly why this happens (although it happens even with the matcher optimizations disabled). I'm not sure if it's worth trying to avoid hitting this for these nodes, since I think it's still reasonable to handle this in case getTgtMemIntrinsic is not implemented.

    llvm-svn: 321208
* MachineFunction: Return reference from getFunction(); NFC  (Matthias Braun, 2017-12-15, 1 file changed, -3/+3)

    The Function can never be nullptr so we can return a reference.

    llvm-svn: 320884
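    A minimal before/after sketch of what this means for callers (illustrative, not taken from the patch):

        const Function *FPtr = MF->getFunction();  // before: pointer, invites null checks
        const Function &F = MF->getFunction();     // after: reference, the null case cannot occur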
* [RISCV] Support lowering FrameIndex  (Alex Bradbury, 2017-12-11, 1 file changed, -0/+19)

    Introduces the AddrFI "addressing mode", which is necessary simply because it's not possible to write a pattern that directly matches a frameindex.

    Ensure callee-saved registers are accessed relative to the stackpointer. This is necessary as callee-saved register spills are performed before the frame pointer is set.

    Move HexagonDAGToDAGISel::isOrEquivalentToAdd to SelectionDAGISel, so we can make use of it in the RISC-V backend.

    Differential Revision: https://reviews.llvm.org/D39848

    llvm-svn: 320353
* [CodeGen] Unify MBB reference format in both MIR and debug output  (Francis Visoiu Mistrih, 2017-12-04, 1 file changed, -23/+42)

    As part of the unification of the debug format and the MIR format, print MBB references as '%bb.5'.

    The MIR printer prints the IR name of a MBB only for block definitions.

    * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" \) -type f -print0 | xargs -0 sed -i '' -E 's/BB#" << ([a-zA-Z0-9_]+)->getNumber\(\)/" << printMBBReference(*\1)/g'
    * find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" \) -type f -print0 | xargs -0 sed -i '' -E 's/BB#" << ([a-zA-Z0-9_]+)\.getNumber\(\)/" << printMBBReference(\1)/g'
    * find . \( -name "*.txt" -o -name "*.s" -o -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" \) -type f -print0 | xargs -0 sed -i '' -E 's/BB#([0-9]+)/%bb.\1/g'
    * grep -nr 'BB#' and fix

    Differential Revision: https://reviews.llvm.org/D40422

    llvm-svn: 319665
* Revert "[X86] Improvement in CodeGen instruction selection for LEAs."Matt Morehouse2017-12-011-11/+0
| | | | | | This reverts r319543, due to ASan bot breakage. llvm-svn: 319591
* [X86] Improvement in CodeGen instruction selection for LEAs.  (Jatin Bhateja, 2017-12-01, 1 file changed, -0/+11)

    Summary:
    1/ Operand folding during complex pattern matching for LEAs has been extended, such that it promotes Scale to accommodate a similar operand appearing in the DAG, e.g.

           T1 = A + B
           T2 = T1 + 10
           T3 = T2 + A

       For the above DAG rooted at T3, X86AddressMode will now look like

           Base = B , Index = A , Scale = 2 , Disp = 10

    2/ During OptimizeLEAPass down the pipeline, factorization is now performed over LEAs so that if there is an opportunity then complex LEAs (having 3 operands) can be factored out, e.g.

           leal 1(%rax,%rcx,1), %rdx
           leal 1(%rax,%rcx,2), %rcx

       will be factored as follows

           leal 1(%rax,%rcx,1), %rdx
           leal (%rdx,%rcx) , %edx

    3/ Aggressive operand folding for AM based selection for LEAs is sensitive to loops, thus avoiding creation of any complex LEAs within a loop.

    4/ Simplify LEA converts (lea (BASE,1,INDEX,0)) --> add (BASE, INDEX), which offers better throughput.

    PR32755 will be taken care of by this patch.

    Previous patch revisions: r313343, r314886

    Reviewers: lsaba, RKSimon, craig.topper, qcolombet, jmolloy, jbhateja

    Reviewed By: lsaba, RKSimon, jbhateja

    Subscribers: jmolloy, spatel, igorb, llvm-commits

    Differential Revision: https://reviews.llvm.org/D35014

    llvm-svn: 319543
* [SelectionDAG] Add an isel matcher op to check the type of node results other than result 0.  (Craig Topper, 2017-11-22, 1 file changed, -0/+14)

    I plan to use this to check the type of the mask result of masked gathers in the X86 backend.

    llvm-svn: 318820
* Fix a bunch more layering of CodeGen headers that are in Target  (David Blaikie, 2017-11-17, 1 file changed, -3/+3)

    All these headers already depend on CodeGen headers so moving them into CodeGen fixes the layering (since CodeGen depends on Target, not the other way around).

    llvm-svn: 318490
* Fix APInt bit size in processDbgDeclares  (Yaxun Liu, 2017-11-16, 1 file changed, -1/+1)

    processDbgDeclares assumes pointer size is the same for different addr spaces. It uses the pointer size for addr space 0 for all pointers, which causes an assertion in stripAndAccumulateInBoundsConstantOffsets for amdgcn---amdgiz, since a pointer in addr space 5 has a different size than in addr space 0.

    This patch fixes that.

    Differential Revision: https://reviews.llvm.org/D40085

    llvm-svn: 318370
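    A hedged sketch of the shape of such a fix (variable names are illustrative, not the verbatim patch): size the offset APInt by the pointer's own address space instead of address space 0.

        unsigned AS = Address->getType()->getPointerAddressSpace();
        APInt Offset(DL.getPointerSizeInBits(AS), 0);
        const Value *Base =
            Address->stripAndAccumulateInBoundsConstantOffsets(DL, Offset);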
* Target/TargetInstrInfo.h -> CodeGen/TargetInstrInfo.h to match layering  (David Blaikie, 2017-11-08, 1 file changed, -2/+2)

    This header includes CodeGen headers, and is not, itself, included by any Target headers, so move it into CodeGen to match the layering of its implementation.

    llvm-svn: 317647
* Implement salvageDebugInfo functionality for SelectionDAG.  (Adrian Prantl, 2017-10-24, 1 file changed, -0/+1)

    Similar to how llvm::salvageDebugInfo hooks into InstCombine, this adds a hook that can be invoked before an SDNode that is associated with an SDDbgValue is erased, to capture the effect of the deleted node in a DIExpression.

    The motivating example is an SDDbgValue attached to an ADD operation that gets folded into a LOAD+OFFSET operation.

    rdar://problem/32121503

    llvm-svn: 316525
* Add iterator range MachineRegisterInfo::liveins(), adopt users, NFC  (Krzysztof Parzyszek, 2017-10-16, 1 file changed, -4/+3)

    llvm-svn: 315927
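    A hedged sketch of the adopted style (assuming liveins() exposes the same (physical register, virtual register) pairs that the old livein_begin()/livein_end() iterators did):

        for (const auto &LI : RegInfo->liveins()) {
          unsigned PhysReg = LI.first;   // live-in physical register
          unsigned VirtReg = LI.second;  // virtual register it is copied into
          // ... use the pair directly, no explicit iterators ...
        }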
* Rename OptimizationDiagnosticInfo.* to OptimizationRemarkEmitter.*  (Adam Nemet, 2017-10-09, 1 file changed, -1/+1)

    Sync it up with the name of the class actually defined here. This has been bothering me for a while...

    llvm-svn: 315249
* Revert r314886 "[X86] Improvement in CodeGen instruction selection for LEAs (re-applying post required revision changes.)"  (Hans Wennborg, 2017-10-04, 1 file changed, -11/+0)

    It broke the Chromium / SQLite build; see PR34830.

    > Summary:
    > 1/ Operand folding during complex pattern matching for LEAs has been
    > extended, such that it promotes Scale to accommodate similar operand
    > appearing in the DAG.
    > e.g.
    > T1 = A + B
    > T2 = T1 + 10
    > T3 = T2 + A
    > For above DAG rooted at T3, X86AddressMode will no look like
    > Base = B , Index = A , Scale = 2 , Disp = 10
    >
    > 2/ During OptimizeLEAPass down the pipeline factorization is now performed over LEAs
    > so that if there is an opportunity then complex LEAs (having 3 operands)
    > could be factored out.
    > e.g.
    > leal 1(%rax,%rcx,1), %rdx
    > leal 1(%rax,%rcx,2), %rcx
    > will be factored as following
    > leal 1(%rax,%rcx,1), %rdx
    > leal (%rdx,%rcx) , %edx
    >
    > 3/ Aggressive operand folding for AM based selection for LEAs is sensitive to loops,
    > thus avoiding creation of any complex LEAs within a loop.
    >
    > Reviewers: lsaba, RKSimon, craig.topper, qcolombet, jmolloy
    >
    > Reviewed By: lsaba
    >
    > Subscribers: jmolloy, spatel, igorb, llvm-commits
    >
    > Differential Revision: https://reviews.llvm.org/D35014

    llvm-svn: 314919
* [X86] Improvement in CodeGen instruction selection for LEAs (re-applying post required revision changes.)  (Jatin Bhateja, 2017-10-04, 1 file changed, -0/+11)

    Summary:
    1/ Operand folding during complex pattern matching for LEAs has been extended, such that it promotes Scale to accommodate a similar operand appearing in the DAG, e.g.

           T1 = A + B
           T2 = T1 + 10
           T3 = T2 + A

       For the above DAG rooted at T3, X86AddressMode will now look like

           Base = B , Index = A , Scale = 2 , Disp = 10

    2/ During OptimizeLEAPass down the pipeline, factorization is now performed over LEAs so that if there is an opportunity then complex LEAs (having 3 operands) can be factored out, e.g.

           leal 1(%rax,%rcx,1), %rdx
           leal 1(%rax,%rcx,2), %rcx

       will be factored as follows

           leal 1(%rax,%rcx,1), %rdx
           leal (%rdx,%rcx) , %edx

    3/ Aggressive operand folding for AM based selection for LEAs is sensitive to loops, thus avoiding creation of any complex LEAs within a loop.

    Reviewers: lsaba, RKSimon, craig.topper, qcolombet, jmolloy

    Reviewed By: lsaba

    Subscribers: jmolloy, spatel, igorb, llvm-commits

    Differential Revision: https://reviews.llvm.org/D35014

    llvm-svn: 314886
* CodeGen: support SwiftError SwiftCC on Windows x64  (Saleem Abdulrasool, 2017-09-20, 1 file changed, -0/+2)

    Add support for passing SwiftError through a register on the Windows x64 calling convention. This allows the use of swifterror attributes on parameters, which is used by the swift front end for the `Error` parameter. This partially enables building the swift standard library for Windows x86_64.

    llvm-svn: 313791
* CodeGen: use range based for loops (NFC)  (Saleem Abdulrasool, 2017-09-19, 1 file changed, -6/+1)

    Simplify the RPOT traversal by using a range based for loop for the iterator dereference.

    llvm-svn: 313687
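    A minimal sketch of the idiom (not the exact loop from the patch):

        #include "llvm/ADT/PostOrderIterator.h"

        ReversePostOrderTraversal<const Function *> RPOT(&Fn);
        for (const BasicBlock *BB : RPOT) {
          // visit BB in reverse post-order; no manual iterator dereference needed
        }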
* Revert r313343 "[X86] PR32755 : Improvement in CodeGen instruction selection for LEAs."  (Hans Wennborg, 2017-09-15, 1 file changed, -11/+0)

    This caused PR34629: asserts firing when building Chromium. It also broke some buildbots building test-suite as reported on the commit thread.

    > Summary:
    > 1/ Operand folding during complex pattern matching for LEAs has been
    > extended, such that it promotes Scale to accommodate similar operand
    > appearing in the DAG.
    > e.g.
    > T1 = A + B
    > T2 = T1 + 10
    > T3 = T2 + A
    > For above DAG rooted at T3, X86AddressMode will no look like
    > Base = B , Index = A , Scale = 2 , Disp = 10
    >
    > 2/ During OptimizeLEAPass down the pipeline factorization is now performed over LEAs
    > so that if there is an opportunity then complex LEAs (having 3 operands)
    > could be factored out.
    > e.g.
    > leal 1(%rax,%rcx,1), %rdx
    > leal 1(%rax,%rcx,2), %rcx
    > will be factored as following
    > leal 1(%rax,%rcx,1), %rdx
    > leal (%rdx,%rcx) , %edx
    >
    > 3/ Aggressive operand folding for AM based selection for LEAs is sensitive to loops,
    > thus avoiding creation of any complex LEAs within a loop.
    >
    > Reviewers: lsaba, RKSimon, craig.topper, qcolombet
    >
    > Reviewed By: lsaba
    >
    > Subscribers: spatel, igorb, llvm-commits
    >
    > Differential Revision: https://reviews.llvm.org/D35014

    llvm-svn: 313376
* [X86] PR32755 : Improvement in CodeGen instruction selection for LEAs.  (Jatin Bhateja, 2017-09-15, 1 file changed, -0/+11)

    Summary:
    1/ Operand folding during complex pattern matching for LEAs has been extended, such that it promotes Scale to accommodate a similar operand appearing in the DAG, e.g.

           T1 = A + B
           T2 = T1 + 10
           T3 = T2 + A

       For the above DAG rooted at T3, X86AddressMode will now look like

           Base = B , Index = A , Scale = 2 , Disp = 10

    2/ During OptimizeLEAPass down the pipeline, factorization is now performed over LEAs so that if there is an opportunity then complex LEAs (having 3 operands) can be factored out, e.g.

           leal 1(%rax,%rcx,1), %rdx
           leal 1(%rax,%rcx,2), %rcx

       will be factored as follows

           leal 1(%rax,%rcx,1), %rdx
           leal (%rdx,%rcx) , %edx

    3/ Aggressive operand folding for AM based selection for LEAs is sensitive to loops, thus avoiding creation of any complex LEAs within a loop.

    Reviewers: lsaba, RKSimon, craig.topper, qcolombet

    Reviewed By: lsaba

    Subscribers: spatel, igorb, llvm-commits

    Differential Revision: https://reviews.llvm.org/D35014

    llvm-svn: 313343
* Add llvm.codeview.annotation to implement MSVC __annotation  (Reid Kleckner, 2017-09-05, 1 file changed, -0/+1)

    Summary:
    This intrinsic represents a label with a list of associated metadata strings. It is modelled as reading and writing inaccessible memory so that it won't be removed as dead code. I think the intention is that the annotation strings should appear at most once in the debug info, so I marked it noduplicate. We are allowed to inline code with annotations as long as we strip the annotation, but that can be done later.

    Reviewers: majnemer

    Subscribers: eraman, llvm-commits, hiraditya

    Differential Revision: https://reviews.llvm.org/D36904

    llvm-svn: 312569
* Move helper classes into anonymous namespaces.  (Benjamin Kramer, 2017-08-20, 1 file changed, -4/+4)

    No functionality change intended.

    llvm-svn: 311288
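    A generic sketch of the pattern (the class name is illustrative, not taken from the patch): a file-local helper gets internal linkage by living in an anonymous namespace.

        namespace {

        class ISelHelper {
          // implementation details visible only to this translation unit
        };

        } // end anonymous namespace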
* [SelectionDAG] reset NewNodesMustHaveLegalTypes flag between basic blocks  (Guy Blank, 2017-08-07, 1 file changed, -0/+3)

    The NewNodesMustHaveLegalTypes flag is set to false at the beginning of CodeGenAndEmitDAG, and set to true after legalizing types. But before calling CodeGenAndEmitDAG we build the DAG for the basic block. So for the first basic block NewNodesMustHaveLegalTypes would be 'false' during the SDAG building, and for all other basic blocks it would be 'true'.

    This patch sets the flag to false before SDAG building each basic block.

    Differential Revision: https://reviews.llvm.org/D33435

    llvm-svn: 310239
* DAG: Provide access to Pass instance from SelectionDAG  (Matt Arsenault, 2017-08-03, 1 file changed, -1/+1)

    This allows accessing an analysis pass during lowering.

    llvm-svn: 309991
* Remove the unused offset from DBG_VALUE (NFC)  (Adrian Prantl, 2017-07-28, 1 file changed, -3/+5)

    Followup to r309426.

    rdar://problem/33580047

    llvm-svn: 309450
* [FastISel] fix a fallback diagnostic.  (Igor Breger, 2017-07-09, 1 file changed, -1/+2)

    Summary: FastISel was marked as failed in case instruction selection succeeded.

    Reviewers: qcolombet, zvi, rovka, ab

    Reviewed By: zvi

    Subscribers: javed.absar, ab, qcolombet, bogner, llvm-commits

    Differential Revision: https://reviews.llvm.org/D34438

    llvm-svn: 307489
* [FastISel][SelectionDAG] Teach fastISel about GC intrinsics  (Anna Thomas, 2017-07-04, 1 file changed, -1/+5)

    Summary:
    We are crashing in LLC at O0 when gc intrinsics are present in the block. The reason being FastISel performs basic block ISel by modifying GC.relocates to be the first instruction in the block. This can cause us to visit the GC relocate before its corresponding GC.statepoint is visited, which is incorrect.

    When we lower the statepoint, we record the base and derived pointers, along with the gc.relocates. After this we can visit the gc.relocate.

    This patch avoids fastISel from incorrectly creating the block with gc.relocate as the first instruction.

    Reviewers: qcolombet, skatkov, qikon, reames

    Reviewed by: skatkov

    Subscribers: llvm-commits

    Differential Revision: https://reviews.llvm.org/D34421

    llvm-svn: 307084
* [SelectionDAG] Update Loop info after splitting critical edges.  (Davide Italiano, 2017-06-17, 1 file changed, -6/+9)

    The analysis is expected to be preserved by SelectionDAG.

    llvm-svn: 305621
* ISel: Fix FastISel of swifterror values  (Arnold Schwaighofer, 2017-06-15, 1 file changed, -0/+79)

    The code assumed that we process instructions in basic block order. FastISel processes instructions in reverse basic block order. We need to pre-assign virtual registers before selecting, otherwise we get def-use relationships wrong.

    This only affects code with swifterror registers.

    rdar://32659327

    llvm-svn: 305484
* [CodeGen] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).  (Eugene Zelenko, 2017-06-07, 1 file changed, -6/+9)

    llvm-svn: 304954
* Sort the remaining #include lines in include/... and lib/....  (Chandler Carruth, 2017-06-06, 1 file changed, -2/+2)

    I did this a long time ago with a janky python script, but now clang-format has built-in support for this. I fed clang-format every line with a #include and let it re-sort things according to the precise LLVM rules for include ordering baked into clang-format these days.

    I've reverted a number of files where the results of sorting includes isn't healthy. Either places where we have legacy code relying on particular include ordering (where possible, I'll fix these separately) or where we have particular formatting around #include lines that I didn't want to disturb in this patch.

    This patch is *entirely* mechanical. If you get merge conflicts or anything, just ignore the changes in this patch and run clang-format over your #include lines in the files.

    Sorry for any noise here, but it is important to keep these things stable. I was seeing an increasing number of patches with irrelevant re-ordering of #include lines because clang-format was used. This patch at least isolates that churn, makes it easy to skip when resolving conflicts, and gets us to a clean baseline (again).

    llvm-svn: 304787
* [SelectionDAG] Update the dominator after splitting critical edges.  (Davide Italiano, 2017-06-05, 1 file changed, -5/+8)

    Running `llc -verify-dom-info` on the attached testcase results in a crash in the verifier, due to a stale dominator tree. i.e.

        DominatorTree is not up to date!
        Computed:
        =============================--------------------------------
        Inorder Dominator Tree:
          [1] %safe_mod_func_uint8_t_u_u.exit.i.i.i {0,7}
            [2] %lor.lhs.false.i61.i.i.i {1,2}
            [2] %safe_mod_func_int8_t_s_s.exit.i.i.i {3,6}
              [3] %safe_div_func_int64_t_s_s.exit66.i.i.i {4,5}

        Actual:
        =============================--------------------------------
        Inorder Dominator Tree:
          [1] %safe_mod_func_uint8_t_u_u.exit.i.i.i {0,9}
            [2] %lor.lhs.false.i61.i.i.i {1,2}
            [2] %safe_mod_func_int8_t_s_s.exit.i.i.i {3,8}
              [3] %safe_div_func_int64_t_s_s.exit66.i.i.i {4,5}
              [3] %safe_mod_func_int8_t_s_s.exit.i.i.i.lor.lhs.false.i61.i.i.i_crit_edge {6,7}

    This is because in `SelectionDAGIsel` we split critical edges without updating the corresponding dominator for the function (and we claim in `MachineFunctionPass::getAnalysisUsage()` that the domtree is preserved).

    We could either stop preserving the domtree in `getAnalysisUsage` or tell `splitCriticalEdge()` to update it. As the second option is easy to implement, that's the one I chose.

    Differential Revision: https://reviews.llvm.org/D33800

    llvm-svn: 304742
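    A hedged sketch of the chosen option (names like DTWP, Pred, and SuccIdx are illustrative, and the call site in the patch may look different): hand the dominator tree to the edge-splitting utility so the tree is updated instead of silently going stale.

        // DT may be null (e.g. when the analysis was not computed); the options
        // struct tolerates that and simply skips the update.
        DominatorTree *DT = DTWP ? &DTWP->getDomTree() : nullptr;
        SplitCriticalEdge(Pred->getTerminator(), SuccIdx,
                          CriticalEdgeSplittingOptions(DT));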
* [SelectionDAG] Get rid of recursion in findNonImmUse  (Max Kazantsev, 2017-06-02, 1 file changed, -20/+26)

    The recursive implementation of findNonImmUse may overflow stack on extremely long use chains. This patch replaces it with an equivalent iterative implementation.

    Reviewed By: bogner

    Differential Revision: https://reviews.llvm.org/D33775

    llvm-svn: 304522
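    A generic, self-contained sketch of the transformation (not the actual findNonImmUse code): recursion over a use chain is replaced with an explicit worklist, so stack depth no longer grows with the length of the chain.

        #include <unordered_set>
        #include <vector>

        struct Node {                      // hypothetical stand-in for SDNode
          std::vector<Node *> Users;
        };

        bool reaches(Node *From, Node *Target) {
          std::vector<Node *> Worklist{From};
          std::unordered_set<Node *> Visited;
          while (!Worklist.empty()) {
            Node *N = Worklist.back();
            Worklist.pop_back();
            if (!Visited.insert(N).second)
              continue;                    // already visited
            if (N == Target)
              return true;
            for (Node *U : N->Users)
              Worklist.push_back(U);
          }
          return false;
        }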
* Add constrained intrinsics for some libm-equivalent operations  (Andrew Kaylor, 2017-05-25, 1 file changed, -51/+4)

    Differential revision: https://reviews.llvm.org/D32319

    llvm-svn: 303922
* [CodeGen] Don't require AA in SDAGISel at -O0.  (Ahmed Bougacha, 2017-05-10, 1 file changed, -8/+18)

    Before r247167, the pass manager builder controlled which AA implementations were used, exporting them all in the AliasAnalysis analysis group. Now, AAResultsWrapperPass always uses BasicAA, but still uses other AA implementations if made available in the pass pipeline.

    But regardless, SDAGISel is required at O0, and really doesn't need to be doing fancy optimizations based on useful AA results.

    Don't require AA at CodeGenOpt::None, and only use it otherwise.

    This does have a functional impact (and one testcase is pessimized because we can't reuse a load). But I think that's desirable no matter what.

    Note that this alone doesn't result in less DT computations: TwoAddress was previously able to reuse the DT we computed for SDAG. That will be fixed separately.

    Differential Revision: https://reviews.llvm.org/D32766

    llvm-svn: 302611
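    A hedged sketch of what "don't require AA at CodeGenOpt::None" amounts to (illustrative, not the verbatim change):

        void SelectionDAGISel::getAnalysisUsage(AnalysisUsage &AU) const {
          if (OptLevel != CodeGenOpt::None)
            AU.addRequired<AAResultsWrapperPass>();
          // ... other required/preserved analyses ...
          MachineFunctionPass::getAnalysisUsage(AU);
        }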
* Re-land "Use the frame index side table for byval and inalloca arguments"Reid Kleckner2017-05-091-0/+48
| | | | | | This re-lands r302483. It was not the cause of PR32977. llvm-svn: 302544
* Revert "Use the frame index side table for byval and inalloca arguments"Reid Kleckner2017-05-091-48/+0
| | | | | | This reverts r302483 and it's follow up fix. llvm-svn: 302493
* Use the frame index side table for byval and inalloca arguments  (Reid Kleckner, 2017-05-08, 1 file changed, -0/+48)

    Summary:
    For inalloca functions, this is a very common code pattern:

        %argpack = type <{ i32, i32, i32 }>
        define void @f(%argpack* inalloca %args) {
        entry:
          %a = getelementptr inbounds %argpack, %argpack* %args, i32 0, i32 0
          %b = getelementptr inbounds %argpack, %argpack* %args, i32 0, i32 1
          %c = getelementptr inbounds %argpack, %argpack* %args, i32 0, i32 2
          tail call void @llvm.dbg.declare(metadata i32* %a, ... "a")
          tail call void @llvm.dbg.declare(metadata i32* %c, ... "b")
          tail call void @llvm.dbg.declare(metadata i32* %b, ... "c")

    Even though these GEPs can be simplified to a constant offset from EBP or RSP, we don't do that at -O0, and each GEP is computed into a register. Registers used to compute argument addresses are typically spilled and clobbered very quickly after the initial computation, so live debug variable tracking loses information very quickly if we use DBG_VALUE instructions.

    This change moves processing of dbg.declare between argument lowering and basic block isel, so that we can ask if an argument has a frame index or not. If the argument lives in a register as is the case for byval arguments on some targets, then we don't put it in the side table and during ISel we emit DBG_VALUE instructions.

    Reviewers: aprantl

    Subscribers: llvm-commits

    Differential Revision: https://reviews.llvm.org/D32980

    llvm-svn: 302483
* TargetLowering: Add finalizeLowering() function; NFC  (Matthias Braun, 2017-04-28, 1 file changed, -7/+1)

    Adds a new method finalizeLowering to TargetLoweringBase. This is in preparation for an upcoming commit.

    This function is meant for target specific adjustments to MachineFrameInfo or register reservations.

    Move the freezeRegisters() and the hasCopyImplyingStackAdjustment() handling into the new function to prove the concept. As an added bonus GlobalISel no longer missed the hasCopyImplyingStackAdjustment() handling with this.

    Differential Revision: https://reviews.llvm.org/D32621

    llvm-svn: 301679
* [SelectionDAG] Use KnownBits struct in DAG's computeKnownBits and simplifyDemandedBits  (Craig Topper, 2017-04-28, 1 file changed, -7/+7)

    This patch replaces the separate APInts for KnownZero/KnownOne with a single KnownBits struct. This is similar to what was done to ValueTracking's version recently.

    This is largely a mechanical transformation from KnownZero to Known.Zero.

    Differential Revision: https://reviews.llvm.org/D32569

    llvm-svn: 301620
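    A hedged before/after sketch of the interface change (argument lists abbreviated; the exact signatures may differ slightly):

        // Before: two separate APInts were threaded through every query.
        APInt KnownZero, KnownOne;
        DAG.computeKnownBits(Op, KnownZero, KnownOne, Depth);

        // After: a single struct carries both masks.
        KnownBits Known;
        DAG.computeKnownBits(Op, Known, Depth);
        // Known.Zero and Known.One replace the old KnownZero/KnownOne pair.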
* [SelectionDAG] Use various APInt methods to reduce temporary APInt creation  (Craig Topper, 2017-04-28, 1 file changed, -1/+1)

    This patch uses various APInt methods to reduce the number of temporary APInts. These were all found while working through converting SelectionDAG's computeKnownBits to also use the KnownBits struct recently added to the ValueTracking version.

    llvm-svn: 301618
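    An illustrative example of the kind of rewrite this describes (not taken from the patch): prefer in-place APInt updates over operations that return a fresh value.

        Known.Zero = Known.Zero.lshr(ShAmt);  // builds and copies a temporary APInt
        Known.Zero.lshrInPlace(ShAmt);        // updates the existing value, no temporary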
* [CodeGen] Pass SDAG an ORE, and replace FastISel stats with remarks.  (Ahmed Bougacha, 2017-03-30, 1 file changed, -239/+56)

    In the long-term, we want to replace statistics with something finer-grained that lets us gather per-function data. Remarks are that replacement.

    Create an ORE instance in SelectionDAGISel, and pass it to SelectionDAG. SelectionDAG was used so that we can emit remarks from all SelectionDAG-related code, including TargetLowering and DAGCombiner. This isn't used in the current patch but Adam tells me he's interested for the fp-contract combines.

    Use the ORE instance to emit FastISel failures as remarks (instead of the mix of dbgs() dumps and statistics that we currently have).

    Eventually, we want to have an API that tells us whether remarks are enabled (http://llvm.org/PR32352) so that we don't emit expensive remarks (in this case, dumping IR) when it's not needed. For now, use 'isEnabled' as a crude replacement.

    This does mean that the replacement for '-fast-isel-verbose' is now '-pass-remarks-missed=isel'. Additionally, clang users also need to enable remark diagnostics, using '-Rpass-missed=isel'.

    This also removes '-fast-isel-verbose2': there are no static statistics that we want to only enable in asserts builds, so we can always use the remarks regardless of the build type.

    Differential Revision: https://reviews.llvm.org/D31405

    llvm-svn: 299093
* Recommitting Craig Topper's patch now that r296476 has been recommitted.  (Nirav Dave, 2017-03-14, 1 file changed, -1/+1)

    When checking if a chain node is foldable, make sure the intermediate nodes have a single use across all results, not just the result that was used to reach the chain node.

    This recovers a test case that was severely broken by r296476, by making sure we don't create ADD/ADC that loads and stores when there is also a flag dependency.

    llvm-svn: 297698
* [SDAG] Revert r296476 (and r296486, r296668, r296690).  (Chandler Carruth, 2017-03-03, 1 file changed, -1/+1)

    This patch causes compile times for some patterns to explode. I have a (large, unreduced) test case that slows down by more than 20x and several test cases slow down by 2x. I'm sending some of the test cases directly to Nirav and following up with more details in the review log, but this should unblock anyone else hitting this.

    llvm-svn: 296862
* Elide argument copies during instruction selection  (Reid Kleckner, 2017-03-01, 1 file changed, -3/+7)

    Summary:
    Avoids tons of prologue boilerplate when arguments are passed in memory and left in memory. This can happen in a debug build or in a release build when an argument alloca is escaped. This will dramatically affect the code size of x86 debug builds, because X86 fast isel doesn't handle arguments passed in memory at all. It only handles the x86_64 case of up to 6 basic register parameters.

    This is implemented by analyzing the entry block before ISel to identify copy elision candidates. A copy elision candidate is an argument that is used to fully initialize an alloca before any other possibly escaping uses of that alloca. If an argument is a copy elision candidate, we set a flag on the InputArg. If the target generates loads from a fixed stack object that matches the size and alignment requirements of the alloca, the SelectionDAG builder will delete the stack object created for the alloca and replace it with the fixed stack object. The load is left behind to satisfy any remaining uses of the argument value. The store is now dead and is therefore elided. The fixed stack object is also marked as mutable, as it may now be modified by the user, and it would be invalid to rematerialize the initial load from it.

    Supersedes D28388

    Fixes PR26328

    Reviewers: chandlerc, MatzeB, qcolombet, inglorion, hans

    Subscribers: igorb, llvm-commits

    Differential Revision: https://reviews.llvm.org/D29668

    llvm-svn: 296683
* [CodeGen] Remove dead FastISel code after SDAG emitted a tailcall.  (Ahmed Bougacha, 2017-03-01, 1 file changed, -0/+6)

    When SDAGISel (top-down) selects a tail-call, it skips the remainder of the block.

    If, before that, FastISel (bottom-up) selected some of the (no-op) next few instructions, we can end up with dead instructions following the terminator (selected by SDAGISel).

    We need to erase them, as we know they aren't necessary (in addition to being incorrect).

    We already do this when FastISel falls back on the tail-call itself. Also remove the FastISel-emitted code if we fall back on the instructions between the tail-call and the return.

    llvm-svn: 296552
* [DAGISel] When checking if chain node is foldable, make sure the intermediate nodes have a single use across all results, not just the result that was used to reach the chain node.  (Craig Topper, 2017-02-28, 1 file changed, -1/+1)

    This recovers a test case that was severely broken by r296476, by making sure we don't create ADD/ADC that loads and stores when there is also a flag dependency.

    llvm-svn: 296486
* ISel: We need to notify FastIS of the IMPLICIT_DEF we created in createSwiftErrorEntriesInEntryBlock  (Arnold Schwaighofer, 2017-02-27, 1 file changed, -1/+7)

    Otherwise, it will insert instructions before it.

    rdar://30536186

    llvm-svn: 296395
* [Tablegen] Instrumenting table gen DAGGenISelDAG  (Aditya Nandakumar, 2017-02-14, 1 file changed, -0/+9)

    To help assist in debugging ISEL or to prioritize GlobalISel backend work, this patch adds two more tables to <Target>GenISelDAGISel.inc - one which contains the patterns that are used during selection, and the other containing the include source location of the patterns.

    Enabled through the CMake variable LLVM_ENABLE_DAGISEL_COV

    llvm-svn: 295081