path: root/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
Commit message | Author | Age | Files | Lines
...
* [stack-protection] Add support for MSVC buffer security check | Etienne Bergeron | 2016-06-07 | 1 | -5/+25

Summary:
This patch adds support for the MSVC buffer security check implementation. The buffer security check is turned on with the '/GS' compiler switch.

* https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
* To be added to clang here: http://reviews.llvm.org/D20347

Some overview of the buffer security check feature and implementation:
* https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
* http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
* http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html

For the following example:
```
int example(int offset, int index) {
  char buffer[10];
  memset(buffer, 0xCC, index);
  return buffer[index];
}
```

The MSVC compiler adds these instructions to perform the stack integrity check:
```
        push        ebp
        mov         ebp,esp
        sub         esp,50h
[1]     mov         eax,dword ptr [__security_cookie (01068024h)]
[2]     xor         eax,ebp
[3]     mov         dword ptr [ebp-4],eax
        push        ebx
        push        esi
        push        edi
        mov         eax,dword ptr [index]
        push        eax
        push        0CCh
        lea         ecx,[buffer]
        push        ecx
        call        _memset (010610B9h)
        add         esp,0Ch
        mov         eax,dword ptr [index]
        movsx       eax,byte ptr buffer[eax]
        pop         edi
        pop         esi
        pop         ebx
[4]     mov         ecx,dword ptr [ebp-4]
[5]     xor         ecx,ebp
[6]     call        @__security_check_cookie@4 (01061276h)
        mov         esp,ebp
        pop         ebp
        ret
```

The instrumentation above is:
* [1] loads the global security canary,
* [3] stores the locally computed ([2]) canary to the guard slot,
* [4] loads the guard slot and [5] re-computes the canary,
* [6] validates the resulting canary with '__security_check_cookie' and performs error handling.

Overview of the current stack-protection implementation:
* lib/CodeGen/StackProtector.cpp
  * There is a default stack-protection implementation applied on the intermediate representation.
  * The target can overload the 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
  * An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
  * Basic blocks are added to every instrumented function to receive the code for handling stack guard validation and error handling.
  * Guard manipulation and comparison are added directly to the intermediate representation.
* lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
* lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
  * There is an implementation that adds instrumentation during instruction selection (for better handling of sibling calls).
  * See the long comment above the 'class StackProtectorDescriptor' declaration.
  * The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation (note: getIRStackGuard MUST be nullptr).
  * 'getSDagStackGuard' returns the appropriate stack guard (security cookie).
  * The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.
* include/llvm/Target/TargetLowering.h
  * Contains the function to retrieve the default Guard 'Value'; should be overridden by each target to select which implementation is used and to provide the Guard 'Value'.
* lib/Target/X86/X86ISelLowering.cpp
  * Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.

Function-based instrumentation:
* MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
* To support function-based instrumentation, this patch is:
  * adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h). If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the prologue.
  * modifying SelectionDAGISel.cpp to avoid producing the basic blocks used for inline instrumentation,
  * generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
  * if FastISEL (not SelectionDAG), using the fallback, which relies on the same function-based check implemented over the intermediate representation (StackProtector.cpp).

Modifications:
* adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
* adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)

Results:
* IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```
```
*** Final LLVM Code input to ISel ***

; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
  %StackGuardSlot = alloca i8*                                  <<<-- Allocated guard slot
  %0 = call i8* @llvm.stackguard()                              <<<-- Loading Stack Guard value
  call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot)  <<<-- Prologue intrinsic call (store to Guard slot)
  %index.addr = alloca i32, align 4
  %offset.addr = alloca i32, align 4
  %buffer = alloca [10 x i8], align 1
  store i32 %index, i32* %index.addr, align 4
  store i32 %offset, i32* %offset.addr, align 4
  %arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
  %1 = load i32, i32* %index.addr, align 4
  call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
  %2 = load i32, i32* %index.addr, align 4
  %arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
  %3 = load i8, i8* %arrayidx, align 1
  %conv = sext i8 %3 to i32
  %4 = load volatile i8*, i8** %StackGuardSlot                  <<<-- Loading Guard slot
  call void @__security_check_cookie(i8* %4)                    <<<-- Epilogue function-based check
  ret i32 %conv
}
```
* SelectionDAG generated instrumentation:
```
clang-cl /GS test.cc /O1 /c /FA
```
```
"?example@@YAHHH@Z":                    # @"\01?example@@YAHHH@Z"
# BB#0:                                 # %entry
        pushl   %esi
        subl    $16, %esp
        movl    ___security_cookie, %eax        <<<-- Loading Stack Guard value
        movl    28(%esp), %esi
        movl    %eax, 12(%esp)                  <<<-- Store to Guard slot
        leal    2(%esp), %eax
        pushl   %esi
        pushl   $204
        pushl   %eax
        calll   _memset
        addl    $12, %esp
        movsbl  2(%esp,%esi), %esi
        movl    12(%esp), %ecx                  <<<-- Loading Guard slot
        calll   @__security_check_cookie@4      <<<-- Epilogue function-based check
        movl    %esi, %eax
        addl    $16, %esp
        popl    %esi
        retl
```

Reviewers: kcc, pcc, eugenis, rnk

Subscribers: majnemer, llvm-commits, hans, thakis, rnk

Differential Revision: http://reviews.llvm.org/D20346

llvm-svn: 272053
* Re-apply "SDAG: Update ChainNodesMatched as nodes are deleted"Justin Bogner2016-06-031-4/+9
| | | | | | | | | | | | | | | My first attempt at this had an overly aggressive assert - chain nodes will only be removed, but we could hit the assert if a non-chain node was CSE'd (NodeToMatch, for instance). This reapplies r271706 by reverting r271713 and fixing an assert. Original message: Avoid relying on UB by looking into deleted nodes for a marker value. Instead, update the list of chain nodes as we go. llvm-svn: 271733
* Revert "SDAG: Update ChainNodesMatched as nodes are deleted"Justin Bogner2016-06-031-9/+4
| | | | | | | | | Seeing failures in CodeGen/Generic/icmp-illegal.ll on quite a few bots. This reverts r271706. llvm-svn: 271713
* SDAG: Update ChainNodesMatched as nodes are deleted | Justin Bogner | 2016-06-03 | 1 | -4/+9

Avoid relying on UB by looking into deleted nodes for a marker value. Instead, update the list of chain nodes as we go.

llvm-svn: 271706
* SDAG: Replace some unreachable code with an assert. NFC | Justin Bogner | 2016-06-03 | 1 | -6/+3

The current node shouldn't be (and isn't) removed partway through selection.

llvm-svn: 271699
* SDAG: Drop a redundant replace and move the dead node removal closer. NFC | Justin Bogner | 2016-06-01 | 1 | -6/+4

llvm-svn: 271429
* SDAG: Have SelectNodeTo replace uses if it CSE's instead of morphing a node | Justin Bogner | 2016-05-11 | 1 | -6/+1

It's awkward to force callers of SelectNodeTo to figure out whether the node was morphed or CSE'd. Update uses here instead of requiring callers to (sometimes) do it.

llvm-svn: 269235
* SDAG: Make SelectCodeCommon return void | Justin Bogner | 2016-05-10 | 1 | -25/+41

This means SelectCode unconditionally returns nullptr now. I'll follow up with a change to make that return void as well, but it seems best to keep that one very mechanical.

This is part of the work to have Select return void instead of an SDNode *, which is in turn part of llvm.org/pr26808.

llvm-svn: 269136
* SDAG: Don't leave dangling dead nodes after SelectCodeCommon | Justin Bogner | 2016-05-06 | 1 | -1/+3

Relying on the caller to clean up after we've replaced all uses of a node won't work when we've migrated to the `void Select(...)` API.

llvm-svn: 268774
* SDAG: Rename Select->SelectImpl and repurpose Select as returning void | Justin Bogner | 2016-05-05 | 1 | -17/+1

This is a step towards removing the rampant undefined behaviour in SelectionDAG, which is a part of llvm.org/PR26808.

We rename SelectionDAGISel::Select to SelectImpl and update targets to match, and then change Select to return void and consolidate the sketchy behaviour we're trying to get away from there.

Next, we'll update backends to implement `void Select(...)` instead of SelectImpl and eventually drop the base Select implementation.

llvm-svn: 268693
* SDAG: Remove OPC_MarkGlueResults and associated logic. NFC | Justin Bogner | 2016-05-05 | 1 | -60/+19

This opcode never happens in practice, and yet the logic we have in place to handle it would be undefined behaviour if we ever executed it. Remove it rather than trying to refactor code that's never reached.

llvm-svn: 268692
* [CodeGen] Add some space optimized forms of EmitNode and MorphNodeTo that implicitly indicate the number of result VTs | Craig Topper | 2016-05-03 | 1 | -6/+17

This shaves about 16K off the X86 matching table, taking it down to about 470K. Overall this reduces the llc binary size with all in-tree targets by about 40K.

llvm-svn: 268365
* [CodeGen] Add OPC_MoveChild0-OPC_MoveChild7 opcodes to isel matching tables to optimize table size | Craig Topper | 2016-05-02 | 1 | -0/+12

Shaves about 12K off the X86 matcher table.

llvm-svn: 268209
* Use SelectionDAG::getTargetConstant* helper functions. NFC. | Simon Pilgrim | 2016-04-29 | 1 | -4/+4

Use these instead of calling SelectionDAG::getConstant directly, to make it more obvious that we're creating target constants.

llvm-svn: 268074
* Switch lowering: don't add incoming PHI values from skipped bit test MBB's (PR27135) | Hans Wennborg | 2016-04-15 | 1 | -12/+26

After r245976, LLVM will skip the last bit test case if it knows it will always be true. However, we would still erroneously update PHI nodes with incoming values from the MBB that would perform the final bit test, causing -verify-machineinstrs to fail.

llvm-svn: 266479
* SelectionDAGISel: rangeify a loop | Hans Wennborg | 2016-04-15 | 1 | -23/+20

llvm-svn: 266478
* [SSP] Remove llvm.stackprotectorcheck. | Tim Shen | 2016-04-08 | 1 | -2/+7

This is a cleanup patch for SSP support in LLVM. There is no functional change.

llvm.stackprotectorcheck is not needed, because SelectionDAG isn't actually lowering it in SelectBasicBlock; rather, it adds check code in FinishBasicBlock, ignoring the position where the intrinsic is inserted (see FindSplitPointForStackProtector()).

llvm-svn: 265851
* Swift Calling Convention: swifterror target-independent change. | Manman Ren | 2016-04-05 | 1 | -0/+121

At IR level, the swifterror argument is an input argument with type ErrorObject**. For targets that support swifterror, we want to optimize it to behave as an inout value with type ErrorObject*; it will be passed in a fixed physical register.

The main idea is to track the virtual registers for each swifterror value. We define swifterror values as AllocaInsts with the swifterror attribute or a function argument with the swifterror attribute.

In SelectionDAGISel.cpp, we set up swifterror values (SwiftErrorVals) before handling the basic blocks.

When iterating over all basic blocks in RPO, before actually visiting the basic block, we call mergeIncomingSwiftErrors to merge incoming swifterror values when there are multiple predecessors or to simply propagate them. There, we create a virtual register for each swifterror value in the entry block. For predecessors that are not yet visited, we create virtual registers to hold the swifterror values at the end of the predecessor. The assignments are saved in SwiftErrorWorklist and will be materialized at the end of visiting the basic block.

When visiting a load from a swifterror value, we copy from the current virtual register assignment. When visiting a store to a swifterror value, we create a virtual register to hold the swifterror value and update SwiftErrorMap to track the current virtual register assignment.

Differential Revision: http://reviews.llvm.org/D18108

llvm-svn: 265433
* CXX TLS: collect return blocks after SelectAllBasicBlocks. | Manman Ren | 2016-03-24 | 1 | -7/+15

It is incorrect to get the corresponding MBB for a ReturnInst before SelectAllBasicBlocks, since SelectAllBasicBlocks can change the correspondence between a ReturnInst and the MBB it is in.

PR27062

llvm-svn: 264358
* [CXX_FAST_TLS] fix issues with O0 on ARM, AArch64 and X86. | Manman Ren | 2016-03-18 | 1 | -1/+1

Since at O0, explicit copies via SplitCSR may not be removed even if they are unnecessary, we choose not to use SplitCSR at O0.

llvm-svn: 263855
* [CodeGen] Add space-optimized EmitMergeInputChains1_2 to the DAG isel matching tables | Craig Topper | 2016-03-07 | 1 | -2/+3

Shaves about 5100 bytes from the X86 matcher table. NFC

llvm-svn: 262815
* Fix Clang-tidy readability-redundant-control-flow warnings; other minor fixes. | Eugene Zelenko | 2016-02-02 | 1 | -14/+6

Differential revision: http://reviews.llvm.org/D16793

llvm-svn: 259539
* [SelectionDAG] Eliminate exponential behavior in WalkChainUsers | Tim Shen | 2016-01-31 | 1 | -5/+20

llvm-svn: 259315
* Avoid overly large SmallPtrSet/SmallSet | Matthias Braun | 2016-01-30 | 1 | -1/+1

These sets perform linear searching in small mode, so it is never a good idea to use a SmallSize/N bigger than 32.

llvm-svn: 259283
* Don't check if a list is empty with ilist::size. | Benjamin Kramer | 2016-01-23 | 1 | -1/+1

ilist::size() is O(n) while ilist::empty() is O(1).

llvm-svn: 258636
* [X86] Don't alter HasOpaqueSPAdjustment after we've relied on it | David Majnemer | 2016-01-14 | 1 | -1/+1

We rely on HasOpaqueSPAdjustment not changing after we've calculated things based on it: things like whether or not we can use 'rep;movs' to copy bytes around. If it changes, invariants in the backend will quietly break.

This situation arose when we had a call to memcpy *and* a COPY of the FLAGS register, where we would attempt to reference local variables using %esi, a register that was clobbered by the 'rep;movs'.

This fixes PR26124.

llvm-svn: 257730
* [X86] Make hasFP constant time | David Majnemer | 2016-01-04 | 1 | -0/+3

We need a frame pointer if there is a push/pop sequence after the prologue in order to unwind the stack. Scanning the instructions to figure out if this happened made hasFP not constant-time, which is a violation of expectations. Let's compute this up-front and reuse that computation when we need it.

llvm-svn: 256730
* CXX_FAST_TLS calling convention: target independent portion. | Manman Ren | 2015-12-16 | 1 | -2/+2

Update supportSplitCSR's interface to take a machine function instead of the calling convention.

Review comments for http://reviews.llvm.org/D15341

llvm-svn: 255818
* CXX_FAST_TLS calling convention: target independent portion. | Manman Ren | 2015-12-11 | 1 | -1/+36

The access function has a short entry and a short exit, and the initialization block is only run the first time. To improve the performance, we want to have a short frame at the entry and exit.

We explicitly handle most of the CSRs via copies. Only the CSRs that are not handled via copies will be in CSR_SaveList. Frame lowering and prologue/epilogue insertion will generate a short frame in the entry and exit according to CSR_SaveList. The majority of the CSRs will be handled by the register allocator: the register allocator will try to spill and reload them in the initialization block.

We add CSRsViaCopy; it will be explicitly handled during lowering.

1> We first set FunctionLoweringInfo->SplitCSR if conditions are met (the target supports it for the given calling convention and the function has only return exits). We also call TLI->initializeSplitCSR to perform initialization.
2> We call TLI->insertCopiesSplitCSR to insert copies from CSRsViaCopy to virtual registers at the beginning of the entry block and copies from virtual registers to CSRsViaCopy at the beginning of the exit blocks.
3> We also need to make sure the explicit copies will not be eliminated.

rdar://problem/23557469

Differential Revision: http://reviews.llvm.org/D15340

llvm-svn: 255353
* Move EH-specific helper functions to a more appropriate place | David Majnemer | 2015-12-02 | 1 | -1/+1

No functionality change is intended.

llvm-svn: 254562
* Have 'optnone' respect the -fast-isel=false option. | Paul Robinson | 2015-11-30 | 1 | -3/+7

This is primarily useful for debugging optnone v. ISel issues.

Differential Revision: http://reviews.llvm.org/D14792

llvm-svn: 254335
* Let SelectionDAG start to use probability-based interface to add successors. | Cong Hou | 2015-11-24 | 1 | -4/+3

The patch in http://reviews.llvm.org/D13745 is broken into four parts:

1. New interfaces without functional changes.
2. Use new interfaces in SelectionDAG, while in other passes treat probabilities as weights.
3. Use new interfaces in all other passes.
4. Remove old interfaces.

This is the second patch above. In this patch SelectionDAG starts to use probability-based interfaces in MBB to add successors, but other MC passes are still using weight-based interfaces. Therefore, we need to maintain a correct weight list in MBB even when probability-based interfaces are used. This is done by updating the weight list in the probability-based interfaces, treating the numerator of a probability as a weight.

This change affects many test cases that check successor weight values. I will update those test cases once this patch looks good to you.

Differential revision: http://reviews.llvm.org/D14361

llvm-svn: 253965
* Partially revert r253662: some unrelated work was accidentally committed with it. Sorry. | Daniel Sanders | 2015-11-20 | 1 | -1/+0

llvm-svn: 253663
* Revert the revert 253497 and 253539 - These commits aren't the cause of the clang-cmake-mips failures. Sorry for the noise. | Daniel Sanders | 2015-11-20 | 1 | -0/+1

llvm-svn: 253662
* [WinEH] Update exception pointer registers | Joseph Tremoulet | 2015-11-07 | 1 | -3/+4

Summary:
The CLR's personality routine passes these in rdx/edx, not rax/eax. Make getExceptionPointerRegister a virtual method parameterized by personality function to allow making this distinction. Similarly make getExceptionSelectorRegister a virtual method parameterized by personality function, for symmetry.

Reviewers: pgavlin, majnemer, rnk

Subscribers: jyknight, dsanders, llvm-commits

Differential Revision: http://reviews.llvm.org/D14344

llvm-svn: 252383
* SelectionDAG: Remove implicit ilist iterator conversions, NFC | Duncan P. N. Exon Smith | 2015-10-13 | 1 | -11/+12

llvm-svn: 250214
* Don't call PrepareEHLandingPad on non EH pads | Reid Kleckner | 2015-10-12 | 1 | -2/+3

This was a minor bug in r249492. Calling PrepareEHLandingPad on a non-landingpad was a no-op, but it attempted to get the generic pointer register class, which apparently doesn't exist for some targets.

llvm-svn: 250068
* [WinEH] Delete the old landingpad implementation of Windows EH | Reid Kleckner | 2015-10-09 | 1 | -37/+0

The new implementation works at least as well as the old implementation did.

Also delete the associated preparation tests. They don't exercise interesting corner cases of the new implementation. All the codegen tests of the EH tables have already been ported.

llvm-svn: 249918
* [SEH] Fix llvm.eh.exceptioncode fast register allocation assertion | Reid Kleckner | 2015-10-09 | 1 | -1/+1

I called the wrong MachineBasicBlock::addLiveIn() overload.

llvm-svn: 249786
* [SEH] Add llvm.eh.exceptioncode intrinsic | Reid Kleckner | 2015-10-07 | 1 | -6/+35

This will support the Clang __exception_code intrinsic.

llvm-svn: 249492
* [WinEH] Recognize CoreCLR personality function | Joseph Tremoulet | 2015-10-06 | 1 | -2/+2

Summary:
- Add CoreCLR to if/else ladders and switches as appropriate.
- Rename isMSVCEHPersonality to isFuncletEHPersonality to better reflect what it captures.

Reviewers: majnemer, andrew.w.kaylor, rnk

Subscribers: pgavlin, AndyAyers, llvm-commits

Differential Revision: http://reviews.llvm.org/D13449

llvm-svn: 249455
* Add an explicit 'inline' specifier to these static functions | Chandler Carruth | 2015-09-10 | 1 | -14/+14

GCC is warning on them having the always_inline attribute for reasons I don't fully understand -- static functions are just as inlinable as inline functions in terms of linkage.

llvm-svn: 247334
* [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible with the new pass manager, and no longer relying on analysis groups | Chandler Carruth | 2015-09-09 | 1 | -4/+3

This builds essentially a ground-up new AA infrastructure stack for LLVM. The core ideas are the same that are used throughout the new pass manager: type erased polymorphism and direct composition. The design is as follows:

- FunctionAAResults is a type-erasing alias analysis results aggregation interface to walk a single query across a range of results from different alias analyses. Currently this is function-specific as we always assume that aliasing queries are *within* a function.

- AAResultBase is a CRTP utility providing stub implementations of various parts of the alias analysis result concept, notably in several cases in terms of other more general parts of the interface. This can be used to implement only a narrow part of the interface rather than the entire interface. This isn't really ideal; this logic should be hoisted into FunctionAAResults, as currently it will cause a significant amount of redundant work, but it faithfully models the behavior of the prior infrastructure.

- All the alias analysis passes are ported to be wrapper passes for the legacy PM and new-style analysis passes for the new PM with a shared result object. In some cases (most notably CFL), this is an extremely naive approach that we should revisit when we can specialize for the new pass manager.

- BasicAA has been restructured to reflect that it is much more fundamentally a function analysis because it uses dominator trees and loop info that need to be constructed for each function.

All of the references to getting alias analysis results have been updated to use the new aggregation interface. All the preservation and other pass management code has been updated accordingly.

The way the FunctionAAResultsWrapperPass works is to detect the available alias analyses when run, and add them to the results object. This means that we should be able to continue to respect when various passes are added to the pipeline; for example, adding CFL or adding TBAA passes should just cause their results to be available and to get folded into this. The exception to this rule is BasicAA, which really needs to be a function pass due to using dominator trees and loop info. As a consequence, the FunctionAAResultsWrapperPass directly depends on BasicAA and always includes it in the aggregation.

This has significant implications for preserving analyses. Generally, most passes shouldn't bother preserving FunctionAAResultsWrapperPass because rebuilding the results just updates the set of known AA passes. The exception to this rule are LoopPass instances, which need to preserve all the function analyses that the loop pass manager will end up needing. This means preserving both BasicAAWrapperPass and the aggregating FunctionAAResultsWrapperPass.

Now, when preserving an alias analysis, you do so by directly preserving that analysis. This is only necessary for non-immutable-pass-provided alias analyses though, and there are only three of interest: BasicAA, GlobalsAA (formerly GlobalsModRef), and SCEVAA. Usually BasicAA is preserved when needed because it (like DominatorTree and LoopInfo) is marked as a CFG-only pass.

I've expanded GlobalsAA into the preserved set everywhere we previously were preserving all of AliasAnalysis, and I've added SCEVAA in the intersection of that with where we preserve SCEV itself.

One significant challenge to all of this is that the CGSCC passes were actually using the alias analysis implementations by taking advantage of a pretty amazing set of loopholes in the old pass manager's analysis management code, which allowed analysis groups to slide through in many cases. Moving away from analysis groups makes this problem much more obvious. To fix it, I've leveraged the flexibility the design of the new PM components provides to just directly construct the relevant alias analyses for the relevant functions in the IPO passes that need them. This is a bit hacky, but should go away with the new pass manager, and is already in many ways cleaner than the prior state.

Another significant challenge is that various facilities of the old alias analysis infrastructure just don't fit any more. The most significant of these is the alias analysis 'counter' pass. That pass relied on the ability to snoop on AA queries at different points in the analysis group chain. Instead, I'm planning to build printing functionality directly into the aggregation layer. I've not included that in this patch merely to keep it smaller.

Note that all of this needs a nearly complete rewrite of the AA documentation. I'm planning to do that, but I'd like to make sure the new design settles, and to flesh out a bit more of what it looks like in the new pass manager first.

Differential Revision: http://reviews.llvm.org/D12080

llvm-svn: 247167
* [WinEH] Avoid creating MBBs for LLVM BBs that cannot contain code | Reid Kleckner | 2015-09-08 | 1 | -0/+2

Typically these are catchpads, which hold data used to decide whether to catch the exception or continue unwinding. We also shouldn't create MBBs for catchendpads, cleanupendpads, or terminatepads, since no real code can live in them. This fixes a problem where MI passes (like the register allocator) would try to put code into catchpad blocks, which are not executed by the runtime.

In the new world, blocks ending in invokes now have many possible successors.

llvm-svn: 247102
* Distribute the weight on the edge from switch to default statement to edges generated in lowering switch | Cong Hou | 2015-09-01 | 1 | -3/+1

Currently, when edge weights are assigned to edges that are created when lowering a switch statement, the weight on the edge to the default statement (let's call it the "default weight" here) is not considered. We need to distribute this weight properly. However, without value profiling, we have no idea how to distribute it. In this patch, I applied the heuristic that this weight is evenly distributed to successors.

For example, given a switch statement with cases 1, 2, 3, 5, 10, 11, 20, where every edge from the switch to each successor has weight 10: if a binary search tree is built to test whether n < 10, then its two out-edges will have weight 4x10 + 10/2 = 45 and 3x10 + 10/2 = 35 respectively (currently they are 40 and 30, without considering the default weight). Each distribution (which is 5 here) will be stored in each SwitchWorkListItem for further distribution.

There are some exceptions:

* For a jump table header which doesn't have any edge to the default statement, we don't distribute the default weight to it.
* For a bit test header which covers a contiguous range and hence has no edges to the default statement, we don't distribute the default weight to it.
* When the branch checks a single value or a contiguous range with no edge to the default statement, we don't distribute the default weight to it.

In other cases, the default weight is evenly distributed to successors.

Differential Revision: http://reviews.llvm.org/D12418

llvm-svn: 246522
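As a rough illustration of the arithmetic above, here is a minimal standalone C++ sketch (not the actual lowering code; all names and containers are hypothetical) that reproduces the 45/35 split from the example:

```
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Given per-case edge weights and the weight of the edge to the default
// statement, split the cases at a pivot (the binary-search test) and compute
// the weight of each out-edge: the sum of the case weights on that side plus
// an even share of the default weight.
int main() {
  std::vector<uint32_t> caseWeights = {10, 10, 10, 10, 10, 10, 10}; // cases 1,2,3,5,10,11,20
  uint32_t defaultWeight = 10;
  std::size_t pivot = 4; // first 4 cases go left of the "n < 10" test, the other 3 go right

  uint64_t left = 0, right = 0;
  for (std::size_t i = 0; i < caseWeights.size(); ++i)
    (i < pivot ? left : right) += caseWeights[i];
  left += defaultWeight / 2;   // even share of the default weight
  right += defaultWeight / 2;  // the remainder would keep being split further down the tree

  std::cout << "left edge weight = " << left     // 4*10 + 10/2 = 45
            << ", right edge weight = " << right // 3*10 + 10/2 = 35
            << '\n';
}
```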
* [EH] Handle non-Function personalities like unknown personalities | Reid Kleckner | 2015-08-31 | 1 | -8/+6

Also delete and simplify a lot of MachineModuleInfo code that used to be needed to handle personalities on landingpads. Now that the personality is on the LLVM Function, we no longer need to track it this way on MMI. Certainly it should not live on LandingPadInfo.

llvm-svn: 246478
* [WinEH] Add some support for code generating catchpad | Reid Kleckner | 2015-08-27 | 1 | -4/+7

We can now run 32-bit programs with empty catch bodies. The next step is to change PEI so that we get funclet prologues and epilogues.

llvm-svn: 246235
* Remove the final bit test during lowering switch statement if all cases in bit test cover a contiguous range | Cong Hou | 2015-08-25 | 1 | -13/+20

When lowering a switch statement, if bit tests are used then LLVM will always generate a jump to the default statement in the last bit test. However, this is not necessary when all cases in the bit tests cover a contiguous range. This is because when generating the bit tests header MBB, there is a range check that guarantees cases in the bit tests won't go outside of [low, high], where low and high are the minimum and maximum case values in the bit tests. This patch checks whether this is the case and, if so, doesn't emit the jump to the default statement, saving a bit test and a branch.

Differential Revision: http://reviews.llvm.org/D12249

llvm-svn: 245976
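As a rough illustration of why the final default jump becomes redundant (a standalone C++ sketch, not the actual lowering code; the constants are a hypothetical example), the bit-test header's range check already sends every value outside the contiguous case range to the default block:

```
#include <cstdint>
#include <iostream>

// Bit-test lowering for a switch over the contiguous cases {4,5,6,7}: the
// header checks that the value lies in [low, high]; anything outside goes to
// the default block.  Because the case bitmask covers every value in that
// range, the last bit test can never fail, so a trailing jump to the default
// block after it would be dead.
bool handledByCase(uint32_t x) {
  const uint32_t low = 4, high = 7;
  if (x - low > high - low)       // range check in the bit-test header (unsigned wrap handles x < low)
    return false;                 // default statement
  const uint32_t mask = 0b1111;   // cases 4..7: covers the whole [low, high] range
  return (mask >> (x - low)) & 1; // always true here, so no extra branch to default is needed
}

int main() {
  for (uint32_t x : {3u, 4u, 6u, 8u})
    std::cout << x << " -> " << (handledByCase(x) ? "case" : "default") << '\n';
}
```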
* Move the Target way of overriding DAG Scheduler to a target hook | Mehdi Amini | 2015-07-28 | 1 | -8/+6

Summary:
The previous way of overriding it was relying on calling "setDefault" on the global registry, which implies global mutable state.

Reviewers: echristo, atrick

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D11538

From: Mehdi Amini <mehdi.amini@apple.com>

llvm-svn: 243388
* [PM/AA] Remove all of the dead AliasAnalysis pointers being threaded through APIs that are no longer necessary now that the update API has been removed | Chandler Carruth | 2015-07-22 | 1 | -3/+3

This will make changes to the AA interfaces significantly less disruptive (I hope). Either way, it seems like a really nice cleanup.

llvm-svn: 242882