path: root/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
...
* SimplifyCFG: check uses of constant-foldable instrs in switch destinations (PR20210)
  [Hans Wennborg, 2015-01-09, 1 file changed, -6/+15]
  The previous code assumed that such instructions could not have any uses outside CaseDest, with the motivation that the instruction could not dominate CommonDest because CommonDest has phi nodes in it. That simply isn't true; e.g., CommonDest could have an edge back to itself.
  llvm-svn: 225552
* [PM] Split the AssumptionTracker immutable pass into two separate APIs: a cache of assumptions for a single function, and an immutable pass that manages those caches.
  [Chandler Carruth, 2015-01-04, 1 file changed, -34/+34]
  The motivation for this change is twofold. Immutable analyses are really hacks around the current pass manager design and don't exist in the new design. This is usually OK, but it requires that the core logic of an immutable pass be reasonably partitioned off from the pass logic. This change does precisely that. As a consequence it also paves the way for the *many* utility functions that deal in the assumptions to live in both pass manager worlds by creating a separate non-pass object with its own independent API that they all rely on. Now, the only bits of the system that deal with the actual pass mechanics are those that actually need to deal with the pass mechanics.

  Once this separation is made, several simplifications become pretty obvious in the assumption cache itself. Rather than using a set and callback value handles, it can just be a vector of weak value handles. The callers can easily skip the handles that are null, and eventually we can wrap all of this up behind a filter iterator.

  For now, this adds boilerplate to the various passes, but this kind of boilerplate will end up making it possible to port these passes to the new pass manager, and so it will end up factored away pretty reasonably.
  llvm-svn: 225131
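  Editor's aside: a minimal sketch of the "vector of weak handles, skip the nulls" design described in this entry. This is not the actual LLVM AssumptionCache; the class and member names are invented, and standard-library weak pointers stand in for LLVM's weak value handles.

      #include <memory>
      #include <vector>

      struct Assumption {};  // stand-in for a call to @llvm.assume

      class AssumptionCacheSketch {
        // Weak references: entries whose underlying call has been deleted
        // simply read back as null and are skipped by the caller.
        std::vector<std::weak_ptr<Assumption>> Assumptions;

      public:
        void registerAssumption(const std::shared_ptr<Assumption> &A) {
          Assumptions.push_back(A);
        }

        // Visit only the assumptions that are still alive; dead (null) handles
        // are skipped, mirroring the "filter iterator" the message mentions.
        template <typename Fn> void forEachLiveAssumption(Fn Visit) const {
          for (const auto &WH : Assumptions)
            if (auto A = WH.lock())
              Visit(*A);
        }
      };

  The design choice is the one the commit argues for: no callbacks or set bookkeeping, just a flat vector that tolerates stale entries.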
* [SimplifyCFG] Revise common code sinking
  [Michael Liao, 2014-12-23, 1 file changed, -30/+30]
  - Fix the case where more than one common instruction derived from the same operand cannot be sunk. When a pair of values has more than one derived value in both branches, only one derived value could be sunk.
  - Replace the BB1 -> (BB2, PN) map with a joint value map, i.e. a map of (BB1, BB2) -> PN, which is more accurate for tracking common ops. A sketch of the new key shape follows this entry.
  llvm-svn: 224757
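  Editor's aside: a minimal sketch of the map-key change described above (illustrative names, not the actual code). Keying on the pair of blocks makes "one joint PHI per pair" explicit instead of hiding the second block inside the mapped value.

      #include <map>
      #include <utility>

      struct BasicBlock;  // stand-ins for the LLVM IR classes
      struct PHINode;

      // Old shape: BB1 -> (BB2, PN), which is ambiguous once several values
      // from BB1 pair up with different counterparts.
      // New shape: (BB1, BB2) -> PN, so each pair maps to exactly one PHI.
      using JointValueMap =
          std::map<std::pair<BasicBlock *, BasicBlock *>, PHINode *>;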
* IR: Split Metadata from Value
  [Duncan P. N. Exon Smith, 2014-12-09, 1 file changed, -5/+6]
  Split `Metadata` away from the `Value` class hierarchy, as part of PR21532. Assembly and bitcode changes are in the wings, but this is the bulk of the change for the IR C++ API.

  I have a follow-up patch prepared for `clang`. If this breaks other sub-projects, I apologize in advance :(. Help me compile it on Darwin and I'll try to fix it. FWIW, the errors should be easy to fix, so it may be simpler to just fix it yourself.

  This breaks the build for all metadata-related code that's out-of-tree. Rest assured the transition is mechanical and the compiler should catch almost all of the problems.

  Here's a quick guide for updating your code:

  - `Metadata` is the root of a class hierarchy with three main classes: `MDNode`, `MDString`, and `ValueAsMetadata`. It is distinct from the `Value` class hierarchy. It is typeless -- i.e., instances do *not* have a `Type`.

  - `MDNode`'s operands are all `Metadata *` (instead of `Value *`).

  - `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively. If you're referring solely to resolved `MDNode`s -- post graph construction -- just use `MDNode*`.

  - `MDNode` (and the rest of `Metadata`) have only limited support for `replaceAllUsesWith()`. As long as an `MDNode` is pointing at a forward declaration -- the result of `MDNode::getTemporary()` -- it maintains a side map of its uses and can RAUW itself. Once the forward declarations are fully resolved, RAUW support is dropped on the ground. This means that uniquing collisions on changing operands cause nodes to become "distinct". (This already happened fairly commonly, whenever an operand went to null.) If you're constructing complex (non-self-referencing) `MDNode` cycles, you need to call `MDNode::resolveCycles()` on each node (or on a top-level node that somehow references all of the nodes). Also, don't do that. Metadata cycles (and the RAUW machinery needed to construct them) are expensive.

  - An `MDNode` can only refer to a `Constant` through a bridge called `ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`). As a side effect, accessing an operand of an `MDNode` that is known to be, e.g., `ConstantInt`, takes three steps: first, cast from `Metadata` to `ConstantAsMetadata`; second, extract the `Constant`; third, cast down to `ConstantInt`. The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have metadata schema owners transition away from using `Constant`s when the type isn't important (and they don't care about referring to `GlobalValue`s). In the meantime, I've added transitional API to the `mdconst` namespace that matches semantics with the old code, in order to avoid adding the error-prone three-step equivalent to every call site.

    If your old code was:

        MDNode *N = foo();
        bar(isa             <ConstantInt>(N->getOperand(0)));
        baz(cast            <ConstantInt>(N->getOperand(1)));
        bak(cast_or_null    <ConstantInt>(N->getOperand(2)));
        bat(dyn_cast        <ConstantInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));

    you can trivially match its semantics with:

        MDNode *N = foo();
        bar(mdconst::hasa               <ConstantInt>(N->getOperand(0)));
        baz(mdconst::extract            <ConstantInt>(N->getOperand(1)));
        bak(mdconst::extract_or_null    <ConstantInt>(N->getOperand(2)));
        bat(mdconst::dyn_extract        <ConstantInt>(N->getOperand(3)));
        bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));

    and when you transition your metadata schema to `MDInt`:

        MDNode *N = foo();
        bar(isa             <MDInt>(N->getOperand(0)));
        baz(cast            <MDInt>(N->getOperand(1)));
        bak(cast_or_null    <MDInt>(N->getOperand(2)));
        bat(dyn_cast        <MDInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));

  - A `CallInst` -- specifically, intrinsic instructions -- can refer to metadata through a bridge called `MetadataAsValue`. This is a subclass of `Value` where `getType()->isMetadataTy()`. `MetadataAsValue` is the *only* class that can legally refer to a `LocalAsMetadata`, which is a bridged form of non-`Constant` values like `Argument` and `Instruction`. It can also refer to any other `Metadata` subclass.

  (I'll break all your testcases in a follow-up commit, when I propagate this change to assembly.)
  llvm-svn: 223802
* Revert "Move function to obtain branch weights into the BranchInst class. NFC."Juergen Ributzka2014-12-091-6/+26
| | | | | | This reverts commit r223784 and copies the 'ExtractBranchMetadata' to CodeGenPrepare. llvm-svn: 223795
* Move function to obtain branch weights into the BranchInst class. NFC.Juergen Ributzka2014-12-091-26/+6
| | | | | | Make this function available to other parts of LLVM. llvm-svn: 223784
* Revert r223049, r223050 and r223051 while investigating test failures.Hans Wennborg2014-12-011-8/+56
| | | | | | I didn't foresee affecting the Clang test suite :/ llvm-svn: 223054
* SimplifyCFG: Omit range checks for switch lookup tables when default is unreachable
  [Hans Wennborg, 2014-12-01, 1 file changed, -7/+8]
  They would get optimized away later, but we might as well not emit them.
  llvm-svn: 223051

* SimplifyCFG: don't remove unreachable default switch destinations
  [Hans Wennborg, 2014-12-01, 1 file changed, -49/+0]
  An unreachable default destination can be exploited by other optimizations, and SDag lowering is now prepared to handle them efficiently. For example, branches to the unreachable destination will be optimized away, such as in the case of range checks for switch lookup tables. On 64-bit Linux, this reduces the size of a clang bootstrap by 80 kB (and Chromium by 30 kB).
  llvm-svn: 223050
* Reinstate r222872: Peephole optimization in switch table lookup: reuse the guarding table comparison if possible.
  [Erik Eckstein, 2014-11-27, 1 file changed, -7/+100]
  Fixed the missing dominance check. Original commit message:

  This optimization tries to reuse the generated compare instruction, if there is a comparison against the default value after the switch. Example:

      if (idx < tablesize)
          r = table[idx];  // table does not contain default_value
      else
          r = default_value;
      if (r != default_value)
          ...

  is optimized to:

      cond = idx < tablesize;
      if (cond)
          r = table[idx];
      else
          r = default_value;
      if (cond)
          ...

  Jump threading will then eliminate the second if (cond).
  llvm-svn: 222891

* Revert "Peephole optimization in switch table lookup: reuse the guarding table comparison if possible."
  [Erik Eckstein, 2014-11-27, 1 file changed, -89/+7]
  It is breaking the clang bootstrap.
  llvm-svn: 222877

* Peephole optimization in switch table lookup: reuse the guarding table comparison if possible.
  [Erik Eckstein, 2014-11-27, 1 file changed, -7/+89]
  This optimization tries to reuse the generated compare instruction, if there is a comparison against the default value after the switch. Example:

      if (idx < tablesize)
          r = table[idx];  // table does not contain default_value
      else
          r = default_value;
      if (r != default_value)
          ...

  is optimized to:

      cond = idx < tablesize;
      if (cond)
          r = table[idx];
      else
          r = default_value;
      if (cond)
          ...

  Jump threading will then eliminate the second if (cond).
  llvm-svn: 222872
* SimplifyCFG: Refactor GatherConstantCompares() result in a struct
  [Mehdi Amini, 2014-11-20, 1 file changed, -133/+152]
  The code seems cleaner and easier to understand this way. This is basically r222416, after fixes for MSVC's lack of standard support and a few cleanups (got rid of a warning). Thanks to Nakamura Takumi and Nico Weber for the MSVC fixes.
  llvm-svn: 222472

* Revert r222416, r222422, r222426: the former revision had problems and fixing them introduced bugs
  [Timur Iskhodzhanov, 2014-11-20, 1 file changed, -150/+136]
  llvm-svn: 222428

* Fix a typo
  [Timur Iskhodzhanov, 2014-11-20, 1 file changed, -1/+1]
  llvm-svn: 222426

* SimplifyCFG.cpp: Tweak to be msc17-compliant.
  [NAKAMURA Takumi, 2014-11-20, 1 file changed, -8/+11]
  - Use LLVM_DELETED_FUNCTION.
  - Don't use member initializers.
  - Don't use an initializer list.
  llvm-svn: 222422

* SimplifyCFG: Refactor GatherConstantCompares() result in a struct
  [Mehdi Amini, 2014-11-20, 1 file changed, -136/+147]
  The code seems cleaner and easier to understand this way.
  llvm-svn: 222416

* Try to fix the MSVS build after r222384. No intended behavior change.
  [Nico Weber, 2014-11-19, 1 file changed, -3/+3]
  llvm-svn: 222386

* SimplifyCFG: turn recursive GatherConstantCompares into iterative
  [Mehdi Amini, 2014-11-19, 1 file changed, -99/+138]
  A long sequence of || or && could lead to a stack explosion.
  llvm-svn: 222384
* Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool>
  [David Blaikie, 2014-11-19, 1 file changed, -1/+1]
  This is to be consistent with StringSet and ultimately with the standard library's associative container insert function. This led to updating SmallSet::insert to return pair<iterator, bool>, and then to updating SmallPtrSet::insert to return pair<iterator, bool>, and then to updating all the existing users of those functions...
  llvm-svn: 222334
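  Editor's aside: the practical effect for callers, shown with a standard-library container (which is exactly the convention these LLVM containers are being aligned with): insert() reports via the returned pair's bool whether the element was newly added.

      #include <cassert>
      #include <set>

      int main() {
        std::set<int> Visited;

        // insert() returns pair<iterator, bool>; the bool is true only if the
        // element was not already present.
        auto R1 = Visited.insert(42);
        assert(R1.second && "first insertion adds the element");

        auto R2 = Visited.insert(42);
        assert(!R2.second && "second insertion finds the existing element");

        (void)R1; (void)R2;  // silence unused warnings in NDEBUG builds
        return 0;
      }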
* SimplifyCFG: Range'ify some for-loops. No functional change.
  [Hans Wennborg, 2014-11-18, 1 file changed, -12/+13]
  llvm-svn: 222215

* [SimplifyCFG] Make the value type of the hole-check bitmask a power of 2.
  [Juergen Ributzka, 2014-11-17, 1 file changed, -2/+5]
  When converting a switch to a lookup table we might have to generate a bitmask to encode and check for holes in the original switch statement. The type of this mask depends on the number of switch statements, which can result in illegal types for pretty much all architectures. To avoid unnecessary type legalization and to help FastISel, this commit increases the size of the bitmask to the next power-of-2 value when necessary.

  This fixes rdar://problem/18984639.
  llvm-svn: 222168
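  Editor's aside: a sketch of the widening step described in the entry above (illustrative only, not the LLVM code): round the required mask width up to the next power of two so the hole-check bitmask gets a type like i8/i16/i32/i64 instead of an odd width such as i6.

      #include <cstdint>

      // Round a required mask width (roughly, the number of table entries)
      // up to the next power of two.
      uint64_t nextPowerOfTwo(uint64_t N) {
        uint64_t P = 1;
        while (P < N)
          P <<= 1;
        return P;
      }
      // e.g. a 6-bit mask becomes 8 bits, a 33-bit mask becomes 64 bits,
      // avoiding type legalization of illegal integer widths.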
* Optimize switch lookup tables with linear mapping.
  [Erik Eckstein, 2014-11-17, 1 file changed, -1/+58]
  This is a simple optimization for switch table lookup: it computes the output value directly with an (optional) mul and add if there is a linear mapping between index and output. Example:

      int f1(int x) {
        switch (x) {
          case 0: return 10;
          case 1: return 11;
          case 2: return 12;
          case 3: return 13;
        }
        return 0;
      }

  generates:

      define i32 @f1(i32 %x) #0 {
      entry:
        %0 = icmp ult i32 %x, 4
        br i1 %0, label %switch.lookup, label %return

      switch.lookup:
        %switch.offset = add i32 %x, 10
        ret i32 %switch.offset

      return:
        ret i32 0
      }

  llvm-svn: 222121
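  Editor's aside: a sketch of the linear-mapping check (not the LLVM implementation; it assumes the table values have already been collected as plain integers). The table can be replaced by value = Base + index * Delta when successive entries differ by a constant.

      #include <cstdint>
      #include <optional>
      #include <vector>

      struct LinearMap { int64_t Base; int64_t Delta; };  // value = Base + i*Delta

      // Returns (Base, Delta) if the table is a linear function of the index,
      // so the lookup can be replaced by an add (and an optional mul).
      std::optional<LinearMap> matchLinearMapping(const std::vector<int64_t> &T) {
        if (T.size() < 2)
          return std::nullopt;
        int64_t Delta = T[1] - T[0];
        for (size_t I = 2; I < T.size(); ++I)
          if (T[I] - T[I - 1] != Delta)
            return std::nullopt;
        return LinearMap{T[0], Delta};
      }
      // For the f1 example above, {10, 11, 12, 13} gives Base = 10, Delta = 1,
      // so the lookup reduces to "x + 10", matching the generated IR.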
* Revert "IR: MDNode => Value"Duncan P. N. Exon Smith2014-11-111-4/+4
| | | | | | | | | | | | | | | | | Instead, we're going to separate metadata from the Value hierarchy. See PR21532. This reverts commit r221375. This reverts commit r221373. This reverts commit r221359. This reverts commit r221167. This reverts commit r221027. This reverts commit r221024. This reverts commit r221023. This reverts commit r220995. This reverts commit r220994. llvm-svn: 221711
* IR: MDNode => Value: Instruction::getMetadata()Duncan P. N. Exon Smith2014-11-011-4/+4
| | | | | | | | | | Change `Instruction::getMetadata()` to return `Value` as part of PR21433. Update most callers to use `Instruction::getMDNode()`, which wraps the result in a `cast_or_null<MDNode>`. llvm-svn: 221024
* Preserving 'nonnull' metadata in SimplifyCFGPhilip Reames2014-10-221-1/+4
| | | | | | | | | | When we hoist two loads above an if, we can preserve the nonnull metadata. We could also do the same for sinking them, but we appear to not handle metadata at all in that case. Thanks to Hal for the review. Differential Revision: http://reviews.llvm.org/D5910 llvm-svn: 220392
* Switch to select optimization for two-case switches
  [Marcello Maggioni, 2014-10-14, 1 file changed, -0/+170]
  This is the same optimization as r219223, with modifications to support PHIs with multiple incoming edges from the same block and a test to check that this condition is handled.
  llvm-svn: 219656

* Revert r219223, it creates invalid PHI nodes.
  [Joerg Sonnenberger, 2014-10-12, 1 file changed, -170/+0]
  llvm-svn: 219587

* SimplifyCFG: Don't convert phis into selects if we could remove undef behavior instead
  [Arnold Schwaighofer, 2014-10-10, 1 file changed, -0/+13]
  We used to transform this:

      define void @test6(i1 %cond, i8* %ptr) {
      entry:
        br i1 %cond, label %bb1, label %bb2
      bb1:
        br label %bb2
      bb2:
        %ptr.2 = phi i8* [ %ptr, %entry ], [ null, %bb1 ]
        store i8 2, i8* %ptr.2, align 8
        ret void
      }

  into this:

      define void @test6(i1 %cond, i8* %ptr) {
        %ptr.2 = select i1 %cond, i8* null, i8* %ptr
        store i8 2, i8* %ptr.2, align 8
        ret void
      }

  because the simplifycfg transformation into selects would happen to run before the simplifycfg transformation that removes unreachable control flow (we have 'unreachable control flow' due to the store to null, which is undefined behavior).

  The existing transformation that removes unreachable control flow in simplifycfg is:

      /// If BB has an incoming value that will always trigger undefined behavior
      /// (eg. null pointer dereference), remove the branch leading here.
      static bool removeUndefIntroducingPredecessor(BasicBlock *BB)

  Now we generate:

      define void @test6(i1 %cond, i8* %ptr) {
        store i8 2, i8* %ptr.2, align 8
        ret void
      }

  I did not see any impact on the test-suite + externals.

  rdar://18596215
  llvm-svn: 219462

* Two case switch to select optimization
  [Marcello Maggioni, 2014-10-07, 1 file changed, -0/+170]
  This optimization tries to convert switch instructions that are used to select a value with only 2 unique cases + a default block into a select or a couple of selects (depending on whether the default block is reachable or not). The typical case this optimization wants to be able to optimize is this one:

      switch (a) {                      %0 = icmp eq i32 %a, 10
      case 10:                          %1 = select i1 %0, i32 10, i32 4
        return 10;             ---->    %2 = icmp eq i32 %a, 20
      case 20:                          %3 = select i1 %2, i32 2, i32 %1
        return 2;
      default:
        return 4;
      }

  It also sets the base for further optimizations that are planned and being reviewed.
  llvm-svn: 219223
* [SimplifyCFG] threshold for folding branches with common destination
  [Jingyue Wu, 2014-09-30, 1 file changed, -66/+80]
  Summary:
  This patch adds a threshold that controls the number of bonus instructions allowed for folding branches with common destination. The original code allows at most one bonus instruction. With this patch, users can customize the threshold to allow multiple bonus instructions. The default threshold is still 1, so that the code behaves the same as before when users do not specify this threshold.

  The motivation of this change is that tuning this threshold significantly (up to 25%) improves the performance of some CUDA programs in our internal code base. In general, branch instructions are very expensive for GPU programs. Therefore, it is sometimes worth trading more arithmetic computation for a more straightened control flow. Here's a reduced example:

      __global__ void foo(int a, int b, int c, int d, int e, int n,
                          const int *input, int *output) {
        int sum = 0;
        for (int i = 0; i < n; ++i)
          sum += (((i ^ a) > b) && (((i | c) ^ d) > e)) ? 0 : input[i];
        *output = sum;
      }

  The select statement in the loop body translates to two branch instructions "if ((i ^ a) > b)" and "if (((i | c) ^ d) > e)" which share a common destination. With the default threshold, SimplifyCFG is unable to fold them, because computing the condition of the second branch "(i | c) ^ d > e" requires two bonus instructions. With the threshold increased, SimplifyCFG can fold the two branches so that the loop body contains only one branch, making the code conceptually look like:

      sum += (((i ^ a) > b) & (((i | c) ^ d) > e)) ? 0 : input[i];

  Increasing the threshold significantly improves the performance of this particular example. In the configuration where both conditions are guaranteed to be true, increasing the threshold from 1 to 2 improves the performance by 18.24%. Even in the configuration where the first condition is false and the second condition is true, which favors shortcuts, increasing the threshold from 1 to 2 still improves the performance by 4.35%.

  We are still looking for a good threshold and maybe a better cost model than just counting the number of bonus instructions. However, according to the above numbers, we think it is at least worth adding a threshold to enable more experiments and tuning. Let me know what you think. Thanks!

  Test Plan: Added one test case to check the threshold is in effect.

  Reviewers: nadav, eliben, meheff, resistor, hfinkel
  Reviewed By: hfinkel
  Subscribers: hfinkel, llvm-commits
  Differential Revision: http://reviews.llvm.org/D5529
  llvm-svn: 218711

* Remove dead code in SimplifyCFG
  [Jingyue Wu, 2014-09-15, 1 file changed, -43/+0]
  Summary: UsedByBranch is always true according to how BonusInst is defined.

  Test Plan: Passes check-all, and also verified that

      if (BonusInst && !UsedByBranch) { ... }

  is never entered during check-all.

  Reviewers: resistor, nadav, jingyue
  Reviewed By: jingyue
  Subscribers: llvm-commits, eliben, meheff
  Differential Revision: http://reviews.llvm.org/D5324
  llvm-svn: 217824
* Make use of @llvm.assume in ValueTracking (computeKnownBits, etc.)
  [Hal Finkel, 2014-09-07, 1 file changed, -29/+32]
  This change, which allows @llvm.assume to be used from within computeKnownBits (and other associated functions in ValueTracking), adds some (optional) parameters to computeKnownBits and friends. These functions now (optionally) take a "context" instruction pointer, an AssumptionTracker pointer, and also a DomTree pointer, and most of the changes are just to pass this new information when it is easily available from InstSimplify, InstCombine, etc.

  As explained below, the significant conceptual change is that known properties of a value might depend on the control-flow location of the use (because we care that the @llvm.assume dominates the use because assumptions have control-flow dependencies). This means that, when we ask if bits are known in a value, we might get different answers for different uses.

  The significant changes are all in ValueTracking. Two main changes: First, as with the rest of the code, new parameters need to be passed around. To make this easier, I grouped them into a structure, and I made internal static versions of the relevant functions that take this structure as a parameter. The new code does as you might expect, it looks for @llvm.assume calls that make use of the value we're trying to learn something about (often indirectly), attempts to pattern match that expression, and uses the result if successful. By making use of the AssumptionTracker, the process of finding @llvm.assume calls is not expensive. Part of the structure being passed around inside ValueTracking is a set of already-considered @llvm.assume calls. This is to prevent a query using, for example, the assume(a == b), to recurse on itself.

  The context and DT params are used to find applicable assumptions. An assumption needs to dominate the context instruction, or come after it deterministically. In this latter case we only handle the specific case where both the assumption and the context instruction are in the same block, and we need to exclude assumptions from being used to simplify their own ephemeral values (those which contribute only to the assumption) because otherwise the assumption would prove its feeding comparison trivial and would be removed.

  This commit adds the plumbing and the logic for a simple masked-bit propagation (just enough to write a regression test). Future commits add more patterns (and, correspondingly, more regression tests).
  llvm-svn: 217342
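  Editor's aside: very roughly, and ignoring the dominance, ephemeral-value, and pattern-matching machinery the entry describes, the new query shape is "given a use site, fold whatever the applicable assumptions imply into the known bits". This is an illustrative sketch with invented types, not ValueTracking's real interface.

      #include <cstdint>
      #include <vector>

      // A recorded assumption of the form "(V & Mask) == Bits" (invented shape).
      struct MaskedBitsAssumption {
        const void *V;      // the value the assumption talks about
        uint64_t Mask;      // which bits the assumption pins down
        uint64_t Bits;      // what those bits are known to equal
        bool DominatesUse;  // stand-in for the dominance/context check
      };

      struct KnownBits64 { uint64_t KnownZero = 0, KnownOne = 0; };

      // Fold every applicable assumption into the known-bits result for V.
      KnownBits64 applyAssumptions(const void *V,
                                   const std::vector<MaskedBitsAssumption> &Assumes) {
        KnownBits64 K;
        for (const auto &A : Assumes) {
          if (A.V != V || !A.DominatesUse)
            continue;  // only assumptions that hold at this use may be used
          K.KnownOne  |= A.Bits & A.Mask;
          K.KnownZero |= ~A.Bits & A.Mask;
        }
        return K;
      }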
* Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size.
  [Craig Topper, 2014-08-21, 1 file changed, -1/+1]
  llvm-svn: 216158

* Revert "Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size."
  [Craig Topper, 2014-08-18, 1 file changed, -1/+1]
  Getting a weird buildbot failure that I need to investigate.
  llvm-svn: 215870

* Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size.
  [Craig Topper, 2014-08-17, 1 file changed, -1/+1]
  llvm-svn: 215868
* Introduce a helper to combine instruction metadata.
  [Rafael Espindola, 2014-08-15, 1 file changed, -0/+8]
  Replace the old code in GVN and BBVectorize with it. Update SimplifyCFG to use it.

  Patch by Björn Steinbrink!
  llvm-svn: 215723
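  Editor's aside: conceptually, combining metadata when one instruction replaces two means keeping only the kinds that remain safe for the merged instruction. The sketch below uses invented names and shows only the conservative baseline, keeping a kind when both instructions carry the same node; in practice some kinds can instead be merged (for example, by taking a union of ranges). It is not the helper's actual signature.

      #include <map>
      #include <string>

      struct MDNode;  // stand-in for llvm::MDNode
      using MDMap = std::map<std::string, MDNode *>;  // metadata kind -> node

      // When instruction K replaces both K and J (e.g. after hoisting), a kind
      // may survive only if both instructions agreed on it; otherwise keeping
      // it could assert something J never guaranteed.
      MDMap combineMetadataSketch(const MDMap &K, const MDMap &J) {
        MDMap Out;
        for (const auto &[Kind, Node] : K) {
          auto It = J.find(Kind);
          if (It != J.end() && It->second == Node)
            Out[Kind] = Node;  // both agree: safe to keep
        }
        return Out;
      }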
* [SimplifyCFG] fix accessing deleted PHINodes in switch-to-table conversion.
  [Manman Ren, 2014-08-02, 1 file changed, -1/+4]
  When we have a covered lookup table, make sure we don't delete PHINodes that are cached in PHIs.

  rdar://17887153
  llvm-svn: 214642

* SimplifyCFG: Avoid miscompilations due to removed lifetime intrinsics.
  [Rafael Espindola, 2014-07-30, 1 file changed, -1/+1]
  The lifetime intrinsics need some work in order to make it clear which optimizations are or are not valid. For now, dropping this optimization avoids a miscompilation.

  Patch by Björn Steinbrink.
  llvm-svn: 214336

* Feedback from Hans on r213815. No functionality change.
  [Manman Ren, 2014-07-24, 1 file changed, -10/+11]
  llvm-svn: 213895
* Fixing an MSVC conversion warning about implicitly converting the shift results to 64-bits. No functional change intended.
  [Aaron Ballman, 2014-07-24, 1 file changed, -1/+1]
  llvm-svn: 213863

* SimplifyCFG: fix a bug in switch to table conversion
  [Manman Ren, 2014-07-23, 1 file changed, -4/+13]
  We use gep to access the global array "switch.table", and the table index should be treated as unsigned. When the highest bit is 1, this commit zero-extends the index to an integer type with larger size.

  For a switch on i2, we used to generate:

      %switch.tableidx = sub i2 %0, -2
      getelementptr inbounds [4 x i64]* @switch.table, i32 0, i2 %switch.tableidx

  It is incorrect when %switch.tableidx is 2 or 3. The fix is to generate:

      %switch.tableidx = sub i2 %0, -2
      %switch.tableidx.zext = zext i2 %switch.tableidx to i3
      getelementptr inbounds [4 x i64]* @switch.table, i32 0, i3 %switch.tableidx.zext

  rdar://17735071
  llvm-svn: 213815
* Revert "[C++11] Add predecessors(BasicBlock *) / successors(BasicBlock *) ↵Duncan P. N. Exon Smith2014-07-211-23/+25
| | | | | | | | | iterator ranges." This reverts commit r213474 (and r213475), which causes a miscompile on a stage2 LTO build. I'll reply on the list in a moment. llvm-svn: 213562
* [C++11] Add predecessors(BasicBlock *) / successors(BasicBlock *) iterator ↵Manuel Jacob2014-07-201-25/+23
| | | | | | | | | | | | | | | | | | ranges. Summary: This patch introduces two new iterator ranges and updates existing code to use it. No functional change intended. Test Plan: All tests (make check-all) still pass. Reviewers: dblaikie Reviewed By: dblaikie Subscribers: llvm-commits Differential Revision: http://reviews.llvm.org/D4481 llvm-svn: 213474
* Feeding isSafeToSpeculativelyExecute its DataLayout pointerHal Finkel2014-07-101-25/+27
| | | | | | | | | | | | | | isSafeToSpeculativelyExecute can optionally take a DataLayout pointer. In the past, this was mainly used to make better decisions regarding divisions known not to trap, and so was not all that important for users concerned with "cheap" instructions. However, now it also helps look through bitcasts for dereferencable loads, and will also be important if/when we add a dereferencable pointer attribute. This is some initial work to feed a DataLayout pointer through to callers of isSafeToSpeculativelyExecute, generally where one was already available. llvm-svn: 212720
* Fix for PR17073 (http://llvm.org/pr17073): simplifycfg illegally hoists an operation in a phi node that can trap.
  [Sanjay Patel, 2014-07-07, 1 file changed, -3/+20]
  This patch adds to an existing loop over phi nodes in SimplifyCondBranchToCondBranch() to check for trapping ops and bails out of the optimization if we find one of those. The test cases verify that trapping ops are not hoisted and non-trapping ops are still optimized as expected.
  llvm-svn: 212490
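  Editor's aside: a small illustration of why hoisting a trapping operation past its guard is unsafe (hypothetical C++; not the PR's actual test case).

      // Speculating the division ahead of the branch, e.g. to feed a select,
      // makes it execute even when d == 0, which can trap even though the
      // original program never divided on that path. Only operations that
      // cannot trap (or are otherwise known safe to speculate) may be hoisted.
      int safeOnlyWhenGuarded(int d) {
        int r;
        if (d != 0)
          r = 100 / d;  // may trap, e.g. integer divide-by-zero
        else
          r = 0;
        return r;
      }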
* fixed some typos in comments
  [Sanjay Patel, 2014-07-06, 1 file changed, -4/+4]
  llvm-svn: 212423

* Minor stylistic fix in SimplifyCFG (test commit)
  [Marcello Maggioni, 2014-07-03, 1 file changed, -1/+2]
  llvm-svn: 212259

* Don't build switch tables for dllimport and TLS variables in GEPs
  [Hans Wennborg, 2014-06-26, 1 file changed, -2/+3]
  This is a follow-up to r211331, which failed to notice that we were returning early from ValidLookupTableConstant for GEPs.
  llvm-svn: 211753
* Don't build switch lookup tables for dllimport or TLS variables
  [Hans Wennborg, 2014-06-20, 1 file changed, -0/+4]
  We would previously put dllimport variables in switch lookup tables, which doesn't work because the address cannot be used in a constant initializer. This is basically the same problem that we have in PR19955.

  Putting TLS variables in switch tables also doesn't work, because the address of such a variable is not constant.

  Differential Revision: http://reviews.llvm.org/D4220
  llvm-svn: 211331
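  Editor's aside: a hypothetical illustration of why those addresses cannot sit in a lookup table. A switch lookup table is a constant global array, and neither a dllimport address nor a TLS address is a constant suitable for a static initializer. Names are invented; the macro only keeps the sketch compilable outside MSVC.

      #if defined(_MSC_VER)
      #define IMPORTED __declspec(dllimport)
      #else
      #define IMPORTED
      #endif

      IMPORTED extern int ImportedVar;  // address fixed up by the loader
      thread_local int PerThreadVar;    // a different address in every thread
      int PlainVar;

      // Fine: &PlainVar is a link-time constant, so it may appear in the kind
      // of constant global array SimplifyCFG emits for a switch lookup table.
      int *const OkTable[] = {&PlainVar};

      // Not fine: neither address is a valid constant initializer (a dllimport
      // address is only known at load time, and a TLS "address" is not a single
      // address at all), so such values must be excluded when deciding whether
      // a lookup table can be built.
      // int *const BadTable[] = {&ImportedVar, &PerThreadVar};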