path: root/llvm/lib/Transforms/Utils/SimplifyCFG.cpp
Commit history (commit message, author, date, files changed, lines changed):
* SimplifyCFG: Correctly handle switch lookup tables which fully cover the input type and use bit tests to check for holes (Hans Wennborg, 2015-04-24; 1 file, -4/+9)

  When using bit tests for hole checks, we call AddPredecessorToBlock to give the phi node a value from the bit test block. This would break if we've previously called removePredecessor on the default destination because the switch is fully covered.

  Test case by Mark Lacey.

  llvm-svn: 235771
* Fix typo. (Mark Lacey, 2015-04-12; 1 file, -1/+1)

  llvm-svn: 234706
* [opaque pointer type] More GEP IRBuilder API migrations... (David Blaikie, 2015-04-03; 1 file, -2/+2)

  llvm-svn: 234058
* Merge empty landing pads in SimplifyCFG (Philip Reames, 2015-03-24; 1 file, -0/+85)

  This patch tries to merge duplicate landing pads when they branch to a common shared target. Given IR that looks like this:

      lpad1:
        %exn = landingpad {i8*, i32} personality i32 (...)* @__gxx_personality_v0
                cleanup
        br label %shared_resume
      lpad2:
        %exn2 = landingpad {i8*, i32} personality i32 (...)* @__gxx_personality_v0
                cleanup
        br label %shared_resume
      shared_resume:
        call void @fn()
        ret void
      }

  We can rewrite the users of both landing pad blocks to use one of them. This will generally allow the shared_resume block to be merged with the common landing pad as well.

  Without this change, tail duplication would likely kick in, creating N (2 in this case) copies of the shared_resume basic block.

  Differential Revision: http://reviews.llvm.org/D8297

  llvm-svn: 233125
* [ConstantRange] Split makeICmpRegion in two. (Sanjoy Das, 2015-03-18; 1 file, -2/+2)

  Summary: This change splits `makeICmpRegion` into `makeAllowedICmpRegion` and `makeSatisfyingICmpRegion` with slightly different contracts. The first one is useful for determining what values some expression //may// take, given that a certain `icmp` evaluates to true. The second one is useful for determining what values are guaranteed to //satisfy// a given `icmp`.

  Reviewers: nlewycky
  Reviewed By: nlewycky
  Subscribers: llvm-commits
  Differential Revision: http://reviews.llvm.org/D8345

  llvm-svn: 232575
* DataLayout is mandatory, update the API to reflect it with references. (Mehdi Amini, 2015-03-10; 1 file, -140/+124)

  Summary: Now that the DataLayout is a mandatory part of the module, let's start cleaning the codebase. This patch is a first attempt at doing that.

  This patch is not exactly NFC as, for instance, some places were passing a nullptr instead of the DataLayout, possibly just because there was a default value on the DataLayout argument to many functions in the API. Even though it is not purely NFC, there is no change in the validation.

  I turned as many pointers to DataLayout into references as possible, which helped figure out all the places where a nullptr could come up.

  I initially had a local version of this patch broken into over 30 independent commits, but some later commits were cleaning the API and touching part of the code modified in the previous commits, so it seemed cleaner without the intermediate state.

  Test Plan:
  Reviewers: echristo
  Subscribers: llvm-commits
  From: Mehdi Amini <mehdi.amini@apple.com>

  llvm-svn: 231740
* Prevent hoisting fmul from THEN/ELSE to IF if there is fmsub/fmadd opportunity. (Chad Rosier, 2015-02-23; 1 file, -2/+6)

  This patch adds the isProfitableToHoist API. For AArch64, we want to prevent a fmul from being hoisted in cases where it is more profitable to form a fmsub/fmadd.

  Phabricator Review: http://reviews.llvm.org/D7299
  Patch by Lawrence Hu <lawrence@codeaurora.org>

  llvm-svn: 230241
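  A minimal C++ sketch of how a hoisting transform can consult the isProfitableToHoist() hook this commit introduces; the surrounding helper and its extra side-effect check are illustrative, not the commit's code.

      #include "llvm/Analysis/TargetTransformInfo.h"
      #include "llvm/IR/Instruction.h"

      using namespace llvm;

      // Ask the target before speculating an instruction out of a then/else
      // block; e.g. AArch64 can veto hoisting an fmul that it would rather
      // keep next to its fadd/fsub so the pair can fuse into fmadd/fmsub.
      static bool mayHoist(Instruction *I, const TargetTransformInfo &TTI) {
        if (I->mayHaveSideEffects()) // never speculate side effects
          return false;
        return TTI.isProfitableToHoist(I);
      }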
* Demote vectors to arrays. No functionality change. (Benjamin Kramer, 2015-02-19; 1 file, -7/+5)

  llvm-svn: 229861
* Removing LLVM_DELETED_FUNCTION, as MSVC 2012 was the last reason for requiring the macro. NFC; LLVM edition. (Aaron Ballman, 2015-02-15; 1 file, -3/+2)

  llvm-svn: 229340
* [SimplifyCFG] Be more aggressive (James Molloy, 2015-02-13; 1 file, -2/+6)

  Up the phi node folding threshold from a cheap "1" to a meagre "2". Update tests for extra added selects and slight code churn.

  llvm-svn: 229099
* [SimplifyCFG] Swap to using TargetTransformInfo for cost analysis. (James Molloy, 2015-02-11; 1 file, -50/+28)

  We're already using TTI in SimplifyCFG, so remove the hard-baked "cheapness" heuristic and use TTI directly. Generally NFC intended, but we're using a slightly different heuristic now so there is a slight test churn.

  Test changes:
    * combine-comparisons-by-cse.ll: Removed unneeded branch check.
    * 2014-08-04-muls-it.ll: Test now doesn't branch but emits muleq.
    * coalesce-subregs.ll: Superfluous block check.
    * 2008-01-02-hoist-fp-add.ll: fadd is safe to speculate. Change to udiv.
    * PhiBlockMerge.ll: Superfluous CFG checking code. Main checks still present.
    * select-gep.ll: A variable GEP is not expensive, just TCC_Basic, according to the TTI.

  llvm-svn: 228826
* SimplifyCFG: Omit range checks for switch lookup tables when default is unreachable (Hans Wennborg, 2015-01-26; 1 file, -7/+8)

  The range checks would get optimized away later, but we might as well not emit them in the first place.

  http://reviews.llvm.org/D6471

  llvm-svn: 227126
* SimplifyCFG: don't remove unreachable default switch destinations (Hans Wennborg, 2015-01-26; 1 file, -89/+92)

  An unreachable default destination can be exploited by other optimizations and allows for more efficient lowering. Both the SDag switch lowering and LowerSwitch can exploit unreachable defaults.

  Also make TurnSwitchRangeICmp handle switches with unreachable default. This is kind of a separate change, but it cannot be tested without the change above, and I don't want to land the change above without this since that would regress other tests.

  Differential Revision: http://reviews.llvm.org/D6471

  llvm-svn: 227125
* Revert "Don't remove a landing pad if the invoke requires a table entry."Reid Kleckner2015-01-221-17/+3
| | | | | | | | | | | | This reverts commit r176827. Björn Steinbrink pointed out that this didn't actually fix the bug (PR15555) it was attempting to fix. With this reverted, we can now remove landingpad cleanups that immediately resume unwinding, converting the invoke to a call. llvm-svn: 226850
* fix {typo, build failure} in r225760 (Ramkumar Ramachandra, 2015-01-13; 1 file, -1/+1)

  llvm-svn: 225762
* Standardize {pred,succ,use,user}_empty() (Ramkumar Ramachandra, 2015-01-13; 1 file, -3/+3)

  The functions {pred,succ,use,user}_{begin,end} exist, but many users have to compare *_begin() with *_end() by hand to determine whether the BasicBlock or User is empty. Fix this with a standard *_empty(), demonstrating a few use cases.

  llvm-svn: 225760
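  A minimal C++ sketch of the cleanup this commit enables; the helper below is illustrative, not code from the commit.

      #include "llvm/IR/BasicBlock.h"
      #include "llvm/IR/CFG.h"
      #include "llvm/IR/Function.h"

      using namespace llvm;

      // Before: pred_begin(BB) == pred_end(BB). After: the intent reads
      // directly off the predicate name. The entry block legitimately has no
      // predecessors, so it is excluded.
      static bool hasNoPredecessors(BasicBlock *BB) {
        return BB != &BB->getParent()->getEntryBlock() && pred_empty(BB);
      }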
* SimplifyCFG: check uses of constant-foldable instrs in switch destinations (PR20210) (Hans Wennborg, 2015-01-09; 1 file, -6/+15)

  The previous code assumed that such instructions could not have any uses outside CaseDest, with the motivation that the instruction could not dominate CommonDest because CommonDest has phi nodes in it. That simply isn't true; e.g., CommonDest could have an edge back to itself.

  llvm-svn: 225552
* [PM] Split the AssumptionTracker immutable pass into two separate APIs: a cache of assumptions for a single function, and an immutable pass that manages those caches. (Chandler Carruth, 2015-01-04; 1 file, -34/+34)

  The motivation for this change is twofold. Immutable analyses are really hacks around the current pass manager design and don't exist in the new design. This is usually OK, but it requires that the core logic of an immutable pass be reasonably partitioned off from the pass logic. This change does precisely that. As a consequence it also paves the way for the *many* utility functions that deal in the assumptions to live in both pass manager worlds by creating a separate non-pass object with its own independent API that they all rely on. Now, the only bits of the system that deal with the actual pass mechanics are those that actually need to deal with the pass mechanics.

  Once this separation is made, several simplifications become pretty obvious in the assumption cache itself. Rather than using a set and callback value handles, it can just be a vector of weak value handles. The callers can easily skip the handles that are null, and eventually we can wrap all of this up behind a filter iterator.

  For now, this adds boilerplate to the various passes, but this kind of boilerplate will end up making it possible to port these passes to the new pass manager, and so it will end up factored away pretty reasonably.

  llvm-svn: 225131
* [SimplifyCFG] Revise common code sinking (Michael Liao, 2014-12-23; 1 file, -30/+30)

  - Fix the case where more than one common instruction derived from the same operand cannot be sunk. When a pair of values has more than one derived value in both branches, only one derived value could be sunk.
  - Replace the BB1 -> (BB2, PN) map with a joint value map, i.e. a map of (BB1, BB2) -> PN, which tracks common ops more accurately.

  llvm-svn: 224757
* IR: Split Metadata from Value (Duncan P. N. Exon Smith, 2014-12-09; 1 file, -5/+6)

  Split `Metadata` away from the `Value` class hierarchy, as part of PR21532. Assembly and bitcode changes are in the wings, but this is the bulk of the change for the IR C++ API.

  I have a follow-up patch prepared for `clang`. If this breaks other sub-projects, I apologize in advance :(. Help me compile it on Darwin I'll try to fix it. FWIW, the errors should be easy to fix, so it may be simpler to just fix it yourself.

  This breaks the build for all metadata-related code that's out-of-tree. Rest assured the transition is mechanical and the compiler should catch almost all of the problems.

  Here's a quick guide for updating your code:

  - `Metadata` is the root of a class hierarchy with three main classes: `MDNode`, `MDString`, and `ValueAsMetadata`. It is distinct from the `Value` class hierarchy. It is typeless -- i.e., instances do *not* have a `Type`.

  - `MDNode`'s operands are all `Metadata *` (instead of `Value *`).

  - `TrackingVH<MDNode>` and `WeakVH` referring to metadata can be replaced with `TrackingMDNodeRef` and `TrackingMDRef`, respectively. If you're referring solely to resolved `MDNode`s -- post graph construction -- just use `MDNode*`.

  - `MDNode` (and the rest of `Metadata`) have only limited support for `replaceAllUsesWith()`. As long as an `MDNode` is pointing at a forward declaration -- the result of `MDNode::getTemporary()` -- it maintains a side map of its uses and can RAUW itself. Once the forward declarations are fully resolved RAUW support is dropped on the ground. This means that uniquing collisions on changing operands cause nodes to become "distinct". (This already happened fairly commonly, whenever an operand went to null.) If you're constructing complex (non self-reference) `MDNode` cycles, you need to call `MDNode::resolveCycles()` on each node (or on a top-level node that somehow references all of the nodes). Also, don't do that. Metadata cycles (and the RAUW machinery needed to construct them) are expensive.

  - An `MDNode` can only refer to a `Constant` through a bridge called `ConstantAsMetadata` (one of the subclasses of `ValueAsMetadata`). As a side effect, accessing an operand of an `MDNode` that is known to be, e.g., `ConstantInt`, takes three steps: first, cast from `Metadata` to `ConstantAsMetadata`; second, extract the `Constant`; third, cast down to `ConstantInt`. The eventual goal is to introduce `MDInt`/`MDFloat`/etc. and have metadata schema owners transition away from using `Constant`s when the type isn't important (and they don't care about referring to `GlobalValue`s). In the meantime, I've added transitional API to the `mdconst` namespace that matches semantics with the old code, in order to avoid adding the error-prone three-step equivalent to every call site.

    If your old code was:

        MDNode *N = foo();
        bar(isa             <ConstantInt>(N->getOperand(0)));
        baz(cast            <ConstantInt>(N->getOperand(1)));
        bak(cast_or_null    <ConstantInt>(N->getOperand(2)));
        bat(dyn_cast        <ConstantInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<ConstantInt>(N->getOperand(4)));

    you can trivially match its semantics with:

        MDNode *N = foo();
        bar(mdconst::hasa               <ConstantInt>(N->getOperand(0)));
        baz(mdconst::extract            <ConstantInt>(N->getOperand(1)));
        bak(mdconst::extract_or_null    <ConstantInt>(N->getOperand(2)));
        bat(mdconst::dyn_extract        <ConstantInt>(N->getOperand(3)));
        bay(mdconst::dyn_extract_or_null<ConstantInt>(N->getOperand(4)));

    and when you transition your metadata schema to `MDInt`:

        MDNode *N = foo();
        bar(isa             <MDInt>(N->getOperand(0)));
        baz(cast            <MDInt>(N->getOperand(1)));
        bak(cast_or_null    <MDInt>(N->getOperand(2)));
        bat(dyn_cast        <MDInt>(N->getOperand(3)));
        bay(dyn_cast_or_null<MDInt>(N->getOperand(4)));

  - A `CallInst` -- specifically, intrinsic instructions -- can refer to metadata through a bridge called `MetadataAsValue`. This is a subclass of `Value` where `getType()->isMetadataTy()`. `MetadataAsValue` is the *only* class that can legally refer to a `LocalAsMetadata`, which is a bridged form of non-`Constant` values like `Argument` and `Instruction`. It can also refer to any other `Metadata` subclass.

  (I'll break all your testcases in a follow-up commit, when I propagate this change to assembly.)

  llvm-svn: 223802
* Revert "Move function to obtain branch weights into the BranchInst class. NFC."Juergen Ributzka2014-12-091-6/+26
| | | | | | This reverts commit r223784 and copies the 'ExtractBranchMetadata' to CodeGenPrepare. llvm-svn: 223795
* Move function to obtain branch weights into the BranchInst class. NFC. (Juergen Ributzka, 2014-12-09; 1 file, -26/+6)

  Make this function available to other parts of LLVM.

  llvm-svn: 223784
* Revert r223049, r223050 and r223051 while investigating test failures. (Hans Wennborg, 2014-12-01; 1 file, -8/+56)

  I didn't foresee this affecting the Clang test suite :/

  llvm-svn: 223054
* SimplifyCFG: Omit range checks for switch lookup tables when default is unreachable (Hans Wennborg, 2014-12-01; 1 file, -7/+8)

  They would get optimized away later, but we might as well not emit them.

  llvm-svn: 223051
* SimplifyCFG: don't remove unreachable default switch destinations (Hans Wennborg, 2014-12-01; 1 file, -49/+0)

  An unreachable default destination can be exploited by other optimizations, and SDag lowering is now prepared to handle them efficiently. For example, branches to the unreachable destination will be optimized away, such as in the case of range checks for switch lookup tables.

  On 64-bit Linux, this reduces the size of a clang bootstrap by 80 kB (and Chromium by 30 kB).

  llvm-svn: 223050
* reinstate r222872: Peephole optimization in switch table lookup: reuse the guarding table comparison if possible. (Erik Eckstein, 2014-11-27; 1 file, -7/+100)

  Fixed missing dominance check.

  Original commit message:

  This optimization tries to reuse the generated compare instruction, if there is a comparison against the default value after the switch.

  Example:

      if (idx < tablesize)
        r = table[idx]; // table does not contain default_value
      else
        r = default_value;
      if (r != default_value)
        ...

  Is optimized to:

      cond = idx < tablesize;
      if (cond)
        r = table[idx];
      else
        r = default_value;
      if (cond)
        ...

  Jump threading will then eliminate the second if (cond).

  llvm-svn: 222891
* Revert "Peephole optimization in switch table lookup: reuse the guarding ↵Erik Eckstein2014-11-271-89/+7
| | | | | | | | table comparison if possible." It is breaking the clang bootstrag. llvm-svn: 222877
* Peephole optimization in switch table lookup: reuse the guarding table comparison if possible. (Erik Eckstein, 2014-11-27; 1 file, -7/+89)

  This optimization tries to reuse the generated compare instruction, if there is a comparison against the default value after the switch.

  Example:

      if (idx < tablesize)
        r = table[idx]; // table does not contain default_value
      else
        r = default_value;
      if (r != default_value)
        ...

  Is optimized to:

      cond = idx < tablesize;
      if (cond)
        r = table[idx];
      else
        r = default_value;
      if (cond)
        ...

  Jump threading will then eliminate the second if (cond).

  llvm-svn: 222872
* SimplifyCFG: Refactor GatherConstantCompares() result in a struct (Mehdi Amini, 2014-11-20; 1 file, -133/+152)

  Code seems cleaner and easier to understand this way.

  This is basically r222416, after fixes for MSVC's lack of standard support and a bit of cleanup (got rid of a warning). Thanks to Nakamura Takumi and Nico Weber for the MSVC fixes.

  llvm-svn: 222472
* Revert r222416, r222422, r222426: the former revision had problems and fixing them introduced bugs (Timur Iskhodzhanov, 2014-11-20; 1 file, -150/+136)

  llvm-svn: 222428
* Fix a typo (Timur Iskhodzhanov, 2014-11-20; 1 file, -1/+1)

  llvm-svn: 222426
* SimplifyCFG.cpp: Tweak to be msc17-compliant. (NAKAMURA Takumi, 2014-11-20; 1 file, -8/+11)

  - Use LLVM_DELETED_FUNCTION.
  - Don't use member initializers.
  - Don't use an initializer list.

  llvm-svn: 222422
* SimplifyCFG: Refactor GatherConstantCompares() result in a struct (Mehdi Amini, 2014-11-20; 1 file, -136/+147)

  Code seems cleaner and easier to understand this way.

  llvm-svn: 222416
* Try to fix MSVS build after r222384. No intended behavior change. (Nico Weber, 2014-11-19; 1 file, -3/+3)

  llvm-svn: 222386
* SimplifyCFG: turn recursive GatherConstantCompares into iterative (Mehdi Amini, 2014-11-19; 1 file, -99/+138)

  A long sequence of || or && could lead to a stack explosion.

  llvm-svn: 222384
* Update SetVector to rely on the underlying set's insert to return a pair<iterator, bool> (David Blaikie, 2014-11-19; 1 file, -1/+1)

  This is to be consistent with StringSet and ultimately with the standard library's associative container insert function.

  This led to updating SmallSet::insert to return pair<iterator, bool>, then updating SmallPtrSet::insert to return pair<iterator, bool>, and then updating all the existing users of those functions...

  llvm-svn: 222334
* SimplifyCFG: Range'ify some for-loops. No functional change. (Hans Wennborg, 2014-11-18; 1 file, -12/+13)

  llvm-svn: 222215
* [SimplifyCFG] Make the value type of the hole check bitmask a power-of-2. (Juergen Ributzka, 2014-11-17; 1 file, -2/+5)

  When converting a switch to a lookup table we might have to generate a bitmask to encode and check for holes in the original switch statement. The type of this mask depends on the number of switch statements, which can result in illegal types for pretty much all architectures.

  To avoid unnecessary type legalization and to help FastISel, this commit increases the size of the bitmask to the next power-of-2 value when necessary.

  This fixes rdar://problem/18984639.

  llvm-svn: 222168
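  A minimal C++ sketch of the widening described above, using isPowerOf2_64 and NextPowerOf2 from llvm/Support/MathExtras.h; the helper itself is illustrative, not the commit's code.

      #include "llvm/Support/MathExtras.h"
      #include <cstdint>

      // One bit per table entry, but rounded up so odd widths (i3, i5, i6, ...)
      // never reach instruction selection as illegal integer types.
      static unsigned holeMaskBitWidth(uint64_t TableSize) {
        uint64_t Width = TableSize;
        if (!llvm::isPowerOf2_64(Width))
          Width = llvm::NextPowerOf2(Width); // e.g. 6 -> 8
        return static_cast<unsigned>(Width);
      }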
* Optimize switch lookup tables with linear mapping. (Erik Eckstein, 2014-11-17; 1 file, -1/+58)

  This is a simple optimization for switch table lookup: It computes the output value directly with an (optional) mul and add if there is a linear mapping between index and output.

  Example:

      int f1(int x) {
        switch (x) {
          case 0: return 10;
          case 1: return 11;
          case 2: return 12;
          case 3: return 13;
        }
        return 0;
      }

  generates:

      define i32 @f1(i32 %x) #0 {
      entry:
        %0 = icmp ult i32 %x, 4
        br i1 %0, label %switch.lookup, label %return

      switch.lookup:
        %switch.offset = add i32 %x, 10
        ret i32 %switch.offset

      return:
        ret i32 0
      }

  llvm-svn: 222121
* Revert "IR: MDNode => Value"Duncan P. N. Exon Smith2014-11-111-4/+4
| | | | | | | | | | | | | | | | | Instead, we're going to separate metadata from the Value hierarchy. See PR21532. This reverts commit r221375. This reverts commit r221373. This reverts commit r221359. This reverts commit r221167. This reverts commit r221027. This reverts commit r221024. This reverts commit r221023. This reverts commit r220995. This reverts commit r220994. llvm-svn: 221711
* IR: MDNode => Value: Instruction::getMetadata()Duncan P. N. Exon Smith2014-11-011-4/+4
| | | | | | | | | | Change `Instruction::getMetadata()` to return `Value` as part of PR21433. Update most callers to use `Instruction::getMDNode()`, which wraps the result in a `cast_or_null<MDNode>`. llvm-svn: 221024
* Preserving 'nonnull' metadata in SimplifyCFG (Philip Reames, 2014-10-22; 1 file, -1/+4)

  When we hoist two loads above an if, we can preserve the nonnull metadata. We could also do the same for sinking them, but we appear to not handle metadata at all in that case.

  Thanks to Hal for the review.

  Differential Revision: http://reviews.llvm.org/D5910

  llvm-svn: 220392
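  A minimal C++ sketch of the rule the commit describes for hoisting two loads: the load that survives may keep !nonnull only if both originals carried it. The helper is illustrative, not the commit's code.

      #include "llvm/IR/Instructions.h"
      #include "llvm/IR/LLVMContext.h"

      using namespace llvm;

      // Drop !nonnull from the surviving load unless both loads guaranteed a
      // non-null result on their respective paths.
      static void mergeNonNullMetadata(LoadInst &Kept, const LoadInst &Other) {
        if (!Kept.getMetadata(LLVMContext::MD_nonnull) ||
            !Other.getMetadata(LLVMContext::MD_nonnull))
          Kept.setMetadata(LLVMContext::MD_nonnull, nullptr);
      }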
* Switch to select optimization for two-case switches (Marcello Maggioni, 2014-10-14; 1 file, -0/+170)

  This is the same optimization as r219233, with modifications to support PHIs with multiple incoming edges from the same block, and a test to check that this condition is handled.

  llvm-svn: 219656
* Revert r219223, it creates invalid PHI nodes. (Joerg Sonnenberger, 2014-10-12; 1 file, -170/+0)

  llvm-svn: 219587
* SimplifyCFG: Don't convert phis into selects if we could remove undef behavior instead (Arnold Schwaighofer, 2014-10-10; 1 file, -0/+13)

  We used to transform this:

      define void @test6(i1 %cond, i8* %ptr) {
      entry:
        br i1 %cond, label %bb1, label %bb2
      bb1:
        br label %bb2
      bb2:
        %ptr.2 = phi i8* [ %ptr, %entry ], [ null, %bb1 ]
        store i8 2, i8* %ptr.2, align 8
        ret void
      }

  into this:

      define void @test6(i1 %cond, i8* %ptr) {
        %ptr.2 = select i1 %cond, i8* null, i8* %ptr
        store i8 2, i8* %ptr.2, align 8
        ret void
      }

  because the simplifycfg transformation into selects would happen to happen before the simplifycfg transformation that removes unreachable control flow (we have 'unreachable control flow' due to the store to null which is undefined behavior).

  The existing transformation that removes unreachable control flow in simplifycfg is:

      /// If BB has an incoming value that will always trigger undefined behavior
      /// (eg. null pointer dereference), remove the branch leading here.
      static bool removeUndefIntroducingPredecessor(BasicBlock *BB)

  Now we generate:

      define void @test6(i1 %cond, i8* %ptr) {
        store i8 2, i8* %ptr.2, align 8
        ret void
      }

  I did not see any impact on the test-suite + externals.

  rdar://18596215

  llvm-svn: 219462
* Two case switch to select optimization (Marcello Maggioni, 2014-10-07; 1 file, -0/+170)

  This optimization tries to convert switch instructions that are used to select a value with only 2 unique cases + default block to a select or a couple of selects (depending if the default block is reachable or not). The typical case this optimization wants to be able to optimize is this one:

  Example:

      switch (a) {
      case 10:                %0 = icmp eq i32 %a, 10
        return 10;            %1 = select i1 %0, i32 10, i32 4
      case 20:          ----> %2 = icmp eq i32 %a, 20
        return 2;             %3 = select i1 %2, i32 2, i32 %1
      default:
        return 4;
      }

  It also sets the base for further optimizations that are planned and being reviewed.

  llvm-svn: 219223
* [SimplifyCFG] threshold for folding branches with common destination (Jingyue Wu, 2014-09-30; 1 file, -66/+80)

  Summary: This patch adds a threshold that controls the number of bonus instructions allowed for folding branches with common destination. The original code allows at most one bonus instruction. With this patch, users can customize the threshold to allow multiple bonus instructions. The default threshold is still 1, so that the code behaves the same as before when users do not specify this threshold.

  The motivation of this change is that tuning this threshold significantly (up to 25%) improves the performance of some CUDA programs in our internal code base. In general, branch instructions are very expensive for GPU programs. Therefore, it is sometimes worth trading more arithmetic computation for a more straightened control flow. Here's a reduced example:

      __global__ void foo(int a, int b, int c, int d, int e, int n,
                          const int *input, int *output) {
        int sum = 0;
        for (int i = 0; i < n; ++i)
          sum += (((i ^ a) > b) && (((i | c ) ^ d) > e)) ? 0 : input[i];
        *output = sum;
      }

  The select statement in the loop body translates to two branch instructions "if ((i ^ a) > b)" and "if (((i | c) ^ d) > e)" which share a common destination. With the default threshold, SimplifyCFG is unable to fold them, because computing the condition of the second branch "(i | c) ^ d > e" requires two bonus instructions. With the threshold increased, SimplifyCFG can fold the two branches so that the loop body contains only one branch, making the code conceptually look like:

      sum += (((i ^ a) > b) & (((i | c ) ^ d) > e)) ? 0 : input[i];

  Increasing the threshold significantly improves the performance of this particular example. In the configuration where both conditions are guaranteed to be true, increasing the threshold from 1 to 2 improves the performance by 18.24%. Even in the configuration where the first condition is false and the second condition is true, which favors shortcuts, increasing the threshold from 1 to 2 still improves the performance by 4.35%.

  We are still looking for a good threshold and maybe a better cost model than just counting the number of bonus instructions. However, according to the above numbers, we think it is at least worth adding a threshold to enable more experiments and tuning. Let me know what you think. Thanks!

  Test Plan: Added one test case to check the threshold is in effect

  Reviewers: nadav, eliben, meheff, resistor, hfinkel
  Reviewed By: hfinkel
  Subscribers: hfinkel, llvm-commits
  Differential Revision: http://reviews.llvm.org/D5529

  llvm-svn: 218711
* Remove dead code in SimplifyCFG (Jingyue Wu, 2014-09-15; 1 file, -43/+0)

  Summary: UsedByBranch is always true according to how BonusInst is defined.

  Test Plan: Passes check-all, and also verified

      if (BonusInst && !UsedByBranch) { ... }

  is never entered during check-all.

  Reviewers: resistor, nadav, jingyue
  Reviewed By: jingyue
  Subscribers: llvm-commits, eliben, meheff
  Differential Revision: http://reviews.llvm.org/D5324

  llvm-svn: 217824
* Make use of @llvm.assume in ValueTracking (computeKnownBits, etc.) (Hal Finkel, 2014-09-07; 1 file, -29/+32)

  This change, which allows @llvm.assume to be used from within computeKnownBits (and other associated functions in ValueTracking), adds some (optional) parameters to computeKnownBits and friends. These functions now (optionally) take a "context" instruction pointer, an AssumptionTracker pointer, and also a DomTree pointer, and most of the changes are just to pass this new information when it is easily available from InstSimplify, InstCombine, etc.

  As explained below, the significant conceptual change is that known properties of a value might depend on the control-flow location of the use (because we care that the @llvm.assume dominates the use because assumptions have control-flow dependencies). This means that, when we ask if bits are known in a value, we might get different answers for different uses.

  The significant changes are all in ValueTracking. Two main changes: First, as with the rest of the code, new parameters need to be passed around. To make this easier, I grouped them into a structure, and I made internal static versions of the relevant functions that take this structure as a parameter. The new code does as you might expect: it looks for @llvm.assume calls that make use of the value we're trying to learn something about (often indirectly), attempts to pattern match that expression, and uses the result if successful. By making use of the AssumptionTracker, the process of finding @llvm.assume calls is not expensive. Part of the structure being passed around inside ValueTracking is a set of already-considered @llvm.assume calls. This is to prevent a query using, for example, the assume(a == b), from recursing on itself.

  The context and DT params are used to find applicable assumptions. An assumption needs to dominate the context instruction, or come after it deterministically. In this latter case we only handle the specific case where both the assumption and the context instruction are in the same block, and we need to exclude assumptions from being used to simplify their own ephemeral values (those which contribute only to the assumption) because otherwise the assumption would prove its feeding comparison trivial and would be removed.

  This commit adds the plumbing and the logic for a simple masked-bit propagation (just enough to write a regression test). Future commits add more patterns (and, correspondingly, more regression tests).

  llvm-svn: 217342
* Replace SmallPtrSet with SmallPtrSetImpl in function arguments to avoid needing to mention the size. (Craig Topper, 2014-08-21; 1 file, -1/+1)

  llvm-svn: 216158
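  A minimal C++ sketch of the signature style this commit moves to: callees take SmallPtrSetImpl<T>& so callers can choose any inline size without the callee naming it. The helpers are illustrative, not code from the commit.

      #include "llvm/ADT/SmallPtrSet.h"
      #include "llvm/IR/BasicBlock.h"
      #include "llvm/IR/CFG.h"

      using namespace llvm;

      // Callee: size-agnostic interface; no inline-size template parameter
      // leaks into the signature.
      static void collectSuccessors(BasicBlock *BB,
                                    SmallPtrSetImpl<BasicBlock *> &Succs) {
        for (succ_iterator SI = succ_begin(BB), SE = succ_end(BB); SI != SE; ++SI)
          Succs.insert(*SI);
      }

      // Caller: picks whatever inline size it likes.
      static unsigned countUniqueSuccessors(BasicBlock *BB) {
        SmallPtrSet<BasicBlock *, 8> Succs;
        collectSuccessors(BB, Succs);
        return Succs.size();
      }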