| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
forgetLoop() has pretty bad performance because it goes over
the same instructions over and over again, in particular when
nested loops are involved.
The refactoring changes the function to a non-recursive one and
reuses the allocations for the data structures and the Visited
set.
NFCI
Differential Revision: https://reviews.llvm.org/D37659
llvm-svn: 312920
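A minimal sketch of the worklist pattern described above (generic C++ with hypothetical node and container names, not the actual ScalarEvolution code): the recursion is replaced by an explicit stack, and the containers are passed in by the caller so their allocations get reused across calls.
```
#include <unordered_set>
#include <vector>

struct Node { std::vector<Node *> Users; };  // hypothetical IR-like node

// Non-recursive forget-style walk. The caller owns Worklist and Visited so
// repeated invocations reuse the same allocations instead of reallocating.
void forgetUsers(Node *Root, std::vector<Node *> &Worklist,
                 std::unordered_set<Node *> &Visited) {
  Worklist.clear();
  Visited.clear();
  Worklist.push_back(Root);
  while (!Worklist.empty()) {
    Node *N = Worklist.back();
    Worklist.pop_back();
    if (!Visited.insert(N).second)
      continue;                       // already handled, avoid re-scanning
    // ... drop whatever is cached for N here ...
    for (Node *U : N->Users)          // push users instead of recursing
      Worklist.push_back(U);
  }
}

int main() {
  Node A, B;
  A.Users.push_back(&B);
  std::vector<Node *> Worklist;
  std::unordered_set<Node *> Visited;
  forgetUsers(&A, Worklist, Visited);  // containers can be reused on the next call
  return 0;
}
```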
|
| |
|
|
| |
llvm-svn: 312919
|
| |
|
|
|
|
|
|
| |
Helps improve combineLogicBlendIntoPBLENDV support by allowing us to peek through PACKSS truncations of vector comparison results.
Differential Revision: https://reviews.llvm.org/D37680
llvm-svn: 312916
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
An mrt exp with vm=1 must be in exact (non-WQM) mode, as it also exports
the exec mask as the valid mask to determine which pixels to render.
This commit marks any exp as needing to be in exact mode.
Actually, if there are multiple mrt exps, only one needs to have vm=1,
and only that one needs to be in exact mode. But that is an optimization
for another day.
Differential Revision: https://reviews.llvm.org/D36305
llvm-svn: 312915
|
| |
|
|
|
|
|
|
|
|
|
| |
I'm trying to refactor some shared code for integer div/rem,
but I keep having to scroll through fdiv. The FP ops have
nothing in common with the integer ops, so I'm moving FP
below everything else.
While here, improve a couple of comments and fix some formatting.
llvm-svn: 312913
|
| |
|
|
|
|
| |
Differential Revision: https://reviews.llvm.org/D37374
llvm-svn: 312908
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Also enables '__do_clear_bss'.
These functions are automatically called by the CRT if they are
declared.
We need them to be called, otherwise RAM starts completely
uninitialised even though we need to copy RAM variables from progmem to
RAM.
llvm-svn: 312905
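As a hedged illustration of why those CRT helpers matter (a generic AVR-style C++ program, not part of this patch): an initialised global is only valid if its value was copied from progmem into RAM, and a zero-initialised global is only valid if .bss was cleared, which is what '__do_copy_data' and '__do_clear_bss' do before main() runs.
```
// Hypothetical example: both globals only hold sane values at run time if the
// CRT copied .data from progmem and zeroed .bss before calling main().
int Configured = 42;  // .data: copied from flash by __do_copy_data
int Counter;          // .bss:  zeroed by __do_clear_bss

int main() {
  // Without the CRT helpers these reads would see uninitialised RAM.
  return Configured + Counter;  // expected 42, not garbage
}
```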
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary: G_ANYEXT support
Reviewers: zvi, delena
Reviewed By: delena
Subscribers: rovka, kristof.beyls, llvm-commits
Differential Revision: https://reviews.llvm.org/D37675
llvm-svn: 312903
|
| |
|
|
|
|
| |
... to check commit access for new committer.
llvm-svn: 312900
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a preparatory step for D34515 and also is being recommitted as its
first version caused PR34045.
This change:
- makes nodes ISD::ADDCARRY and ISD::SUBCARRY legal for i32
- lowering is done by first converting the boolean value into the carry flag
using (_, C) ← (ARMISD::ADDC R, -1) and converting it back to an integer value
using (R, _) ← (ARMISD::ADDE 0, 0, C). An ARMISD::ADDE between the two
operations does the actual addition.
- for subtraction, given that ISD::SUBCARRY second result is actually a
borrow, we need to invert the value of the second operand and result before
and after using ARMISD::SUBE. We need to invert the carry result of
ARMISD::SUBE to preserve the semantics.
- given that the generic combiner may lower ISD::ADDCARRY and
ISD::SUBCARRY into ISD::UADDO and ISD::USUBO, we need to update their lowering
as well, otherwise i64 operations would now require branches. This implies
updating the corresponding test for unsigned.
- add new combiner to remove the redundant conversions from/to carry flags
to/from boolean values (ARMISD::ADDC (ARMISD::ADDE 0, 0, C), -1) → C
- fixes PR34045
Differential Revision: https://reviews.llvm.org/D35192
llvm-svn: 312898
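A small sketch of the arithmetic the lowering above relies on (plain C++, not the DAG nodes themselves): adding -1 (all ones) to a boolean produces a carry-out exactly when the boolean is 1, an add-with-carry of 0 + 0 + C recovers the boolean, and for subtraction ARM's carry flag is the inverse of the borrow, hence the inversions around ARMISD::SUBE.
```
#include <cassert>
#include <cstdint>

// bool -> carry: (ADDC R, -1) produces a carry-out iff R == 1.
bool carryFromBool(uint32_t R) {
  uint64_t Wide = uint64_t(R) + 0xFFFFFFFFull;
  return (Wide >> 32) != 0;            // carry-out of the 32-bit add
}

// carry -> bool: (ADDE 0, 0, C) yields C as an integer value.
uint32_t boolFromCarry(bool C) { return 0u + 0u + (C ? 1u : 0u); }

int main() {
  assert(carryFromBool(1) && !carryFromBool(0));
  assert(boolFromCarry(true) == 1 && boolFromCarry(false) == 0);
  // ARM subtract-with-carry: borrow = !carry, so a borrow-in of B is modelled
  // by a carry-in of !B, matching the inversions described above.
  return 0;
}
```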
|
| |
|
|
|
|
|
|
|
| |
After the split of the Scatter operation, the order of the new instructions is well defined - Lo goes before Hi. Otherwise the semantics of Scatter (from LSB to MSB) are broken.
I'm chaining the 2 nodes to prevent reordering.
Differential Revision: https://reviews.llvm.org/D37670
llvm-svn: 312894
|
| |
|
|
| |
llvm-svn: 312887
|
| |
|
|
|
|
| |
Move towards making it possible to use the shuffle combines for cases where we don't want to call DCI.CombineTo() with the result.
llvm-svn: 312886
|
| |
|
|
|
|
|
|
|
| |
This removes some duplicated code and makes it easier to support signed div/rem
in a similar way if we want to do that. Note that the existing comments were not
accurate - we don't need a constant divisor to simplify; icmp simplification does
more than that. But as the added tests show, it could go even further.
llvm-svn: 312885
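As a hedged illustration of the identity this kind of fold relies on (plain C++, not the InstSimplify code itself): when the unsigned comparison X < Y is known to hold, the division result is 0 and the remainder is X.
```
#include <cassert>
#include <cstdint>

int main() {
  // If icmp ult X, Y folds to true, then udiv X, Y is 0 and urem X, Y is X.
  uint32_t X = 7, Y = 19;
  assert(X < Y);          // the simplified comparison
  assert(X / Y == 0);     // udiv folds to 0
  assert(X % Y == X);     // urem folds to the dividend
  return 0;
}
```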
|
| |
|
|
|
|
| |
First step towards making it possible to use the shuffle combines for cases where we don't want to call DCI.CombineTo() with the result.
llvm-svn: 312884
|
| |
|
|
|
|
| |
CreateMemForInlineAsm. NFC.
llvm-svn: 312881
|
| |
|
|
|
|
| |
This reverts commit r312879 - An accidental partial commit.
llvm-svn: 312880
|
| |
|
|
| |
llvm-svn: 312879
|
| |
|
|
| |
llvm-svn: 312878
|
| |
|
|
|
|
|
|
|
| |
It now knows the tricks of both functions.
Also, fix a bug that considered allocas in a non-zero address space to always be non-null.
Differential Revision: https://reviews.llvm.org/D37628
llvm-svn: 312869
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Summary:
Just because INC/DEC is a little slow on some processors doesn't mean we shouldn't prefer it when optimizing for size.
This appears to match gcc behavior.
Reviewers: chandlerc, zvi, RKSimon, spatel
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D37177
llvm-svn: 312866
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is intended to be a superset of the functionality from D31037 (EarlyCSE) but implemented
as an independent pass, so there's no stretching of scope and feature creep for an existing pass.
I also proposed a weaker version of this for SimplifyCFG in D30910. And I initially had almost
this same functionality as an addition to CGP in the motivating example of PR31028:
https://bugs.llvm.org/show_bug.cgi?id=31028
The advantage of positioning this ahead of SimplifyCFG in the pass pipeline is that it can allow
more flattening. But it needs to be after passes (InstCombine) that could sink a div/rem and
undo the hoisting that is done here.
Decomposing remainder may allow removing some code from the backend (PPC and possibly others).
Differential Revision: https://reviews.llvm.org/D37121
llvm-svn: 312862
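A short sketch of the remainder decomposition mentioned above (a generic arithmetic identity in plain C++, not the pass's actual output): once the division has been hoisted, the remainder can be recomputed from it without a second divide.
```
#include <cassert>
#include <cstdint>

int main() {
  // X % Y == X - (X / Y) * Y for any non-zero Y, so a single hoisted division
  // can feed both the quotient and the reconstructed remainder.
  uint32_t X = 123, Y = 10;
  uint32_t Div = X / Y;
  uint32_t Rem = X - Div * Y;
  assert(Rem == X % Y);
  return 0;
}
```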
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
test
Summary:
Once we've done our custom isel for these nodes, I think we should be calling removeDeadNode to prune them out of the DAG. Table-driven isel ultimately either calls morphNodeTo, which modifies a node in place and doesn't leave dead nodes, or it emits new nodes and then calls removeDeadNode as part of Opc_CompleteMatch.
If you run a simple multiply test case like this through llc with -debug, you'll see a umul_lohi node get printed as part of the dump for 'Instruction Selection ends'.
```
define i64 @foo(i64 %a, i64 %b) local_unnamed_addr #0 {
entry:
%conv = zext i64 %a to i128
%conv1 = zext i64 %b to i128
%mul = mul nuw nsw i128 %conv1, %conv
%shr = lshr i128 %mul, 64
%conv2 = trunc i128 %shr to i64
ret i64 %conv2
}
```
Reviewers: RKSimon, spatel, zvi, guyblank, niravd
Reviewed By: niravd
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D37547
llvm-svn: 312857
|
| |
|
|
|
|
|
|
| |
X86ISD::SHRUNKBLEND to ISD::VSELECT during isel.
This ensures that the SHRUNKBLEND node gets erased immediately.
llvm-svn: 312856
|
| |
|
|
|
|
| |
function (which is too slow)
llvm-svn: 312855
|
| |
|
|
| |
llvm-svn: 312853
|
| |
|
|
| |
llvm-svn: 312852
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
- Use range based for
- Variable names should start with upper case
- Add `const`
- Change class name to match filename
- Fix doxygen comments
- Use MCPhysReg instead of unsigned
- Use references instead of pointers where things cannot be nullptr
- Misc coding style improvements
llvm-svn: 312846
|
| |
|
|
| |
llvm-svn: 312845
|
| |
|
|
| |
llvm-svn: 312844
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
The lxv/stxv instructions require an offset that is 0 % 16. Previously we were
selecting lxv/stxv for loads and stores to the stack where the offset from the
slot was a multiple of 16, but the stack slot was not 16-byte or more aligned.
When the frame gets lowered these transform to r(1|31) + slot + offset.
If slot is not aligned, slot + offset may not be 0 % 16.
Now we require 16-byte or greater alignment in order to select lxv/stxv for stack slots.
Includes a testcase that shows both sufficiently and insufficiently aligned
stack slots.
llvm-svn: 312843
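A quick illustration of the alignment arithmetic above (plain C++ with hypothetical numbers): an offset that is a multiple of 16 only keeps the final address 16-byte aligned if the slot itself is 16-byte aligned.
```
#include <cassert>
#include <cstdint>

int main() {
  // Frame address = base + slot + offset; offset is 0 % 16 by construction.
  uint64_t Base = 0;            // assume r1/r31 is 16-byte aligned here
  uint64_t Offset = 32;         // multiple of 16

  uint64_t AlignedSlot = 48;    // 16-byte aligned slot
  uint64_t MisalignedSlot = 40; // only 8-byte aligned slot

  assert((Base + AlignedSlot + Offset) % 16 == 0);    // fine for lxv/stxv
  assert((Base + MisalignedSlot + Offset) % 16 != 0); // now rejected
  return 0;
}
```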
|
| |
|
|
| |
llvm-svn: 312836
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixed an issue in printImm64Operand so that if the value is
an expression, the expression is printed out properly. Currently,
it will print
r1 = <MCOperand Expr:(tx_port)>ll
With the patch, the printout will be
r1 = tx_port
Suggested-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
llvm-svn: 312833
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The current TargetTransformInfo supports a throughput cost model and a code size model, but some optimizations also need an instruction latency cost model. Hal suggested a single public interface to query the different costs of an instruction, so I proposed the following interface:
enum TargetCostKind {
TCK_RecipThroughput, ///< Reciprocal throughput.
TCK_Latency, ///< The latency of instruction.
TCK_CodeSize ///< Instruction code size.
};
int getInstructionCost(const Instruction *I, enum TargetCostKind kind) const;
Clients should mainly use this function to query the cost of an instruction; the <kind> parameter specifies the desired cost model.
This patch also provides a simple default implementation of getInstructionLatency.
The default getInstructionLatency provides latency numbers for only a small number of instruction classes; those numbers are only reasonable for modern out-of-order processors. It can be extended in the following ways:
- Add more detail to this function.
- Add a getXXXLatency function and call it from here.
- Implement a target-specific getInstructionLatency function.
Differential Revision: https://reviews.llvm.org/D37170
llvm-svn: 312832
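A hedged usage sketch of the interface proposed above, assuming a TargetTransformInfo reference and an Instruction pointer are available; the names follow the snippet in this message rather than any particular in-tree revision.
```
#include "llvm/Analysis/TargetTransformInfo.h"
using namespace llvm;

// Query the three cost kinds for one instruction and let the caller decide
// which model matters for its transformation.
void reportCosts(const TargetTransformInfo &TTI, const Instruction *I) {
  auto Throughput = TTI.getInstructionCost(I, TargetTransformInfo::TCK_RecipThroughput);
  auto Latency = TTI.getInstructionCost(I, TargetTransformInfo::TCK_Latency);
  auto CodeSize = TTI.getInstructionCost(I, TargetTransformInfo::TCK_CodeSize);
  (void)Throughput; (void)Latency; (void)CodeSize;
}
```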
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
We have a lot of operand definition work essentially producing
every valid permutation of operands to work around building
operand lists based on the instruction features. Apparently tablegen
already has a mostly undocumented operator to concatenate dags, which
simplifies this.
Convert one simple place to use it. The BUF instruction definitions
have much more complicated logic that can be totally rewritten now.
llvm-svn: 312822
|
| |
|
|
|
|
|
|
| |
The various scalar bit operations set SCC,
so if one is erased or moved, SCC needs to be recomputed.
Not sure why the existing tests don't fail on this.
llvm-svn: 312819
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A coverage segment contains a starting line and column, an execution
count, and some other metadata. Clients of the coverage library use
segments to prepare line-oriented reports.
Users of the coverage library depend on segments being unique and sorted
in source order. Currently this is not guaranteed (this is why the clang
change which introduced deferred regions was reverted).
This commit documents the "unique and sorted" condition and asserts that
it holds. It also fixes the SegmentBuilder so that it produces correct
output in some edge cases.
Testing: I've added unit tests for some edge cases. I've also checked
that the new SegmentBuilder implementation is fully covered. Apart from
running check-profile and the llvm-cov tests, I've successfully used a
stage1 llvm-cov to prepare a coverage report for an instrumented clang
binary.
Differential Revision: https://reviews.llvm.org/D36813
llvm-svn: 312817
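A brief sketch of the "unique and sorted" invariant described above (plain C++ over a simplified segment type; the real CoverageSegment lives in the coverage library):
```
#include <algorithm>
#include <cassert>
#include <vector>

struct Segment {  // simplified stand-in for a coverage segment
  unsigned Line, Col;
  bool operator<(const Segment &O) const {
    return Line < O.Line || (Line == O.Line && Col < O.Col);
  }
  bool operator==(const Segment &O) const { return Line == O.Line && Col == O.Col; }
};

// Clients of the coverage library may assume builder output satisfies this.
void checkSegments(const std::vector<Segment> &Segs) {
  assert(std::is_sorted(Segs.begin(), Segs.end()));                   // source order
  assert(std::adjacent_find(Segs.begin(), Segs.end()) == Segs.end()); // no duplicates
}

int main() {
  checkSegments({{1, 1}, {1, 9}, {2, 1}});  // sorted and unique: passes
  return 0;
}
```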
|
| |
|
|
| |
llvm-svn: 312815
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Each source region has a start and end location. Report an error when
the end location of a region precedes its start location.
The old lineExecutionCounts.covmapping test actually had a buggy source
region in it. This commit introduces a regenerated copy of the coverage
and moves the old copy to malformedRegions.covmapping, for a test.
Differential Revision: https://reviews.llvm.org/D37387
llvm-svn: 312814
|
| |
|
|
| |
llvm-svn: 312809
|
| |
|
|
|
|
|
|
|
|
|
|
| |
analysis of vector to be vectorized.
Reviewers: spatel, mzolotukhin, mkuper, hfinkel, RKSimon, filcab, ABataev, davide
Subscribers: llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D37212
llvm-svn: 312802
|
| |
|
|
|
|
|
|
|
|
|
| |
rL312641 allowed llvm.memcpy/memset/memmove to be tail calls when the parent
function returns the intrinsic's first argument. However, on the arm-none-eabi
platform, llvm.memcpy will be expanded to __aeabi_memcpy, which doesn't
have a return value. The fix is to check the libcall name after expansion
to match "memcpy/memset/memmove" before allowing those intrinsics to be
tail calls.
llvm-svn: 312799
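A small example of the pattern involved (plain C++, not taken from the original bug report): the call can only become a tail call if the callee actually returns the destination pointer, which memcpy does but __aeabi_memcpy does not.
```
#include <cstring>

// The caller returns memcpy's first argument, so "return memcpy(...)" may be
// emitted as a tail call only if the emitted libcall really returns Dest.
// __aeabi_memcpy returns void, so the tail-call transformation must be skipped.
void *copy(void *Dest, const void *Src, std::size_t N) {
  return std::memcpy(Dest, Src, N);
}

int main() {
  char A[4] = {1, 2, 3, 4}, B[4];
  return copy(B, A, sizeof A) == B ? 0 : 1;  // relies on memcpy returning Dest
}
```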
|
| |
|
|
|
|
| |
Differential Revision: https://reviews.llvm.org/D37600
llvm-svn: 312797
|
| |
|
|
| |
llvm-svn: 312793
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
The SLP vectorizer supports horizontal reductions for Add/FAdd binary
operations. This patch adds support for horizontal min/max reductions.
Function getReductionCost() is split into getArithmeticReductionCost() for
binary operation reductions and getMinMaxReductionCost() for min/max
reductions.
The patch fixes PR26956.
Differential revision: https://reviews.llvm.org/D27846
llvm-svn: 312791
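For reference, a typical horizontal max reduction of the kind this support targets (a generic C++ loop; the exact shapes SLP recognizes are determined by the patch itself):
```
#include <algorithm>
#include <array>

// Horizontal max reduction over a fixed-size array: with this patch, SLP can
// model the cost of such min/max reduction trees via getMinMaxReductionCost().
int maxOf8(const std::array<int, 8> &A) {
  int M = A[0];
  for (int I = 1; I < 8; ++I)
    M = std::max(M, A[I]);
  return M;
}

int main() { return maxOf8({3, 1, 4, 1, 5, 9, 2, 6}) == 9 ? 0 : 1; }
```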
|
| |
|
|
|
|
|
|
| |
Re-applying after the found bug was fixed.
Differential Revision: https://reviews.llvm.org/D36215
llvm-svn: 312783
|
| |
|
|
|
|
|
|
|
|
| |
This patch adds prologue verification, which is already present in
Apple's dwarfdump. It checks for invalid directory indices and warns
about duplicate file paths.
Differential revision: https://reviews.llvm.org/D37511
llvm-svn: 312782
|
| |
|
|
|
|
|
|
| |
This was missed in SVN r310223 when arm64 support was added.
Differential Revision: https://reviews.llvm.org/D37588
llvm-svn: 312776
|
| |
| |
b/lib/Transforms/Scalar/InductiveRangeCheckElimination.cpp
index f72a808..9fa49fd 100644
--- a/lib/Transforms/Scalar/InductiveRangeCheckElimination.cpp
+++ b/lib/Transforms/Scalar/InductiveRangeCheckElimination.cpp
@@ -450,20 +450,10 @@ struct LoopStructure {
// equivalent to:
//
// intN_ty inc = IndVarIncreasing ? 1 : -1;
- // pred_ty predicate = IndVarIncreasing
- // ? IsSignedPredicate ? ICMP_SLT : ICMP_ULT
- // : IsSignedPredicate ? ICMP_SGT : ICMP_UGT;
+ // pred_ty predicate = IndVarIncreasing ? ICMP_SLT : ICMP_SGT;
//
- //
- // for (intN_ty iv = IndVarStart; predicate(IndVarBase, LoopExitAt);
- // iv = IndVarNext)
+ // for (intN_ty iv = IndVarStart; predicate(iv, LoopExitAt); iv = IndVarBase)
// ... body ...
- //
- // Here IndVarBase is either current or next value of the induction variable.
- // in the former case, IsIndVarNext = false and IndVarBase points to the
- // Phi node of the induction variable. Otherwise, IsIndVarNext = true and
- // IndVarBase points to IV increment instruction.
- //
Value *IndVarBase;
Value *IndVarStart;
@@ -471,13 +461,12 @@ struct LoopStructure {
Value *LoopExitAt;
bool IndVarIncreasing;
bool IsSignedPredicate;
- bool IsIndVarNext;
LoopStructure()
: Tag(""), Header(nullptr), Latch(nullptr), LatchBr(nullptr),
LatchExit(nullptr), LatchBrExitIdx(-1), IndVarBase(nullptr),
IndVarStart(nullptr), IndVarStep(nullptr), LoopExitAt(nullptr),
- IndVarIncreasing(false), IsSignedPredicate(true), IsIndVarNext(false) {}
+ IndVarIncreasing(false), IsSignedPredicate(true) {}
template <typename M> LoopStructure map(M Map) const {
LoopStructure Result;
@@ -493,7 +482,6 @@ struct LoopStructure {
Result.LoopExitAt = Map(LoopExitAt);
Result.IndVarIncreasing = IndVarIncreasing;
Result.IsSignedPredicate = IsSignedPredicate;
- Result.IsIndVarNext = IsIndVarNext;
return Result;
}
@@ -841,42 +829,21 @@ LoopStructure::parseLoopStructure(ScalarEvolution &SE,
return false;
};
- // `ICI` can either be a comparison against IV or a comparison of IV.next.
- // Depending on the interpretation, we calculate the start value differently.
+ // `ICI` is interpreted as taking the backedge if the *next* value of the
+ // induction variable satisfies some constraint.
- // Pair {IndVarBase; IsIndVarNext} semantically designates whether the latch
- // comparisons happens against the IV before or after its value is
- // incremented. Two valid combinations for them are:
- //
- // 1) { phi [ iv.start, preheader ], [ iv.next, latch ]; false },
- // 2) { iv.next; true }.
- //
- // The latch comparison happens against IndVarBase which can be either current
- // or next value of the induction variable.
const SCEVAddRecExpr *IndVarBase = cast<SCEVAddRecExpr>(LeftSCEV);
bool IsIncreasing = false;
bool IsSignedPredicate = true;
- bool IsIndVarNext = false;
ConstantInt *StepCI;
if (!IsInductionVar(IndVarBase, IsIncreasing, StepCI)) {
FailureReason = "LHS in icmp not induction variable";
return None;
}
- const SCEV *IndVarStart = nullptr;
- // TODO: Currently we only handle comparison against IV, but we can extend
- // this analysis to be able to deal with comparison against sext(iv) and such.
- if (isa<PHINode>(LeftValue) &&
- cast<PHINode>(LeftValue)->getParent() == Header)
- // The comparison is made against current IV value.
- IndVarStart = IndVarBase->getStart();
- else {
- // Assume that the comparison is made against next IV value.
- const SCEV *StartNext = IndVarBase->getStart();
- const SCEV *Addend = SE.getNegativeSCEV(IndVarBase->getStepRecurrence(SE));
- IndVarStart = SE.getAddExpr(StartNext, Addend);
- IsIndVarNext = true;
- }
+ const SCEV *StartNext = IndVarBase->getStart();
+ const SCEV *Addend = SE.getNegativeSCEV(IndVarBase->getStepRecurrence(SE));
+ const SCEV *IndVarStart = SE.getAddExpr(StartNext, Addend);
const SCEV *Step = SE.getSCEV(StepCI);
ConstantInt *One = ConstantInt::get(IndVarTy, 1);
@@ -1060,7 +1027,6 @@ LoopStructure::parseLoopStructure(ScalarEvolution &SE,
Result.IndVarIncreasing = IsIncreasing;
Result.LoopExitAt = RightValue;
Result.IsSignedPredicate = IsSignedPredicate;
- Result.IsIndVarNext = IsIndVarNext;
FailureReason = nullptr;
@@ -1350,9 +1316,8 @@ LoopConstrainer::RewrittenRangeInfo LoopConstrainer::changeIterationSpaceEnd(
BranchToContinuation);
NewPHI->addIncoming(PN->getIncomingValueForBlock(Preheader), Preheader);
- auto *FixupValue =
- LS.IsIndVarNext ? PN->getIncomingValueForBlock(LS.Latch) : PN;
- NewPHI->addIncoming(FixupValue, RRI.ExitSelector);
+ NewPHI->addIncoming(PN->getIncomingValueForBlock(LS.Latch),
+ RRI.ExitSelector);
RRI.PHIValuesAtPseudoExit.push_back(NewPHI);
}
@@ -1735,10 +1700,7 @@ bool InductiveRangeCheckElimination::runOnLoop(Loop *L, LPPassManager &LPM) {
}
LoopStructure LS = MaybeLoopStructure.getValue();
const SCEVAddRecExpr *IndVar =
- cast<SCEVAddRecExpr>(SE.getSCEV(LS.IndVarBase));
- if (LS.IsIndVarNext)
- IndVar = cast<SCEVAddRecExpr>(SE.getMinusSCEV(IndVar,
- SE.getSCEV(LS.IndVarStep)));
+ cast<SCEVAddRecExpr>(SE.getMinusSCEV(SE.getSCEV(LS.IndVarBase), SE.getSCEV(LS.IndVarStep)));
Optional<InductiveRangeCheck::Range> SafeIterRange;
Instruction *ExprInsertPt = Preheader->getTerminator();
diff --git a/test/Transforms/IRCE/latch-comparison-against-current-value.ll b/test/Transforms/IRCE/latch-comparison-against-current-value.ll
deleted file mode 100644
index afea0e6..0000000
--- a/test/Transforms/IRCE/latch-comparison-against-current-value.ll
+++ /dev/null
@@ -1,182 +0,0 @@
-; RUN: opt -verify-loop-info -irce-print-changed-loops -irce -S < %s 2>&1 | FileCheck %s
-
-; Check that IRCE is able to deal with loops where the latch comparison is
-; done against current value of the IV, not the IV.next.
-
-; CHECK: irce: in function test_01: constrained Loop at depth 1 containing: %loop<header><exiting>,%in.bounds<latch><exiting>
-; CHECK: irce: in function test_02: constrained Loop at depth 1 containing: %loop<header><exiting>,%in.bounds<latch><exiting>
-; CHECK-NOT: irce: in function test_03: constrained Loop at depth 1 containing: %loop<header><exiting>,%in.bounds<latch><exiting>
-; CHECK-NOT: irce: in function test_04: constrained Loop at depth 1 containing: %loop<header><exiting>,%in.bounds<latch><exiting>
-
-; SLT condition for increasing loop from 0 to 100.
-define void @test_01(i32* %arr, i32* %a_len_ptr) #0 {
-
-; CHECK: test_01
-; CHECK: entry:
-; CHECK-NEXT: %exit.mainloop.at = load i32, i32* %a_len_ptr, !range !0
-; CHECK-NEXT: [[COND2:%[^ ]+]] = icmp slt i32 0, %exit.mainloop.at
-; CHECK-NEXT: br i1 [[COND2]], label %loop.preheader, label %main.pseudo.exit
-; CHECK: loop:
-; CHECK-NEXT: %idx = phi i32 [ %idx.next, %in.bounds ], [ 0, %loop.preheader ]
-; CHECK-NEXT: %idx.next = add nuw nsw i32 %idx, 1
-; CHECK-NEXT: %abc = icmp slt i32 %idx, %exit.mainloop.at
-; CHECK-NEXT: br i1 true, label %in.bounds, label %out.of.bounds.loopexit1
-; CHECK: in.bounds:
-; CHECK-NEXT: %addr = getelementptr i32, i32* %arr, i32 %idx
-; CHECK-NEXT: store i32 0, i32* %addr
-; CHECK-NEXT: %next = icmp slt i32 %idx, 100
-; CHECK-NEXT: [[COND3:%[^ ]+]] = icmp slt i32 %idx, %exit.mainloop.at
-; CHECK-NEXT: br i1 [[COND3]], label %loop, label %main.exit.selector
-; CHECK: main.exit.selector:
-; CHECK-NEXT: %idx.lcssa = phi i32 [ %idx, %in.bounds ]
-; CHECK-NEXT: [[COND4:%[^ ]+]] = icmp slt i32 %idx.lcssa, 100
-; CHECK-NEXT: br i1 [[COND4]], label %main.pseudo.exit, label %exit
-; CHECK-NOT: loop.preloop:
-; CHECK: loop.postloop:
-; CHECK-NEXT: %idx.postloop = phi i32 [ %idx.copy, %postloop ], [ %idx.next.postloop, %in.bounds.postloop ]
-; CHECK-NEXT: %idx.next.postloop = add nuw nsw i32 %idx.postloop, 1
-; CHECK-NEXT: %abc.postloop = icmp slt i32 %idx.postloop, %exit.mainloop.at
-; CHECK-NEXT: br i1 %abc.postloop, label %in.bounds.postloop, label %out.of.bounds.loopexit
-
-entry:
- %len = load i32, i32* %a_len_ptr, !range !0
- br label %loop
-
-loop:
- %idx = phi i32 [ 0, %entry ], [ %idx.next, %in.bounds ]
- %idx.next = add nsw nuw i32 %idx, 1
- %abc = icmp slt i32 %idx, %len
- br i1 %abc, label %in.bounds, label %out.of.bounds
-
-in.bounds:
- %addr = getelementptr i32, i32* %arr, i32 %idx
- store i32 0, i32* %addr
- %next = icmp slt i32 %idx, 100
- br i1 %next, label %loop, label %exit
-
-out.of.bounds:
- ret void
-
-exit:
- ret void
-}
-
-; ULT condition for increasing loop from 0 to 100.
-define void @test_02(i32* %arr, i32* %a_len_ptr) #0 {
-
-; CHECK: test_02
-; CHECK: entry:
-; CHECK-NEXT: %exit.mainloop.at = load i32, i32* %a_len_ptr, !range !0
-; CHECK-NEXT: [[COND2:%[^ ]+]] = icmp ult i32 0, %exit.mainloop.at
-; CHECK-NEXT: br i1 [[COND2]], label %loop.preheader, label %main.pseudo.exit
-; CHECK: loop:
-; CHECK-NEXT: %idx = phi i32 [ %idx.next, %in.bounds ], [ 0, %loop.preheader ]
-; CHECK-NEXT: %idx.next = add nuw nsw i32 %idx, 1
-; CHECK-NEXT: %abc = icmp ult i32 %idx, %exit.mainloop.at
-; CHECK-NEXT: br i1 true, label %in.bounds, label %out.of.bounds.loopexit1
-; CHECK: in.bounds:
-; CHECK-NEXT: %addr = getelementptr i32, i32* %arr, i32 %idx
-; CHECK-NEXT: store i32 0, i32* %addr
-; CHECK-NEXT: %next = icmp ult i32 %idx, 100
-; CHECK-NEXT: [[COND3:%[^ ]+]] = icmp ult i32 %idx, %exit.mainloop.at
-; CHECK-NEXT: br i1 [[COND3]], label %loop, label %main.exit.selector
-; CHECK: main.exit.selector:
-; CHECK-NEXT: %idx.lcssa = phi i32 [ %idx, %in.bounds ]
-; CHECK-NEXT: [[COND4:%[^ ]+]] = icmp ult i32 %idx.lcssa, 100
-; CHECK-NEXT: br i1 [[COND4]], label %main.pseudo.exit, label %exit
-; CHECK-NOT: loop.preloop:
-; CHECK: loop.postloop:
-; CHECK-NEXT: %idx.postloop = phi i32 [ %idx.copy, %postloop ], [ %idx.next.postloop, %in.bounds.postloop ]
-; CHECK-NEXT: %idx.next.postloop = add nuw nsw i32 %idx.postloop, 1
-; CHECK-NEXT: %abc.postloop = icmp ult i32 %idx.postloop, %exit.mainloop.at
-; CHECK-NEXT: br i1 %abc.postloop, label %in.bounds.postloop, label %out.of.bounds.loopexit
-
-entry:
- %len = load i32, i32* %a_len_ptr, !range !0
- br label %loop
-
-loop:
- %idx = phi i32 [ 0, %entry ], [ %idx.next, %in.bounds ]
- %idx.next = add nsw nuw i32 %idx, 1
- %abc = icmp ult i32 %idx, %len
- br i1 %abc, label %in.bounds, label %out.of.bounds
-
-in.bounds:
- %addr = getelementptr i32, i32* %arr, i32 %idx
- store i32 0, i32* %addr
- %next = icmp ult i32 %idx, 100
- br i1 %next, label %loop, label %exit
-
-out.of.bounds:
- ret void
-
-exit:
- ret void
-}
-
-; Same as test_01, but comparison happens against IV extended to a wider type.
-; This test ensures that IRCE rejects it and does not falsely assume that it was
-; a comparison against iv.next.
-; TODO: We can actually extend the recognition to cover this case.
-define void @test_03(i32* %arr, i64* %a_len_ptr) #0 {
-
-; CHECK: test_03
-
-entry:
- %len = load i64, i64* %a_len_ptr, !range !1
- br label %loop
-
-loop:
- %idx = phi i32 [ 0, %entry ], [ %idx.next, %in.bounds ]
- %idx.next = add nsw nuw i32 %idx, 1
- %idx.ext = sext i32 %idx to i64
- %abc = icmp slt i64 %idx.ext, %len
- br i1 %abc, label %in.bounds, label %out.of.bounds
-
-in.bounds:
- %addr = getelementptr i32, i32* %arr, i32 %idx
- store i32 0, i32* %addr
- %next = icmp slt i32 %idx, 100
- br i1 %next, label %loop, label %exit
-
-out.of.bounds:
- ret void
-
-exit:
- ret void
-}
-
-; Same as test_02, but comparison happens against IV extended to a wider type.
-; This test ensures that IRCE rejects it and does not falsely assume that it was
-; a comparison against iv.next.
-; TODO: We can actually extend the recognition to cover this case.
-define void @test_04(i32* %arr, i64* %a_len_ptr) #0 {
-
-; CHECK: test_04
-
-entry:
- %len = load i64, i64* %a_len_ptr, !range !1
- br label %loop
-
-loop:
- %idx = phi i32 [ 0, %entry ], [ %idx.next, %in.bounds ]
- %idx.next = add nsw nuw i32 %idx, 1
- %idx.ext = sext i32 %idx to i64
- %abc = icmp ult i64 %idx.ext, %len
- br i1 %abc, label %in.bounds, label %out.of.bounds
-
-in.bounds:
- %addr = getelementptr i32, i32* %arr, i32 %idx
- store i32 0, i32* %addr
- %next = icmp ult i32 %idx, 100
- br i1 %next, label %loop, label %exit
-
-out.of.bounds:
- ret void
-
-exit:
- ret void
-}
-
-!0 = !{i32 0, i32 50}
-!1 = !{i64 0, i64 50}
llvm-svn: 312775
|
| |
|
|
|
|
|
|
|
| |
by reusing more of the existing machinery
This is a follow-up to r312169.
Thanks to Björn Pettersson for the testcase!
llvm-svn: 312773
|