path: root/llvm/lib/Analysis/ScalarEvolution.cpp
* [SCEV][NFC] Verify IR in isLoop[Entry,Backedge]GuardedByCond
  Max Kazantsev, 2018-11-08 (1 file, +15/-0)

  We have a lot of bugs caused by misuse of SCEV (in particular in LV),
  all of which can be described as "we ask SCEV to prove some fact on
  invalid IR". Examples are PR36311, PR37221 and PR39160. The problem is
  that these failures manifest differently (what we saw was failure of
  various asserts across SCEV, but there can also be miscompiles). This
  patch adds an assert to two SCEV methods that strongly rely on the
  correctness of the IR and are involved in known failures, so that we
  at least have a clear indication of what went wrong in such cases.
  This patch also fixes a unit test with incorrect IR that fails this
  verification.

  Differential Revision: https://reviews.llvm.org/D52930
  Reviewed By: fhahn
  llvm-svn: 346389
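  A minimal sketch of the kind of guard this adds (hedged: the exact
  option plumbing and assert message in the patch may differ;
  verifyFunction is the standard entry point from llvm/IR/Verifier.h
  and returns true on broken IR):

    #include "llvm/IR/Verifier.h"
    #include "llvm/Support/Debug.h"

    // At the top of isLoopEntryGuardedByCond / isLoopBackedgeGuardedByCond:
    assert(!verifyFunction(*L->getHeader()->getParent(), &dbgs()) &&
           "This cannot be done on invalid IR!");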
* [SCEV] Avoid redundant computations when doing AddRec merge
  Max Kazantsev, 2018-11-01 (1 file, +6/-5)

  When we calculate the product of 2 AddRecs, we end up doing quite
  massive computations to deduce the operands of the resulting AddRec.
  This process can be optimized by collecting all operands of the
  intermediate sum and then calling `getAddExpr` once, rather than
  calling `getAddExpr` with the intermediate result every time a new
  operand is computed.

  Differential Revision: https://reviews.llvm.org/D53189
  Reviewed By: rtereshin
  llvm-svn: 345813
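  A sketch of the shape of this optimization (illustrative, not the
  verbatim diff; it relies on the getAddExpr overload taking a vector
  of operands, and assumes `SE` is the ScalarEvolution instance and
  `Terms` holds the computed arguments):

    // Before: each partial sum is folded separately, one getAddExpr
    // call (and potentially one full simplification) per term.
    const SCEV *Sum = Terms[0];
    for (unsigned i = 1, e = Terms.size(); i != e; ++i)
      Sum = SE.getAddExpr(Sum, Terms[i]);

    // After: collect all operands first, fold once.
    SmallVector<const SCEV *, 8> Ops(Terms.begin(), Terms.end());
    const SCEV *SumAtOnce = SE.getAddExpr(Ops);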
* [NFC] Remove GOTO from SCEV
  Max Kazantsev, 2018-10-17 (1 file, +14/-20)

  llvm-svn: 344687
* [SCEV] Limit AddRec "simplifications" to avoid combinatorial explosions
  Max Kazantsev, 2018-10-16 (1 file, +1/-1)

  SCEV's transform that turns `{A1,+,A2,+,...,+,An}<L> *
  {B1,+,B2,+,...,+,Bn}<L>` into a single AddRec of size `2n+1` with
  complex combinatorial coefficients can easily trigger exponential
  growth of the SCEV (if nothing gets folded and simplified). We tried
  to restrain this transform using the option
  `scalar-evolution-max-add-rec-size`, but its default value of `16` is
  not small enough: the test attached to this patch, run with that
  default, produces a SCEV of more than 3M symbols when printed out.

  This patch reduces the simplification limit. It is not a cure for
  combinatorial explosions, but at least it reduces this corner case to
  something more or less reasonable.

  Differential Revision: https://reviews.llvm.org/D53282
  Reviewed By: sanjoy
  llvm-svn: 344584
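  For intuition, here is the simplest case worked out by hand (my
  derivation, using the standard chrec evaluation
  {X,+,Y,+,Z}(n) = X + Y*n + Z*n*(n-1)/2):

    {A,+,B}(n) * {C,+,D}(n) = (A + B*n) * (C + D*n)
                            = A*C + (A*D + B*C)*n + B*D*n^2
                            = {A*C, +, A*D + B*C + B*D, +, 2*B*D}(n)

  Even two degree-1 AddRecs produce a 3-operand result (2n+1 with
  n = 1, matching the size quoted above); for higher-degree inputs the
  coefficients grow combinatorially, which is what the reduced limit
  guards against.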
* [TI removal] Make variables declared as `TerminatorInst` and
  initialized by `getTerminator()` calls instead be declared as
  `Instruction`
  Chandler Carruth, 2018-10-15 (1 file, +1/-1)

  This is the biggest remaining chunk of the usage of `getTerminator()`
  that insists on the narrow type and so is an easy batch of updates.
  Several files saw more extensive updates where this would cascade to
  requiring API updates within the file to use `Instruction` instead of
  `TerminatorInst`. All of these were trivial in nature (pervasively
  using `Instruction` instead just worked).

  llvm-svn: 344502
* [NFC] Factor out getOrCreateAddRecExpr method
  Max Kazantsev, 2018-10-11 (1 file, +24/-18)

  llvm-svn: 344227
* llvm::sort(C.begin(), C.end(), ...) -> llvm::sort(C, ...)
  Fangrui Song, 2018-09-27 (1 file, +1/-1)

  Summary: The convenience wrapper in STLExtras is available since
  rL342102.

  Reviewers: dblaikie, javed.absar, JDevlieghere, andreadb
  Subscribers: MatzeB, sanjoy, arsenm, dschuff, mehdi_amini, sdardis,
  nemanjai, jvesely, nhaehnle, sbc100, jgravelle-google, eraman,
  aheejin, kbarton, JDevlieghere, javed.absar, gbedwell, jrtc27,
  mgrang, atanasyan, steven_wu, george.burgess.iv, dexonsmith,
  kristina, jsji, llvm-commits
  Differential Revision: https://reviews.llvm.org/D52573
  llvm-svn: 343163
* Revert "[SCEV][NFC] Check NoWrap flags before lexicographical comparison of ↵Roman Tereshin2018-08-271-8/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | SCEVs" This reverts r319889. Unfortunately, wrapping flags are not a part of SCEV's identity (they do not participate in computing a hash value or in equality comparisons) and in fact they could be assigned after the fact w/o rebuilding a SCEV. Grep for const_cast's to see quite a few of examples, apparently all for AddRec's at the moment. So, if 2 expressions get built in 2 slightly different ways: one with flags set in the beginning, the other with the flags attached later on, we may end up with 2 expressions which are exactly the same but have their operands swapped in one of the commutative N-ary expressions, and at least one of them will have "sorted by complexity" invariant broken. 2 identical SCEV's won't compare equal by pointer comparison as they are supposed to. A real-world reproducer is added as a regression test: the issue described causes 2 identical SCEV expressions to have different order of operands and therefore compare not equal, which in its turn prevents LoadStoreVectorizer from vectorizing a pair of consecutive loads. On a larger example (the source of the test attached, which is a bugpoint) I have seen even weirder behavior: adding a constant to an existing SCEV changes the order of the existing terms, for instance, getAddExpr(1, ((A * B) + (C * D))) returns (1 + (C * D) + (A * B)). Differential Revision: https://reviews.llvm.org/D40645 llvm-svn: 340777
* [SCEV] Properly solve quadratic equations
  Krzysztof Parzyszek, 2018-08-02 (1 file, +258/-106)

  Differential Revision: https://reviews.llvm.org/D48283
  llvm-svn: 338758
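  For reference, the underlying math (a hedged restatement; the patch
  itself is about doing this robustly in fixed-width APInt arithmetic):
  a quadratic chrec evaluates as {L,+,M,+,N}(n) = L + M*n + N*n*(n-1)/2,
  so asking at which iteration it reaches zero is a quadratic in n:

    L + M*n + N*n*(n-1)/2 = 0        | multiply both sides by 2
    N*n^2 + (2*M - N)*n + 2*L = 0    | i.e. A = N, B = 2M - N, C = 2L
    n = (-B +- sqrt(B^2 - 4*A*C)) / (2*A)

  with the computation carried out in a wide enough bit width and only
  roots that are valid (non-negative, exact) iteration counts kept.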
* Remove trailing space
  Fangrui Song, 2018-07-30 (1 file, +5/-5)

  sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h}

  llvm-svn: 338293
* [SCEV] Add [zs]ext{C,+,x} -> (D + [zs]ext{C-D,+,x})<nuw><nsw> transform
  Roman Tereshin, 2018-07-25 (1 file, +104/-63)

  as well as sext(C + x + ...) -> (D + sext(C-D + x + ...))<nuw><nsw>,
  similar to the equivalent transformation for zext's, if the top-level
  addition in (D + (C-D + x * n)) can be proven to not wrap, where the
  choice of D also maximizes the number of trailing zeroes of
  (C-D + x * n), ensuring homogeneous behaviour of the transformation
  and better canonicalization of such AddRecs. (Indeed, there are 2^(2w)
  different expressions of the form `B1 + ext(B2 + Y)` for the same Y,
  but only 2^(2w - k) different expressions in the resulting
  `B3 + ext((B4 * 2^k) + Y)` form, where w is the bit width of the
  integral type.)

  This patch generalizes the

    sext(C1 + C2*X) --> sext(C1) + sext(C2*X)
    sext{C1,+,C2}   --> sext(C1) + sext{0,+,C2}

  transformations added in r209568, relaxing the requirements as
  follows:

  1. C2 doesn't have to be a power of 2; it's enough if it's divisible
     by 2 a sufficient number of times;
  2. C1 doesn't have to be less than C2; instead of extracting the
     entire C1 we can split it into 2 terms, (00...0XXX + YY...Y000),
     keep the second one, which may cause wrapping within the extension
     operator, and move the first one, which doesn't affect wrapping,
     out of the extension operator, enabling further simplifications;
  3. C1 and C2 don't have to be positive; splitting C1 as shown above
     produces a sum that is guaranteed to not wrap, signed or unsigned;
  4. there can be more than 2 terms: in the AddExpr case the 2nd and
     following terms, and in the AddRecExpr case the Step component,
     don't have to be in the C2*X form or constant (respectively); they
     just need enough trailing zeros, which in turn can be guaranteed
     by means other than arithmetic, e.g. by a pointer alignment;
  5. the extension operator doesn't have to be a sext; the same
     transformation works, and is profitable, for zext's as well.

  Apparently, optimizations like SLPVectorizer currently fail to
  vectorize even rather trivial cases like the following:

    double bar(double *a, unsigned n) {
      double x = 0.0;
      double y = 0.0;
      for (unsigned i = 0; i < n; i += 2) {
        x += a[i];
        y += a[i + 1];
      }
      return x * y;
    }

  If compiled with `clang -std=c11 -Wpedantic -Wall -O3 main.c -S -o -
  -emit-llvm` (!{!"clang version 7.0.0 (trunk 337339) (llvm/trunk
  337344)"}), it produces scalar code with the loop not unrolled for
  the unsigned `n` and `i` (as shown above), but a vectorized and
  unrolled loop for signed `n` and `i`. With the changes made in this
  commit the unsigned version will be vectorized (though not unrolled,
  for unclear reasons).

  How it all works: say we have an AddExpr that looks like
  (C + x + y + ...), where C is a constant and x, y, ... are arbitrary
  SCEVs. Let's compute the minimum number of trailing zeroes guaranteed
  of that sum without the constant term: (x + y + ...). If, for
  example, those terms look as follows:

           i
    XXXX...X000
    YYYY...YY00
        ...
    ZZZZ...0000

  then the rightmost non-guaranteed-zero bit (a potential one at the
  i-th position above) can change the bits of the sum to the left of it
  (and at the i-th position itself), but it cannot possibly change the
  bits to the right. So we can compute the number of trailing zeroes of
  the sum by taking the minimum of the numbers of trailing zeroes of
  the terms.

  Now let's say that our original sum with the constant is effectively
  just C + X, where X = x + y + .... Let's also say that we've got 2
  guaranteed trailing zeros for X:

             j
    CCCC...CCCC
    XXXX...XX00    // this is X = (x + y + ...)

  Any bit of C to the left of j may in the end cause the C + X sum to
  wrap, but the rightmost 2 bits of C (at positions j and j - 1) do not
  affect wrapping in any way. If the upper bits cause a wrap, it will
  be a wrap regardless of the values of the 2 least significant bits of
  C. If the upper bits do not cause a wrap, it won't be a wrap
  regardless of the values of the 2 bits on the right (again). So let's
  split C into 2 constants:

    0000...00CC = D
    CCCC...CC00 = (C - D)

  and represent the whole sum as D + (C - D + X). The second term of
  this new sum looks like this:

    CCCC...CC00
    XXXX...XX00
    -----------    // let's add them up
    YYYY...YY00

  The sum above (let's call it Y) may or may not wrap, we don't know,
  so we need to keep it under a sext/zext. Adding D to that sum,
  though, will never wrap, signed or unsigned, whether performed in the
  original bit width or the extended one, because all the final add
  does is set the 2 least significant bits of Y to the bits of D:

    YYYY...YY00 = Y
    0000...00CC = D
    -----------    <nuw><nsw>
    YYYY...YYCC

  This means we can safely move that D out of the sext or zext and
  claim that the top-level sum neither sign-wraps nor unsigned-wraps.

  Let's run an example. Say we're working in i8's and the original
  expression (the zext's or sext's operand) is 21 + 12x + 8y:

    0001 0101    // 21
    XXXX XX00    // 12x
    YYYY Y000    // 8y

    0001 0101    // 21
    ZZZZ ZZ00    // 12x + 8y

    0000 0001    // D
    0001 0100    // 21 - D = 20
    ZZZZ ZZ00    // 12x + 8y

    0000 0001    // D
    WWWW WW00    // 21 - D + 12x + 8y = 20 + 12x + 8y

  therefore zext(21 + 12x + 8y) = (1 + zext(20 + 12x + 8y))<nuw><nsw>.

  This approach could be improved if we move away from using trailing
  zeroes and use KnownBits instead. For instance, with KnownBits we
  could have the following picture:

          i
    10 1110...0011    // this is C
    XX X1XX...XX00    // this is X = (x + y + ...)

  Notice that some of the bits of X are known ones, and that the known
  bits of X are interspersed with unknown bits, not grouped on the
  right or left. We can see at position i that C(i) and X(i) are both
  known ones, therefore the (i + 1)-th carry bit is guaranteed to be 1
  regardless of the bits of C to the right of i. For instance, the
  C(i - 1) bit only affects the bits of the sum at positions i - 1 and
  i, and does not influence whether the sum is going to wrap or not.
  Therefore we could split the constant C as follows:

          i
    00 0010...0011 = D
    10 1100...0000 = (C - D)

  Let's compute the KnownBits of (C - D) + X:

    XX1          1    // carry bits; blanks stand for known zeroes
    10 1100...0000    = (C - D)
    XX X1XX...XX00    = X
    --------------
    XX X0XX...XX00

  Whether this add wraps essentially depends on the bits of X. Adding D
  to this sum, however, is guaranteed not to wrap:

    0            X    // carry bits
    00 0010...0011    = D
    sX X0XX...XX00    = (C - D) + X
    --------------
    sX XXXX...XX11

  As can be seen above, adding D preserves the sign bit of (C - D) + X,
  if any, and has a guaranteed 0 carry out, as expected. The more bits
  of (C - D) we constrain, the better the transformations introduced
  here canonicalize expressions, as it leaves less freedom to what
  values the constant part of ((C - D) + x + y + ...) can take.

  Reviewed By: mzolotukhin, efriedma
  Differential Revision: https://reviews.llvm.org/D48853
  llvm-svn: 337943
* [SCEV] Add zext(C + x + ...) -> D + zext(C-D + x + ...)<nuw><nsw> transform
  Roman Tereshin, 2018-07-24 (1 file, +38/-0)

  The transform applies if the top-level addition in
  (D + (C-D + x + ...)) can be proven to not wrap, where the choice of
  D also maximizes the number of trailing zeroes of (C-D + x + ...),
  ensuring homogeneous behaviour of the transformation and better
  canonicalization of such expressions.

  This enables better canonicalization of expressions like

    1 + zext(5 + 20 * %x + 24 * %y)  and
        zext(6 + 20 * %x + 24 * %y)

  which both get transformed to

    2 + zext(4 + 20 * %x + 24 * %y)

  This pattern is common in address arithmetic, and the transformation
  makes it easier for passes like LoadStoreVectorizer to prove that 2
  or more memory accesses are consecutive and to optimize (vectorize)
  them.

  Reviewed By: mzolotukhin
  Differential Revision: https://reviews.llvm.org/D48853
  llvm-svn: 337859
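  A quick arithmetic check of the example (my working): 20 = 0b10100
  has 2 trailing zero bits and 24 = 0b11000 has 3, so 20 * %x + 24 * %y
  always has at least min(2, 3) = 2 trailing zeroes, i.e. it is
  divisible by 4. Choosing D = C mod 4 gives:

    1 + zext(5 + 20 * %x + 24 * %y):  D = 5 mod 4 = 1
      -> 1 + 1 + zext(4 + 20 * %x + 24 * %y)
        zext(6 + 20 * %x + 24 * %y):  D = 6 mod 4 = 2
      ->     2 + zext(4 + 20 * %x + 24 * %y)

  so both expressions reach the same canonical form,
  2 + zext(4 + 20 * %x + 24 * %y).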
* [SCEV] Fix buggy behavior in getAddExpr with truncs
  Max Kazantsev, 2018-07-19 (1 file, +1/-1)

  SCEV tries to constant-fold arguments of trunc operands in
  SCEVAddExpr, and when it does that, it passes wrong flags into the
  recursion. It is only valid to pass flags that are proven for the
  narrow type into a computation in the wider type if we can prove that
  the trunc instruction doesn't actually change the value. If it loses
  meaningful bits, we may end up proving wrong no-wrap flags for the
  sum of the arguments of the trunc. In the provided test we end up
  with a `nuw` where it shouldn't be because of this bug. The solution
  is to conservatively pass `SCEV::FlagAnyWrap`, which is always a
  valid thing to do.

  Reviewed By: sanjoy
  Differential Revision: https://reviews.llvm.org/D49471
  llvm-svn: 337435
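  The shape of the fix (a hedged reconstruction from the message above,
  not the verbatim diff; the variable name is illustrative):

    // Constant-folding trunc'ed operands rebuilds the add in the wide
    // type, so flags proven for the narrow add must not be forwarded:
    //   return getAddExpr(LargeOps, Flags, Depth + 1);             // buggy
    //   return getAddExpr(LargeOps, SCEV::FlagAnyWrap, Depth + 1); // fixed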
* Re-apply "[SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428)."Tim Shen2018-07-131-7/+20
| | | | llvm-svn: 337075
* Revert "[SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428)."Tim Shen2018-07-061-20/+7
| | | | | | This reverts commit r336140. Our tests shows that LSR assert fails with it. llvm-svn: 336473
* Use Type::isIntOrPtrTy where possible, NFC
  Vedant Kumar, 2018-07-06 (1 file, +10/-19)

  It's a bit neater to write T.isIntOrPtrTy() over `T.isIntegerTy() ||
  T.isPointerTy()`.

  I used Python's re.sub with this regex to update users:

    r'([\w.\->()]+)isIntegerTy\(\)\s*\|\|\s*\1isPointerTy\(\)'

  llvm-svn: 336462
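  The mechanical change, illustrated:

    bool OldWay = T.isIntegerTy() || T.isPointerTy(); // before
    bool NewWay = T.isIntOrPtrTy();                   // after, same result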
* [SCEV] Strengthen StrengthenNoWrapFlags (reapply r334428).
  Tim Shen, 2018-07-02 (1 file, +20/-7)

  Summary: Comment on Transforms/LoopVersioning/incorrect-phi.ll: with
  the change, SCEV is able to prove that the loop doesn't self-wrap
  (due to the zext from i16 to i64), disabling the entire loop
  versioning pass. Removed the zext and just use i64.

  Reviewers: sanjoy
  Subscribers: jlebar, hiraditya, javed.absar, bixia, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48409
  llvm-svn: 336140
* Fix overconfident assert in ScalarEvolution::isImpliedViaMerge
  Roman Shirokiy, 2018-06-29 (1 file, +3/-2)

  We can have an AddRec whose loop has many predecessors. This changes
  an assert to an early return.

  Differential Revision: https://reviews.llvm.org/D48766
  llvm-svn: 335965
* [SCEV] Re-apply r335197 (with Polly fixes).
  Tim Shen, 2018-06-21 (1 file, +54/-0)

  Summary: This initiates a discussion on changing Polly accordingly
  while re-applying r335197 (D48338).

  I have never worked on Polly. The proposed change to
  param_div_div_div_2.ll is not an educated one; it just
  pattern-matches the output. All LLVM files were already reviewed in
  D48338.

  Reviewers: jdoerfert, bollu, efriedma
  Subscribers: jlebar, sanjoy, hiraditya, llvm-commits, bixia
  Differential Revision: https://reviews.llvm.org/D48453
  llvm-svn: 335292
* Revert "[SCEV] Improve zext(A /u B) and zext(A % B)"Tim Shen2018-06-211-54/+0
| | | | | | This reverts commit r335197, as some bots are not happy. llvm-svn: 335198
* [SCEV] Improve zext(A /u B) and zext(A % B)
  Tim Shen, 2018-06-21 (1 file, +54/-0)

  Summary: Try to match udiv and urem patterns, and sink zext down to
  the leaves.

  I'm not entirely sure why some unrelated tests change, but the added
  <nsw>s seem right.

  Reviewers: sanjoy
  Subscribers: jlebar, hiraditya, bixia, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48338
  llvm-svn: 335197
* Revert "[SCEV] Add nuw/nsw to mul ops in StrengthenNoWrapFlags"Sanjoy Das2018-06-191-19/+6
| | | | | | | | | | | | | | This reverts r334428. It incorrectly marks some multiplications as nuw. Tim Shen is working on a proper fix. Original commit message: [SCEV] Add nuw/nsw to mul ops in StrengthenNoWrapFlags where safe. Summary: Previously we would add them for adds, but not multiplies. llvm-svn: 335016
* Revert "[SCEV] Use LLVM_MARK_AS_BITMASK_ENUM in SCEV." -- breaks MSVC builds.Justin Lebar2018-06-161-34/+38
| | | | | | This reverts D48237. llvm-svn: 334878
* Revert "[SCEV] Simplify some flags expressions." -- dependent revision ↵Justin Lebar2018-06-161-4/+4
| | | | | | | | breaks MSVC builds. This reverts D48238. llvm-svn: 334877
* [SCEV] Simplify some flags expressions.
  Justin Lebar, 2018-06-15 (1 file, +4/-4)

  Summary: Sending for presubmit review out of an abundance of caution;
  it would be bad to mess this up.

  Reviewers: sanjoy
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48238
  llvm-svn: 334875
* [SCEV] Use LLVM_MARK_AS_BITMASK_ENUM in SCEV.
  Justin Lebar, 2018-06-15 (1 file, +34/-38)

  Summary: Obviates the need for mask/clear/setFlags helpers.

  There are some expressions here which can be simplified, but to keep
  this easy to review, I have not simplified them in this patch.

  No functional change.

  Reviewers: sanjoy
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48237
  llvm-svn: 334874
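  A sketch of the pattern (flag values as in SCEV::NoWrapFlags to the
  best of my recollection; treat the details as illustrative):

    #include "llvm/ADT/BitmaskEnum.h"

    enum NoWrapFlags {
      FlagAnyWrap = 0,          // No guarantee.
      FlagNW = (1 << 0),        // No self-wrap.
      FlagNUW = (1 << 1),       // No unsigned wrap.
      FlagNSW = (1 << 2),       // No signed wrap.
      NoWrapMask = (1 << 3) - 1,
      LLVM_MARK_AS_BITMASK_ENUM(/*LargestValue=*/NoWrapMask)
    };

  The macro makes |, &, ^ and ~ work directly on the enum, so helpers
  like setFlags(Flags, OnFlags) can be spelled as plain Flags | OnFlags.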
* [SCEV] Fix a variable name, NFC.
  Justin Lebar, 2018-06-14 (1 file, +6/-6)

  llvm-svn: 334738
* [SCEV] Simplify zext/trunc idiom that appears when handling bitmasks.
  Justin Lebar, 2018-06-14 (1 file, +26/-0)

  Summary: Specifically, we transform

    zext(2^K * (trunc X to iN)) to iM
      -> 2^K * (zext(trunc X to i{N-K}) to iM)<nuw>

  This is helpful because pulling the 2^K out of the zext allows
  further optimizations.

  Reviewers: sanjoy
  Subscribers: hiraditya, llvm-commits, timshen
  Differential Revision: https://reviews.llvm.org/D48158
  llvm-svn: 334737
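  A concrete instance (my example, with K = 2, N = 8, M = 32):

    zext(4 * (trunc i32 %X to i8)) to i32
      -> (4 * (zext (trunc i32 %X to i6) to i32))<nuw>

  Multiplying by 2^2 in i8 shifts the top 2 bits out, so only the low 6
  bits of %X can reach the result; once the constant 4 sits outside the
  zext it can fold with surrounding arithmetic.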
* [SCEV] Simplify trunc-of-add/mul to add/mul-of-trunc under more circumstances.
  Justin Lebar, 2018-06-14 (1 file, +22/-32)

  Summary: Previously we would do this simplification only if it did
  not introduce any new truncs (excepting new truncs which replace
  other cast ops). This change weakens this condition: if the number of
  truncs stays the same, but we're able to transform trunc(X + Y) to
  X + trunc(Y), that's still simpler, and it may open up additional
  transformations.

  While we're here, also clean up some duplicated code.

  Reviewers: sanjoy
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48160
  llvm-svn: 334736
* [SCEV] Fix indentation and combine two if statements in getMulExpr, NFC.
  Justin Lebar, 2018-06-14 (1 file, +14/-15)

  llvm-svn: 334735
* [SCEV] Add transform zext((A * B * ...)<nuw>) --> (zext(A) * zext(B) * ...)<nuw>.
  Justin Lebar, 2018-06-11 (1 file, +12/-0)

  Reviewers: sanjoy
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48041
  llvm-svn: 334429
* [SCEV] Add nuw/nsw to mul ops in StrengthenNoWrapFlags where safe.
  Justin Lebar, 2018-06-11 (1 file, +19/-6)

  Summary: Previously we would add them for adds, but not multiplies.

  Reviewers: sanjoy
  Subscribers: llvm-commits, hiraditya
  Differential Revision: https://reviews.llvm.org/D48038
  llvm-svn: 334428
* Fix indentation in ScalarEvolution.cpp.
  Justin Lebar, 2018-06-11 (1 file, +26/-26)

  Whitespace-only change. (clang-formatted the whole block.)

  llvm-svn: 334427
* [SCEV] Canonicalize "A /u C1 /u C2" to "A /u (C1*C2)".
  Tim Shen, 2018-06-11 (1 file, +15/-0)

  Summary: FWIW InstCombine already folds this. Also avoid the case
  where C1*C2 overflows.

  Reviewers: sunfish, sanjoy
  Subscribers: hiraditya, bixia, llvm-commits
  Differential Revision: https://reviews.llvm.org/D47965
  llvm-svn: 334425
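  A sketch of the overflow guard using APInt's overflow-reporting
  multiply (illustrative, not the verbatim upstream code):

    // (A /u C1) /u C2 == A /u (C1*C2), valid only when C1*C2 does not
    // wrap; otherwise the combined divisor would be truncated.
    bool Overflow = false;
    APInt NewDivisor = C1->getAPInt().umul_ov(C2->getAPInt(), Overflow);
    if (!Overflow)
      return getUDivExpr(A, getConstant(NewDivisor));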
* [SCEV] Look through zero-extends in howFarToZero
  Krzysztof Parzyszek, 2018-06-08 (1 file, +11/-1)

  An expression like

    (zext i2 {(trunc i32 (1 + %B) to i2),+,1}<%while.body> to i32)

  will become zero exactly when the nested value becomes zero in its
  type. Strip injective operations from the input value in howFarToZero
  to make the value simpler.

  Differential Revision: https://reviews.llvm.org/D47951
  llvm-svn: 334318
* Rename DEBUG macro to LLVM_DEBUG.
  Nicola Zaghen, 2018-05-14 (1 file, +50/-50)

  The DEBUG() macro is very generic, so it might clash with other
  projects. The renaming was done as follows:

  - git grep -l 'DEBUG' | xargs sed -i 's/\bDEBUG\s\?(/LLVM_DEBUG(/g'
  - git diff -U0 master |
      ../clang/tools/clang-format/clang-format-diff.py -i -p1 -style LLVM
  - Manual change to APInt
  - Manual change to the docs, as the regex doesn't match them.

  In the transition period the DEBUG() macro is still present and
  aliased to the LLVM_DEBUG() one.

  Differential Revision: https://reviews.llvm.org/D43624
  llvm-svn: 332240
* [SCEV] Do not use induction in isKnownPredicate for simplification of umax.
  Serguei Katkov, 2018-05-10 (1 file, +6/-5)

  During simplification of umax we trigger isKnownPredicate twice. As a
  first attempt it tries induction; to do that, it tries to get the
  post-increment of the SCEV. Re-writing the SCEV may result in
  simplification of the umax, and if the SCEV contains a lot of umax
  operations this recursion becomes very slow. The added test
  demonstrates the slow behavior.

  To resolve this we use only simple ways to check whether the
  predicate is known.

  Reviewers: sanjoy, mkazantsev
  Reviewed By: sanjoy
  Subscribers: lebedev.ri, javed.absar, llvm-commits
  Differential Revision: https://reviews.llvm.org/D46046
  llvm-svn: 331949
* Re-enable "[SCEV] Make computeExitLimit more simple and more powerful"Max Kazantsev2018-05-031-58/+17
| | | | | | | | | | | This patch was temporarily reverted because it has exposed bug 37229 on PowerPC platform. The bug is unrelated to the patch and was just a general bug in the optimization done for PowerPC platform only. The bug was fixed by the patch rL331410. This patch returns the disabled commit since the bug was fixed. llvm-svn: 331427
* IWYU for llvm-config.h in llvm, additions.
  Nico Weber, 2018-04-30 (1 file, +1/-0)

  See r331124 for how I made a list of files missing the include. I
  then ran this Python script:

    for f in open('filelist.txt'):
        f = f.strip()
        fl = open(f).readlines()
        found = False
        for i in xrange(len(fl)):
            p = '#include "llvm/'
            if not fl[i].startswith(p):
                continue
            if fl[i][len(p):] > 'Config':
                fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
                found = True
                break
        if not found:
            print 'not found', f
        else:
            open(f, 'w').write(''.join(fl))

  and then looked through everything with
  `svn diff | diffstat -l | xargs -n 1000 gvim -p` and tried to fix
  include ordering and whatnot.

  No intended behavior change.

  llvm-svn: 331184
* [SCEV] Touch the unused stats variables for product build.
  Serguei Katkov, 2018-04-28 (1 file, +3/-0)

  This fixes compiler warnings that are treated as errors in the
  product build.

  llvm-svn: 331103
* [SCEV] Reduce the number of invocations of the non-trivial getExact function
  Serguei Katkov, 2018-04-28 (1 file, +5/-2)

  The invocation of getExact in ScalarEvolution::getBackedgeTakenInfo
  is used only for a statistic and for an assert. Even when statistics
  are disabled, the invocation of getExact itself is not eliminated,
  because it may have side effects such as the creation of new SCEVs.
  So invoke it only when we collect statistics or execute asserts.

  Reviewers: mkazantsev, sanjoy, javed.absar
  Reviewed By: javed.absar
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D46178
  llvm-svn: 331099
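  A hedged sketch of the guard (identifiers illustrative; the precise
  condition in the patch may differ):

    #if LLVM_ENABLE_STATS || !defined(NDEBUG)
      // getExact may create new SCEVs as a side effect, so the call is
      // compiled in only when the statistic or the assert that
      // consumes its result exists.
      if (Result.hasAnyInfo()) {
        const SCEV *Exact = Result.getExact(this);
        if (!isa<SCEVCouldNotCompute>(Exact))
          ++NumTripCountsComputed;
      }
    #endif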
* [SCEV] Add trivial case handling for umin utilities. NFC.
  Serguei Katkov, 2018-04-27 (1 file, +11/-2)

  Reviewers: sanjoy, mkazantsev
  Reviewed By: mkazantsev
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D46175
  llvm-svn: 331022
* [SCEV] Introduce bulk umin creation utilities
  Serguei Katkov, 2018-04-27 (1 file, +45/-19)

  Add a new umin creation method which accepts a list of operands. SCEV
  does not represent umin, which is required in getExact, so it
  expresses umin through umax and not. As a result, the transformation
  of a tree of max into a max with several operands did not work. We
  now use the newly introduced method to create umin from several
  operands in one step.

  Reviewers: sanjoy, mkazantsev
  Reviewed By: sanjoy
  Subscribers: javed.absar, llvm-commits
  Differential Revision: https://reviews.llvm.org/D46047
  llvm-svn: 331015
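  The identity at work: bitwise-not (~x = -1 - x) reverses the unsigned
  order, so umin can be expressed through the umax that SCEV does have:
  umin(a, b, ...) == not(umax(not(a), not(b), ...)). A sketch of the
  n-ary helper under that identity (close to, but not verbatim, the
  patch):

    const SCEV *
    ScalarEvolution::getUMinExpr(SmallVectorImpl<const SCEV *> &Ops) {
      assert(!Ops.empty() && "At least one operand must be!");
      if (Ops.size() == 1) // Trivial case.
        return Ops[0];
      // Complement every operand, take the umax, complement the result.
      SmallVector<const SCEV *, 2> NotOps;
      for (const SCEV *Op : Ops)
        NotOps.push_back(getNotSCEV(Op));
      return getNotSCEV(getUMaxExpr(NotOps));
    }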
* Revert "[SCEV] Make computeExitLimit more simple and more powerful"Max Kazantsev2018-04-261-17/+58
| | | | | | | | | | | This reverts commit 023c8be90980e0180766196cba86f81608b35d38. This patch triggers miscompile of zlib on PowerPC platform. Most likely it is caused by some pre-backend PPC-specific pass, but we don't clearly know the reason yet. So we temporally revert this patch with intention to return it once the problem is resolved. See bug 37229 for details. llvm-svn: 330893
* [LoopSimplify] Fix incorrect SCEV invalidation
  Max Kazantsev, 2018-04-23 (1 file, +6/-0)

  In the function `simplifyOneLoop` we optimistically assume that
  changes in the inner loop only affect this very loop and have no
  impact on its parents. In fact, after rL329047 we can now calculate
  exit counts for outer loops which may depend on inner loops. Thus, we
  need to invalidate all parents when we do something to a loop.

  There is evidence of incorrect behavior of `simplifyOneLoop`: when we
  insert a `SE->verify()` check at the end of this function, it fails
  on a bunch of existing tests, in particular:

    LLVM :: Transforms/LoopUnroll/peel-loop-not-forced.ll
    LLVM :: Transforms/LoopUnroll/peel-loop-pgo.ll
    LLVM :: Transforms/LoopUnroll/peel-loop.ll
    LLVM :: Transforms/LoopUnroll/peel-loop2.ll

  Note that we have previously fixed issues of this variety, see
  rL328483. This patch makes this function invalidate the outermost
  loop properly.

  Differential Revision: https://reviews.llvm.org/D45937
  Reviewed By: chandlerc
  llvm-svn: 330576
* [NFC] Loosen restriction on preheader to fix buildbot
  Max Kazantsev, 2018-04-06 (1 file, +5/-5)

  llvm-svn: 329379
* [SCEV] Prove implications for SCEVUnknown Phis
  Max Kazantsev, 2018-04-04 (1 file, +118/-0)

  This patch teaches SCEV how to prove implications for SCEVUnknown
  nodes that are Phis. If we need to prove `Pred` for `LHS, RHS`, and
  `LHS` is a Phi with possible incoming values `L1, L2, ..., LN`, then
  proving `Pred` for `(L1, RHS), (L2, RHS), ..., (LN, RHS)` also proves
  it for `(LHS, RHS)`. If both `LHS` and `RHS` are Phis from the same
  block, it is sufficient to prove the predicate for values that come
  from the same predecessor block.

  The typical case this handles is proving that `Phi(Len, Len - 1) >= 0`
  given that `Len > 0`.

  The new logic was added to `isImpliedViaOperations` and only uses it
  and non-recursive reasoning to prove the facts we need, so it should
  not hurt compile time a lot.

  Differential Revision: https://reviews.llvm.org/D44001
  Reviewed By: anna
  llvm-svn: 329150
* [SCEV] Fix PR36974.
  Serguei Katkov, 2018-04-03 (1 file, +6/-5)

  The patch changes the usage of dominates to properlyDominates to
  satisfy the condition !(a < a) while using std::max. It is actually
  NFC because a set data structure is used to keep the loops, so no two
  identical loops can be in the collection and there is no real
  difference between dominates and properlyDominates in this particular
  case. However, that might change, so it is better to fix it.

  llvm-svn: 329051
* [SCEV] Make computeExitLimit more simple and more powerful
  Max Kazantsev, 2018-04-03 (1 file, +17/-58)

  The current implementation of `computeExitLimit` has a big piece of
  code whose only purpose is to prove that after the execution of this
  block the latch will be executed. What it actually checks is a subset
  of the situations where the exiting block dominates the latch.

  This patch replaces all these checks for simple particular cases with
  a domination check over the loop's latch, which is the only necessary
  condition for taking the exiting block into consideration. This
  change allows us to calculate the exact taken count for simple loops
  like

    for (int i = 0; i < 100; i++) {
      if (cond) {...} else {...}
      if (i > 50)
        break;
      ...
    }

  Differential Revision: https://reviews.llvm.org/D44677
  Reviewed By: efriedma
  llvm-svn: 329047
* [Analysis] Change std::sort to llvm::sort in response to r327219
  Mandeep Singh Grang, 2018-04-01 (1 file, +1/-1)

  Summary: r327219 added wrappers to std::sort which randomly shuffle
  the container before sorting. This will help in uncovering
  non-determinism caused by the undefined sorting order of objects
  having the same key. To make use of that infrastructure we need to
  invoke llvm::sort instead of std::sort.

  Note: This patch is one of a series of patches to replace *all*
  std::sort with llvm::sort. Refer to D44363 for a list of all the
  required patches.

  Reviewers: sanjoy, dexonsmith, hfinkel, RKSimon
  Reviewed By: dexonsmith
  Subscribers: david2050, llvm-commits
  Differential Revision: https://reviews.llvm.org/D44944
  llvm-svn: 328925