path: root/llvm/test/Transforms/GVN
Commit message (author, date, files changed, lines -/+)
* [GVN] Don't perform scalar PRE on GEPs (Alexandros Lamprineas, 2018-12-06, 1 file, -13/+61)
  Partial Redundancy Elimination of GEPs prevents CodeGenPrepare from sinking the
  addressing mode computation of memory instructions back to its uses. The problem
  comes from the insertion of PHIs, which confuse CGP and make it bail. I've
  autogenerated the check lines of an existing test and added a store instruction to
  demonstrate the motivation behind this change. The store is now using the gep
  instead of a phi.
  Differential Revision: https://reviews.llvm.org/D55009
  llvm-svn: 348496
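  A minimal IR sketch (hypothetical, not taken from the updated test) of the kind of
  partially redundant GEP this change stops PRE from commoning; inserting a phi of
  GEPs here would hide the address computation from CodeGenPrepare:

    define void @sketch(i32* %base, i64 %i, i1 %c) {
    entry:
      br i1 %c, label %then, label %join
    then:
      %g0 = getelementptr inbounds i32, i32* %base, i64 %i
      store i32 1, i32* %g0
      br label %join
    join:
      ; partially redundant with %g0; scalar PRE would turn this into a phi of GEPs
      %g1 = getelementptr inbounds i32, i32* %base, i64 %i
      store i32 2, i32* %g1
      ret void
    }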
* [Local] Keep K's range if K does not move when combining metadata. (Florian Hahn, 2018-10-27, 1 file, -14/+13)
  As K has to dominate I, IIUC I's range metadata must be a subset of K's. After
  Eli's recent clarification to the LangRef, loading a value outside of the range is
  undefined behavior. Therefore, if I's range contains elements outside of K's range
  and we would load one such value, K would cause undefined behavior.
  In cases like hoisting/sinking, we still want the most generic range over all code
  paths to/from the hoist/sink point. As suggested in the patches related to D47339,
  I will refactor the handling of those scenarios and try to decouple it from this
  function as a follow-up, once we have switched to a similar handling of metadata in
  most of combineMetadata.
  I updated some tests, mostly ones checking the merging of metadata, to keep the
  metadata of the dominating load. The most interesting one is probably test8 in
  test/Transforms/JumpThreading/thread-loads.ll. It contained a comment about the
  alias metadata preventing us from eliminating the branch, but it seems like the
  actual problem currently is that we merge the ranges of both loads and cannot
  eliminate the icmp afterwards. With this patch, we manage to eliminate the icmp,
  as the range of the first load excludes 8.
  Reviewers: efriedma, nlopes, davide
  Reviewed By: efriedma
  Differential Revision: https://reviews.llvm.org/D51629
  llvm-svn: 345456
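  A hypothetical illustration of the rule (function and range values invented for
  this sketch): %k dominates %i, and %i's range is a subset of %k's, so when the
  loads are combined the result keeps %k's wider range rather than narrowing it:

    define i32 @sketch(i32* %p) {
      %k = load i32, i32* %p, !range !0   ; dominating load, range [0, 10)
      %i = load i32, i32* %p, !range !1   ; dominated load, range [0, 4), subset of !0
      %s = add i32 %k, %i
      ret i32 %s
    }
    !0 = !{i32 0, i32 10}
    !1 = !{i32 0, i32 4}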
* Make YAML quote forward slashes. (Zachary Turner, 2018-10-12, 1 file, -3/+3)
  If you have the string /usr/bin, prior to this patch it would not be quoted by our
  YAML serializer, but a string like C:\src would be, due to the presence of a
  backslash. This makes the quoting rules of basically every single file path
  different depending on the path syntax (posix vs. Windows). While the YAML
  specification does not technically require forward slashes to be quoted, the
  inconsistent handling of paths makes it difficult to portably write FileCheck
  lines that will work with either kind of path.
  Differential Revision: https://reviews.llvm.org/D53169
  llvm-svn: 344359
* Revert "Make YAML quote forward slashes."Zachary Turner2018-10-121-3/+3
| | | | | | | | | | This reverts commit b86c16ad8c97dadc1f529da72a5bb74e9eaed344. This is being reverted because I forgot to write a useful commit message, so I'm going to resubmit it with an actual commit message. llvm-svn: 344358
* Make YAML quote forward slashes. (Zachary Turner, 2018-10-12, 1 file, -3/+3)
  llvm-svn: 344357
* [GVN] Invalidate cached info for values replaced by equality propagation (John Brawn, 2018-09-10, 1 file, -0/+72)
  When GVN propagates an equality by replacing one value with another, it also needs
  to invalidate the cached information for the value being replaced.
  Differential Revision: https://reviews.llvm.org/D51218
  llvm-svn: 341820
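  A small, invented sketch of the equality propagation in question: inside the true
  branch GVN knows %x == 0 and rewrites uses of %x to the constant 0, so any
  information cached for the replaced value must be dropped:

    define i32 @sketch(i32 %x, i32* %p) {
    entry:
      %cmp = icmp eq i32 %x, 0
      br i1 %cmp, label %then, label %exit
    then:
      ; GVN propagates %x == 0 here, replacing this use of %x with 0
      %idx = getelementptr inbounds i32, i32* %p, i32 %x
      %v = load i32, i32* %idx
      ret i32 %v
    exit:
      ret i32 0
    }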
* [GVN] Invalidate cached info for phis when setting dead predecessors to undef (John Brawn, 2018-08-23, 1 file, -0/+38)
  When GVN sets the incoming value for a phi to undef because the incoming block is
  unreachable, it needs to also invalidate the cached info for that phi in
  MemoryDependenceAnalysis, otherwise later queries will return stale information.
  Differential Revision: https://reviews.llvm.org/D51099
  llvm-svn: 340529
* [GVN] Assign new value number to calls reading memory, if there is no MemDep info (Florian Hahn, 2018-08-21, 1 file, -0/+29)
  Currently we assign the same value number to two calls reading the same memory
  location if we do not have MemoryDependence info. Without MemDep info we cannot
  guarantee that there is no store between the two calls, so we have to assign a new
  number to the second call.
  The patch also adds a new option EnableMemDep to enable/disable running
  MemoryDependenceAnalysis, and renames NoLoads to NoMemDepAnalysis to be more
  explicit about what it does. As it also impacts calls that read memory, NoLoads is
  a bit confusing.
  Reviewers: efriedma, sebpop, john.brawn, wmi
  Reviewed By: efriedma
  Differential Revision: https://reviews.llvm.org/D50893
  llvm-svn: 340319
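  An invented sketch of why the second call needs its own value number when
  MemoryDependenceAnalysis is disabled: without MemDep, GVN cannot prove that the
  store does not clobber the memory the calls read, so treating the two calls as
  equal would be unsound:

    declare i32 @reader(i32*) readonly

    define i32 @sketch(i32* %p) {
      %a = call i32 @reader(i32* %p)
      store i32 42, i32* %p
      ; may observe the store; must not share %a's value number without MemDep info
      %b = call i32 @reader(i32* %p)
      %r = add i32 %a, %b
      ret i32 %r
    }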
* [MemDep] Use PhiValuesAnalysis to improve alias analysis results (John Brawn, 2018-07-31, 1 file, -2/+4)
  This is being done in order to make GVN able to better optimize certain inputs.
  MemDep doesn't use PhiValues directly, but does need to notify it when things get
  invalidated.
  Differential Revision: https://reviews.llvm.org/D48489
  llvm-svn: 338384
* [GVN] Don't use the eliminated load as an available value in phi construction (John Brawn, 2018-07-23, 2 files, -0/+121)
  In ConstructSSAForLoadSet, if an available value is actually the load that we're
  doing SSA construction to eliminate, then we can omit it, as SSAUpdater will add in
  the value for the phi that will be replacing it anyway. This can result in simpler
  IR, which can allow further optimisation.
  Differential Revision: https://reviews.llvm.org/D44160
  llvm-svn: 337686
* llvm: Add support for "-fno-delete-null-pointer-checks" (Manoj Gupta, 2018-07-09, 1 file, -0/+82)
  Summary:
  Support for this option is needed for building the Linux kernel. This is a very
  frequently requested feature by kernel developers. More details:
  https://lkml.org/lkml/2018/4/4/601
  GCC option description for -fdelete-null-pointer-checks: assume that programs
  cannot safely dereference null pointers, and that no code or data element resides
  at address zero. -fno-delete-null-pointer-checks is the inverse of this, implying
  that null pointer dereferencing is not undefined.
  This feature is implemented in this CL as the function attribute
  "null-pointer-is-valid"="true" in IR (under review at D47894). The CL updates
  several passes that assumed null pointer dereferencing is undefined so that they do
  not optimize when the "null-pointer-is-valid"="true" attribute is present.
  Reviewers: t.p.northover, efriedma, jyknight, chandlerc, rnk, srhines, void, george.burgess.iv
  Reviewed By: efriedma, george.burgess.iv
  Subscribers: eraman, haicheng, george.burgess.iv, drinkcat, theraven, reames, sanjoy, xbolva00, llvm-commits
  Differential Revision: https://reviews.llvm.org/D47895
  llvm-svn: 336613
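  A minimal, invented example using the attribute spelling quoted above: with the
  attribute present, the null check after the dereference may no longer be folded
  away on the grounds that a null dereference would have been undefined:

    define i32 @sketch(i32* %p) "null-pointer-is-valid"="true" {
      %v = load i32, i32* %p
      ; without the attribute, this compare could fold to false because a null
      ; dereference above would be undefined; with it, the check must stay
      %isnull = icmp eq i32* %p, null
      br i1 %isnull, label %was_null, label %ok
    was_null:
      ret i32 0
    ok:
      ret i32 %v
    }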
* Implement strip.invariant.group (Piotr Padlewski, 2018-07-02, 1 file, -3/+15)
  Summary:
  This patch introduces a new intrinsic, strip.invariant.group, that was described in
  the RFC: Devirtualization v2.
  Reviewers: rsmith, hfinkel, nlopes, sanjoy, amharc, kuhar
  Subscribers: arsenm, nhaehnle, JDevlieghere, hiraditya, xbolva00, llvm-commits
  Differential Revision: https://reviews.llvm.org/D47103
  Co-authored-by: Krzysztof Pszeniczny <krzysztof.pszeniczny@gmail.com>
  llvm-svn: 336073
* [GVN] Avoid casting a vector of size less than 8 bits to i8 (Matthew Voss, 2018-06-21, 1 file, -0/+34)
  Summary:
  A reprise of D25849. This crash was found through fuzzing some time ago and was
  documented in PR28879. No check for load size has been added, due to the following
  tests:
    - Transforms/GVN/invariant.group.ll
    - Transforms/GVN/pr10820.ll
  These tests expect load sizes that are not a multiple of eight. Thanks to @davide
  for the original patch.
  Reviewers: nlopes, davide, RKSimon, reames, efriedma
  Reviewed By: efriedma
  Subscribers: davide, llvm-commits, Prazek
  Differential Revision: https://reviews.llvm.org/D48330
  llvm-svn: 335294
* [Debugify] Move debug value intrinsics closer to their operand defs (Vedant Kumar, 2018-06-06, 2 files, -3/+3)
  Before this patch, debugify would insert debug value intrinsics before the
  terminating instruction in a block. This had the advantage of being simple, but was
  a bit too simple/unrealistic.
  This patch teaches debugify to insert debug values immediately after their operand
  defs. This enables better testing of the compiler.
  For example, with this patch, `opt -debugify-each` is able to identify a vectorizer
  DI-invariance bug fixed in llvm.org/PR32761. In this bug, the vectorizer produced
  different output with/without debug info present. Reverting Davide's bugfix
  locally, I see:
    $ ~/scripts/opt-check-dbg-invar.sh ./bin/opt \
        .../SLPVectorizer/AArch64/spillcost-di.ll -slp-vectorizer
    Comparing: -slp-vectorizer .../SLPVectorizer/AArch64/spillcost-di.ll
      Baseline: /var/folders/j8/t4w0bp8j6x1g6fpghkcb4sjm0000gp/T/tmp.iYYeL1kf
      With DI : /var/folders/j8/t4w0bp8j6x1g6fpghkcb4sjm0000gp/T/tmp.sQtQSeet
    9,11c9,11
    <   %5 = getelementptr inbounds %0, %0* %2, i64 %0, i32 1
    <   %6 = bitcast i64* %4 to <2 x i64>*
    <   %7 = load <2 x i64>, <2 x i64>* %6, align 8, !tbaa !0
    ---
    >   %5 = load i64, i64* %4, align 8, !tbaa !0
    >   %6 = getelementptr inbounds %0, %0* %2, i64 %0, i32 1
    >   %7 = load i64, i64* %6, align 8, !tbaa !5
    12a13
    >   store i64 %5, i64* %8, align 8, !tbaa !0
    14,15c15
    <   %10 = bitcast i64* %8 to <2 x i64>*
    <   store <2 x i64> %7, <2 x i64>* %10, align 8, !tbaa !0
    ---
    >   store i64 %7, i64* %9, align 8, !tbaa !5
    :: Found a test case ^
  Running this over the *.ll files in tree, I found four additional examples which
  compile differently with/without DI present. I plan on filing bugs for these.
  llvm-svn: 334118
* Disallow non-empty metadata for invariant.group (Piotr Padlewski, 2018-05-18, 1 file, -23/+15)
  Summary:
  This feature is not needed, but it might be useful in the future to use metadata to
  mark which functions should support it (and strip it when not).
  Reviewers: rsmith, sanjoy, amharc, kuhar
  Subscribers: hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45419
  llvm-svn: 332787
* [DebugInfo] Add DILabel metadata and intrinsic llvm.dbg.label. (Shiva Chen, 2018-05-09, 5 files, -5/+5)
  In order to set breakpoints on labels and list source code around labels, we need
  to collect debug information for labels, i.e., the label name, the function the
  label belongs to, the line number in the file, and the address where the label is
  located. To keep this information in LLVM IR and to allow the backend to generate
  debug information correctly, we create a new kind of metadata for labels, DILabel.
  The format of DILabel is
    !DILabel(scope: !1, name: "foo", file: !2, line: 3)
  We hope to keep as much debug information as possible even when the code is
  optimized, so we create a new kind of intrinsic for label metadata to avoid the
  metadata being eliminated along with its basic block. The intrinsic will keep
  existing as long as we keep it from being optimized out. The format of the
  intrinsic is
    llvm.dbg.label(metadata !1)
  It has only one argument, the DILabel metadata. The intrinsic immediately follows
  the label, and the backend can get the label metadata through the intrinsic's
  parameter.
  We also create a DIBuilder API for labels to be used by the frontend. The frontend
  can use createLabel() to allocate DILabel objects, and insertLabel() to insert the
  llvm.dbg.label intrinsic in LLVM IR.
  Differential Revision: https://reviews.llvm.org/D45024
  Patch by Hsiangkai Wang.
  llvm-svn: 331841
* Rename invariant.group.barrier to launder.invariant.group (Piotr Padlewski, 2018-05-03, 1 file, -4/+4)
  Summary:
  This is one of the initial commits of the "RFC: Devirtualization v2" proposal:
  https://docs.google.com/document/d/16GVtCpzK8sIHNc2qZz6RN8amICNBtvjWUod2SujZVEo/edit?usp=sharing
  Reviewers: rsmith, amharc, kuhar, sanjoy
  Subscribers: arsenm, nhaehnle, javed.absar, hiraditya, llvm-commits
  Differential Revision: https://reviews.llvm.org/D45111
  llvm-svn: 331448
* Mark MergedLoadStoreMotion as not preserving MemDep results (Bjorn Steinbrink, 2018-02-23, 1 file, -0/+22)
  Summary:
  MemDep caches results that signify that a dependence is non-local, and there is
  currently no way to invalidate such cache entries. Unfortunately, when MLSM sinks a
  store, a non-local dependence can become a local one, and then MemDep gives wrong
  answers. The easiest way out here is to just say that MLSM does indeed not preserve
  MemDep results.
  Reviewers: davide, Gerolf
  Subscribers: llvm-commits
  Differential Revision: https://reviews.llvm.org/D43177
  llvm-svn: 325880
* [GVN] Partially revert debug info salvage change (r325063) (Vedant Kumar, 2018-02-16, 1 file, -1/+1)
  In r325063, we salvaged debug values from dying instructions in GVN::processBlock()
  and GVN::performScalarPRE(). The change in performScalarPRE(), while correct, is
  unhelpful. It introduced a call to salvageDebugInfo() which was immediately
  followed by a RAUW, meaning it prevented the RAUW from efficiently updating
  dbg.value intrinsics. This commit reverts the mistake and tightens up the affected
  test case.
  llvm-svn: 325308
* [GVN] Salvage debug info from dead insts (Vedant Kumar, 2018-02-13, 2 files, -2/+11)
  This preserves an additional 581 unique source variables in a stage2 build of clang
  (according to `llvm-dwarfdump --statistics`). It increases the size of the
  .debug_loc section by 0.1% (or 87139 bytes).
  Differential Revision: https://reviews.llvm.org/D43255
  llvm-svn: 325063
* [NFC] fix trivial typos in comments (Hiroshi Inoue, 2018-01-24, 1 file, -1/+1)
  "the the" -> "the"
  llvm-svn: 323302
* Remove alignment argument from memcpy/memmove/memset in favour of alignment attributes (Step 1) (Daniel Neilson, 2018-01-19, 3 files, -16/+16)
  Summary:
  This is a resurrection of work first proposed and discussed in Aug 2015:
  http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
  and initially landed (but then backed out) in Nov 2015:
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
  The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument
  which is required to be a constant integer. It represents the alignment of the dest
  (and source), and so must be the minimum of the actual alignment of the two.
  This change is the first in a series that allows source and dest to each have their
  own alignments by using the alignment attribute on their arguments. In this change
  we:
    1) Remove the alignment argument.
    2) Add alignment attributes to the source & dest arguments. We, temporarily,
       require that the alignments for source & dest be equal.
  For example, code which used to read:
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)
  will now read
    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)
  Downstream users may have to update their lit tests that check for
  @llvm.memcpy/memmove/memset call/declaration patterns. The following extended sed
  script may help with updating the majority of your tests, but it does not catch all
  possible patterns so some manual checking and updating will be required.
    s~declare void @llvm\.mem(set|cpy|move)\.p([^(]*)\((.*), i32, i1\)~declare void @llvm.mem\1.p\2(\3, i1)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* \3, i8 \4, i8 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* \3, i8 \4, i16 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* \3, i8 \4, i32 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* \3, i8 \4, i64 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* \3, i8 \4, i128 \5, i1 \6)~g
    s~call void @llvm\.memset\.p([^(]*)i8\(i8([^*]*)\* (.*), i8 (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i8(i8\2* align \6 \3, i8 \4, i8 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i16\(i8([^*]*)\* (.*), i8 (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i16(i8\2* align \6 \3, i8 \4, i16 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i32\(i8([^*]*)\* (.*), i8 (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i32(i8\2* align \6 \3, i8 \4, i32 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i64\(i8([^*]*)\* (.*), i8 (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i64(i8\2* align \6 \3, i8 \4, i64 \5, i1 \7)~g
    s~call void @llvm\.memset\.p([^(]*)i128\(i8([^*]*)\* (.*), i8 (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.memset.p\1i128(i8\2* align \6 \3, i8 \4, i128 \5, i1 \7)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* \4, i8\5* \6, i8 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* \4, i8\5* \6, i16 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* \4, i8\5* \6, i32 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* \4, i8\5* \6, i64 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 [01], i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* \4, i8\5* \6, i128 \7, i1 \8)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i8\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i8 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i8(i8\3* align \8 \4, i8\5* align \8 \6, i8 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i16\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i16 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i16(i8\3* align \8 \4, i8\5* align \8 \6, i16 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i32\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i32 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i32(i8\3* align \8 \4, i8\5* align \8 \6, i32 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i64\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i64 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i64(i8\3* align \8 \4, i8\5* align \8 \6, i64 \7, i1 \9)~g
    s~call void @llvm\.mem(cpy|move)\.p([^(]*)i128\(i8([^*]*)\* (.*), i8([^*]*)\* (.*), i128 (.*), i32 ([0-9]*), i1 ([^)]*)\)~call void @llvm.mem\1.p\2i128(i8\3* align \8 \4, i8\5* align \8 \6, i128 \7, i1 \9)~g
  The remaining changes in the series will:
    Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with
            differing source and dest alignments.
    Step 3) Update Clang to use the new IRBuilder API.
    Step 4) Update Polly to use the new IRBuilder API.
    Step 5) Update LLVM passes that create memcpy/memmove calls to use the new
            IRBuilder API, and those that use MemIntrinsicInst::[get|set]Alignment()
            to use getDestAlignment() and getSourceAlignment() instead.
    Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
            MemIntrinsicInst::[get|set]Alignment() methods.
  Reviewers: pete, hfinkel, lhames, reames, bollu
  Reviewed By: reames
  Subscribers: niosHD, reames, jholewinski, qcolombet, jfb, sanjoy, arsenm, dschuff, dylanmckay, mehdi_amini, sdardis, nemanjai, david2050, nhaehnle, javed.absar, sbc100, jgravelle-google, eraman, aheejin, kbarton, JDevlieghere, asb, rbar, johnrusso, simoncook, jordy.potman.lists, apazos, sabuasal, llvm-commits
  Differential Revision: https://reviews.llvm.org/D41675
  llvm-svn: 322965
* [PRE] Add a bunch of test cases for LICM-like PRE patterns (Philip Reames, 2018-01-03, 1 file, -0/+168)
  These were inspired by a very old review I'm about to abandon
  (https://reviews.llvm.org/D7061). Several of the test cases from that worked
  without modification, and expanding test coverage of such cases is always
  worthwhile.
  llvm-svn: 321764
* [Analysis] Generate more precise TBAA tags when one access encloses the other (Ivan A. Kosarev, 2017-12-18, 1 file, -21/+40)
  There are cases when two tags with different base types denote accesses to the same
  direct or indirect member of a structure type. Currently, merging of such tags
  results in a tag that represents an access to an object that has the type of that
  member. This patch changes this so that if one of the accesses encloses the other,
  then the generic tag is the one of the enclosed access.
  Differential Revision: https://reviews.llvm.org/D39557
  llvm-svn: 321019
* Hardware-assisted AddressSanitizer (llvm part). (Evgeniy Stepanov, 2017-12-09, 1 file, -0/+27)
  Summary:
  This is LLVM instrumentation for the new HWASan tool. It is basically a stripped
  down copy of ASan at this point, w/o stack or global support. Instrumentation adds
  a global constructor + runtime callbacks for every load and store.
  HWASan comes with its own IR attribute. A brief design document can be found in
  clang/docs/HardwareAssistedAddressSanitizerDesign.rst (submitted earlier).
  Reviewers: kcc, pcc, alekseyshl
  Subscribers: srhines, mehdi_amini, mgorny, javed.absar, eraman, llvm-commits, hiraditya
  Differential Revision: https://reviews.llvm.org/D40932
  llvm-svn: 320217
* [GVN] Prevent ScalarPRE from hoisting across instructions that don't pass control flow to successors (Max Kazantsev, 2017-11-28, 1 file, -0/+125)
  This is to address a problem similar to those in D37460 for Scalar PRE. We should
  not PRE across an instruction that may not pass execution to its successor unless
  it is safe to speculatively execute it.
  Differential Revision: https://reviews.llvm.org/D38619
  llvm-svn: 319147
* Let llvm.invariant.group.barrier accept pointers to any address space (Yaxun Liu, 2017-11-16, 1 file, -4/+4)
  llvm.invariant.group.barrier may accept pointers to an arbitrary address space.
  This patch lets it accept pointers to i8 in any address space and return a pointer
  to i8 in the same address space.
  Differential Revision: https://reviews.llvm.org/D39973
  llvm-svn: 318413
* [GVN PRE] Patch the source for Phi node in PRE (Serguei Katkov, 2017-11-09, 2 files, -0/+88)
  We must patch all existing incoming values of the Phi node, otherwise it is
  possible that we can see poison where the program does not expect to see it. This
  is similar to what GVN does.
  The added test test/Transforms/GVN/PRE/pre-jt-add.ll shows an example of a wrong
  optimization done by jump threading because GVN PRE did not patch an existing
  incoming value.
  Reviewers: mkazantsev, wmi, dberlin, davide
  Reviewed By: dberlin
  Subscribers: efriedma, llvm-commits
  Differential Revision: https://reviews.llvm.org/D39637
  llvm-svn: 317768
* Add an @llvm.sideeffect intrinsic (Dan Gohman, 2017-11-08, 1 file, -0/+51)
  This patch implements Chandler's idea [0] for supporting languages that require
  support for infinite loops with side effects, such as Rust, providing part of a
  solution to bug 965 [1].
  Specifically, it adds an `llvm.sideeffect()` intrinsic, which has no actual effect,
  but which appears to optimization passes to have obscure side effects, such that
  they don't optimize away loops containing it. It also teaches several optimization
  passes to ignore this intrinsic, so that it doesn't significantly impact
  optimization in most cases.
  As discussed on llvm-dev [2], this patch is the first of two major parts. The
  second part, to change LLVM's semantics to have defined behavior on infinite loops
  by default, with a function attribute for opting into potential-undefined-behavior,
  will be implemented and posted for review in a separate patch.
  [0] http://lists.llvm.org/pipermail/llvm-dev/2015-July/088103.html
  [1] https://bugs.llvm.org/show_bug.cgi?id=965
  [2] http://lists.llvm.org/pipermail/llvm-dev/2017-October/118632.html
  Differential Revision: https://reviews.llvm.org/D38336
  llvm-svn: 317729
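  A minimal sketch of the intrinsic in use, as described above (an intrinsic with no
  operands and no visible effect); the call keeps an otherwise-empty infinite loop
  from being optimized away:

    declare void @llvm.sideeffect()

    define void @spin() {
    entry:
      br label %loop
    loop:
      ; appears to have obscure side effects, so the loop is not deleted
      call void @llvm.sideeffect()
      br label %loop
    }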
* Reapply "[GVN] Prevent LoadPRE from hoisting across instructions that don't pass control flow to successors" (Max Kazantsev, 2017-10-31, 4 files, -0/+429)
  This patch fixes the miscompile that happens when PRE hoists loads across guards
  and other instructions that don't always pass control flow to their successors. PRE
  is now prohibited from hoisting across such instructions because there is no
  guarantee that the load standing after such an instruction is still valid before
  it. For example, a load from under a guard may be invalid before the guard in the
  following case:
    int array[LEN];
    ...
    guard(0 <= index && index < LEN);
    use(array[index]);
  Differential Revision: https://reviews.llvm.org/D37460
  llvm-svn: 316975
* Remove a test after revert of rL315440 (Max Kazantsev, 2017-10-17, 1 file, -146/+0)
  llvm-svn: 315977
* [NFC] Add test from bug 34937 (Max Kazantsev, 2017-10-17, 1 file, -0/+32)
  llvm-svn: 315976
* Revert 315440 on behalf of mkazantsev (Philip Reames, 2017-10-17, 1 file, -28/+0)
  This patch reverts rL315440 because of the bug described at
  https://bugs.llvm.org/show_bug.cgi?id=34937. The fix for the bug is under review as
  D38944, but is not yet ready. Given this is a regression, reverting until a fix is
  ready is called for.
  Max would have done the revert himself, but is having trouble doing a build of
  fresh LLVM for some reason. I did the build and test to ensure the revert worked as
  expected on his behalf.
  llvm-svn: 315974
* [GVN] Prevent LoadPRE from hoisting across instructions that don't pass control flow to successors (Max Kazantsev, 2017-10-11, 2 files, -0/+174)
  This patch fixes the miscompile that happens when PRE hoists loads across guards
  and other instructions that don't always pass control flow to their successors. PRE
  is now prohibited from hoisting across such instructions because there is no
  guarantee that the load standing after such an instruction is still valid before
  it. For example, a load from under a guard may be invalid before the guard in the
  following case:
    int array[LEN];
    ...
    guard(0 <= index && index < LEN);
    use(array[index]);
  Differential Revision: https://reviews.llvm.org/D37460
  llvm-svn: 315440
* [GVN] Don't replace constants with constants. (Davide Italiano, 2017-10-11, 1 file, -0/+13)
  This fixes PR34908. Patch by Alex Crichton!
  Differential Revision: https://reviews.llvm.org/D38765
  llvm-svn: 315429
* This patch fixes https://bugs.llvm.org/show_bug.cgi?id=32352 (Vivek Pandya, 2017-09-15, 1 file, -1/+1)
  It enables OptimizationRemarkEmitter::allowExtraAnalysis and
  MachineOptimizationRemarkEmitter::allowExtraAnalysis to return true not only for
  -fsave-optimization-record but also when specific remarks are requested with
  command line options.
  The diagnostic handler used to be a callback; this patch adds a DiagnosticHandler
  class instead. It has a virtual method to provide a custom diagnostic handler and
  methods to control which particular remarks are enabled. However, LLVM-C API users
  can still provide a callback function for the diagnostic handler.
  llvm-svn: 313390
* This reverts r313381 (Vivek Pandya, 2017-09-15, 1 file, -1/+1)
  llvm-svn: 313387
* This patch fixes https://bugs.llvm.org/show_bug.cgi?id=32352 (Vivek Pandya, 2017-09-15, 1 file, -1/+1)
  It enables OptimizationRemarkEmitter::allowExtraAnalysis and
  MachineOptimizationRemarkEmitter::allowExtraAnalysis to return true not only for
  -fsave-optimization-record but also when specific remarks are requested with
  command line options.
  The diagnostic handler used to be a callback; this patch adds a DiagnosticHandler
  class instead. It has a virtual method to provide a custom diagnostic handler and
  methods to control which particular remarks are enabled. However, LLVM-C API users
  can still provide a callback function for the diagnostic handler.
  llvm-svn: 313382
* Split opt-remark YAML and opt output testing on this test (Adam Nemet, 2017-09-05, 1 file, -2/+5)
  This prepares for https://reviews.llvm.org/D33514.
  llvm-svn: 312544
* Keep Optimization Remark Yaml in NewPM (Sam Elliott, 2017-08-20, 1 file, -0/+3)
  Summary:
  The New Pass Manager infrastructure was forgetting to keep around the optimization
  remark yaml file that the compiler might have been producing. This meant setting
  the option to '-' for stdout worked, but setting it to a filename didn't give file
  output (presumably it was deleted because compilation didn't explicitly keep it).
  This change just ensures that the file is kept if compilation succeeds.
  So far I have updated one of the optimization remark output tests to add a version
  with the new pass manager. It is my intention for this patch to also include
  changes to all tests that use `-opt-remark-output=`, but I wanted to get the code
  patch ready for review while I was making all those changes.
  Fixes https://bugs.llvm.org/show_bug.cgi?id=33951
  Reviewers: anemet, chandlerc
  Reviewed By: anemet, chandlerc
  Subscribers: javed.absar, chandlerc, fhahn, llvm-commits
  Differential Revision: https://reviews.llvm.org/D36906
  llvm-svn: 311271
* [GVN] Remove stale entries in phitranslate cache when new phi is generated for PRE (Wei Mi, 2017-08-08, 1 file, -0/+45)
  When a new phi is generated for scalarpre of an expression, the phiTranslate cache
  will become stale: before PRE, the candidate expression must not be available in a
  predecessor block, and phitranslate will cache that information. After PRE, the
  expression becomes available in all predecessor blocks, so the related entries in
  the phiTranslate cache become stale. The patch simply removes the stale entries so
  phiTranslate can be recomputed next time.
  The stale entries in the phitranslate cache do not affect correctness, but they
  cause missed PRE opportunities for later instructions.
  Differential Revision: https://reviews.llvm.org/D36124
  llvm-svn: 310421
* [GVN] Recommit the patch "Add phi-translate support in scalarpre" (Wei Mi, 2017-07-28, 3 files, -4/+135)
  Recommit after working around the bug PR31652.
  Three bugs were fixed in previous recommits: the first one is to use CurrentBlock
  instead of PREInstr's Parent as the param of performScalarPREInsertion, because the
  Parent of a clone instruction may be uninitialized. The second one is to stop PRE
  when the edge from CurrentBlock to its predecessor is a backedge and an operand of
  CurInst is defined inside CurrentBlock; the same value defined inside the loop in
  the last iteration cannot be regarded as available. The third one is an
  out-of-bound array access in a flipped if guard.
  Right now scalarpre doesn't have phi-translate support, so it will miss some simple
  pre opportunities. In the following testcase, current scalarpre cannot recognize
  that the last "a * b" is fully redundant because a and b used by the last "a * b"
  expr are both defined by phis.
    long a[100], b[100], g1, g2, g3;
    __attribute__((pure)) long goo();
    void foo(long a, long b, long c, long d) {
      g1 = a * b;
      if (__builtin_expect(g2 > 3, 0)) {
        a = c;
        b = d;
        g2 = a * b;
      }
      g3 = a * b; // fully redundant.
    }
  The patch adds phi-translate support in scalarpre. This is only a temporary
  solution before the newpre based on newgvn is available.
  Differential Revision: https://reviews.llvm.org/D32252
  llvm-svn: 309397
* Fix DebugLoc propagation for unreachable LoadInst (Weiming Zhao, 2017-07-19, 2 files, -2/+81)
  Summary:
  Currently, when GVN creates a load and when InstCombine creates a new store for an
  unreachable Load, the DebugLoc info gets lost.
  Reviewers: dberlin, davide, aprantl
  Reviewed By: aprantl
  Subscribers: davide, llvm-commits
  Differential Revision: https://reviews.llvm.org/D34639
  llvm-svn: 308404
* Enhance synchscope representation (Konstantin Zhuravlyov, 2017-07-11, 1 file, -3/+3)
  OpenCL 2.0 introduces the notion of memory scopes in atomic operations to global
  and local memory. These scopes restrict how synchronization is achieved, which can
  result in improved performance.
  This change extends the existing notion of synchronization scopes in LLVM to
  support arbitrary scopes expressed as target-specific strings, in addition to the
  already defined scopes (single thread, system).
  The LLVM IR and MIR syntax for expressing synchronization scopes has changed to use
  *syncscope("<scope>")*, where <scope> can be "singlethread" (this replaces the
  *singlethread* keyword), or a target-specific name. As before, if the scope is not
  specified, it defaults to CrossThread/System scope.
  Implementation details:
    - Mapping from synchronization scope name/string to synchronization scope id is
      stored in the LLVM context;
    - CrossThread/System and SingleThread scopes are pre-defined to efficiently check
      for known scopes without comparing strings;
    - Synchronization scope names are stored in SYNC_SCOPE_NAMES_BLOCK in the
      bitcode.
  Differential Revision: https://reviews.llvm.org/D21723
  llvm-svn: 307722
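  A small, invented illustration of the new syntax described above; the "agent" scope
  is just an example of a target-specific name, while "singlethread" is one of the
  pre-defined scopes:

    define void @sketch(i32* %p) {
      ; pre-defined single-thread scope (formerly the singlethread keyword)
      fence syncscope("singlethread") seq_cst
      ; target-specific scope name, mapped to an id in the LLVM context
      store atomic i32 0, i32* %p syncscope("agent") seq_cst, align 4
      ret void
    }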
* Revert "[GVN] Recommit the patch "Add phi-translate support in scalarpre"."Benjamin Kramer2017-07-033-135/+4
| | | | | | | This reverts commit r306313. This breaks selfhost at -O3 and PR33652. Let me know if you need additional information on reproducing the issue. llvm-svn: 307021
* [GVN] Recommit the patch "Add phi-translate support in scalarpre". (Wei Mi, 2017-06-26, 3 files, -4/+135)
  The recommit fixes three bugs: the first one is to use CurrentBlock instead of
  PREInstr's Parent as the param of performScalarPREInsertion, because the Parent of
  a clone instruction may be uninitialized. The second one is to stop PRE when the
  edge from CurrentBlock to its predecessor is a backedge and an operand of CurInst
  is defined inside CurrentBlock; the same value defined inside the loop in the last
  iteration cannot be regarded as available. The third one is an out-of-bound array
  access in a flipped if guard.
  Right now scalarpre doesn't have phi-translate support, so it will miss some simple
  pre opportunities. In the following testcase, current scalarpre cannot recognize
  that the last "a * b" is fully redundant because a and b used by the last "a * b"
  expr are both defined by phis.
    long a[100], b[100], g1, g2, g3;
    __attribute__((pure)) long goo();
    void foo(long a, long b, long c, long d) {
      g1 = a * b;
      if (__builtin_expect(g2 > 3, 0)) {
        a = c;
        b = d;
        g2 = a * b;
      }
      g3 = a * b; // fully redundant.
    }
  The patch adds phi-translate support in scalarpre. This is only a temporary
  solution before the newpre based on newgvn is available.
  llvm-svn: 306313
* Revert rL305578. There is still some buildbot failure to be fixed. (Wei Mi, 2017-06-16, 3 files, -135/+4)
  llvm-svn: 305603
* [GVN] Recommit the patch "Add phi-translate support in scalarpre". (Wei Mi, 2017-06-16, 3 files, -4/+135)
  The recommit fixes two bugs: the first one is to use CurrentBlock instead of
  PREInstr's Parent as the param of performScalarPREInsertion, because the Parent of
  a clone instruction may be uninitialized. The second one is to stop PRE when the
  edge from CurrentBlock to its predecessor is a backedge and an operand of CurInst
  is defined inside CurrentBlock; the same value defined inside the loop in the last
  iteration cannot be regarded as available.
  Right now scalarpre doesn't have phi-translate support, so it will miss some simple
  pre opportunities. In the following testcase, current scalarpre cannot recognize
  that the last "a * b" is fully redundant because a and b used by the last "a * b"
  expr are both defined by phis.
    long a[100], b[100], g1, g2, g3;
    __attribute__((pure)) long goo();
    void foo(long a, long b, long c, long d) {
      g1 = a * b;
      if (__builtin_expect(g2 > 3, 0)) {
        a = c;
        b = d;
        g2 = a * b;
      }
      g3 = a * b; // fully redundant.
    }
  The patch adds phi-translate support in scalarpre. This is only a temporary
  solution before the newpre based on newgvn is available.
  Differential Revision: https://reviews.llvm.org/D32252
  llvm-svn: 305578
* [BasicAA] Add test case that goes with r305481. (Craig Topper, 2017-06-15, 1 file, -0/+53)
  Forgot to 'git add' the file.
  llvm-svn: 305483
* Revert rL304050. It may break sanitizer bootstrap. Revert it for now while investigating. (Wei Mi, 2017-05-31, 3 files, -109/+4)
  llvm-svn: 304350