path: root/llvm/lib/Transforms
Commit message | Author | Age | Files | Lines
...
* [InstCombine] extend rotate-left-by-constant canonicalization to funnel shift | Sanjay Patel | 2019-03-18 | 1 | -9/+10
    Follow-up to: rL356338
    Rotates are a special case of funnel shift where the 2 input operands are the same value, but that does not need to be a restriction for the canonicalization when the shift amount is a constant.
    llvm-svn: 356369
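    A sketch of the extended rewrite on distinct operands (illustrative i32 IR, not taken from the patch's tests):

        %f = call i32 @llvm.fshr.i32(i32 %x, i32 %y, i32 3)
          =>
        %f = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 29)   ; 32 - 3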
* [InstCombine] canonicalize rotate right by constant to rotate left | Sanjay Patel | 2019-03-17 | 1 | -7/+20
    This was noted as a backend problem:
    https://bugs.llvm.org/show_bug.cgi?id=41057
    ...and subsequently fixed for x86:
    rL356121
    But we should canonicalize these in IR for the benefit of all targets and improve IR analysis such as CSE.
    llvm-svn: 356338
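    For illustration (an assumed i32 example, not one from the commit), a rotate right by a constant becomes a rotate left by bitwidth minus that constant:

        %r = call i32 @llvm.fshr.i32(i32 %x, i32 %x, i32 3)    ; rotate right by 3
          =>
        %r = call i32 @llvm.fshl.i32(i32 %x, i32 %x, i32 29)   ; rotate left by 29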
* [SimplifyDemandedVec] Strengthen handling all undef lanes (particularly GEPs) | Philip Reames | 2019-03-15 | 3 | -5/+31
    A change of two parts:
    1) A generic enhancement for all callers of SDVE to exploit the fact that if all lanes are undef, the result is undef.
    2) A GEP specific piece to strengthen/fix the vector index undef element handling, and call into the generic infrastructure when visiting the GEP.
    The result is that we replace a vector gep with at least one undef in each lane with an undef. We can also do the same for vector intrinsics. Once the masked.load patch (D57372) has landed, I'll update to include call tests as well.
    Differential Revision: https://reviews.llvm.org/D57468
    llvm-svn: 356293
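    One illustrative case (my own reading of the description, not an example from the patch): a vector GEP whose index operand is entirely undef contributes an undef in every lane, so the whole GEP can be folded to undef of its result type:

        %p = getelementptr i32, i32* %base, <2 x i64> undef
          =>
        <2 x i32*> undef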
* [LLVM-C] Expose the "Add Discriminators" Pass To LLVM-C | Robert Widmann | 2019-03-15 | 1 | -0/+3
    Summary:
    Add bindings to create a wrapped "Add Discriminators" pass. Now that we have debug info support, this is a handy transform to have.
    Reviewers: whitequark, deadalnix
    Reviewed By: whitequark
    Subscribers: dblaikie, aprantl, hiraditya, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D58624
    llvm-svn: 356272
* [ThinLTO] Restructure AliasSummary to contain ValueInfo of Aliasee | Teresa Johnson | 2019-03-15 | 1 | -6/+1
    Summary:
    The AliasSummary previously contained the AliaseeGUID, which was only populated when reading the summary from bitcode. This patch changes it to instead hold the ValueInfo of the aliasee, and always populates it. This enables more efficient access to the ValueInfo (specifically in the recent patch r352438 which needed to perform an index hash table lookup using the aliasee GUID).
    As noted in the comments in AliasSummary, we no longer technically need to keep a pointer to the corresponding aliasee summary, since it could be obtained by walking the list of summaries on the ValueInfo looking for the summary in the same module. However, I am concerned that this would be inefficient when walking through the index during the thin link for various analyses. That can be reevaluated in the future.
    By always populating this new field, we can remove the guard and special handling for a 0 aliasee GUID when dumping the dot graph of the summary.
    An additional improvement in this patch is when reading the summaries from LLVM assembly we now set the AliaseeSummary field to the aliasee summary in that same module, which makes it consistent with the behavior when reading the summary from bitcode.
    Reviewers: pcc, mehdi_amini
    Subscribers: inglorion, eraman, steven_wu, dexonsmith, arphaman, llvm-commits
    Differential Revision: https://reviews.llvm.org/D57470
    llvm-svn: 356268
* [LSR] Check for signed overflow in NarrowSearchSpaceByDetectingSupersets. | Florian Hahn | 2019-03-15 | 1 | -1/+3
    We are adding a sign extended IR value to an int64_t, which can cause signed overflows, as in the attached test case, where we have a formula with BaseOffset = -1 and a constant with numeric_limits<int64_t>::min().
    If the addition would overflow, skip the simplification for this formula. Note that the target triple is required to trigger the failure.
    Reviewers: qcolombet, gilr, kparzysz, efriedma
    Reviewed By: efriedma
    Differential Revision: https://reviews.llvm.org/D59211
    llvm-svn: 356256
* AMDGPU: Remove intrinsic operand assert | Matt Arsenault | 2019-03-14 | 1 | -5/+1
    Before r355981, this was under LLVM_DEBUG. I don't think the assert is quite right, but this really should be a verifier check. Instcombine should not be asserting on this sort of thing.
    llvm-svn: 356219
* [InstCombine] canonicalize funnel shift constant shift amount to be modulo bitwidth | Sanjay Patel | 2019-03-14 | 1 | -2/+13
    The shift argument is defined to be modulo the bitwidth, so if that argument is a constant, we can always reduce the constant to its minimal form to allow better CSE and other follow-on transforms.
    We need to be careful to ignore constant expressions here, or we will likely infinite loop. I'm adding a general vector constant query for that case.
    Differential Revision: https://reviews.llvm.org/D59374
    llvm-svn: 356192
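    A minimal sketch of the canonicalization (i32 example assumed, not taken from the patch):

        %r = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 33)
          =>
        %r = call i32 @llvm.fshl.i32(i32 %x, i32 %y, i32 1)   ; 33 urem 32 == 1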
* Fix for buildbots | Sam Parker | 2019-03-14 | 1 | -11/+9
    Remove unused private field.
    llvm-svn: 356135
* [NFC][LSR] Cleanup Cost API | Sam Parker | 2019-03-14 | 1 | -63/+54
    Create members for Loop, ScalarEvolution, DominatorTree, TargetTransformInfo and Formula.
    Differential Revision: https://reviews.llvm.org/D58389
    llvm-svn: 356131
* IR: Add immarg attribute | Matt Arsenault | 2019-03-12 | 3 | -23/+11
    This indicates an intrinsic parameter is required to be a constant, and should not be replaced with a non-constant value.
    Add the attribute to all AMDGPU and generic intrinsics that comments indicate it should apply to. I scanned other target intrinsics, but I don't see any obvious comments indicating which arguments are intended to be only immediates.
    This breaks one questionable testcase for the autoupgrade. I'm unclear on whether the autoupgrade is supposed to really handle declarations which were never valid. The verifier fails because the attributes now refer to a parameter past the end of the argument list.
    llvm-svn: 355981
* [SROA] Fix a crash when trying to convert a memset to a non-integral pointer type | Philip Reames | 2019-03-12 | 1 | -6/+17
    The included test case currently crashes on tip of tree. Rather than adding a bailout, I chose to restructure the code so that the existing helper function could be used. Given that, the majority of the diff is NFC-ish, but the key difference is that canConvertValue returns false when only one side is a non-integral pointer.
    Thanks to Cherry Zhang for the test case.
    Differential Revision: https://reviews.llvm.org/D59000
    llvm-svn: 355962
* [SanitizerCoverage] Avoid splitting critical edges when destination is a basic block containing unreachable | Craig Topper | 2019-03-12 | 2 | -1/+5
    This patch adds a new option to SplitAllCriticalEdges and uses it to avoid splitting critical edges when the destination basic block ends with unreachable. Otherwise if we split the critical edge, sanitizer coverage will instrument the new block that gets inserted for the split. But since this block itself shouldn't be reachable this is pointless. These basic blocks will stick around and generate assembly, but they don't end in sane control flow and might get placed at the end of the function. This makes it look like one function has code that flows into the next function.
    This showed up while compiling the linux kernel with clang. The kernel has a tool called objtool that detected the code that appeared to flow from one function to the next. https://github.com/ClangBuiltLinux/linux/issues/351#issuecomment-461698884
    Differential Revision: https://reviews.llvm.org/D57982
    llvm-svn: 355947
* [format] \t => ' ' | Liang Zou | 2019-03-12 | 1 | -1/+1
    Summary:
    1. \t => ' '
    2. test commit access
    Reviewers: Higuoxing, liangdzou
    Reviewed By: Higuoxing, liangdzou
    Subscribers: kristina, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D59243
    llvm-svn: 355924
* [SimplifyLibCalls] Simplify optimizePuts | Fangrui Song | 2019-03-12 | 1 | -10/+6
    The code might intend to replace puts("") with putchar('\n') even if the return value is used. It failed because use_empty() was used to guard the whole block. While returning '\n' (putchar('\n')) is technically correct (puts is only required to return a nonnegative number on success), doing this looks weird and there is really little benefit to optimizing puts whose return value is used. So don't do that.
    llvm-svn: 355921
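    For reference, the rewrite that remains (applied only when the call's result is unused) amounts to roughly the following; the IR is an illustrative sketch, not taken from the pass:

        @empty = private unnamed_addr constant [1 x i8] zeroinitializer
        ...
        call i32 @puts(i8* getelementptr inbounds ([1 x i8], [1 x i8]* @empty, i64 0, i64 0))
          =>
        call i32 @putchar(i32 10)   ; '\n'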
* Revert rL355906: [SLP] Remove redundancy of performing operand reordering twice: once in buildTree() and later in vectorizeTree(). | Simon Pilgrim | 2019-03-12 | 1 | -169/+82
    This is a refactoring patch that removes the redundancy of performing operand reordering twice, once in buildTree() and later in vectorizeTree(). To achieve this we need to keep track of the operands within the TreeEntry struct while building the tree, and later in vectorizeTree() we are just accessing them from the TreeEntry in the right order.
    This patch is the first in a series of patches that will allow for better operand reordering across chains of instructions (e.g., a chain of ADDs), as presented here: https://www.youtube.com/watch?v=gIEn34LvyNo
    Patch by: @vporpo (Vasileios Porpodas)
    Differential Revision: https://reviews.llvm.org/D59059
    ........
    Reverted due to buildbot failures that I don't have time to track down.
    llvm-svn: 355913
* Try to fix SLPVectorizer BoUpSLP::BoEdgeInfo::dump visibility on non-debug builds | Simon Pilgrim | 2019-03-12 | 1 | -3/+1
    llvm-svn: 355912
* [SLP] Remove redundancy of performing operand reordering twice: once in buildTree() and later in vectorizeTree(). | Simon Pilgrim | 2019-03-12 | 1 | -82/+171
    This is a refactoring patch that removes the redundancy of performing operand reordering twice, once in buildTree() and later in vectorizeTree(). To achieve this we need to keep track of the operands within the TreeEntry struct while building the tree, and later in vectorizeTree() we are just accessing them from the TreeEntry in the right order.
    This patch is the first in a series of patches that will allow for better operand reordering across chains of instructions (e.g., a chain of ADDs), as presented here: https://www.youtube.com/watch?v=gIEn34LvyNo
    Patch by: @vporpo (Vasileios Porpodas)
    Differential Revision: https://reviews.llvm.org/D59059
    llvm-svn: 355906
* [SimplifyLibCalls] Fix comments about fputs, memchr, and s[n]printf. NFC | Fangrui Song | 2019-03-12 | 1 | -5/+7
    llvm-svn: 355905
* Very minor typo. NFC | Kristina Brooks | 2019-03-12 | 1 | -1/+1
    Typo `we we're` => `we were` in the pass EarlyCSE
    Patch by liangdzou (Liang ZOU)
    Differential Revision: https://reviews.llvm.org/D59241
    llvm-svn: 355895
* Reland "Relax constraints for reduction vectorization" | Sanjoy Das | 2019-03-12 | 3 | -27/+38
    Change from original commit: move test (that uses an X86 triple) into the X86 subdirectory.
    Original description:
    Gating vectorizing reductions on *all* fastmath flags seems unnecessary; `reassoc` should be sufficient.
    Reviewers: tvvikram, mkuper, kristof.beyls, sdesmalen, Ayal
    Reviewed By: sdesmalen
    Subscribers: dcaballe, huntergr, jmolloy, mcrosier, jlebar, bixia, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D57728
    llvm-svn: 355889
* Revert "Relax constraints for reduction vectorization" | Sanjoy Das | 2019-03-11 | 3 | -38/+27
    This reverts commit r355868. Breaks hexagon.
    llvm-svn: 355873
* Relax constraints for reduction vectorization | Sanjoy Das | 2019-03-11 | 3 | -27/+38
    Summary:
    Gating vectorizing reductions on *all* fastmath flags seems unnecessary; `reassoc` should be sufficient.
    Reviewers: tvvikram, mkuper, kristof.beyls, sdesmalen, Ayal
    Reviewed By: sdesmalen
    Subscribers: dcaballe, huntergr, jmolloy, mcrosier, jlebar, bixia, llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D57728
    llvm-svn: 355868
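    As an illustration of the relaxed gating (my own example, not one from the patch), a floating-point reduction whose updates carry only the reassoc flag can now be considered for vectorization:

        %sum.next = fadd reassoc float %sum, %val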
* Remove esan. | Nico Weber | 2019-03-11 | 3 | -894/+0
    It hasn't seen active development in years, and it hasn't reached a state where it was useful.
    Remove the code until someone is interested in working on it again.
    Differential Revision: https://reviews.llvm.org/D59133
    llvm-svn: 355862
* [coroutines][PR40979] Ignore unreachable uses across suspend points | Brian Gesiak | 2019-03-11 | 1 | -0/+2
    Summary:
    Depends on https://reviews.llvm.org/D59069.
    https://bugs.llvm.org/show_bug.cgi?id=40979 describes a bug in which the -coro-split pass would assert that a use was across a suspend point from a definition. Normally this would mean that a value would "spill" across a suspend point and thus need to be stored in the coroutine frame. However, in this case the use was unreachable, and so it would not be necessary to store the definition on the frame.
    To prevent the assert, simply remove unreachable basic blocks from a coroutine function before computing spills. This avoids the assert reported in PR40979.
    Reviewers: GorNishanov, tks2103
    Reviewed By: GorNishanov
    Subscribers: EricWF, jdoerfert, llvm-commits, lewissbaker
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D59068
    llvm-svn: 355852
* [Utils] Extract EliminateUnreachableBlocks (NFC) | Brian Gesiak | 2019-03-11 | 1 | -0/+22
    Summary:
    Extract the functionality of eliminating unreachable basic blocks within a function, previously encapsulated within the -unreachableblockelim pass, and make it available as a function within BlockUtils.h. No functional change intended other than making the logic reusable.
    Exposing this logic makes it easier to implement https://reviews.llvm.org/D59068, which fixes coroutines bug https://bugs.llvm.org/show_bug.cgi?id=40979.
    Reviewers: mkazantsev, wmi, davidxl, silvas, davide
    Reviewed By: davide
    Subscribers: llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D59069
    llvm-svn: 355846
* [SimplifyCFG] Retain debug info when threading jumps with critical edges | Jeremy Morse | 2019-03-11 | 1 | -1/+2
    Fixes bug 38023: https://bugs.llvm.org/show_bug.cgi?id=38023
    The SimplifyCFG pass will perform jump threading in some cases where doing so is trivial and would simplify the CFG. When folding a series of blocks with redundant conditional branches into an unconditional "critical edge" block, it does not keep the debug location associated with the previous conditional branch.
    This patch fixes the bug described by copying the debug info from the old conditional branch to the new unconditional branch instruction, and adds a regression test for the SimplifyCFG pass that covers this case.
    Patch by Stephen Tozer!
    Differential Revision: https://reviews.llvm.org/D59206
    llvm-svn: 355833
* [JumpThreading] Retain debug info when replacing branch instructions | Jeremy Morse | 2019-03-11 | 1 | -2/+5
    Fixes bug 37966: https://bugs.llvm.org/show_bug.cgi?id=37966
    The Jump Threading pass will replace certain conditional branch instructions with unconditional branches when it can prove that only one branch can occur. Prior to this patch, it would not carry the debug info from the old instruction to the new one.
    This patch fixes the bug described by copying the debug info from the conditional branch instruction to the new unconditional branch instruction, and adds a regression test for the Jump Threading pass that covers this case.
    Patch by Stephen Tozer!
    Differential Revision: https://reviews.llvm.org/D58963
    llvm-svn: 355822
* [SelectionDAG] Allow the user to specify a memeq function. | Clement Courbet | 2019-03-08 | 2 | -23/+49
    Summary:
    Right now, when we encounter a string equality check, e.g. `if (memcmp(a, b, s) == 0)`, we try to expand to a comparison if `s` is a small compile-time constant, and fall back on calling `memcmp()` else.
    This is sub-optimal because memcmp has to compute much more than equality.
    This patch replaces `memcmp(a, b, s) == 0` by `bcmp(a, b, s) == 0` on platforms that support `bcmp`.
    `bcmp` can be made much more efficient than `memcmp` because equality compare is trivially parallel while lexicographic ordering has a chain dependency.
    Subscribers: fedor.sergeev, jyknight, ckennelly, gchatelet, llvm-commits
    Differential Revision: https://reviews.llvm.org/D56593
    llvm-svn: 355672
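    Conceptually, the rewrite described above looks like this (a sketch under the assumption that the target provides bcmp; only the callee changes, the equality compare stays):

        %c  = call i32 @memcmp(i8* %a, i8* %b, i64 %n)
        %eq = icmp eq i32 %c, 0
          =>
        %c  = call i32 @bcmp(i8* %a, i8* %b, i64 %n)
        %eq = icmp eq i32 %c, 0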
* [LSR] Attempt to increase the accuracy of LSR's setup cost | David Green | 2019-03-07 | 1 | -6/+25
    In some loops, we end up generating loop induction variables that look like:
    {(-1 * (zext i16 (%i0 * %i1) to i32))<nsw>,+,1}
    As opposed to the simpler:
    {(zext i16 (%i0 * %i1) to i32),+,-1}
    i.e. we count up from -limit to 0, not the simpler counting down from limit to 0. This is because the scores, as LSR calculates them, are the same and the second is filtered in place of the first. We end up with a redundant SUB from 0 in the code.
    This patch tries to make the calculation of the setup cost a little more thorough, recursing into the scev members to better approximate the setup required. The cost function for comparing LSR costs is:
    return std::tie(C1.NumRegs, C1.AddRecCost, C1.NumIVMuls, C1.NumBaseAdds, C1.ScaleCost, C1.ImmCost, C1.SetupCost) <
           std::tie(C2.NumRegs, C2.AddRecCost, C2.NumIVMuls, C2.NumBaseAdds, C2.ScaleCost, C2.ImmCost, C2.SetupCost);
    So this will only alter results if none of the other variables turn out to be different.
    Differential Revision: https://reviews.llvm.org/D58770
    llvm-svn: 355597
* [BDCE] Optimize find+insert with early insert | Fangrui Song | 2019-03-07 | 1 | -5/+5
    llvm-svn: 355583
* [LoopRotate] fix crash encountered with callbr | Nick Desaulniers | 2019-03-06 | 1 | -3/+2
    Summary:
    While implementing inlining support for callbr (https://bugs.llvm.org/show_bug.cgi?id=40722), I hit a crash in Loop Rotation when trying to build the entire x86 Linux kernel (drivers/char/random.c). This is a small fix up to r353563.
    Test case is drivers/char/random.c (with callbr's inlined), then ran through creduce, then `opt -opt-bisect-limit=<limit>`, then bugpoint.
    Thanks to Craig Topper for immediately spotting the fix, and teaching me how to fish.
    Reviewers: craig.topper, jyknight
    Reviewed By: craig.topper
    Subscribers: hiraditya, llvm-commits, srhines
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D58929
    llvm-svn: 355564
* [InstCombine] Fold add nsw + sadd.with.overflow | Nikita Popov | 2019-03-06 | 2 | -10/+42
    Fold `add nsw` and `sadd.with.overflow` with constants if the addition does not overflow.
    Part of https://bugs.llvm.org/show_bug.cgi?id=38146.
    Patch by Dan Robertson.
    Differential Revision: https://reviews.llvm.org/D58881
    llvm-svn: 355530
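    A rough sketch of the fold with concrete constants (my own example; it assumes the combined constant addition is proven not to overflow):

        %a = add nsw i32 %x, 13
        %s = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 29)
          =>
        %s = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %x, i32 42)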
* [msan] Instrument x86 BMI intrinsics. | Evgeniy Stepanov | 2019-03-04 | 1 | -0/+31
    Summary:
    They simply shuffle bits. MSan needs to do the same with shadow bits, after making sure that the shuffle mask is fully initialized.
    Reviewers: pcc, vitalybuka
    Subscribers: hiraditya, #sanitizers, llvm-commits
    Tags: #sanitizers, #llvm
    Differential Revision: https://reviews.llvm.org/D58858
    llvm-svn: 355348
* [NFC] Fix PGO link error in shared libs build | Jordan Rupprecht | 2019-03-04 | 1 | -1/+8
    llvm-svn: 355346
* [ConstantHoisting] avoid hang/crash from unreachable blocks (PR40930) | Sanjay Patel | 2019-03-04 | 1 | -1/+6
    I'm not too familiar with this pass, so there might be a better solution, but this appears to fix the degenerate:
    PR40930
    PR40931
    PR40932
    PR40934
    ...without affecting any real-world code.
    As we've seen in several other passes, when we have unreachable blocks, they can contain semi-bogus IR and/or cause unexpected conditions. We would not typically expect these patterns to make it this far, but we have to guard against them anyway.
    llvm-svn: 355337
* [PGO] Context sensitive PGO (part 3) | Rong Xu | 2019-03-04 | 1 | -9/+34
    Part 3 of CSPGO changes (mostly related to PassManager).
    Differential Revision: https://reviews.llvm.org/D54175
    llvm-svn: 355330
* [InstCombine] Mark debug values as unavailable after DCE. | Davide Italiano | 2019-03-04 | 1 | -1/+2
    Fixes PR40838.
    llvm-svn: 355301
* [InstCombine] move add after smin/smax | Sanjay Patel | 2019-03-02 | 1 | -3/+16
    Follow-up to rL355221. This isn't specifically called for within PR14613, but we'll get there eventually if it's not already requested in some other bug report.
    https://rise4fun.com/Alive/5b0

    Name: smax
    Pre: WillNotOverflowSignedSub(C1,C0)
    %a = add nsw i8 %x, C0
    %cond = icmp sgt i8 %a, C1
    %r = select i1 %cond, i8 %a, i8 C1
      =>
    %c2 = icmp sgt i8 %x, C1-C0
    %u2 = select i1 %c2, i8 %x, i8 C1-C0
    %r = add nsw i8 %u2, C0

    Name: smin
    Pre: WillNotOverflowSignedSub(C1,C0)
    %a = add nsw i32 %x, C0
    %cond = icmp slt i32 %a, C1
    %r = select i1 %cond, i32 %a, i32 C1
      =>
    %c2 = icmp slt i32 %x, C1-C0
    %u2 = select i1 %c2, i32 %x, i32 C1-C0
    %r = add nsw i32 %u2, C0

    llvm-svn: 355272
* [InstCombine] Extend saturating idempotent atomicrmw transform to FP | Philip Reames | 2019-03-01 | 1 | -2/+10
    I'm assuming that the nan propagation logic for InstructionSimplify's handling of fadd and fsub is correct, and applying the same to atomicrmw.
    Differential Revision: https://reviews.llvm.org/D58836
    llvm-svn: 355222
* [InstCombine] move add after umin/umax | Sanjay Patel | 2019-03-01 | 1 | -0/+27
    In the motivating cases from PR14613: https://bugs.llvm.org/show_bug.cgi?id=14613 ...moving the add enables us to narrow the min/max which eliminates zext/trunc which enables significantly better vectorization. But that bug is still not completely fixed.
    https://rise4fun.com/Alive/5KQ

    Name: umax
    Pre: C1 u>= C0
    %a = add nuw i8 %x, C0
    %cond = icmp ugt i8 %a, C1
    %r = select i1 %cond, i8 %a, i8 C1
      =>
    %c2 = icmp ugt i8 %x, C1-C0
    %u2 = select i1 %c2, i8 %x, i8 C1-C0
    %r = add nuw i8 %u2, C0

    Name: umin
    Pre: C1 u>= C0
    %a = add nuw i32 %x, C0
    %cond = icmp ult i32 %a, C1
    %r = select i1 %cond, i32 %a, i32 C1
      =>
    %c2 = icmp ult i32 %x, C1-C0
    %u2 = select i1 %c2, i32 %x, i32 C1-C0
    %r = add nuw i32 %u2, C0

    llvm-svn: 355221
* [LICM] Infer proper alignment from loads during scalar promotion | Philip Reames | 2019-03-01 | 1 | -3/+23
    This patch fixes an issue where we would compute an unnecessarily small alignment during scalar promotion when no store is guaranteed to execute, but we've proven load speculation safety. Since speculating a load requires proving the existing alignment is valid at the new location (see Loads.cpp), we can use the alignment fact from the load.
    For non-atomics, this is a performance problem. For atomics, this is a correctness issue, though an *incredibly* rare one to see in practice. For atomics, we might not be able to lower an improperly aligned load or store (i.e. i32 align 1). If such an instruction makes it all the way to codegen, we *may* fail to codegen the operation, or we may simply generate a slow call to a library function. The part that makes this super hard to see in practice is that the memory location actually *is* well aligned, and instcombine knows that. So, to see a failure, you have to have a) hit the bug in LICM, b) somehow hit a depth limit in InstCombine/ValueTracking to avoid fixing the alignment, and c) then have generated an instruction which fails codegen rather than simply emitting a slow libcall. All around, pretty hard to hit.
    Differential Revision: https://reviews.llvm.org/D58809
    llvm-svn: 355217
* [InstCombine] Extend "idempotent" atomicrmw optimizations to floating point | Philip Reames | 2019-03-01 | 1 | -2/+17
    An idempotent atomicrmw is one that does not change memory in the process of execution. We have already added handling for the various integer operations; this patch extends the same handling to floating point operations which were recently added to IR.
    Note: At the moment, we canonicalize idempotent fsub to fadd when ordering requirements prevent us from using a load. As discussed in the review, I will be replacing this with canonicalizing both floating point ops to integer ops in the near future.
    Differential Revision: https://reviews.llvm.org/D58251
    llvm-svn: 355210
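    An illustrative instance (assumed example, not from the patch's tests): fadd with -0.0 is the identity operation, so the atomicrmw leaves memory unchanged and its result can be obtained with an atomic load when the ordering allows it:

        %old = atomicrmw fadd float* %p, float -0.000000e+00 monotonic
          =>
        %old = load atomic float, float* %p monotonic, align 4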
* Hide two unused debugging methods, NFCI. | Jonas Hahnfeld | 2019-03-01 | 1 | -0/+4
    GCC correctly moans that PlainCFGBuilder::isExternalDef(llvm::Value*) and StackSafetyDataFlowAnalysis::verifyFixedPoint() are defined but not used in Release builds. Hide them behind 'ifndef NDEBUG'.
    llvm-svn: 355205
* Try to fix NetBSD buildbot breakage introduced in D57463. | Manman Ren | 2019-03-01 | 1 | -0/+1
    By including the header file in the source.
    llvm-svn: 355202
* [ConstantHoisting] Call cleanup() in ConstantHoistingPass::runImpl to avoid dangling elements in ConstIntInfoVec for new PM | Fangrui Song | 2019-03-01 | 1 | -2/+2
    Summary:
    ConstIntInfoVec contains elements extracted from the previous function. In new PM, releaseMemory() is not called and the dangling elements can cause segfault in findConstantInsertionPoint.
    Rename releaseMemory() to cleanup() to deliver the idea that it is mandatory and call cleanup() in ConstantHoistingPass::runImpl to fix this.
    Reviewers: ormris, zzheng, dmgreen, wmi
    Reviewed By: ormris, wmi
    Subscribers: llvm-commits
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D58589
    llvm-svn: 355174
* [sancov] Instrument reachable blocks that end in unreachable | Reid Kleckner | 2019-02-28 | 1 | -3/+3
    Summary:
    These sorts of blocks often contain calls to noreturn functions, like longjmp, throw, or trap. If they don't end the program, they are "interesting" from the perspective of sanitizer coverage, so we should instrument them. This was discussed in https://reviews.llvm.org/D57982.
    Reviewers: kcc, vitalybuka
    Subscribers: llvm-commits, craig.topper, efriedma, morehouse, hiraditya
    Tags: #llvm
    Differential Revision: https://reviews.llvm.org/D58740
    llvm-svn: 355152
* Add a module pass for order file instrumentation | Manman Ren | 2019-02-28 | 4 | -0/+219
    The basic idea of the pass is to use a circular buffer to log the execution ordering of the functions. We only log the function when it is first executed. We use an 8-byte hash to log the function symbol name.
    In this pass, we add three global variables:
    (1) an order file buffer: a circular buffer at its own llvm section.
    (2) a bitmap for each module: one byte for each function to say if the function is already executed.
    (3) a global index to the order file buffer.
    At the function prologue, if the function has not been executed (by checking the bitmap), log the function hash, then atomically increase the index.
    Differential Revision: https://reviews.llvm.org/D57463
    llvm-svn: 355133
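    A hedged sketch of the kind of prologue the description implies, for a single-function module. All names (@_order_file_buffer, @_order_file_bitmap, @_order_file_index), sizes, and the hash constant are illustrative assumptions, not the pass's actual symbols or layout:

        @_order_file_buffer = global [131072 x i64] zeroinitializer
        @_order_file_bitmap = global [1 x i8] zeroinitializer
        @_order_file_index  = global i32 0

        define void @foo() {
        entry:
          %seen = load i8, i8* getelementptr inbounds ([1 x i8], [1 x i8]* @_order_file_bitmap, i32 0, i32 0)
          %already = icmp ne i8 %seen, 0
          br i1 %already, label %body, label %log

        log:                                              ; first execution: record this function
          store i8 1, i8* getelementptr inbounds ([1 x i8], [1 x i8]* @_order_file_bitmap, i32 0, i32 0)
          %idx = atomicrmw add i32* @_order_file_index, i32 1 monotonic
          %slot = and i32 %idx, 131071                    ; wrap around the circular buffer
          %slot.ext = zext i32 %slot to i64
          %entryp = getelementptr inbounds [131072 x i64], [131072 x i64]* @_order_file_buffer, i64 0, i64 %slot.ext
          store i64 81985529216486895, i64* %entryp       ; stands in for the 8-byte symbol-name hash
          br label %body

        body:
          ret void
        }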
* [PGO] Context sensitive PGO (part 2) | Rong Xu | 2019-02-28 | 3 | -5/+10
    Part 2 of CSPGO changes (mostly related to ProfileSummary). Note that I use a default parameter in setProfileSummary() and getSummary(). This is to break the dependency in clang. I will make the parameter explicit after changing clang in a separate patch.
    Differential Revision: https://reviews.llvm.org/D54175
    llvm-svn: 355131
* [InstCombine] fold adds of constants separated by sext/zext | Sanjay Patel | 2019-02-28 | 1 | -8/+44
    This is part of a transform that may be done in the backend: D13757
    ...but it should always be beneficial to fold this sooner in IR for all targets.
    https://rise4fun.com/Alive/vaiW

    Name: sext add nsw
    %add = add nsw i8 %i, C0
    %ext = sext i8 %add to i32
    %r = add i32 %ext, C1
      =>
    %s = sext i8 %i to i32
    %r = add i32 %s, sext(C0)+C1

    Name: zext add nuw
    %add = add nuw i8 %i, C0
    %ext = zext i8 %add to i16
    %r = add i16 %ext, C1
      =>
    %s = zext i8 %i to i16
    %r = add i16 %s, zext(C0)+C1

    llvm-svn: 355118