path: root/llvm/lib
...
* [llvm] Change 2 instances of std::sort to llvm::sort
  (Mandeep Singh Grang, 2018-07-16; 2 files, -2/+2)

  llvm-svn: 337192

* [InstCombine] Fold 'check for [no] signed truncation' pattern
  (Roman Lebedev, 2018-07-16; 1 file, -0/+69)

  Summary: PR38149 (https://bugs.llvm.org/show_bug.cgi?id=38149)

  As discussed in https://reviews.llvm.org/D49179#1158957 and later, the IR for
  the 'check for [no] signed truncation' pattern can be improved:
  https://rise4fun.com/Alive/gBf
  That pattern will be produced by the Implicit Integer Truncation sanitizer
  (https://reviews.llvm.org/D48958, https://bugs.llvm.org/show_bug.cgi?id=21530)
  in the signed case, so it is probably a good idea to improve it.

  Proofs for this transform: https://rise4fun.com/Alive/mgu

  This transform is surprisingly frustrating. It does not deal with non-splat
  shift amounts or with undef shift amounts. I've outlined what I think the
  solution should be:

  ```
  // Potential handling of non-splats: for each element:
  // * if both are undef, replace with constant 0.
  //   Because (1<<0) is OK and is 1, and ((1<<0)>>1) is also OK and is 0.
  // * if both are not undef, and are different, bailout.
  // * else, only one is undef, then pick the non-undef one.
  ```

  The DAGCombine will reverse this transform; see https://reviews.llvm.org/D49266.

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: JDevlieghere, rkruppe, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49320
  llvm-svn: 337190

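  A hedged sketch of the improvement in LLVM IR (my own example based on the
  linked Alive proofs, not code from the patch; the constants assume a check
  that an i32 value fits in i8):

  ```llvm
  ; before: 'signed truncation' check via a shl/ashr round-trip
  define i1 @fits_in_i8(i32 %x) {
    %t0 = shl i32 %x, 24
    %t1 = ashr i32 %t0, 24
    %r  = icmp eq i32 %t1, %x   ; true iff %x survives trunc-to-i8 + sext
    ret i1 %r
  }

  ; after: the same check as a single unsigned range test
  define i1 @fits_in_i8_improved(i32 %x) {
    %t0 = add i32 %x, 128       ; shift the range [-128, 127] to [0, 255]
    %r  = icmp ult i32 %t0, 256
    ret i1 %r
  }
  ```
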
* [RegAlloc] Skip global splitting if the live range is huge and its spill is
  trivially rematerializable
  (Wei Mi, 2018-07-16; 1 file, -0/+19)

  We ran into a case where MachineLICM hoists a large number of live ranges
  outside of a big loop because it considers them trivially rematerializable.
  In regalloc, global splitting is tried first for those live ranges before
  they are spilled and rematerialized. Because the global splitting algorithm
  is quadratic, a large increase in the number of global-splitting candidates
  causes a huge compile-time increase (50s to 1400s on my local machine when
  compiling a module). However, for live ranges which are very large and
  trivially rematerializable, it is better to simply skip global splitting and
  save compile time, with little chance of sacrificing performance. We use the
  segment size of the live range as an indirect measure of whether global
  splitting could be costly, and expose an option as a knob to adjust the size
  threshold.

  Differential Revision: https://reviews.llvm.org/D49353
  llvm-svn: 337186

* Restore "[ThinLTO] Ensure we always select the same function copy to import"Teresa Johnson2018-07-162-71/+90
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit r337081, therefore restoring r337050 (and fix in r337059), with test fix for bot failure described after the original description below. In order to always import the same copy of a linkonce function, even when encountering it with different thresholds (a higher one then a lower one), keep track of the summary we decided to import. This ensures that the backend only gets a single definition to import for each GUID, so that it doesn't need to choose one. Move the largest threshold the GUID was considered for import into the current module out of the ImportMap (which is part of a larger map maintained across the whole index), and into a new map just maintained for the current module we are computing imports for. This saves some memory since we no longer have the thresholds maintained across the whole index (and throughout the in-process backends when doing a normal non-distributed ThinLTO build), at the cost of some additional information being maintained for each invocation of ComputeImportForModule (the selected summary pointer for each import). There is an additional map lookup for each callee being considered for importing, however, this was able to subsume a map lookup in the Worklist iteration that invokes computeImportForFunction. We also are able to avoid calling selectCallee if we already failed to import at the same or higher threshold. I compared the run time and peak memory for the SPEC2006 471.omnetpp benchmark (running in-process ThinLTO backends), as well as for a large internal benchmark with a distributed ThinLTO build (so just looking at the thin link time/memory). Across a number of runs with and without this change there was no significant change in the time and memory. (I tried a few other variations of the change but they also didn't improve time or peak memory). The new commit removes a test that no longer makes sense (Transforms/FunctionImport/hotness_based_import2.ll), as exposed by the reverse-iteration bot. The test depends on the order of processing the summary call edges, and actually depended on the old problematic behavior of selecting more than one summary for a given GUID when encountered with different thresholds. There was no guarantee even before that we would eventually pick the linkonce copy with the hottest call edges, it just happened to work with the test and the old code, and there was no guarantee that we would end up importing the selected version of the copy that had the hottest call edges (since the backend would effectively import only one of the selected copies). Reviewers: davidxl Subscribers: mehdi_amini, inglorion, llvm-commits Differential Revision: https://reviews.llvm.org/D48670 llvm-svn: 337184
* [x86/SLH] Completely rework how we sink post-load hardening past
  data-invariant instructions to be both more correct and much more powerful
  (Chandler Carruth, 2018-07-16; 1 file, -24/+184)

  While testing, I continued to find issues with sinking post-load hardening.
  Unfortunately, it was amazingly hard to create any useful tests of this
  because we were mostly sinking across copies and other loading instructions.
  The fact that we couldn't sink past normal arithmetic was really a big
  oversight. So first, I've ported roughly the same set of instructions from
  the data-invariant loads to also have their non-loading varieties understood
  to be data invariant. I've also added a few instructions that came up so
  often it again made testing complicated: inc, dec, and lea.

  With this, I was able to shake out a few nasty bugs in the validity
  checking. We need to restrict hardening to single-def instructions with
  defined registers that match a particular form: GPRs that don't have a
  NOREX constraint directly attached to their register class.

  The (tiny!) test case included catches all of the issues I was seeing (once
  we can sink the hardening at all) except for the NOREX issue. The only test
  I have there is horrible: it is large, inexplicable, and doesn't even
  produce an error unless you try to emit encodings. I can keep looking for a
  way to test it, but I'm out of ideas really.

  Thanks to Ben for giving me at least a sanity-check review. I'll follow up
  with Craig to go over this more thoroughly post-commit, but without it SLH
  crashes everywhere, so landing it for now.

  Differential Revision: https://reviews.llvm.org/D49378
  llvm-svn: 337177

* [mips] Eliminate the usage of hasStdEnc in MipsPat
  (Simon Atanasyan, 2018-07-16; 7 files, -161/+206)

  Instead, the pattern is tagged with the correct predicate when it is
  declared. Some patterns have been duplicated as necessary.

  Patch by Simon Dardis.

  Differential revision: https://reviews.llvm.org/D48365
  llvm-svn: 337171

* [MIPS GlobalISel] Select instructions to load and store i32 on stack
  (Petar Jovanovic, 2018-07-16; 3 files, -2/+88)

  Add code for selection of G_LOAD, G_STORE, G_GEP, G_FRAMEINDEX and
  G_CONSTANT. Support loads and stores of i32 values.

  Patch by Petar Avramovic.

  Differential Revision: https://reviews.llvm.org/D48957
  llvm-svn: 337168

* [X86][AArch64][DAGCombine] Unfold 'check for [no] signed truncation' pattern
  (Roman Lebedev, 2018-07-16; 3 files, -0/+113)

  Summary: PR38149 (https://bugs.llvm.org/show_bug.cgi?id=38149)

  As discussed in https://reviews.llvm.org/D49179#1158957 and later, the IR for
  the 'check for [no] signed truncation' pattern can be improved:
  https://rise4fun.com/Alive/gBf
  That pattern will be produced by the Implicit Integer Truncation sanitizer
  (https://reviews.llvm.org/D48958, https://bugs.llvm.org/show_bug.cgi?id=21530)
  in the signed case, so it is probably a good idea to improve it. But the
  IR-optimal pattern does not lower efficiently, so we want to undo it in the
  DAG.

  This handles the simple pattern. There is a second pattern with the
  predicate and constants inverted.

  NOTE: we do not check uses here; we always do the transform.

  Reviewers: spatel, craig.topper, RKSimon, javed.absar
  Reviewed By: spatel
  Subscribers: kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49266
  llvm-svn: 337166

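  For reference, a hedged sketch (my own reading of "predicate and constants
  inverted", not code from the patch) of the two IR forms this combine has to
  recognize, again assuming an i32-fits-in-i8 check:

  ```llvm
  ; simple pattern: the 'fits' check, handled by this patch
  define i1 @fits(i32 %x) {
    %t0 = add i32 %x, 128
    %r  = icmp ult i32 %t0, 256
    ret i1 %r
  }

  ; second variant: the 'does not fit' check, predicate and constant inverted
  define i1 @no_fit(i32 %x) {
    %t0 = add i32 %x, 128
    %r  = icmp ugt i32 %t0, 255
    ret i1 %r
  }
  ```
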
* [Sparc] Use the correct encoding for ta 3
  (Daniel Cederman, 2018-07-16; 1 file, -1/+1)

  Summary: The old encoding generated a "tn %g1 + 3" instruction instead of
  the expected "ta 3".

  Reviewers: venkatra, jyknight
  Reviewed By: jyknight
  Subscribers: fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49171
  llvm-svn: 337165

* [Sparc] Use the names .rem and .urem instead of __modsi3 and __umodsi3
  (Daniel Cederman, 2018-07-16; 1 file, -0/+3)

  Summary: These are the names used in libgcc.

  Reviewers: venkatra, jyknight, ekedaigle
  Reviewed By: jyknight
  Subscribers: joerg, fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48915
  llvm-svn: 337164

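  To illustrate (my own sketch, not from the commit): a 32-bit remainder that
  the Sparc backend expands to a runtime call now uses the libgcc names.

  ```llvm
  define i32 @mod(i32 %a, i32 %b) {
    ; on Sparc this srem becomes a call to .rem (previously __modsi3);
    ; urem would similarly call .urem (previously __umodsi3)
    %r = srem i32 %a, %b
    ret i32 %r
  }
  ```
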
* [Sparc] Generate ta 1 for the @llvm.debugtrap intrinsic
  (Daniel Cederman, 2018-07-16; 2 files, -0/+4)

  Summary: Software trap number one is the trap used for breakpoints in the
  Sparc ABI.

  Reviewers: jyknight, venkatra
  Reviewed By: jyknight
  Subscribers: fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48637
  llvm-svn: 337163

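  A minimal sketch of the input this affects (my own example; the lowering to
  "ta 1" is what the commit describes):

  ```llvm
  declare void @llvm.debugtrap()

  define void @breakpoint_here() {
    ; with this change, the Sparc backend emits "ta 1" here
    call void @llvm.debugtrap()
    ret void
  }
  ```
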
* Avoid losing the Hi part when expanding VAARG nodes on big-endian machines
  (Daniel Cederman, 2018-07-16; 1 file, -1/+2)

  Summary: If the high part of the load is not used, the offset to the next
  element will not be set correctly. For example, on Sparc V8, the following
  code reads val2 from offset 4 instead of 8:

  ```
  int val = __builtin_va_arg(va, long long);
  int val2 = __builtin_va_arg(va, int);
  ```

  Reviewers: jyknight
  Reviewed By: jyknight
  Subscribers: fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48595
  llvm-svn: 337161

* [x86/SLH] Fix a bug where we would try to post-load harden non-GPRs
  (Chandler Carruth, 2018-07-16; 1 file, -13/+25)

  Found cases that hit the assert I added. This patch factors the validity
  checking into a nice helper routine, calls it when deciding to harden
  post-load, and asserts it when doing so later.

  I've added tests for the various ways of loading a floating point type, as
  well as loading all vector permutations. Even though many of these go to
  identical instructions, it seems good to somewhat comprehensively test them.

  I'm confident there will be more fixes needed here; I'll try to add tests
  each time as I get this predicate adjusted.

  llvm-svn: 337160

* MSan: minor fixes, NFC
  (Alexander Potapenko, 2018-07-16; 1 file, -7/+6)

  - Remove an extra space after the |ID| declaration.
  - Drop the unused |FirstInsn| parameter of getShadowOriginPtrUserspace().

  llvm-svn: 337159

* [AccelTable] Provide DWARF5AccelTableStaticData for dsymutil
  (Jonas Devlieghere, 2018-07-16; 1 file, -39/+81)

  For dsymutil we want to store offsets in the accelerator table entries
  rather than DIE pointers. In addition, we need a way to communicate which
  CU a DIE belongs to. This patch provides support for both of these issues.

  Differential revision: https://reviews.llvm.org/D49102
  llvm-svn: 337158

* [x86/SLH] Extract another small helper function, add better comments, and
  use better terminology. NFC
  (Chandler Carruth, 2018-07-16; 1 file, -23/+34)

  llvm-svn: 337157

* [AMDGPU][Waitcnt] Re-apply fix for "comparison of integers of different
  signs" build error
  (Mark Searles, 2018-07-16; 1 file, -1/+1)

  Re-apply "[AMDGPU][Waitcnt] fix 'comparison of integers of different signs'
  build error" (fe0a456510131f268e388c4a18a92f575c0db183), which was
  inadvertently reverted via 2b2ee080f0164485562593b1b87291a48cea4a9a.

  llvm-svn: 337156

* [MSan] factor userspace-specific declarations into createUserspaceApi(). NFC
  (Alexander Potapenko, 2018-07-16; 1 file, -38/+53)

  This patch introduces createUserspaceApi(), which creates function/global
  declarations for symbols used by MSan in userspace. This is a step towards
  the upcoming KMSAN implementation patch.

  Reviewed at https://reviews.llvm.org/D49292
  llvm-svn: 337155

* Run the post-RA hazard recognizer pass late
  (Mark Searles, 2018-07-16; 2 files, -4/+8)

  The memory legalizer, waitcnt, and shrink passes can perturb the
  instructions, which means that the post-RA hazard recognizer pass should
  run after them. Otherwise, one of those passes may invalidate the work done
  by the hazard recognizer. Note that this has the adverse side effect that
  any consecutive S_NOP 0's emitted by the hazard recognizer will not be
  shrunk into a single S_NOP <N>. This should be addressed in a follow-on
  patch.

  Differential Revision: https://reviews.llvm.org/D49288
  llvm-svn: 337154

* Revert "[AMDGPU][Waitcnt] fix "comparison of integers of different signs" ↵Mark Searles2018-07-161-1/+1
| | | | | | | | build error" This reverts commit fe0a456510131f268e388c4a18a92f575c0db183. llvm-svn: 337153
* [MemorySSAUpdater] Remove deleted trivial Phis from the active workset
  (Alexandros Lamprineas, 2018-07-16; 1 file, -7/+12)

  Bug fix for PR37808. The regression test is a reduced version of the
  original reproducer attached to the bug report. As stated in the report,
  the problem was that InsertedPHIs was keeping dangling pointers to deleted
  MemoryPhis. MemoryPhis are created eagerly and sometimes get zapped shortly
  afterwards. I've used WeakVH instead of an expensive removal operation from
  the active workset.

  Differential Revision: https://reviews.llvm.org/D48372
  llvm-svn: 337149

* [X86] Merge the FR128 and VR128 regclasses since they have identical spill
  and alignment characteristics
  (Craig Topper, 2018-07-16; 7 files, -298/+328)

  This unfortunately requires a bunch of bitcasts to be added to
  SUBREG_TO_REG, COPY_TO_REGCLASS, and instructions in output patterns.
  Otherwise tablegen seems to default to picking f128, and then we fail when
  something tries to get the register class for f128, which isn't always
  valid.

  The test changes are because we were previously mixing FR128 and VR128 due
  to constrainRegClass finding FR128 first, and passes like live range
  shrinking weren't handling that well.

  llvm-svn: 337147

* [x86/SLH] Fix an unused variable warning in release builds after r337144
  (Chandler Carruth, 2018-07-16; 1 file, -0/+1)

  llvm-svn: 337145

* [x86/SLH] Teach speculative load hardening to correctly harden the indices
  used by AVX2 and AVX-512 gather instructions
  (Chandler Carruth, 2018-07-16; 2 files, -17/+92)

  The index vector is hardened by broadcasting the predicate state into a
  vector register and then or-ing. We don't even have to worry about EFLAGS
  here.

  I've added a test for all of the gather intrinsics to make sure that we
  don't miss one. A particularly interesting creation is the gather prefetch,
  which needs to be marked as potentially "loading" to get the correct
  behavior. It's a memory access in many ways, and is actually relevant for
  SLH. Based on discussion with Craig in review, I've moved it to be `mayLoad`
  and `mayStore` rather than generic side effects. This matches how we model
  other prefetch instructions.

  Many thanks to Craig for the review here.

  Differential Revision: https://reviews.llvm.org/D49336
  llvm-svn: 337144

* [InstCombine] add more SPFofSPF folding
  (Chen Zheng, 2018-07-16; 2 files, -24/+44)

  Differential Revision: https://reviews.llvm.org/D49238
  llvm-svn: 337143

* [InstCombine] fold 'icmp pred (sub 0, X), C' for vector types
  (Chen Zheng, 2018-07-16; 1 file, -2/+2)

  Differential Revision: https://reviews.llvm.org/D49283
  llvm-svn: 337141

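  A hedged illustration (my own example; the commit extends the existing
  scalar fold to vectors, shown here with the unconditionally valid eq
  predicate):

  ```llvm
  ; before: compare a negation against a constant
  define <2 x i1> @src(<2 x i32> %x) {
    %neg = sub <2 x i32> zeroinitializer, %x
    %r   = icmp eq <2 x i32> %neg, <i32 5, i32 5>
    ret <2 x i1> %r
  }

  ; after: compare %x directly against the negated constant
  define <2 x i1> @tgt(<2 x i32> %x) {
    %r = icmp eq <2 x i32> %x, <i32 -5, i32 -5>
    ret <2 x i1> %r
  }
  ```
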
* Recommit r335794 "Add support for generating a call graph profile from
  Branch Frequency Info." with a fix for removed functions
  (Michael J. Spencer, 2018-07-16; 6 files, -7/+181)

  llvm-svn: 337140

* [x86/SLH] Extract one of the bits of logic to its own function. NFC
  (Chandler Carruth, 2018-07-15; 1 file, -43/+48)

  This is just a refactoring to start cleaning up the code here and make it
  more readable and approachable.

  llvm-svn: 337138

* [X86] Add custom execution domain fixing for 128/256-bit integer logic
  operations with AVX512F, but not AVX512DQ
  (Craig Topper, 2018-07-15; 1 file, -0/+85)

  AVX512F only has integer-domain logic instructions; AVX512DQ added FP-domain
  logic instructions. Execution domain fixing runs before EVEX->VEX, so if we
  have AVX512F and not AVX512DQ, we fail to do execution domain switching of
  the logic operations. This leads to mismatches in execution domain and more
  test differences.

  This patch adds custom domain fixing that switches EVEX integer logic
  operations to VEX FP logic operations if XMM16-31 are not used.

  llvm-svn: 337137

* [X86] Add load patterns for cases where we select X86Movss/X86Movsd to
  blend instructions
  (Craig Topper, 2018-07-15; 1 file, -0/+32)

  This allows us to fold the load during isel without waiting for the
  peephole pass to do it.

  llvm-svn: 337136

* [X86] Use 128-bit blends instead of vmovss/vmovsd for 512-bit vzmovl
  patterns to match AVX
  (Craig Topper, 2018-07-15; 1 file, -12/+39)

  llvm-svn: 337135

* [X86] Use 128-bit ops for 256-bit vzmovl patterns
  (Craig Topper, 2018-07-15; 1 file, -10/+17)

  128-bit ops implicitly zero the upper bits. This should address the comment
  about domain crossing for the integer version without AVX2, since we can
  use a 128-bit VBLENDW without AVX2.

  The only bad thing I see here is that we failed to reuse a vxorps in some
  of the tests, but I think that's an already-known issue.

  llvm-svn: 337134

* [DAGCombiner] fix typo in comment; NFC
  (Sanjay Patel, 2018-07-15; 1 file, -1/+1)

  llvm-svn: 337132

* [InstCombine] Corrections in comments for division transformation (NFC)
  (Sanjay Patel, 2018-07-15; 1 file, -3/+3)

  The actual code seems to be correct, but the comments were misleading.

  Patch by Aaron Puchert!

  Differential Revision: https://reviews.llvm.org/D49276
  llvm-svn: 337131

* [DAGCombiner] extend(ifpositive(X)) -> shift-right (not X)
  (Sanjay Patel, 2018-07-15; 1 file, -0/+37)

  This is almost the same as an existing IR canonicalization in InstCombine,
  so I'm assuming this is a good early generic DAG combine too.

  The motivation comes from reduced bit-hacking for select-of-constants in IR
  after rL331486. We want to restore that functionality in the DAG as noted
  in the commit comments for that change and the llvm-dev discussion here:
  http://lists.llvm.org/pipermail/llvm-dev/2018-July/124433.html

  The PPC and AArch64 tests show that those targets are already doing
  something similar. x86 will be neutral in the minimal case and generally
  better when this pattern is extended with other ops, as shown in the
  signbit-shift.ll tests.

  Note the asymmetry: we don't include the (extend (ifneg X)) transform
  because it already exists in SimplifySelectCC(), and that is verified in
  the later unchanged tests in the signbit-shift.ll files. Without the 'not'
  op, the general transform to use a shift is always a win because that's a
  single instruction.

  Alive proofs: https://rise4fun.com/Alive/ysli

    Name: if pos, get -1
    %c = icmp sgt i16 %x, -1
    %r = sext i1 %c to i16
    =>
    %n = xor i16 %x, -1
    %r = ashr i16 %n, 15

    Name: if pos, get 1
    %c = icmp sgt i16 %x, -1
    %r = zext i1 %c to i16
    =>
    %n = xor i16 %x, -1
    %r = lshr i16 %n, 15

  Differential Revision: https://reviews.llvm.org/D48970
  llvm-svn: 337130

* [InstSimplify] fold minnum/maxnum with a NaN arg
  (Sanjay Patel, 2018-07-15; 1 file, -0/+8)

  This fold is repeated/misplaced in InstCombine, but I'm not sure if it's
  safe to remove that yet, because some other folds appear to be asserting
  that the transform has occurred within InstCombine itself.

  This isn't the best fix for PR37776, but it probably hides the bug with the
  given code example: https://bugs.llvm.org/show_bug.cgi?id=37776

  We have another test to demonstrate the more general bug.

  llvm-svn: 337127

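  A minimal sketch of the fold (my own example; it relies on the defined
  semantics of these intrinsics, where a NaN operand means the other operand
  is returned):

  ```llvm
  declare float @llvm.minnum.f32(float, float)

  define float @fold_minnum_nan(float %x) {
    ; minnum(%x, NaN) simplifies to %x
    %r = call float @llvm.minnum.f32(float %x, float 0x7FF8000000000000)
    ret float %r
  }
  ```
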
* [llvm-mca][BtVer2] Teach how to identify false dependencies on partially
  written registers
  (Andrea Di Biagio, 2018-07-15; 1 file, -1/+8)

  The goal of this patch is to improve the throughput analysis in llvm-mca
  for the case where instructions perform partial register writes. On x86,
  partial register writes are quite difficult to model, mainly because
  different processors tend to implement different register-merging schemes
  in hardware. When the code contains partial register writes, the IPC
  (instructions per cycle) estimated by llvm-mca tends to diverge quite
  significantly from the observed IPC (using perf).

  Modern AMD processors (at least, from Bulldozer onwards) don't rename
  partial registers. Quoting Agner Fog's microarchitecture.pdf: "The
  processor always keeps the different parts of an integer register together.
  For example, AL and AH are not treated as independent by the out-of-order
  execution mechanism. An instruction that writes to part of a register will
  therefore have a false dependence on any previous write to the same
  register or any part of it."

  This patch is a first important step towards improving the analysis of
  partial register updates. It changes the semantics of RegisterFile
  descriptors in tablegen and teaches llvm-mca how to identify false
  dependences in the presence of partial register writes (for more details,
  see the new code comments in include/Target/TargetSchedule.h - class
  RegisterFile).

  This patch doesn't address the case where a write to a part of a register
  is followed by a read from the whole register. On Intel chips, high8
  registers (AH/BH/CH/DH) can be stored in separate physical registers.
  However, a later (dirty) read of the full register (example: AX/EAX)
  triggers a merge uOp, which adds extra latency (and potentially affects the
  pipe usage). This is a very interesting article on the subject, with a very
  informative answer from Peter Cordes:
  https://stackoverflow.com/questions/45660139/how-exactly-do-partial-registers-on-haswell-skylake-perform-writing-al-seems-to

  In future, the definition of RegisterFile can be extended with extra
  information that may be used to identify delays caused by merge opcodes
  triggered by a dirty read of a partial write.

  Differential Revision: https://reviews.llvm.org/D49196
  llvm-svn: 337123

* [AVR] Document some public functions
  (Dylan McKay, 2018-07-15; 1 file, -0/+2)

  llvm-svn: 337122

* [X86] Add some optsize patterns for 256-bit X86vzmovl
  (Craig Topper, 2018-07-15; 1 file, -0/+19)

  These patterns use VMOVSS/SD. Without optsize we use BLENDI instead.

  llvm-svn: 337119

* [NFC][InstCombine] foldICmpWithLowBitMaskedVal(): update comments
  (Roman Lebedev, 2018-07-14; 1 file, -2/+3)

  All predicates are handled. There do not seem to be any other possible
  folds here. There are some more folds possible with an inverted mask,
  though.

  llvm-svn: 337112

* [InstCombine] Fold x & (-1 >> y) s< x to x s> (-1 >> y)Roman Lebedev2018-07-141-0/+6
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/I3O This pattern is not commutative! We must make sure not to fold the commuted version! llvm-svn: 337111
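  A hedged sketch of this family of folds in LLVM IR (my own example
  mirroring the subject line; %m is the low-bit mask -1 >> %y):

  ```llvm
  ; before: compare the masked value against the original
  define i1 @src(i32 %x, i32 %y) {
    %m = lshr i32 -1, %y
    %a = and i32 %x, %m
    %r = icmp slt i32 %a, %x    ; x & (-1 >> y) s< x
    ret i1 %r
  }

  ; after: compare %x directly against the mask
  define i1 @tgt(i32 %x, i32 %y) {
    %m = lshr i32 -1, %y
    %r = icmp sgt i32 %x, %m    ; x s> (-1 >> y)
    ret i1 %r
  }
  ```
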
* [InstCombine] Fold x & (-1 >> y) s>= x to x s<= (-1 >> y)Roman Lebedev2018-07-141-0/+6
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/I3O This pattern is not commutative! We must make sure not to fold the commuted version! llvm-svn: 337109
* [InstCombine] Fold x s<= x & (-1 >> y) to x s<= (-1 >> y)Roman Lebedev2018-07-141-0/+6
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/I3O This pattern is not commutative! We must make sure not to fold the commuted version! llvm-svn: 337107
* [InstCombine] Fold x s> x & (-1 >> y) to x s> (-1 >> y)Roman Lebedev2018-07-141-0/+6
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/I3O This pattern is not commutative! We must make sure not to fold the commuted version! llvm-svn: 337105
* [InstCombine] Fold x u<= x & C to x u<= CRoman Lebedev2018-07-141-0/+5
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/Fqp This pattern is not commutative. But InstSimplify will already have taken care of the 'commutative' variant. llvm-svn: 337102
* [InstCombine] Fold x u> x & C to x u> CRoman Lebedev2018-07-141-0/+5
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/JvS This pattern is not commutative. But InstSimplify will already have taken care of the 'commutative' variant. llvm-svn: 337100
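  A hedged sketch of the constant-mask variant (my own example; for this fold
  C must be a low-bit mask, e.g. 7):

  ```llvm
  ; before
  define i1 @src(i32 %x) {
    %a = and i32 %x, 7
    %r = icmp ugt i32 %x, %a   ; x u> (x & 7)
    ret i1 %r
  }

  ; after
  define i1 @tgt(i32 %x) {
    %r = icmp ugt i32 %x, 7    ; x u> 7
    ret i1 %r
  }
  ```
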
* [InstCombine] Fold x & (-1 >> y) u< x to x u> (-1 >> y)Roman Lebedev2018-07-141-0/+5
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/ocb This pattern is not commutative. But InstSimplify will already have taken care of the 'commutative' variant. llvm-svn: 337098
* [InstCombine] Fold x & (-1 >> y) u>= x to x u<= (-1 >> y)Roman Lebedev2018-07-141-0/+5
| | | | | | | | | | https://bugs.llvm.org/show_bug.cgi?id=38123 https://rise4fun.com/Alive/azI This pattern is not commutative. But InstSimplify will already have taken care of the 'commutative' variant. llvm-svn: 337096
* [MachineOutliner] Check the last instruction from the sequence when
  updating liveness
  (Francis Visoiu Mistrih, 2018-07-14; 1 file, -1/+1)

  The MachineOutliner was doing an std::for_each from the call (inserted
  before the outlined sequence) to the iterator at the end of the sequence.
  std::for_each needs the iterator one past the end, so the last instruction
  was not taken into account when propagating the liveness information.

  This fixes the machine verifier issue in machine-outliner-disubprogram.ll.

  Differential Revision: https://reviews.llvm.org/D49295
  llvm-svn: 337090

* [x86/SLH] Fix an issue where we wouldn't harden any loads if we found no
  conditions
  (Chandler Carruth, 2018-07-14; 1 file, -3/+3)

  This is only valid to do if we're hardening calls and rets with LFENCE,
  which results in an LFENCE guarding the entire entry block for us.

  llvm-svn: 337089