path: root/llvm/lib
Commit message | Author | Date | Files | Lines
...
* More fixes for subreg join failure in RegCoalescer | Tim Renouf | 2018-07-17 | 1 | -4/+21
  Summary: Part of the adjustCopiesBackFrom method wasn't correctly dealing with SubRange intervals when updating. Two changes: the first ensures that bogus SubRange segments aren't propagated when encountering segments of the form [1234r, 1234d:0) while preparing to merge value numbers; these can be removed in this case. The second forces a shrinkToUses call if SubRanges end on the copy index (instead of just the parent register).

  V2: Addressed review comments, plus a MIR test instead of an ll test.

  Subscribers: MatzeB, qcolombet, nhaehnle
  Differential Revision: https://reviews.llvm.org/D40308
  Change-Id: I1d2b2b4beea802fce11da01edf71feb2064aab05
  llvm-svn: 337273

* [AArch64][SVE] Asm: Support for predicated FP operations (FP immediate) | Sander de Smalen | 2018-07-17 | 1 | -0/+5
  This patch completes support for the following floating point instructions that take FP immediates:

    FADD*   (addition)
    FSUB    (subtract)
    FSUBR   (subtract reverse form)
    FMUL*   (multiplication)
    FMAX*   (maximum)
    FMAXNM  (maximum number)
    FMIN    (minimum)
    FMINNM  (minimum number)

  All operations are predicated and take an FP immediate operand, e.g.

  fadd z0.h, p0/m, z0.h, #0.5
  fmin z0.s, p0/m, z0.s, #1.0
       ^___________^ (tied)

  * Instructions added in a previous patch.

  llvm-svn: 337272

* Don't assert that a size_t fits into 64bit. | Joerg Sonnenberger | 2018-07-17 | 1 | -1/+0
  Avoids tautological compare warnings on 32bit platforms.

  llvm-svn: 337269

* [LLVM-C] Fix name mangling on AggressiveInstCombine | whitequark | 2018-07-17 | 1 | -0/+1
  Similarly to rL336736, at least one more C API function does not properly get declared as extern "C" due to a missing header, causing name mangling and linking errors. This patch fixes calls to LLVMAddAggressiveInstCombinerPass().

  Differential Revision: https://reviews.llvm.org/D49416
  Reviewed By: whitequark
  llvm-svn: 337264

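For context on the entry above, a minimal sketch of how this C API entry point is typically driven from client code. This is an illustration, not part of the patch; it assumes the legacy pass manager C API of that era, and the exact header path for the AggressiveInstCombine declaration may differ.

```cpp
#include "llvm-c/Core.h"
#include "llvm-c/Transforms/AggressiveInstCombine.h"

// Run AggressiveInstCombine over a module through the C API
// (legacy pass manager, as it existed around this release).
static void runAggressiveInstCombine(LLVMModuleRef M) {
  LLVMPassManagerRef PM = LLVMCreatePassManager();
  LLVMAddAggressiveInstCombinerPass(PM); // the call whose linkage this patch fixes
  LLVMRunPassManager(PM, M);             // returns true if the module was modified
  LLVMDisposePassManager(PM);
}
```
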
* [LLVM-C] Add target triple normalization to the C API. | whitequark | 2018-07-17 | 1 | -0/+4
  rL333307 was introduced to remove automatic target triple normalization when calling sys::getDefaultTargetTriple(), arguing that users of the latter already called Triple::normalize() if necessary. However, users of the C API currently have no way of doing target triple normalization. This patch introduces an LLVMNormalizeTargetTriple function to the C API which wraps Triple::normalize() and can be used on the result of LLVMGetDefaultTargetTriple to achieve the same effect.

  Differential Revision: https://reviews.llvm.org/D49414
  Reviewed By: whitequark
  llvm-svn: 337263

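A hedged usage sketch of the new entry point together with LLVMGetDefaultTargetTriple. Freeing both strings with LLVMDisposeMessage follows the usual convention for string-returning C API calls; it is shown here as an assumption rather than quoted from the patch.

```cpp
#include <stdio.h>
#include "llvm-c/Core.h"
#include "llvm-c/TargetMachine.h"

// Print the default target triple and its normalized form.
int main() {
  char *Raw = LLVMGetDefaultTargetTriple();
  char *Normalized = LLVMNormalizeTargetTriple(Raw);
  printf("default:    %s\nnormalized: %s\n", Raw, Normalized);
  LLVMDisposeMessage(Normalized);
  LLVMDisposeMessage(Raw);
  return 0;
}
```
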
* [AArch64][SVE] Asm: Support for predicated FP operations. | Sander de Smalen | 2018-07-17 | 2 | -0/+42
  This patch adds support for the following floating point instructions:

    FABD    (absolute difference)
    FADD    (addition)
    FSUB    (subtract)
    FSUBR   (subtract reverse form)
    FDIV    (divide)
    FDIVR   (divide reverse form)
    FMAX    (maximum)
    FMAXNM  (maximum number)
    FMIN    (minimum)
    FMINNM  (minimum number)
    FSCALE  (adjust exponent)
    FMULX   (multiply extended)

  All operations are predicated and in binary form, e.g.

  fadd z0.h, p0/m, z0.h, z1.h
       ^___________^ (tied)

  Supporting 16, 32 and 64-bit FP elements.

  llvm-svn: 337259

* [DAGCombiner] Call SimplifyDemandedVectorElts from EXTRACT_VECTOR_ELT | Simon Pilgrim | 2018-07-17 | 2 | -14/+49
  If we are only extracting vector elements via EXTRACT_VECTOR_ELT(s) we may be able to use SimplifyDemandedVectorElts to avoid unnecessary vector ops.

  Differential Revision: https://reviews.llvm.org/D49262
  llvm-svn: 337258

* Fix MSVC "result of 32-bit shift implicitly converted to 64 bits" warning. NFCI. | Simon Pilgrim | 2018-07-17 | 1 | -2/+2
  llvm-svn: 337257

* [AArch64][SVE] Asm: Support for SPLICE instruction. | Sander de Smalen | 2018-07-17 | 2 | -0/+26
  The SPLICE instruction splices two vectors into one vector using a predicate. It copies the active elements from the first vector, and then fills the remaining elements with the low-numbered elements from the second vector.

  The instruction has the following form, e.g.

    splice z0.b, p0, z0.b, z1.b

  for 8-bit elements. It also supports 16, 32 and 64-bit elements.

  llvm-svn: 337253

* [AArch64][SVE] Asm: Support for EXT instruction. | Sander de Smalen | 2018-07-17 | 2 | -0/+21
  This patch adds an instruction that allows extracting a vector from a pair of vectors, given an immediate index that describes the element position to extract from.

  The instruction has the following assembly:

    ext z0.b, z0.b, z1.b, #imm

  where #imm is an immediate between 0 and 255.

  llvm-svn: 337251

* [X86] Properly qualify some MOVSS/MOVSD patterns with OptSize. | Craig Topper | 2018-07-17 | 1 | -12/+13
  These are integer versions of patterns that I already fixed for floating point.

  llvm-svn: 337240

* [Sparc] Do not depend on icc for ta 1 | Daniel Cederman | 2018-07-17 | 1 | -2/+2
  The ta instruction will always trap, regardless of the value of the integer condition codes. TRAPri is marked as using icc, so we cannot use a pattern for TRAPri to implement ta 1, as verify-machineinstrs can complain that icc is not defined. Instead we implement ta 1 the same way as ta 5.

  llvm-svn: 337236

* [X86] Add full set of patterns for turning ceil/floor/trunc/rint/nearbyint into rndscale with loads, broadcast, and masking. | Craig Topper | 2018-07-17 | 1 | -178/+197
  This amounts to a pretty ridiculous number of patterns. Ideally we'd canonicalize the X86ISD::VRNDSCALE earlier to reuse those patterns. I briefly looked into doing that, but some strict FP operations could still get converted to rint and nearbyint during isel. It's probably still worthwhile to look into.

  This patch is meant as a starting point to work from.

  llvm-svn: 337234

* [X86] Add a missing FMA3 scalar intrinsic pattern. | Craig Topper | 2018-07-16 | 1 | -0/+7
  This allows us to use the 231 form to fold an insertelement on the add input to the fma. There is technically no software intrinsic that can use this until AVX512F, but it can be manually built up from other intrinsics.

  llvm-svn: 337223

* [WebAssembly] Remove ELF file support. | Sam Clegg | 2018-07-16 | 22 | -352/+56
  This support was partial and temporary. Now that we have wasm object file support it's no longer needed.

  Differential Revision: https://reviews.llvm.org/D48744
  llvm-svn: 337222

* [Intrinsics] define funnel shift IR intrinsics + DAG builder support | Sanjay Patel | 2018-07-16 | 1 | -0/+37
  As discussed here:
    http://lists.llvm.org/pipermail/llvm-dev/2018-May/123292.html
    http://lists.llvm.org/pipermail/llvm-dev/2018-July/124400.html

  We want to add rotate intrinsics because the IR expansion of that pattern is 4+ instructions, and we can lose pieces of the pattern before it gets to the backend. Generalizing the operation by allowing 2 different input values (plus the 3rd shift/rotate amount) gives us a "funnel shift" operation which may also be a single hardware instruction.

  Initially, I thought we needed to define new DAG nodes for these ops, and I spent time working on that (much larger patch), but then I concluded that we don't need it. At least as a first step, we have all of the backend support necessary to match these ops...because it was required. And shepherding these through the IR optimizer is the primary concern, so the IR intrinsics are likely all that we'll ever need.

  There was also a question about converting the intrinsics to the existing ROTL/ROTR DAG nodes (along with improving the oversized shift documentation). Again, I don't think that's strictly necessary (as the test results here prove). That can be an efficiency improvement as a small follow-up patch.

  So all we're left with is documentation, definition of the IR intrinsics, and DAG builder support.

  Differential Revision: https://reviews.llvm.org/D49242
  llvm-svn: 337221

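As a reference for the semantics being introduced above, here is a hedged C++ model of the funnel-shift behaviour for the 32-bit case only (the intrinsics themselves are generic over integer and vector widths); the function names are illustrative, not from the patch.

```cpp
#include <cstdint>

// Funnel shift: conceptually concatenate the two inputs as Hi:Lo, shift the
// 64-bit value, and keep one 32-bit half. The shift amount is taken modulo
// the bit width, and a modulo of zero returns an input unchanged.
uint32_t fshl32(uint32_t Hi, uint32_t Lo, uint32_t Shift) {
  unsigned S = Shift % 32;
  return S == 0 ? Hi : (Hi << S) | (Lo >> (32 - S));
}

uint32_t fshr32(uint32_t Hi, uint32_t Lo, uint32_t Shift) {
  unsigned S = Shift % 32;
  return S == 0 ? Lo : (Lo >> S) | (Hi << (32 - S));
}

// A rotate is the special case where both inputs are the same value:
// rotl(x, s) == fshl32(x, x, s) and rotr(x, s) == fshr32(x, x, s).
```
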
* Add missing includes. | Zachary Turner | 2018-07-16 | 1 | -0/+2
  llvm-svn: 337218

* [LLVMDemangle] Move some utility classes to header files. | Zachary Turner | 2018-07-16 | 5 | -198/+257
  In a followup I'm looking to add a Microsoft demangler. Doing so needs a lot of the same utility classes and feature test macros which are already implemented in ItaniumDemangle.cpp. So move all of these things into header files so that they can be re-used by a new demangler.

  Differential Revision: https://reviews.llvm.org/D49399
  llvm-svn: 337217

* [CodeGen] Fix inconsistent declaration parameter name | Fangrui Song | 2018-07-16 | 37 | -89/+89
  llvm-svn: 337200

* [AMDGPU] Support a fdot2 pattern. | Farhana Aleen | 2018-07-16 | 6 | -1/+88
  Summary: Optimize
    fma((float)S0.x, (float)S1.x, fma((float)S0.y, (float)S1.y, z))
      -> fdot2((v2f16)S0, (v2f16)S1, (float)z)

  Author: FarhanaAleen
  Reviewed By: rampitec, b-sumner
  Subscribers: AMDGPU
  Differential Revision: https://reviews.llvm.org/D49146
  llvm-svn: 337198

* [llvm] Change 2 instances of std::sort to llvm::sort | Mandeep Singh Grang | 2018-07-16 | 2 | -2/+2
  llvm-svn: 337192

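For reference, the mechanical shape of this kind of change, as a hedged sketch: llvm::sort lives in llvm/ADT/STLExtras.h and is a drop-in wrapper around std::sort that LLVM prefers so that extra-checking builds can surface orderings that depend on unspecified comparator behaviour; the function and container names below are illustrative.

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"

void sortValues(llvm::SmallVectorImpl<int> &Values) {
  // Before: std::sort(Values.begin(), Values.end());
  llvm::sort(Values.begin(), Values.end()); // preferred spelling in LLVM code
}
```
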
* [InstCombine] Fold 'check for [no] signed truncation' pattern | Roman Lebedev | 2018-07-16 | 1 | -0/+69
  Summary: [[ https://bugs.llvm.org/show_bug.cgi?id=38149 | PR38149 ]]

  As discussed in https://reviews.llvm.org/D49179#1158957 and later, the IR for 'check for [no] signed truncation' pattern can be improved: https://rise4fun.com/Alive/gBf

  ^ that pattern will be produced by Implicit Integer Truncation sanitizer, https://reviews.llvm.org/D48958 https://bugs.llvm.org/show_bug.cgi?id=21530 in signed case, therefore it is probably a good idea to improve it.

  Proofs for this transform: https://rise4fun.com/Alive/mgu

  This transform is surprisingly frustrating. This does not deal with non-splat shift amounts, or with undef shift amounts. I've outlined what I think the solution should be:

```
  // Potential handling of non-splats: for each element:
  //  * if both are undef, replace with constant 0.
  //    Because (1<<0) is OK and is 1, and ((1<<0)>>1) is also OK and is 0.
  //  * if both are not undef, and are different, bailout.
  //  * else, only one is undef, then pick the non-undef one.
```

  The DAGCombine will reverse this transform, see https://reviews.llvm.org/D49266

  Reviewers: spatel, craig.topper
  Reviewed By: spatel
  Subscribers: JDevlieghere, rkruppe, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49320
  llvm-svn: 337190

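To make the pattern concrete, here is a hedged C++ restatement of what a 'check for signed truncation' computes, in two equivalent scalar formulations. The exact IR shapes and the direction of the canonicalization are in the linked review and are not asserted here; names and the bit-width parameter are illustrative.

```cpp
#include <cstdint>

// Does X survive truncation to N signed bits (1 <= N <= 31) and
// sign-extension back? Shift-based round-trip form; assumes the usual
// two's-complement arithmetic right shift for negative values.
bool noSignedTruncation(int32_t X, unsigned N) {
  unsigned S = 32 - N;
  int32_t RoundTripped = (int32_t)((uint32_t)X << S) >> S;
  return RoundTripped == X;
}

// Equivalent add-then-unsigned-compare form: biasing by 2^(N-1) maps all
// in-range values onto [0, 2^N), so a single unsigned compare suffices.
bool noSignedTruncationAlt(int32_t X, unsigned N) {
  return (uint32_t)X + (1u << (N - 1)) < (1u << N);
}
```
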
* [RegAlloc] Skip global splitting if the live range is huge and its spill is trivially rematerializable | Wei Mi | 2018-07-16 | 1 | -0/+19
  We run into a case where machineLICM hoists a large number of live ranges outside of a big loop because it thinks those live ranges are trivially rematerializable. In regalloc, global splitting is tried out first for those live ranges before they are spilled and rematerialized. Because the global splitting algorithm is quadratic, increasing a lot of global splitting candidates causes huge compile time increase (50s to 1400s on my local machine when compiling a module).

  However, we think for live ranges which are very large and are trivially rematerializable, it is better to just skip global splitting so as to save compile time with little chance of sacrificing performance. We use the segment size of the live range to indirectly evaluate whether the global splitting of the live range can introduce high cost, and use an option as a knob to adjust the size limit threshold.

  Differential Revision: https://reviews.llvm.org/D49353
  llvm-svn: 337186

* Restore "[ThinLTO] Ensure we always select the same function copy to import" | Teresa Johnson | 2018-07-16 | 2 | -71/+90
  This reverts commit r337081, therefore restoring r337050 (and the fix in r337059), with a test fix for the bot failure described after the original description below.

  In order to always import the same copy of a linkonce function, even when encountering it with different thresholds (a higher one, then a lower one), keep track of the summary we decided to import. This ensures that the backend only gets a single definition to import for each GUID, so that it doesn't need to choose one.

  Move the largest threshold the GUID was considered for import into the current module out of the ImportMap (which is part of a larger map maintained across the whole index), and into a new map just maintained for the current module we are computing imports for. This saves some memory since we no longer have the thresholds maintained across the whole index (and throughout the in-process backends when doing a normal non-distributed ThinLTO build), at the cost of some additional information being maintained for each invocation of ComputeImportForModule (the selected summary pointer for each import).

  There is an additional map lookup for each callee being considered for importing; however, this was able to subsume a map lookup in the Worklist iteration that invokes computeImportForFunction. We also are able to avoid calling selectCallee if we already failed to import at the same or higher threshold.

  I compared the run time and peak memory for the SPEC2006 471.omnetpp benchmark (running in-process ThinLTO backends), as well as for a large internal benchmark with a distributed ThinLTO build (so just looking at the thin link time/memory). Across a number of runs with and without this change there was no significant change in the time and memory. (I tried a few other variations of the change but they also didn't improve time or peak memory.)

  The new commit removes a test that no longer makes sense (Transforms/FunctionImport/hotness_based_import2.ll), as exposed by the reverse-iteration bot. The test depends on the order of processing the summary call edges, and actually depended on the old problematic behavior of selecting more than one summary for a given GUID when encountered with different thresholds. There was no guarantee even before that we would eventually pick the linkonce copy with the hottest call edges, it just happened to work with the test and the old code, and there was no guarantee that we would end up importing the selected version of the copy that had the hottest call edges (since the backend would effectively import only one of the selected copies).

  Reviewers: davidxl
  Subscribers: mehdi_amini, inglorion, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48670
  llvm-svn: 337184

* [x86/SLH] Completely rework how we sink post-load hardening past data invariant instructions to be both more correct and much more powerful | Chandler Carruth | 2018-07-16 | 1 | -24/+184
  While testing, I continued to find issues with sinking post-load hardening. Unfortunately, it was amazingly hard to create any useful tests of this because we were mostly sinking across copies and other loading instructions. The fact that we couldn't sink past normal arithmetic was really a big oversight.

  So first, I've ported roughly the same set of instructions from the data invariant loads to also have their non-loading varieties understood to be data invariant. I've also added a few instructions that came up so often it again made testing complicated: inc, dec, and lea.

  With this, I was able to shake out a few nasty bugs in the validity checking. We need to restrict to hardening single-def instructions with defined registers that match a particular form: GPRs that don't have a NOREX constraint directly attached to their register class.

  The (tiny!) test case included catches all of the issues I was seeing (once we can sink the hardening at all) except for the NOREX issue. The only test I have there is horrible. It is large, inexplicable, and doesn't even produce an error unless you try to emit encodings. I can keep looking for a way to test it, but I'm out of ideas really.

  Thanks to Ben for giving me at least a sanity-check review. I'll follow up with Craig to go over this more thoroughly post-commit, but without it SLH crashes everywhere so landing it for now.

  Differential Revision: https://reviews.llvm.org/D49378
  llvm-svn: 337177

* [mips] Eliminate the usage of hasStdEnc in MipsPat. | Simon Atanasyan | 2018-07-16 | 7 | -161/+206
  Instead, the pattern is tagged with the correct predicate when it is declared. Some patterns have been duplicated as necessary.

  Patch by Simon Dardis.

  Differential revision: https://reviews.llvm.org/D48365
  llvm-svn: 337171

* [MIPS GlobalISel] Select instructions to load and store i32 on stack | Petar Jovanovic | 2018-07-16 | 3 | -2/+88
  Add code for selection of G_LOAD, G_STORE, G_GEP, G_FRAMEINDEX and G_CONSTANT. Support loads and stores of i32 values.

  Patch by Petar Avramovic.

  Differential Revision: https://reviews.llvm.org/D48957
  llvm-svn: 337168

* [X86][AArch64][DAGCombine] Unfold 'check for [no] signed truncation' pattern | Roman Lebedev | 2018-07-16 | 3 | -0/+113
  Summary: [[ https://bugs.llvm.org/show_bug.cgi?id=38149 | PR38149 ]]

  As discussed in https://reviews.llvm.org/D49179#1158957 and later, the IR for 'check for [no] signed truncation' pattern can be improved: https://rise4fun.com/Alive/gBf

  ^ that pattern will be produced by Implicit Integer Truncation sanitizer, https://reviews.llvm.org/D48958 https://bugs.llvm.org/show_bug.cgi?id=21530 in signed case, therefore it is probably a good idea to improve it.

  But the IR-optimal pattern does not lower efficiently, so we want to undo it.

  This handles the simple pattern. There is a second pattern with predicate and constants inverted.

  NOTE: we do not check uses here; we always do the transform.

  Reviewers: spatel, craig.topper, RKSimon, javed.absar
  Reviewed By: spatel
  Subscribers: kristof.beyls, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49266
  llvm-svn: 337166

* [Sparc] Use the correct encoding for ta 3 | Daniel Cederman | 2018-07-16 | 1 | -1/+1
  Summary: The old encoding generated a "tn %g1 + 3" instruction instead of the expected "ta 3".

  Reviewers: venkatra, jyknight
  Reviewed By: jyknight
  Subscribers: fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D49171
  llvm-svn: 337165

* [Sparc] Use the names .rem and .urem instead of __modsi3 and __umodsi3 | Daniel Cederman | 2018-07-16 | 1 | -0/+3
  Summary: These are the names used in libgcc.

  Reviewers: venkatra, jyknight, ekedaigle
  Reviewed By: jyknight
  Subscribers: joerg, fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48915
  llvm-svn: 337164

* [Sparc] Generate ta 1 for the @llvm.debugtrap intrinsic | Daniel Cederman | 2018-07-16 | 2 | -0/+4
  Summary: Software trap number one is the trap used for breakpoints in the Sparc ABI.

  Reviewers: jyknight, venkatra
  Reviewed By: jyknight
  Subscribers: fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48637
  llvm-svn: 337163

* Avoid losing Hi part when expanding VAARG nodes on big endian machines | Daniel Cederman | 2018-07-16 | 1 | -1/+2
  Summary: If the high part of the load is not used, the offset to the next element will not be set correctly.

  For example, on Sparc V8, the following code will read val2 from offset 4 instead of 8.

```
  int val = __builtin_va_arg(va, long long);
  int val2 = __builtin_va_arg(va, int);
```

  Reviewers: jyknight
  Reviewed By: jyknight
  Subscribers: fedor.sergeev, jrtc27, llvm-commits
  Differential Revision: https://reviews.llvm.org/D48595
  llvm-svn: 337161

* [x86/SLH] Fix a bug where we would try to post-load harden non-GPRs. | Chandler Carruth | 2018-07-16 | 1 | -13/+25
  Found cases that hit the assert I added. This patch factors the validity checking into a nice helper routine and calls it when deciding to harden post-load, and asserts it when doing so later.

  I've added tests for the various ways of loading a floating point type, as well as loading all vector permutations. Even though many of these go to identical instructions, it seems good to somewhat comprehensively test them.

  I'm confident there will be more fixes needed here; I'll try to add tests each time as I get this predicate adjusted.

  llvm-svn: 337160

* MSan: minor fixes, NFC | Alexander Potapenko | 2018-07-16 | 1 | -7/+6
  - remove an extra space after |ID| declaration
  - drop the unused |FirstInsn| parameter in getShadowOriginPtrUserspace()

  llvm-svn: 337159

* [AccelTable] Provide DWARF5AccelTableStaticData for dsymutil. | Jonas Devlieghere | 2018-07-16 | 1 | -39/+81
  For dsymutil we want to store offsets in the accelerator table entries rather than DIE pointers. In addition, we need a way to communicate which CU a DIE belongs to. This patch provides support for both of these issues.

  Differential revision: https://reviews.llvm.org/D49102
  llvm-svn: 337158

* [x86/SLH] Extract another small helper function, add better comments and use better terminology. NFC. | Chandler Carruth | 2018-07-16 | 1 | -23/+34
  llvm-svn: 337157

* [AMDGPU][Waitcnt] Re-apply fix "comparison of integers of different signs" build error | Mark Searles | 2018-07-16 | 1 | -1/+1
  Re-apply "[AMDGPU][Waitcnt] fix "comparison of integers of different signs" build error" (fe0a456510131f268e388c4a18a92f575c0db183), which was inadvertently reverted via 2b2ee080f0164485562593b1b87291a48cea4a9a.

  llvm-svn: 337156

* [MSan] factor userspace-specific declarations into createUserspaceApi(). NFC | Alexander Potapenko | 2018-07-16 | 1 | -38/+53
  This patch introduces createUserspaceApi() that creates function/global declarations for symbols used by MSan in the userspace. This is a step towards the upcoming KMSAN implementation patch.

  Reviewed at https://reviews.llvm.org/D49292
  llvm-svn: 337155

* run post-RA hazard recognizer pass late | Mark Searles | 2018-07-16 | 2 | -4/+8
  Memory legalizer, waitcnt, and shrink passes can perturb the instructions, which means that the post-RA hazard recognizer pass should run after them. Otherwise, one of those passes may invalidate the work done by the hazard recognizer. Note that this has the adverse side effect that any consecutive S_NOP 0's, emitted by the hazard recognizer, will not be shrunk into a single S_NOP <N>. This should be addressed in a follow-on patch.

  Differential Revision: https://reviews.llvm.org/D49288
  llvm-svn: 337154

* Revert "[AMDGPU][Waitcnt] fix "comparison of integers of different signs" build error" | Mark Searles | 2018-07-16 | 1 | -1/+1
  This reverts commit fe0a456510131f268e388c4a18a92f575c0db183.

  llvm-svn: 337153

* [MemorySSAUpdater] Remove deleted trivial Phis from active workset | Alexandros Lamprineas | 2018-07-16 | 1 | -7/+12
  Bug fix for PR37808. The regression test is a reduced version of the original reproducer attached to the bug report. As stated in the report, the problem was that InsertedPHIs was keeping dangling pointers to deleted MemoryPhis. MemoryPhis are created eagerly and sometimes get zapped shortly afterwards. I've used WeakVH instead of an expensive removal operation from the active workset.

  Differential Revision: https://reviews.llvm.org/D48372
  llvm-svn: 337149

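A hedged sketch of the WeakVH idiom the fix relies on: WeakVH (llvm/IR/ValueHandle.h) nulls itself out when the value it tracks is deleted, so stale worklist entries can simply be skipped instead of being eagerly erased. Names below are illustrative, not code from the patch.

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/ValueHandle.h"

// Walk a worklist of possibly-deleted values without an eager erase:
// entries whose value has been destroyed read back as null.
void drainWorklist(llvm::SmallVectorImpl<llvm::WeakVH> &Worklist) {
  for (llvm::WeakVH &Handle : Worklist) {
    llvm::Value *V = Handle; // null if the tracked value was deleted
    if (!V)
      continue;              // dangling entry left behind by a deletion
    // ... process V ...
  }
}
```
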
* [X86] Merge the FR128 and VR128 regclass since they have identical spill and alignment characteristics. | Craig Topper | 2018-07-16 | 7 | -298/+328
  This unfortunately requires a bunch of bitcasts to be added to SUBREG_TO_REG, COPY_TO_REGCLASS, and instructions in output patterns. Otherwise tablegen seems to default to picking f128 and then we fail when something tries to get the register class for f128, which isn't always valid.

  The test changes are because we were previously mixing fr128 and vr128 due to constrainRegClass finding FR128 first, and passes like live range shrinking weren't handling that well.

  llvm-svn: 337147

* [x86/SLH] Fix an unused variable warning in release builds after r337144. | Chandler Carruth | 2018-07-16 | 1 | -0/+1
  llvm-svn: 337145

* [x86/SLH] Teach speculative load hardening to correctly harden the indices used by AVX2 and AVX-512 gather instructions. | Chandler Carruth | 2018-07-16 | 2 | -17/+92
  The index vector is hardened by broadcasting the predicate state into a vector register and then or-ing. We don't even have to worry about EFLAGS here.

  I've added a test for all of the gather intrinsics to make sure that we don't miss one. A particularly interesting creation is the gather prefetch, which needs to be marked as potentially "loading" to get the correct behavior. It's a memory access in many ways, and is actually relevant for SLH. Based on discussion with Craig in review, I've moved it to be `mayLoad` and `mayStore` rather than generic side effects. This matches how we model other prefetch instructions.

  Many thanks to Craig for the review here.

  Differential Revision: https://reviews.llvm.org/D49336
  llvm-svn: 337144

* [InstCombine] add more SPFofSPF folding | Chen Zheng | 2018-07-16 | 2 | -24/+44
  Differential Revision: https://reviews.llvm.org/D49238
  llvm-svn: 337143

* [InstCombine] fold icmp pred (sub 0, X) C for vector type | Chen Zheng | 2018-07-16 | 1 | -2/+2
  Differential Revision: https://reviews.llvm.org/D49283
  llvm-svn: 337141

* Recommit r335794 "Add support for generating a call graph profile from Branch Frequency Info." with fix for removed functions. | Michael J. Spencer | 2018-07-16 | 6 | -7/+181
  llvm-svn: 337140

* [x86/SLH] Extract one of the bits of logic to its own function. NFC. | Chandler Carruth | 2018-07-15 | 1 | -43/+48
  This is just a refactoring to start cleaning up the code here and make it more readable and approachable.

  llvm-svn: 337138

* [X86] Add custom execution domain fixing for 128/256-bit integer logic operations with AVX512F, but not AVX512DQ. | Craig Topper | 2018-07-15 | 1 | -0/+85
  AVX512F only has integer domain logic instructions. AVX512DQ added FP domain logic instructions.

  Execution domain fixing runs before EVEX->VEX. So if we have AVX512F and not AVX512DQ we fail to do execution domain switching of the logic operations. This leads to mismatches in execution domain and more test differences.

  This patch adds custom domain fixing that switches EVEX integer logic operations to VEX fp logic operations if XMM16-31 are not used.

  llvm-svn: 337137

* [X86] Add load patterns for cases where we select X86Movss/X86Movsd to blend instructions. | Craig Topper | 2018-07-15 | 1 | -0/+32
  This allows us to fold the load during isel without waiting for the peephole pass to do it.

  llvm-svn: 337136