path: root/llvm/lib
Commit message | Author | Age | Files | Lines
* [ARM] Add bitcast/extract_subvec. of fp16 vectors | Diogo N. Sampaio | 2019-04-29 | 1 | -91/+144
  Summary: This patch adds some basic operations for fp16 vectors, such as bitcast from fp16 to i16, required to perform extract_subvector (also added here) and extract_element.
  Reviewers: SjoerdMeijer, DavidSpickett, t.p.northover, ostannard
  Reviewed By: ostannard
  Subscribers: javed.absar, kristof.beyls, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D60618
  llvm-svn: 359433
* [ARM] Add v4f16 and v8f16 types to the CallingConv | Diogo N. Sampaio | 2019-04-29 | 1 | -18/+18
  Summary: The Procedure Call Standard for the Arm Architecture states that float16x4_t and float16x8_t behave just as uint16x4_t and uint16x8_t for argument passing. This patch adds the fp16 vectors to the ARMCallingConv.td file.
  Reviewers: miyuki, ostannard
  Reviewed By: ostannard
  Subscribers: ostannard, javed.absar, kristof.beyls, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D60720
  llvm-svn: 359431
* Try to use /proc on FreeBSD for getExecutablePath | David Chisnall | 2019-04-29 | 1 | -1/+14
  Currently, clang's libTooling passes this function a fake argv0, which means that no libTooling tools can find the standard headers on FreeBSD. With this change, these will now work on any FreeBSD systems that have procfs mounted. This isn't the right fix for the libTooling issue, but it does bring the FreeBSD implementation of getExecutablePath closer to the Linux and macOS implementations.
  llvm-svn: 359427
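  A minimal sketch of the procfs approach (assuming /proc is mounted; FreeBSD exposes the running binary at /proc/curproc/file, and error handling is trimmed):

  ```cpp
  #include <climits>
  #include <string>
  #include <unistd.h>

  // Resolve the running executable's path via procfs. On FreeBSD,
  // /proc/curproc/file is a symlink to the current process's binary,
  // so no argv[0] guessing is needed. Returns "" if procfs is absent.
  std::string getExecutablePathViaProc() {
    char Buf[PATH_MAX];
    ssize_t Len = readlink("/proc/curproc/file", Buf, sizeof(Buf) - 1);
    if (Len <= 0)
      return ""; // procfs not mounted; caller falls back to argv[0]
    Buf[Len] = '\0';
    return std::string(Buf, Len);
  }
  ```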
* [DebugInfo] Terminate more location-list ranges at the end of blocks | Jeremy Morse | 2019-04-29 | 2 | -20/+82
  This patch fixes PR40795, where constant-valued variable locations can "leak" into blocks placed at higher addresses. The root of this is that DbgEntityHistoryCalculator terminates all register variable locations at the end of each block, but not constant-value variable locations.
  Fixing this requires constant-valued DBG_VALUE instructions to be broadcast into all blocks where the variable location remains valid, as documented in the LiveDebugValues section of SourceLevelDebugging.rst, and correct termination in DbgEntityHistoryCalculator.
  Differential Revision: https://reviews.llvm.org/D59431
  llvm-svn: 359426
* [DWARF] Fix dump of local/foreign TU lists in .debug_names | Fangrui Song | 2019-04-29 | 1 | -2/+3
  Differential Revision: https://reviews.llvm.org/D61241
  llvm-svn: 359425
* [DWARF] Delete a redundant check in getFileNameByIndex() | Fangrui Song | 2019-04-29 | 1 | -2/+1
  llvm-svn: 359422
* [X86] Remove some intel syntax aliases on (v)cvtpd2(u)dq, (v)cvtpd2ps, (v)cvt(u)qq2ps. Add 'x' and 'y' suffix aliases to masked versions of the same in att syntax | Craig Topper | 2019-04-29 | 2 | -42/+160
  The 128/256 bit versions of these instructions require an 'x' or 'y' suffix to disambiguate the memory form in att syntax. We were allowing the same suffix in intel syntax, but it appears gas does not do that. gas does allow the 'x' and 'y' suffix on register and broadcast forms even though it's not needed. We were allowing it on the unmasked register form, but not on masked versions or on masked or unmasked broadcast forms.
  While there, fix some test coverage holes so they can be extended with the 'x' and 'y' suffix tests.
  llvm-svn: 359418
* llvm-cvtres: Attempt to make llvm-cvtres/duplicate.test work on big-endian systems | Nico Weber | 2019-04-29 | 1 | -12/+14
  llvm-svn: 359414
* [X86][SSE] combineExtractVectorElt - add early-out to return zero/undef for out-of-range extraction indices | Simon Pilgrim | 2019-04-28 | 1 | -2/+4
  llvm-svn: 359406
* [ConstantRange] Add makeExactNoWrapRegion() | Nikita Popov | 2019-04-28 | 2 | -6/+12
  I got confused on the terminology, and the change in D60598 was not correct. I was thinking of "exact" in terms of the result being non-approximate. However, the relevant distinction here is whether the result is:
  * Largest range such that: Forall Y in Other: Forall X in Result: X BinOp Y does not wrap. (makeGuaranteedNoWrapRegion)
  * Smallest range such that: Forall Y in Other: Forall X not in Result: X BinOp Y wraps. (A hypothetical makeAllowedNoWrapRegion)
  * Both. (makeExactNoWrapRegion)
  I'm adding a separate makeExactNoWrapRegion method accepting a single APInt (same as makeExactICmpRegion) and using it in the places where the guarantee is relevant.
  Differential Revision: https://reviews.llvm.org/D60960
  llvm-svn: 359402
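  A usage sketch, assuming the signature this patch describes (opcode, a single APInt, and a no-wrap kind); the expected result is worked out by hand from the definition above:

  ```cpp
  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/ConstantRange.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Operator.h"
  using namespace llvm;

  ConstantRange exactNswAddRegion() {
    // Exact region for "X + 1 does not wrap (nsw)" on i8: every X except
    // SINT_MAX = 127 is safe, and 127 itself wraps, so the result should
    // be exactly [-128, 127).
    APInt One(8, 1);
    return ConstantRange::makeExactNoWrapRegion(
        Instruction::Add, One, OverflowingBinaryOperator::NoSignedWrap);
  }
  ```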
* [X86][AVX] Combine non-lane crossing binary shuffles using X86ISD::VPERMV3 | Simon Pilgrim | 2019-04-28 | 1 | -0/+22
  Some of the combines might be further improved if we lower more shuffles with X86ISD::VPERMV3 directly, instead of waiting to combine the results.
  llvm-svn: 359400
* [DAGCombiner] try repeated fdiv divisor transform before building estimate | Sanjay Patel | 2019-04-28 | 1 | -3/+3
  This was originally part of D61028, but it's an independent diff.
  If we try the repeated divisor reciprocal transform before producing an estimate sequence, then we have an opportunity to use scalar fdiv. On x86, the trade-off is 1 divss vs. 5 vector FP ops in the default estimate sequence. On recent chips (Skylake, Ryzen), the full-precision division is only 3 cycle throughput, so that's probably the better perf default option and avoids problems from x86's inaccurate estimates.
  The last 2 tests show that users still have the option to override the defaults by using the function attributes for reciprocal estimates, but those patterns are potentially made faster by converting the vector ops (including ymm ops) to scalar math.
  Differential Revision: https://reviews.llvm.org/D61149
  llvm-svn: 359398
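  The transform at source level, for illustration only (the actual rewrite happens on DAG nodes and is guarded by fast-math flags):

  ```cpp
  // Two divisions by the same divisor:
  void before(float &A, float &B, float D) {
    A = A / D;
    B = B / D;
  }
  // After the repeated-divisor transform: one real division, two multiplies.
  // With a single fdiv remaining, a scalar divss can beat the five-op
  // reciprocal-estimate sequence on recent x86 cores.
  void after(float &A, float &B, float D) {
    float R = 1.0f / D;
    A = A * R;
    B = B * R;
  }
  ```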
* [X86][SSE] Optimize llvm.experimental.vector.reduce.xor.vXi1 parity reduction (PR38840) | Simon Pilgrim | 2019-04-28 | 1 | -2/+12
  An xor reduction of a bool vector can be optimized to a parity check of the MOVMSK/BITCAST'd integer - if the population count is odd return 1, else return 0.
  Differential Revision: https://reviews.llvm.org/D61230
  llvm-svn: 359396
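  In scalar terms, the lowering amounts to the following sketch, where Mask stands in for the MOVMSK/BITCAST result (one bit per bool lane):

  ```cpp
  #include <cstdint>
  // XOR-reducing N i1 lanes equals the parity of the corresponding mask bits.
  bool xorReduceBoolMask(uint32_t Mask) {
    return __builtin_popcount(Mask) & 1; // GCC/Clang builtin; POPCNT on x86
  }
  ```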
* [X86] Remove (V)MOV64toSDrr/m and (V)MOVDI2SSrr/m. Use 128-bit result MOVD/MOVQ and COPY_TO_REGCLASS instead | Craig Topper | 2019-04-28 | 3 | -70/+22
  Summary: The register forms of these instructions are CodeGenOnly instructions that cover GR32->FR32 and GR64->FR64 bitcasts. There is a similar set of instructions for the opposite bitcast. Due to the patterns using bitcasts these instructions get marked as "bitcast" machine instructions as well. The peephole pass is able to look through these as well as other copies to try to avoid register bank copies.
  Because FR32/FR64/VR128 are all coalescable to each other we can end up in a situation where a GR32->FR32->VR128->FR64->GR64 sequence can be reduced to GR32->GR64 which the copyPhysReg code can't handle. To prevent this, this patch removes one set of the 'bitcast' instructions. So now we can only go GR32->VR128->FR32 or GR64->VR128->FR64. The instruction that converts from GR32/GR64->VR128 has no special significance to the peephole pass and won't be looked through.
  I guess the other option would be to add support to copyPhysReg to just promote the GR32->GR64 to a GR64->GR64 copy. The upper bits were basically undefined anyway. But removing the CodeGenOnly instruction in favor of one that won't be optimized seemed safer.
  I deleted the peephole test because it couldn't be made to work with the bitcast instructions removed. The load versions of the instructions were unnecessary, as the pattern that selects them contains a bitcasted load which should never happen.
  Fixes PR41619.
  Reviewers: RKSimon, spatel
  Reviewed By: RKSimon
  Subscribers: hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D61223
  llvm-svn: 359392
* Revert rL359389: [X86][SSE] Add support for <64 x i1> bool reduction | Simon Pilgrim | 2019-04-27 | 1 | -12/+10
  Minor generalization of the existing <32 x i1> pre-AVX2 split code.
  ........
  Causing irregular buildbot failures.
  llvm-svn: 359391
* [X86][SSE] Add support for <64 x i1> bool reduction | Simon Pilgrim | 2019-04-27 | 1 | -10/+12
  Minor generalization of the existing <32 x i1> pre-AVX2 split code.
  llvm-svn: 359389
* [X86][AVX512] Improve vector bool reductions | Simon Pilgrim | 2019-04-27 | 1 | -11/+22
  As predicate masks are legal on AVX512 targets, we avoid MOVMSK in these cases, but we can just bitcast the bool vector to the integer equivalent directly - avoiding expansion of the reduction to a shuffle pattern.
  llvm-svn: 359386
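  Conceptually, with the AVX512 mask register modeled as a plain integer (an analogy, not the DAG code):

  ```cpp
  #include <cstdint>
  // A <16 x i1> predicate mask is already a 16-bit integer, so AND/OR
  // reductions become integer compares instead of shuffle cascades.
  bool andReduce16(uint16_t Mask) { return Mask == 0xFFFF; } // all lanes true
  bool orReduce16(uint16_t Mask)  { return Mask != 0; }      // any lane true
  ```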
* [DJB] Fix variable case after D61178 | Fangrui Song | 2019-04-27 | 1 | -3/+3
  llvm-svn: 359381
* [X86][AVX] Merge mask select with shuffles across extract_subvector (PR40332) | Simon Pilgrim | 2019-04-27 | 1 | -4/+44
  Fixes PR40332 in the limited case where we're selecting between a target shuffle and a zero vector. We can extend this in the future to handle more opcodes and non-zero selections.
  llvm-svn: 359378
* [MCA] Add field `IsEliminated` to class Instruction. NFCI | Andrea Di Biagio | 2019-04-27 | 1 | -3/+3
  llvm-svn: 359377
* [X86] Use MOVQ for i64 atomic_stores when SSE2 is enabled | Craig Topper | 2019-04-27 | 5 | -19/+72
  Summary: If we have SSE2 we can use a MOVQ to store 64-bits and avoid falling back to a cmpxchg8b loop. If it's a seq_cst store we need to insert an mfence after the store.
  Reviewers: spatel, RKSimon, reames, jfb, efriedma
  Reviewed By: RKSimon
  Subscribers: hiraditya, dexonsmith, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D60546
  llvm-svn: 359368
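  For example, on a 32-bit x86 target with SSE2 (the codegen comments reflect the patch's intent, not guaranteed output):

  ```cpp
  #include <atomic>
  #include <cstdint>

  std::atomic<int64_t> Counter;

  // On i386 + SSE2 this can now be a single movq (64-bit XMM store)
  // instead of a lock cmpxchg8b loop.
  void storeRelease(int64_t V) {
    Counter.store(V, std::memory_order_release);
  }

  // A seq_cst store additionally needs an mfence after the movq.
  void storeSeqCst(int64_t V) {
    Counter.store(V, std::memory_order_seq_cst);
  }
  ```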
* Revert "AMDGPU: Split block for si_end_cf"Mark Searles2019-04-275-128/+17
| | | | | | | | | | This reverts commit 7a6ef3004655dd86d722199c471ae78c28e31bb4. We discovered some internal test failures, so reverting for now. Differential Revision: https://reviews.llvm.org/D61213 llvm-svn: 359363
* [AMDGPU] gfx1010 VOPC implementation | Stanislav Mekhanoshin | 2019-04-26 | 8 | -361/+696
  Differential Revision: https://reviews.llvm.org/D61208
  llvm-svn: 359358
* [ORC] Add a 'plugin' interface to ObjectLinkingLayer for events/configuration. | Lang Hames | 2019-04-26 | 2 | -51/+153
  ObjectLinkingLayer::Plugin provides event notifications when objects are loaded, emitted, and removed. It also provides a modifyPassConfig callback that allows plugins to modify the JITLink pass configuration.
  This patch moves eh-frame registration into its own plugin, and teaches llvm-jitlink to only add that plugin when performing execution runs on non-Windows platforms. This should allow us to re-enable the test case that was removed in r359198.
  llvm-svn: 359357
* [GlobalISel][AArch64] Use getConstantVRegValWithLookThrough for extracts | Jessica Paquette | 2019-04-26 | 1 | -38/+6
  getConstantVRegValWithLookThrough does the same thing as the getConstantValueForReg function, and has more visibility across GISel. Plus, it supports looking through G_TRUNC, G_SEXT, and G_ZEXT. So, we get better code reuse and more functionality for free by using it.
  Add some test cases to select-extract-vector-elt.mir to show that we can now look through those instructions.
  llvm-svn: 359351
* [AsmPrinter] refactor to support %c w/ GlobalAddress' | Nick Desaulniers | 2019-04-26 | 18 | -82/+90
  Summary: Targets like ARM, MSP430, PPC, and SystemZ have complex behavior when printing the address of a MachineOperand::MO_GlobalAddress. Move that handling into a new overridden method in each base class. A virtual method was added to the base class for handling the generic case. Refactors a few subclasses to support the target independent %a, %c, and %n.
  The patch also contains small cleanups for AVRAsmPrinter and SystemZAsmPrinter.
  It seems that NVPTXTargetLowering is possibly missing some logic to transform GlobalAddressSDNodes for TargetLowering::LowerAsmOperandForConstraint to handle with "i" extended inline assembly constraints.
  Fixes:
  - https://bugs.llvm.org/show_bug.cgi?id=41402
  - https://github.com/ClangBuiltLinux/linux/issues/449
  Reviewers: echristo, void
  Reviewed By: void
  Subscribers: void, craig.topper, jholewinski, dschuff, jyknight, dylanmckay, sdardis, nemanjai, javed.absar, sbc100, jgravelle-google, eraman, kristof.beyls, hiraditya, aheejin, kbarton, fedor.sergeev, jrtc27, atanasyan, jsji, llvm-commits, kees, tpimh, nathanchance, peter.smith, srhines
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D60887
  llvm-svn: 359337
* [X86][AVX] Fold extract_subvector(broadcast(x)) -> broadcast(x) iff x has one use | Simon Pilgrim | 2019-04-26 | 1 | -0/+7
  llvm-svn: 359332
* [AArch64][GlobalISel] Select G_BSWAP for vectors of s32 and s64 | Jessica Paquette | 2019-04-26 | 2 | -1/+38
  There are instructions for these, so mark them as legal. Select the correct instruction in AArch64InstructionSelector.cpp.
  Update select-bswap.mir and arm64-rev.ll to reflect the changes.
  llvm-svn: 359331
* [AMDGPU] gfx1010 VOP3 and VOP3P implementation | Stanislav Mekhanoshin | 2019-04-26 | 4 | -102/+281
  Differential Revision: https://reviews.llvm.org/D61202
  llvm-svn: 359328
* [DAGCombine] Cleanup visitEXTRACT_SUBVECTOR. NFCI. | Simon Pilgrim | 2019-04-26 | 1 | -10/+11
  Use ArrayRef::slice, reduce some rather awkward long lines for legibility and run clang-format.
  llvm-svn: 359326
* [ConstantRange] Add abs() support | Nikita Popov | 2019-04-26 | 1 | -0/+31
  Add support for abs() to ConstantRange. This will allow us to handle the SPF_ABS select flavor in LVI and will also come in handy as a primitive for the srem implementation.
  The implementation is slightly tricky, because a) abs of signed min is signed min and b) sign-wrapped ranges may have an abs() that is smaller than a full range, so we need to explicitly handle them.
  Differential Revision: https://reviews.llvm.org/D61084
  llvm-svn: 359321
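  A usage sketch with the expected result worked out by hand (API as added here):

  ```cpp
  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/ConstantRange.h"
  using namespace llvm;

  ConstantRange absExample() {
    // [-4, 3) on i8 holds {-4, ..., 2}; abs of that set is {0, ..., 4},
    // so the expected result is [0, 5).
    ConstantRange CR(APInt(8, -4, /*isSigned=*/true), APInt(8, 3));
    return CR.abs();
    // Edge case from the message: abs(INT8_MIN) is INT8_MIN again, so a
    // range containing -128 keeps -128 in its abs() result.
  }
  ```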
* [X86] Sink NoRegister creation for unused Base/Index registers into getAddressOperands. NFCI | Craig Topper | 2019-04-26 | 1 | -36/+26
  llvm-svn: 359318
* [X86] Segment registers should have i16 type not i32. | Craig Topper | 2019-04-26 | 1 | -1/+1
  Probably doesn't really matter, but was inconsistent with the rest of the code.
  llvm-svn: 359317
* [AMDGPU] gfx1010 VOP2 changes | Stanislav Mekhanoshin | 2019-04-26 | 6 | -154/+605
  Differential Revision: https://reviews.llvm.org/D61156
  llvm-svn: 359316
* [PowerPC] Update P9 vector costs for insert/extract element | Roland Froese | 2019-04-26 | 1 | -0/+29
  The PPC vector cost model values for insert/extract element reflect older processors that lacked vector insert/extract and move-to/move-from VSR instructions. Update getVectorInstrCost to give appropriate values for when the newer instructions are present.
  Differential Revision: https://reviews.llvm.org/D60160
  llvm-svn: 359313
* s/Dwarf 5/DWARF v5/ NFC | Fangrui Song | 2019-04-26 | 1 | -1/+1
  llvm-svn: 359307
* Fix Wparentheses warning. NFCI. | Simon Pilgrim | 2019-04-26 | 1 | -5/+5
  llvm-svn: 359299
* [X86][SSE] Pull out OR(EXTRACTELT(X,0),OR(EXTRACTELT(X,1),...)) matching code from LowerVectorAllZeroTest | Simon Pilgrim | 2019-04-26 | 1 | -50/+59
  Create a matchBitOpReduction helper that checks for the pattern with any opcode.
  First step towards reusing this code to recognize other scalar reduction patterns.
  llvm-svn: 359296
* caseFoldingDjbHash: simplify and make the US-ASCII fast path faster | Fangrui Song | 2019-04-26 | 1 | -19/+16
  The slow path (with at least one non-US-ASCII byte) will be slower, but that doesn't matter.
  Differential Revision: https://reviews.llvm.org/D61178
  llvm-svn: 359294
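  For reference, a behavioral sketch of the US-ASCII path (not LLVM's optimized implementation, which also falls back to full Unicode folding for non-ASCII input):

  ```cpp
  #include <cstddef>
  #include <cstdint>

  uint32_t caseFoldingDjbAscii(const char *S, size_t N, uint32_t H = 5381) {
    for (size_t I = 0; I != N; ++I) {
      uint8_t C = static_cast<uint8_t>(S[I]);
      if (C >= 'A' && C <= 'Z')
        C |= 0x20;              // fold US-ASCII letters to lowercase
      H = (H << 5) + H + C;     // classic DJB step: H * 33 + C
    }
    return H;
  }
  ```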
* [X86][SSE] Disable shouldFoldConstantShiftPairToMask for btver1/btver2 targets (PR40758) | Simon Pilgrim | 2019-04-26 | 4 | -2/+26
  As detailed on PR40758, Bobcat/Jaguar can perform vector immediate shifts on the same pipes as vector ANDs with the same latency - so it doesn't make sense to replace a shl+lshr with a shift+and pair as it requires an additional mask (with the extra constant pool, loading and register pressure costs).
  Differential Revision: https://reviews.llvm.org/D61068
  llvm-svn: 359293
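  The trade-off at source level, as a scalar analogy of the vector transform:

  ```cpp
  #include <cstdint>
  // Both functions compute the same value.
  // shl + lshr: two immediate shifts, no extra constant. Kept on btver1/2.
  uint32_t shiftPair(uint32_t X) { return (X << 8) >> 4; }
  // shift + and: saves one shift but needs a mask constant, which for a
  // vector means a constant-pool load and extra register pressure.
  uint32_t shiftPlusMask(uint32_t X) { return (X << 4) & 0x0FFFFFF0u; }
  ```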
* [X86][AVX] Combine shuffles extracted from a common vector | Simon Pilgrim | 2019-04-26 | 1 | -0/+45
  A small step towards combining shuffles across vector sizes - this recognizes when a shuffle's operands are all extracted from the same larger source and tries to combine to a unary shuffle of that source instead.
  Fixes one of the test cases from PR34380.
  Differential Revision: https://reviews.llvm.org/D60512
  llvm-svn: 359292
* [InferAddressSpaces] Add AS parameter to the pass factory | Sven van Haastregt | 2019-04-26 | 1 | -6/+11
  This enables the pass to be used in the absence of TargetTransformInfo. When the argument isn't passed, the factory defaults to UninitializedAddressSpace and the flat address space is obtained from the TargetTransformInfo, as before this change. Existing users won't have to change.
  Patch by Kevin Petit.
  Differential Revision: https://reviews.llvm.org/D60602
  llvm-svn: 359290
* Fix alignment in AArch64InstructionSelector::emitConstantPoolEntry() | Hans Wennborg | 2019-04-26 | 1 | -1/+1
  The code was using the alignment of a pointer to the value, not the alignment of the constant itself.
  Maybe we got away with it so far because the pointer alignment is fairly high, but we did end up under-aligning <16 x i8> vectors, which was caught in the Chromium build after lld stopped over-aligning the .rodata.cst16 section in r356428. (See crbug.com/953815)
  Differential revision: https://reviews.llvm.org/D61124
  llvm-svn: 359287
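  A sketch of the bug pattern (hypothetical helper; the DataLayout calls are assumptions for illustration, not the literal one-line fix):

  ```cpp
  #include "llvm/IR/Constant.h"
  #include "llvm/IR/DataLayout.h"
  using namespace llvm;

  unsigned constantPoolAlign(const DataLayout &DL, const Constant *CPVal) {
    // Wrong: alignment of a *pointer to* the value (8 bytes on AArch64),
    // which under-aligns a <16 x i8> vector constant:
    //   return DL.getPointerPrefAlignment();
    // Right: alignment of the constant's own type (16 for <16 x i8>):
    return DL.getPrefTypeAlignment(CPVal->getType());
  }
  ```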
* [GlobalISel] Fix inserting copies in the right position for reg definitions | Marcello Maggioni | 2019-04-26 | 2 | -12/+38
  When constrainRegClass is called, if the constraining happens on a use the COPY needs to be inserted before the instruction that contains the MachineOperand, but if we are constraining a definition it actually needs to be added after the instruction.
  In addition, the COPY needs to have its operands flipped: in the use case we are copying from the old unconstrained register to the new constrained register, while in the definition case we are copying from the new constrained register that the instruction defines to the old unconstrained register.
  llvm-svn: 359282
* [GlobalOpt] Swap the expensive check for cold calls with the cheap TTI check | Justin Bogner | 2019-04-26 | 1 | -2/+2
  isValidCandidateForColdCC is much more expensive than TTI.useColdCCForColdCall, which by default just returns false. Avoid doing this work if we're not going to look at the answer anyway.
  This change is NFC, but I see significant compile time improvements on some code with pathologically many functions.
  llvm-svn: 359253
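  The change reduces to operand order in a short-circuiting &&, sketched here with stand-in functions (not the real GlobalOpt code):

  ```cpp
  // Cheap query: a TTI hook that simply returns false for most targets.
  bool useColdCCForColdCall() { return false; }
  // Expensive query: in GlobalOpt this scans all of the function's users.
  bool isValidCandidateForColdCC() { /* ...costly analysis... */ return true; }

  bool shouldMarkColdCC() {
    // After the patch the cheap check runs first, so the expensive scan is
    // skipped entirely whenever the target doesn't want coldcc.
    return useColdCCForColdCall() && isValidCandidateForColdCC();
  }
  ```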
* [ORC] Remove symbols from dependency lists when failing materialization. | Lang Hames | 2019-04-25 | 1 | -2/+29
  When failing materialization of a symbol X, remove X from the dependants list of any of X's dependencies. This ensures that when X's dependencies are emitted (or fail themselves) they do not try to access the no-longer-existing MaterializationInfo for X.
  llvm-svn: 359252
* [CUDA] Implemented _[bi]mma* builtins. | Artem Belevich | 2019-04-25 | 1 | -0/+2
  These builtins provide access to the new integer and sub-integer variants of MMA (matrix multiply-accumulate) instructions provided by CUDA-10.x on sm_75 (AKA Turing) GPUs.
  Also added a feature for PTX 6.4. While Clang/LLVM does not generate any PTX instructions that need it, we still need to pass it through to ptxas in order to be able to compile code that uses the new 'mma' instruction as inline assembly (e.g. used by NVIDIA's CUTLASS library https://github.com/NVIDIA/cutlass/blob/master/cutlass/arch/mma.h#L101).
  Differential Revision: https://reviews.llvm.org/D60279
  llvm-svn: 359248
* PTX 6.3 extends `wmma` instruction to support s8/u8/s4/u4/b1 -> s32. | Artem Belevich | 2019-04-25 | 3 | -54/+243
  All of the new instructions are still handled mostly by tablegen. I've slightly refactored the code to drive intrinsic/instruction generation from a master list of supported variants, so all irregularities have to be implemented in one place only.
  The test generation script wmma.py has been refactored in a similar way.
  Differential Revision: https://reviews.llvm.org/D60015
  llvm-svn: 359247
* [NVPTX] generate correct MMA instruction mnemonics with PTX63+. | Artem Belevich | 2019-04-25 | 4 | -129/+170
  PTX 6.3 requires using ".aligned" in the MMA instruction names. In order to generate the correct name, we now pass the current PTX version to each instruction as an extra constant operand and InstPrinter adjusts its output accordingly.
  Differential Revision: https://reviews.llvm.org/D59393
  llvm-svn: 359246
* [NVPTX] Refactor generation of MMA intrinsics and instructions. NFC. | Artem Belevich | 2019-04-25 | 1 | -329/+183
  Generalized constructions of 'fragments' of MMA operations to provide common primitives for construction of the ops. This will make it easier to add new variants of the instructions that operate on integer types.
  Use nested foreach loops, which make it possible to better control naming of the intrinsics.
  This patch does not affect LLVM's output, so there are no test changes.
  Differential Revision: https://reviews.llvm.org/D59389
  llvm-svn: 359245