path: root/llvm/lib
...
* [X86] Remove the _alt forms of (V)CMP instructions. Use a combination of custom printing and custom parsing to achieve the same result and more
  Craig Topper, 2019-03-18 (9 files, -234/+376)

  Similar to the previous changes done for VPCOM and VPCMP.

  Differential Revision: https://reviews.llvm.org/D59468
  llvm-svn: 356384
* [DAG] Cleanup unused node in SimplifySelectCC.
  Nirav Dave, 2019-03-18 (1 file, -8/+7)

  Delete the temporarily constructed node used for analysis after its use,
  holding onto the original input nodes. Ideally this would be rewritten
  without making nodes, but this appears relatively complex.

  Reviewers: spatel, RKSimon, craig.topper
  Subscribers: jdoerfert, hiraditya, deadalnix, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D57921
  llvm-svn: 356382
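  As an illustration of the SelectionDAG idiom involved (a sketch, not the
  patch itself; X, Y, and the analyze() helper are hypothetical), HandleSDNode
  pins the original operands so that deleting the temporary node cannot
  invalidate them:

  ```
  // Hold transient uses of the original values: CSE or deletion during the
  // analysis below cannot recycle them while a HandleSDNode refers to them.
  HandleSDNode XHandle(X), YHandle(Y);
  SDValue Probe = DAG.getNode(ISD::SUB, DL, VT, X, Y); // built only for analysis
  bool Profitable = analyze(Probe);                    // hypothetical analysis
  if (Probe.getNode()->use_empty())
    DAG.RemoveDeadNode(Probe.getNode());               // drop the temporary node
  X = XHandle.getValue();                              // re-read the pinned values
  Y = YHandle.getValue();
  ```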
* [AMDGPU] Add an experimental buffer fat pointer address space.
  Neil Henning, 2019-03-18 (5 files, -20/+26)

  Add an experimental buffer fat pointer address space that is currently
  unhandled in the backend. This commit reserves address space 7 as a
  non-integral pointer representing the 160-bit fat pointer (128-bit buffer
  descriptor + 32-bit offset) that is heavily used in graphics workloads
  using the AMDGPU backend.

  Differential Revision: https://reviews.llvm.org/D58957
  llvm-svn: 356373
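  A minimal sketch (not from this commit; the helper name is made up) of how
  a pass could recognize the reserved address space, using the real
  DataLayout query for non-integral pointers:

  ```
  // Bail out on AMDGPU's experimental buffer fat pointers (address space 7),
  // which are declared non-integral and are unhandled in the backend so far.
  static bool isBufferFatPointer(const DataLayout &DL, PointerType *PTy) {
    unsigned AS = PTy->getAddressSpace();
    return AS == 7 && DL.isNonIntegralAddressSpace(AS);
  }
  ```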
* [InstCombine] allow general vector constants for funnel shift to shift transforms
  Sanjay Patel, 2019-03-18 (1 file, -20/+13)

  Follow-up to: rL356338, rL356369

  We can calculate an arbitrary vector constant minus the bitwidth, so
  there's no need to limit this transform to scalars and splats.

  llvm-svn: 356372
* [InstCombine] extend rotate-left-by-constant canonicalization to funnel shift
  Sanjay Patel, 2019-03-18 (1 file, -9/+10)

  Follow-up to: rL356338

  Rotates are a special case of funnel shift where the 2 input operands are
  the same value, but that does not need to be a restriction for the
  canonicalization when the shift amount is a constant.

  llvm-svn: 356369
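  A sketch of the canonical form these two patches converge on (IRBuilder
  based, assuming an InstCombine-like context): a rotate-left of X by a
  constant amount C is the funnel shift fshl(X, X, C).

  ```
  // Build the canonical rotate-left as a funnel shift. For a rotate, both
  // data operands are the same value; C may be a scalar or vector constant.
  static Value *emitRotateLeft(IRBuilder<> &B, Value *X, Value *C) {
    Module *M = B.GetInsertBlock()->getModule();
    Function *Fshl =
        Intrinsic::getDeclaration(M, Intrinsic::fshl, {X->getType()});
    return B.CreateCall(Fshl, {X, X, C});
  }
  ```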
* [DebugInfo] Ignore bitcasts when lowering stack arg dbg.values
  David Stenberg, 2019-03-18 (1 file, -2/+4)

  Summary:
  Look past bitcasts when looking for parameter debug values that are
  described by frame-index loads in EmitFuncArgumentDbgValue(). In the
  attached test case we would be left with an undef DBG_VALUE for the
  parameter without this patch.

  A similar fix was done for parameters passed in registers in D13005.

  This fixes PR40777.

  Reviewers: aprantl, vsk, jmorse
  Reviewed By: aprantl
  Subscribers: bjope, javed.absar, jdoerfert, llvm-commits
  Tags: #debug-info, #llvm
  Differential Revision: https://reviews.llvm.org/D58831
  llvm-svn: 356363
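  The shape of the fix, as a simplified sketch (the real logic lives in
  EmitFuncArgumentDbgValue; variable names here are illustrative): walk
  through ISD::BITCAST nodes before testing for a frame-index load.

  ```
  // Look through any number of bitcasts, then check whether the value is a
  // load whose address is a frame index describing the parameter's slot.
  SDValue N = ArgValue;
  while (N.getOpcode() == ISD::BITCAST)
    N = N.getOperand(0);
  if (auto *LNode = dyn_cast<LoadSDNode>(N.getNode()))
    if (auto *FINode =
            dyn_cast<FrameIndexSDNode>(LNode->getBasePtr().getNode())) {
      int FI = FINode->getIndex();
      (void)FI; // emit the DBG_VALUE against this frame index
    }
  ```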
* [AArch64] Fix bug 35094 atomicrmw on Armv8.1-A+lse
  Christof Douma, 2019-03-18 (1 file, -1/+46)

  Fixes https://bugs.llvm.org/show_bug.cgi?id=35094

  The Dead register definition pass should leave alone the atomicrmw
  instructions on AArch64 (LSE extension). The reason is the following
  statement in the Arm ARM:

    "The ST<OP> instructions, and LD<OP> instructions where the destination
    register is WZR or XZR, are not regarded as doing a read for the purpose
    of a DMB LD barrier."

  A good example was given in the gcc thread by Will Deacon (linked in the
  bugzilla ticket 35094):

    P0 (atomic_int* y, atomic_int* x) {
      atomic_store_explicit(x, 1, memory_order_relaxed);
      atomic_thread_fence(memory_order_release);
      atomic_store_explicit(y, 1, memory_order_relaxed);
    }

    P1 (atomic_int* y, atomic_int* x) {
      atomic_fetch_add_explicit(y, 1, memory_order_relaxed); // STADD
      atomic_thread_fence(memory_order_acquire);
      int r0 = atomic_load_explicit(x, memory_order_relaxed);
    }

    P2 (atomic_int* y) {
      int r1 = atomic_load_explicit(y, memory_order_relaxed);
    }

  My understanding is that it is forbidden for r0 == 0 and r1 == 2 after
  this test has executed. However, if the relaxed add in P1 compiles to
  STADD and the subsequent acquire fence is compiled as DMB LD, then we
  don't have any ordering guarantees in P1 and the forbidden result could
  be observed.

  Change-Id: I419f9f9df947716932038e1100c18d10a96408d0
  llvm-svn: 356360
* [X86] Hopefully fix a tautological compare warning in printVecCompareInstr.
  Craig Topper, 2019-03-18 (2 files, -2/+2)

  llvm-svn: 356359
* [X86] Make ADD*_DB post-RA pseudos and expand them in expandPostRAPseudo.
  Craig Topper, 2019-03-18 (4 files, -32/+27)

  These are used to help convert OR->LEA when needed to avoid a copy. They
  aren't needed after register allocation.

  Happens to remove an ugly goto from X86MCCodeEmitter.cpp.

  llvm-svn: 356356
* [X86] Add tab character to the custom printing of VPCMP and VPCOM instructions.
  Craig Topper, 2019-03-18 (2 files, -0/+4)

  All the other instructions are printed with a preceding tab.

  llvm-svn: 356355
* [X86] Merge printf32mem/printi32mem into a single printdwordmem. Do the same for all other printing functions.
  Craig Topper, 2019-03-17 (6 files, -97/+60)

  The only thing the print methods currently need to know is the string to
  print for the memory size in Intel syntax. This patch merges the
  functions based on this string. If we ever need something else in the
  future, it's easy to split them back out.

  This reduces the number of cases in the assembly printers. It shrinks the
  Intel printer to only use 7 bytes per instruction instead of 8.

  llvm-svn: 356352
* [CodeGen] Defined MVTs v3i32, v3f32, v5i32, v5f32
  Tim Renouf, 2019-03-17 (1 file, -0/+8)

  AMDGPU would like to use these MVTs.

  Differential Revision: https://reviews.llvm.org/D58901
  Change-Id: I6125fea810d7cc62a4b4de3d9904255a1233ae4e
  llvm-svn: 356351
* [CodeGen] Prepare for introduction of v3 and v5 MVTs
  Tim Renouf, 2019-03-17 (3 files, -2/+38)

  AMDGPU would like to have MVTs for v3i32, v3f32, v5i32, v5f32. This
  commit does not add them, but makes preparatory changes:

  * Exclude non-legal non-power-of-2 vector types from the
    ComputeRegisterProp mechanism in TargetLoweringBase::getTypeConversion.
  * Cope with SETCC and VSELECT for an odd-width i1 vector when the other
    vectors are a legal type.

  Some of this patch is from Matt Arsenault, also of AMD.

  Differential Revision: https://reviews.llvm.org/D58899
  Change-Id: Ib5f23377dbef511be3a936211a0b9f94e46331f8
  llvm-svn: 356350
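  For intuition, a small sketch (helper name hypothetical) of the widening
  these odd-width types get once they exist: a non-power-of-2 element count
  is rounded up to the next power of 2.

  ```
  // e.g. v3i32 -> v4i32, v5f32 -> v8f32.
  static MVT widenOddVector(MVT VT) {
    assert(VT.isVector() && !isPowerOf2_32(VT.getVectorNumElements()) &&
           "helper is only meant for odd-width vectors");
    unsigned WideElts = NextPowerOf2(VT.getVectorNumElements());
    return MVT::getVectorVT(VT.getVectorElementType(), WideElts);
  }
  ```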
* [ARM] Check that CPSR does not have other uses
  David Green, 2019-03-17 (1 file, -1/+5)

  Fix up rL356335 by checking that CPSR is not read between the compare and
  the branch.

  llvm-svn: 356349
* RegAllocFast: Add hint to debug printing
  Matt Arsenault, 2019-03-17 (1 file, -1/+2)

  llvm-svn: 356348
* AMDGPU: Partially fix default device for HSA
  Matt Arsenault, 2019-03-17 (4 files, -7/+14)

  There are a few different issues, mostly stemming from using generation
  based checks for anything instead of subtarget features.

  Stop adding flat-address-space as a feature for HSA, as it should only be
  a device property. This was incorrectly allowing flat instructions to
  select for SI.

  Increase the default generation for HSA to avoid the encoding error when
  emitting objects. This has some other side effects from various checks
  which probably should be separate subtarget features (in the cost model
  and for dealing with the DS offset folding issue).

  Partial fix for bug 41070. It should probably be an error to try using
  amdhsa without flat support.

  llvm-svn: 356347
* [ConstantRange] Add assertion for KnownBits validity; NFC
  Nikita Popov, 2019-03-17 (1 file, -0/+2)

  Following the suggestion in D59475.

  llvm-svn: 356346
* [ValueTracking] Use ConstantRange overflow check for signed add; NFC
  Nikita Popov, 2019-03-17 (1 file, -48/+8)

  This is the same change as rL356290, but for signed add. It replaces the
  existing ripple logic with the overflow logic in ConstantRange.

  This is NFC in that it should return NeverOverflow in exactly the same
  cases as the previous implementation. However, it does make
  computeOverflowForSignedAdd() more powerful by now also determining
  AlwaysOverflows conditions. As none of its consumers handle this yet,
  this has no impact on optimization. Making use of AlwaysOverflows in
  with.overflow folding will be handled as a followup.

  Differential Revision: https://reviews.llvm.org/D59450
  llvm-svn: 356345
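  A sketch of the new check (assuming KnownBits for both operands have
  already been computed; fromKnownBits and signedAddMayOverflow are the
  ConstantRange APIs referenced by this and the related commits):

  ```
  // Turn the known bits of each operand into a signed range, then ask
  // ConstantRange whether the addition can overflow.
  ConstantRange L = ConstantRange::fromKnownBits(LHSKnown, /*IsSigned=*/true);
  ConstantRange R = ConstantRange::fromKnownBits(RHSKnown, /*IsSigned=*/true);
  bool NeverOverflows = L.signedAddMayOverflow(R) ==
                        ConstantRange::OverflowResult::NeverOverflows;
  ```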
* [X86] Remove the _alt forms of AVX512 VPCMP instructions. Use a combination of custom printing and custom parsing to achieve the same result and more
  Craig Topper, 2019-03-17 (8 files, -222/+347)

  Similar to the previous patch for VPCOM.

  Differential Revision: https://reviews.llvm.org/D59398
  llvm-svn: 356344
* [X86] Remove the _alt forms of XOP VPCOM instructions. Use a combination of custom printing and custom parsing to achieve the same result and more
  Craig Topper, 2019-03-17 (10 files, -69/+149)

  Previously we had a regular form of the instruction used when the
  immediate was 0-7, and an _alt form that allowed the full 8-bit
  immediate. Codegen would always use the 0-7 form since the immediate was
  always checked to be in range. Assembly parsing would use the 0-7 form
  when a mnemonic like vpcomtrueb was used. If the immediate was specified
  directly, the _alt form was used. The disassembler would prefer to use
  the 0-7 form instruction when the immediate was in range and the _alt
  form otherwise. This way disassembly would print the most readable form
  when possible.

  The assembly parsing for things like vpcomtrueb relied on splitting the
  mnemonic into 3 pieces: a "vpcom" prefix, an immediate representing the
  "true", and a suffix of "b". The TableGen-generated printing code would
  similarly print a "vpcom" prefix, decode the immediate into a string, and
  then print "b". The _alt form, on the other hand, parsed and printed like
  any other instruction with no specialness.

  With this patch we drop to one form and solve the disassembly printing
  issue by doing custom printing when the immediate is 0-7. The parsing
  code has been tweaked to turn "vpcomtrueb" into "vpcomb", and then the
  immediate for the "true" is inserted either before or after the other
  operands depending on AT&T or Intel syntax.

  I'd rather not do the custom printing, but I tried using an InstAlias for
  each possible mnemonic for all 8 immediates for all 16 combinations of
  element size, signedness, and memory/register. The code emitted into
  printAliasInstr ended up checking the number of operands, the register
  class of each operand, and the immediate for all 256 aliases. This was
  repeated for both the AT&T and Intel printers. Despite a lot of common
  checks between all of the aliases, when compiled with clang at least,
  this commonality was not well optimized. Nor do all the checks seem
  necessary. Since I want to do a similar thing for vcmpps/pd/ss/sd, which
  have 32 immediate values and 3 encoding flavors, 3 register sizes, etc.,
  this didn't seem to scale well for clang binary size. So custom printing
  seemed a better trade-off.

  I also considered just using the InstAlias for the matching and not the
  printing. But that seemed like it would add a lot of extra rows to the
  matcher table, especially given that the 32 immediates for vcmpps have 46
  strings associated with them.

  Differential Revision: https://reviews.llvm.org/D59398
  llvm-svn: 356343
* [AMDGPU] Prepare for introduction of v3 and v5 MVTs
  Tim Renouf, 2019-03-17 (1 file, -3/+4)

  AMDGPU would like to have MVTs for v3i32, v3f32, v5i32, v5f32. This
  commit does not add them, but makes preparatory changes:

  * Fixed assumptions of power-of-2 vector type in kernel arg handling, and
    added v5 kernel arg tests and v3/v5 shader arg tests.
  * Added v5 tests for cost analysis.
  * Added vec3/vec5 arg test cases.

  Some of this patch is from Matt Arsenault, also of AMD.

  Differential Revision: https://reviews.llvm.org/D58928
  Change-Id: I7279d6b4841464d2080eb255ef3c589e268eabcd
  llvm-svn: 356342
* [ARM] Fixed an assumption of power-of-2 vector MVT
  Tim Renouf, 2019-03-17 (1 file, -6/+6)

  I am about to introduce some non-power-of-2 width vector MVTs. This
  commit fixes a power-of-2 assumption that my forthcoming change would
  otherwise break, as shown by test/CodeGen/ARM/vcvt_combine.ll and
  vdiv_combine.ll.

  Differential Revision: https://reviews.llvm.org/D58927
  Change-Id: I56a282e365d3874ab0621e5bdef98a612f702317
  llvm-svn: 356341
* [ConstantRange] Add fromKnownBits() method
  Nikita Popov, 2019-03-17 (2 files, -11/+27)

  Following the suggestion in D59450, I'm moving the code for constructing
  a ConstantRange from KnownBits out of ValueTracking, which also allows us
  to test this code independently.

  I'm adding this method to ConstantRange rather than KnownBits (which
  would have been a bit nicer API-wise) to avoid creating a dependency from
  Support to IR, where ConstantRange lives.

  Differential Revision: https://reviews.llvm.org/D59475
  llvm-svn: 356339
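  Usage sketch (computeKnownBits and fromKnownBits as in ValueTracking and
  ConstantRange; V and DL assumed to be in scope): the result is the
  smallest range consistent with the known bits, i.e. roughly
  [Known.One, ~Known.Zero] in the unsigned case.

  ```
  KnownBits Known = computeKnownBits(V, DL);
  ConstantRange CRU = ConstantRange::fromKnownBits(Known, /*IsSigned=*/false);
  ConstantRange CRS = ConstantRange::fromKnownBits(Known, /*IsSigned=*/true);
  ```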
* [InstCombine] canonicalize rotate right by constant to rotate left
  Sanjay Patel, 2019-03-17 (1 file, -7/+20)

  This was noted as a backend problem:
  https://bugs.llvm.org/show_bug.cgi?id=41057
  ...and subsequently fixed for x86:
  rL356121
  But we should canonicalize these in IR for the benefit of all targets and
  improve IR analysis such as CSE.

  llvm-svn: 356338
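  The identity behind the canonicalization, in plain C++ for a 32-bit value
  with C in 1..31 (a self-contained illustration, not the InstCombine code):

  ```
  #include <cstdint>
  uint32_t rotr32(uint32_t X, unsigned C) { return (X >> C) | (X << (32 - C)); }
  uint32_t rotl32(uint32_t X, unsigned C) { return (X << C) | (X >> (32 - C)); }
  // rotr32(X, C) == rotl32(X, 32 - C) for all X, so a rotate-right
  // fshr(X, X, C) can always be rewritten as fshl(X, X, BitWidth - C).
  ```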
* [ARM] Search backwards for CMP when combining into CBZ
  David Green, 2019-03-17 (1 file, -35/+59)

  The constant island pass currently only looks at the instruction
  immediately before a branch for a CMP to fold into a CBZ/CBNZ. This
  extends it to search backwards for the instruction that defines CPSR. We
  need to ensure that the register is not overridden between the CMP and
  the branch.

  Differential Revision: https://reviews.llvm.org/D59317
  llvm-svn: 356336
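  A sketch of the safety scan this and the preceding CPSR fix rely on
  (hypothetical helper; CmpMI and BrMI are assumed to be iterators in the
  same basic block):

  ```
  // The fold is only sound if nothing between the CMP and the branch reads
  // or redefines CPSR; otherwise the branch is not testing the CMP's flags.
  static bool cpsrUnusedBetween(MachineBasicBlock::iterator CmpMI,
                                MachineBasicBlock::iterator BrMI,
                                const TargetRegisterInfo *TRI) {
    for (auto I = std::next(CmpMI); I != BrMI; ++I)
      if (I->readsRegister(ARM::CPSR, TRI) ||
          I->modifiesRegister(ARM::CPSR, TRI))
        return false;
    return true;
  }
  ```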
* [DAGCombine] Fold (x & ~y) | y patterns
  Nikita Popov, 2019-03-17 (1 file, -0/+22)

  Fold (x & ~y) | y and its four commuted variants to x | y. This pattern
  can in particular appear when a vselect c, x, -1 is expanded to
  (x & ~c) | (-1 & c) and combined to (x & ~c) | c.

  This change has some overlap with D59066, which avoids creating a vselect
  of this form in the first place during uaddsat expansion.

  Differential Revision: https://reviews.llvm.org/D59174
  llvm-svn: 356333
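  The identity in plain C++ (self-contained, for illustration): where y has
  a 1 the OR forces a 1, and where y has a 0 the left side contributes
  exactly x's bit, so both functions compute the same value.

  ```
  #include <cstdint>
  uint32_t folded_from(uint32_t x, uint32_t y) { return (x & ~y) | y; }
  uint32_t folded_to(uint32_t x, uint32_t y) { return x | y; }
  ```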
* [TargetLowering] improve the default expansion of uaddsat/usubsat
  Sanjay Patel, 2019-03-17 (1 file, -0/+11)

  This is a subset of what was proposed in:
  D59006
  ...and may overlap with test changes from:
  D59174
  ...but it seems like a good general optimization to turn selects into
  bitwise-logic when possible because we never know exactly what can happen
  at this stage of DAG combining depending on how the target has defined
  things.

  Differential Revision: https://reviews.llvm.org/D59066
  llvm-svn: 356332
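  A scalar model of the select-into-logic idea (an illustration of the
  semantics, not a transcription of the DAG expansion): on unsigned
  overflow the mask is all-ones, so an OR/AND replaces the select.

  ```
  #include <cstdint>
  uint32_t uaddsat(uint32_t x, uint32_t y) {
    uint32_t sum = x + y;
    uint32_t mask = -(uint32_t)(sum < x); // all-ones on overflow, else 0
    return sum | mask;                    // saturates to UINT32_MAX
  }
  uint32_t usubsat(uint32_t x, uint32_t y) {
    uint32_t mask = -(uint32_t)(x >= y);  // all-ones when no underflow
    return (x - y) & mask;                // clamps to 0 on underflow
  }
  ```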
* [RISCV][NFC] Factor out matchRegisterNameHelper in RISCVAsmParser.cpp
  Alex Bradbury, 2019-03-17 (1 file, -11/+17)

  Contains common logic to match a string to a register name.

  llvm-svn: 356330
* [RISCV] Fix RISCVAsmParser::ParseRegister and add tests
  Alex Bradbury, 2019-03-17 (1 file, -5/+7)

  RISCVAsmParser::ParseRegister is called from
  AsmParser::parseRegisterOrNumber, which in turn is called when processing
  CFI directives. The RISC-V implementation wasn't setting RegNo, and so
  was incorrect. This patch addresses that and adds CFI directive tests
  that demonstrate the fix. A follow-up patch will factor out the register
  parsing logic shared between ParseRegister and parseRegister.

  llvm-svn: 356329
* [DAGCombine] combineShuffleOfScalars - handle non-zero SCALAR_TO_VECTOR indices (PR41097)
  Simon Pilgrim, 2019-03-16 (1 file, -2/+2)

  rL356292 reduces the size of scalar_to_vector if we know the upper bits
  are undef, which means that shuffles may find they are suddenly
  referencing scalar_to_vector elements other than zero, so make sure we
  handle this as undef.

  llvm-svn: 356327
* [BPF] Add BTF Var and DataSec Support
  Yonghong Song, 2019-03-16 (4 files, -41/+208)

  Two new kinds, BTF_KIND_VAR and BTF_KIND_DATASEC, are added.

  BTF_KIND_VAR has the following specification:
    btf_type.name: var name
    btf_type.info: type kind
    btf_type.type: var type
    // btf_type is followed by one u32
    u32: varinfo (currently, only 0 - static, 1 - global allocated in elf
         sections)

  Not all globals are supported in this patch. The following globals are
  supported:
    . static variables with or without section attributes
    . global variables with section attributes

  The inclusion of globals with section attributes is for future potential
  extraction of key/value type id's from map definitions.

  BTF_KIND_DATASEC has the following specification:
    btf_type.name: section name associated with variable or one of
                   .data/.bss/.readonly
    btf_type.info: type kind and vlen for # of variables
    btf_type.size: 0
    #vlen number of the following:
      u32: id of corresponding BTF_KIND_VAR
      u32: in-section offset of the var
      u32: the size of memory var occupied

  At the time of debug info emission, the data section size is unknown, so
  btf_type.size = 0 for BTF_KIND_DATASEC. The loader can patch it during
  loading time.

  The in-section offset of the var is only available for static variables.
  For global variables, the loader needs to assign the global variable
  symbol value in the symbol table to the in-section offset.

  The size of memory is used to specify the amount of memory a variable
  occupies. Typically, it equals the type size, but for certain structures,
  e.g.,
    struct tt { int a; int b; char c[]; };
    static volatile struct tt s2 = {3, 4, "abcdefghi"};
  the static variable s2 has a size of 20.

  Note that for the BTF_KIND_DATASEC name, the section name does not
  contain the object name. The compiler does have the input module name.
  For example, consider the two cases below:
    . clang -target bpf -O2 -g -c test.c
      The compiler knows the input file (module) is test.c and can generate
      sec names like test.data/test.bss etc.
    . clang -target bpf -O2 -g -emit-llvm -c test.c -o - | \
        llc -march=bpf -filetype=obj -o test.o
      The llc compiler has the input file as stdin, and would generate
      something like stdin.data/stdin.bss etc., which does not really make
      sense.

  For any user-specified section name, e.g.,
    static volatile int a __attribute__((section("id1")));
    static volatile const int b __attribute__((section("id2")));
  the DataSec with name "id1" or "id2" does not contain information on
  whether the section is readonly or not. The loader needs to check the
  corresponding elf section flags for such information.

  A simple example:
    -bash-4.4$ cat t.c
    int g1;
    int g2 = 3;
    const int g3 = 4;
    static volatile int s1;
    struct tt { int a; int b; char c[]; };
    static volatile struct tt s2 = {3, 4, "abcdefghi"};
    static volatile const int s3 = 4;
    int m __attribute__((section("maps"), used)) = 4;
    int test() { return g1 + g2 + g3 + s1 + s2.a + s3 + m; }
    -bash-4.4$ clang -target bpf -O2 -g -S t.c

  Checking t.s, 4 BTF_KIND_VAR's are generated (s1, s2, s3 and m), and 4
  BTF_KIND_DATASEC's are generated with names ".data", ".bss", ".rodata"
  and "maps".

  Signed-off-by: Yonghong Song <yhs@fb.com>
  Differential Revision: https://reviews.llvm.org/D59441
  llvm-svn: 356326
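  For reference, a sketch of the kernel-side layout these kinds serialize
  to (following the Linux uapi <linux/btf.h>, with uint32_t standing in for
  __u32; the field comments summarize the text above):

  ```
  #include <cstdint>
  struct btf_type {
    uint32_t name_off;   // offset of the name in the BTF string section
    uint32_t info;       // kind (VAR/DATASEC/...), kind_flag, vlen
    union {
      uint32_t size;     // BTF_KIND_DATASEC: section size, 0 until patched
      uint32_t type;     // BTF_KIND_VAR: type id of the variable
    };
  };
  // BTF_KIND_VAR is followed by one u32 (0 - static, 1 - global);
  // BTF_KIND_DATASEC is followed by vlen records of:
  struct btf_var_secinfo {
    uint32_t type;       // id of the corresponding BTF_KIND_VAR
    uint32_t offset;     // in-section offset of the variable
    uint32_t size;       // size of memory the variable occupies
  };
  ```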
* [X86][SSE] Constant fold PEXTRB/PEXTRW/EXTRACT_VECTOR_ELT nodes.
  Simon Pilgrim, 2019-03-16 (1 file, -17/+26)

  Replaces existing i1-only fold.

  llvm-svn: 356325
* [X86] Add SimplifyDemandedBitsForTargetNode support for PEXTRB/PEXTRW
  Simon Pilgrim, 2019-03-16 (1 file, -0/+28)

  Improved constant folding for PEXTRB/PEXTRW will be added in a future
  commit.

  llvm-svn: 356324
* [WebAssembly] Make rethrow take an except_ref type argument
  Heejin Ahn, 2019-03-16 (5 files, -38/+61)

  Summary:
  In the new wasm EH proposal, rethrow takes an except_ref argument. This
  change was missing in r352598.

  This patch adds the llvm.wasm.rethrow.in.catch intrinsic. This is an
  intrinsic that will eventually be lowered to the wasm rethrow
  instruction, but this intrinsic can appear only within a catchpad or a
  cleanuppad scope. Also this intrinsic needs to be invokable; otherwise
  the EH pad successor for it will not be correctly generated in clang.

  This also adds lowering logic for this intrinsic in
  SelectionDAGBuilder::visitInvoke. This routine is basically a specialized
  and simplified version of SelectionDAGBuilder::visitTargetIntrinsic, but
  we can't use it because it is only for CallInsts.

  This deletes the previous llvm.wasm.rethrow intrinsic and related tests,
  which was meant to be used within a __cxa_rethrow library function. It
  turned out this needs some more logic, so the intrinsic for this purpose
  will be added later.

  LateEHPrepare takes a result value of catch and inserts it into a
  matching rethrow as an argument.

  RETHROW_IN_CATCH is a pseudo instruction that serves as a link between
  llvm.wasm.rethrow.in.catch and the real wasm rethrow instruction. To
  generate a rethrow instruction, we need an except_ref argument, which is
  generated from a catch instruction. But catch instructions are added in
  the LateEHPrepare pass, so we use RETHROW_IN_CATCH, which takes no
  argument, until we are able to correctly lower it to rethrow in
  LateEHPrepare.

  Reviewers: dschuff
  Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D59352
  llvm-svn: 356316
* [WebAssembly] Method order change in LateEHPrepare (NFC)
  Heejin Ahn, 2019-03-16 (1 file, -38/+40)

  Summary:
  Currently the order of these methods does not matter, but the following
  CL needs to have this order changed. Merging the order change and the
  semantics change within a CL complicates the diff, so submitting the
  order change first.

  Reviewers: dschuff
  Subscribers: sbc100, jgravelle-google, sunfish, jdoerfert, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D59342
  llvm-svn: 356315
* [WebAssembly] Irreducible control flow rewrite
  Heejin Ahn, 2019-03-16 (1 file, -245/+280)

  Summary:
  Rewrite WebAssemblyFixIrreducibleControlFlow to a simpler and cleaner
  design, which directly computes reachability and other properties itself.
  This avoids previous complexity and bugs. (The new graph analyses are
  very similar to how the Relooper algorithm would find loop entries and so
  forth.)

  This fixes a few bugs, including where we had a false positive and
  thought fannkuch was irreducible when it was not, which made us much
  larger and slower there, and a reverse bug where we missed
  irreducibility. On fannkuch, we used to be 44% slower than asm2wasm and
  are now 4% faster.

  Reviewers: aheejin
  Subscribers: jdoerfert, mgrang, dschuff, sbc100, jgravelle-google,
  sunfish, llvm-commits
  Differential Revision: https://reviews.llvm.org/D58919
  Patch by Alon Zakai (kripken)
  llvm-svn: 356313
* [GlobalISel] Make isel verification checks of vregs run under NDEBUG only.
  Amara Emerson, 2019-03-16 (1 file, -4/+4)

  llvm-svn: 356309
* [TimePasses] allow -time-passes reporting into a custom stream
  Fedor Sergeev, 2019-03-15 (1 file, -1/+10)

  The TimePassesHandler object (the implementation of time-passes for the
  new pass manager) gains the ability to report into a stream customizable
  per instance (per pipeline).

  The intended use is to specify a separate time-passes output stream per
  compilation, setting up the TimePasses member of StandardInstrumentation
  during PassBuilder setup. That allows getting independent,
  non-overlapping pass-time reports for parallel independent compilations
  (in JIT-like setups). By default it still puts timing reports into the
  info-output-file stream (created by CreateInfoOutputFile every time a
  report is requested).

  A unit test was added for the non-default case, which also allowed us to
  discover that print() did not work as declared: it did not reset the
  timers, leading to yet another report being printed into the default
  stream. Fixed print() to actually reset timers according to what was
  declared in print's comments before.

  Reviewed By: philip.pfaffe
  Differential Revision: https://reviews.llvm.org/D59366
  llvm-svn: 356305
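  A usage sketch (TimePassesHandler, setOutStream, registerCallbacks, and
  print are the APIs this change touches; the surrounding setup is
  illustrative and abbreviated):

  ```
  // Route one pipeline's -time-passes report into an in-memory string.
  std::string Report;
  llvm::raw_string_ostream OS(Report);
  llvm::TimePassesHandler Timer(/*Enabled=*/true);
  Timer.setOutStream(OS);                 // per-instance stream, the new hook
  llvm::PassInstrumentationCallbacks PIC;
  Timer.registerCallbacks(PIC);
  // ... build a pass manager with PIC and run the pipeline ...
  Timer.print();                          // writes into OS and resets timers
  ```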
* [GlobalISel] Allow MachineIRBuilder to build subregister copies.
  Amara Emerson, 2019-03-15 (2 files, -43/+26)

  This relaxes some asserts about sizes, and adds an optional subreg
  parameter to buildCopy(). Also update the AArch64 instruction selector to
  use this in places where we previously used MachineInstrBuilder manually.

  Differential Revision: https://reviews.llvm.org/D59434
  llvm-svn: 356304
* [ARM] Add MachineVerifier logic for some Thumb1 instructions.
  Eli Friedman, 2019-03-15 (1 file, -0/+25)

  tMOVr and tPUSH/tPOP/tPOP_RET have register constraints which can't be
  expressed in TableGen, so check them explicitly. I've unfortunately run
  into issues with both of these recently; hopefully this saves some time
  for someone else in the future.

  Differential Revision: https://reviews.llvm.org/D59383
  llvm-svn: 356303
* [X86] X86ISelLowering::combineSextInRegCmov(): also handle i8 CMOV's
  Roman Lebedev, 2019-03-15 (1 file, -12/+34)

  Summary:
  As noted by @andreadb in https://reviews.llvm.org/D59035#inline-525780:
  if we have sext (trunc (cmov C0, C1) to i8), we can instead do
  cmov (sext (trunc C0 to i8)), (sext (trunc C1 to i8)).

  Reviewers: craig.topper, andreadb, RKSimon
  Reviewed By: craig.topper
  Subscribers: llvm-commits, andreadb
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D59412
  llvm-svn: 356301
* [X86] Promote i8 CMOV's (PR40965)
  Roman Lebedev, 2019-03-15 (1 file, -2/+9)

  Summary:
  @mclow.lists brought this issue up in IRC; it came up during the
  implementation of the libc++ std::midpoint() implementation (D59099).
  https://godbolt.org/z/oLrHBP

  Currently the LLVM X86 backend only promotes i8 CMOV if it came from
  2x trunc. This differential proposes to always promote i8 CMOV.

  There are several concerns here:
  * Is this actually more performant, or is it just the ASM that looks cuter?
  * Does this result in partial register stalls?
  * What about the branch predictor?

  # Indeed, performance should be the main point here. Let's look at a
  simple microbenchmark: {F8412076}

  ```
  #include "benchmark/benchmark.h"

  #include <algorithm>
  #include <cmath>
  #include <cstdint>
  #include <iterator>
  #include <limits>
  #include <random>
  #include <type_traits>
  #include <utility>
  #include <vector>

  // Future preliminary libc++ code, from Marshall Clow.
  namespace std {
  template <class _Tp>
  __inline _Tp midpoint(_Tp __a, _Tp __b) noexcept {
    using _Up = typename std::make_unsigned<typename remove_cv<_Tp>::type>::type;
    int __sign = 1;
    _Up __m = __a;
    _Up __M = __b;
    if (__a > __b) {
      __sign = -1;
      __m = __b;
      __M = __a;
    }
    return __a + __sign * _Tp(_Up(__M - __m) >> 1);
  }
  } // namespace std

  template <typename T>
  std::vector<T> getVectorOfRandomNumbers(size_t count) {
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<T> dis(std::numeric_limits<T>::min(),
                                         std::numeric_limits<T>::max());
    std::vector<T> v;
    v.reserve(count);
    std::generate_n(std::back_inserter(v), count,
                    [&dis, &gen]() { return dis(gen); });
    assert(v.size() == count);
    return v;
  }

  struct RandRand {
    template <typename T>
    static std::pair<std::vector<T>, std::vector<T>> Gen(size_t count) {
      return std::make_pair(getVectorOfRandomNumbers<T>(count),
                            getVectorOfRandomNumbers<T>(count));
    }
  };
  struct ZeroRand {
    template <typename T>
    static std::pair<std::vector<T>, std::vector<T>> Gen(size_t count) {
      return std::make_pair(std::vector<T>(count, T(0)),
                            getVectorOfRandomNumbers<T>(count));
    }
  };

  template <class T, class Gen>
  void BM_StdMidpoint(benchmark::State& state) {
    const size_t Length = state.range(0);
    const std::pair<std::vector<T>, std::vector<T>> Data =
        Gen::template Gen<T>(Length);
    const std::vector<T>& a = Data.first;
    const std::vector<T>& b = Data.second;
    assert(a.size() == Length && b.size() == a.size());
    benchmark::ClobberMemory();
    benchmark::DoNotOptimize(a);
    benchmark::DoNotOptimize(a.data());
    benchmark::DoNotOptimize(b);
    benchmark::DoNotOptimize(b.data());
    for (auto _ : state) {
      for (size_t i = 0; i < Length; i++) {
        const auto calculated = std::midpoint(a[i], b[i]);
        benchmark::DoNotOptimize(calculated);
      }
    }
    state.SetComplexityN(Length);
    state.counters["midpoints"] =
        benchmark::Counter(Length, benchmark::Counter::kIsIterationInvariant);
    state.counters["midpoints/sec"] =
        benchmark::Counter(Length, benchmark::Counter::kIsIterationInvariantRate);
    const size_t BytesRead = 2 * sizeof(T) * Length;
    state.counters["bytes_read/iteration"] =
        benchmark::Counter(BytesRead, benchmark::Counter::kDefaults,
                           benchmark::Counter::OneK::kIs1024);
    state.counters["bytes_read/sec"] = benchmark::Counter(
        BytesRead, benchmark::Counter::kIsIterationInvariantRate,
        benchmark::Counter::OneK::kIs1024);
  }

  template <typename T>
  static void CustomArguments(benchmark::internal::Benchmark* b) {
    const size_t L2SizeBytes = 2 * 1024 * 1024;
    // What is the largest range we can check to always fit within given L2 cache?
    const size_t MaxLen = L2SizeBytes / /*total bufs*/ 2 /
                          /*maximal elt size*/ sizeof(T) / /*safety margin*/ 2;
    b->RangeMultiplier(2)->Range(1, MaxLen)->Complexity(benchmark::oN);
  }

  // Both of the values are random.
  // The comparison is unpredictable.
  BENCHMARK_TEMPLATE(BM_StdMidpoint, int32_t, RandRand)
      ->Apply(CustomArguments<int32_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint32_t, RandRand)
      ->Apply(CustomArguments<uint32_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, int64_t, RandRand)
      ->Apply(CustomArguments<int64_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint64_t, RandRand)
      ->Apply(CustomArguments<uint64_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, int16_t, RandRand)
      ->Apply(CustomArguments<int16_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint16_t, RandRand)
      ->Apply(CustomArguments<uint16_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, int8_t, RandRand)
      ->Apply(CustomArguments<int8_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint8_t, RandRand)
      ->Apply(CustomArguments<uint8_t>);

  // One value is always zero, and another is bigger or equal than zero.
  // The comparison is predictable.
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint32_t, ZeroRand)
      ->Apply(CustomArguments<uint32_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint64_t, ZeroRand)
      ->Apply(CustomArguments<uint64_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint16_t, ZeroRand)
      ->Apply(CustomArguments<uint16_t>);
  BENCHMARK_TEMPLATE(BM_StdMidpoint, uint8_t, ZeroRand)
      ->Apply(CustomArguments<uint8_t>);
  ```

  ```
  $ ~/src/googlebenchmark/tools/compare.py --no-utest benchmarks ./llvm-cmov-bench-OLD ./llvm-cmov-bench-NEW
  RUNNING: ./llvm-cmov-bench-OLD --benchmark_out=/tmp/tmp5a5qjm
  2019-03-06 21:53:31
  Running ./llvm-cmov-bench-OLD
  Run on (8 X 4000 MHz CPU s)
  CPU Caches:
    L1 Data 16K (x8)
    L1 Instruction 64K (x4)
    L2 Unified 2048K (x4)
    L3 Unified 8192K (x1)
  Load Average: 1.78, 1.81, 1.36
  ---------------------------------------------------------------------------------------------
  Benchmark                                      Time        CPU  Iterations UserCounters<...>
  ---------------------------------------------------------------------------------------------
  <...>
  BM_StdMidpoint<int32_t, RandRand>/131072     300398 ns  300404 ns      2330 bytes_read/iteration=1024k bytes_read/sec=3.25083G/s midpoints=305.398M midpoints/sec=436.319M/s
  BM_StdMidpoint<int32_t, RandRand>_BigO         2.29 N     2.29 N
  BM_StdMidpoint<int32_t, RandRand>_RMS             2 %        2 %
  <...>
  BM_StdMidpoint<uint32_t, RandRand>/131072    300433 ns  300433 ns      2330 bytes_read/iteration=1024k bytes_read/sec=3.25052G/s midpoints=305.398M midpoints/sec=436.278M/s
  BM_StdMidpoint<uint32_t, RandRand>_BigO        2.29 N     2.29 N
  BM_StdMidpoint<uint32_t, RandRand>_RMS            2 %        2 %
  <...>
  BM_StdMidpoint<int64_t, RandRand>/65536      169857 ns  169858 ns      4121 bytes_read/iteration=1024k bytes_read/sec=5.74929G/s midpoints=270.074M midpoints/sec=385.828M/s
  BM_StdMidpoint<int64_t, RandRand>_BigO         2.59 N     2.59 N
  BM_StdMidpoint<int64_t, RandRand>_RMS             3 %        3 %
  <...>
  BM_StdMidpoint<uint64_t, RandRand>/65536     169770 ns  169771 ns      4125 bytes_read/iteration=1024k bytes_read/sec=5.75223G/s midpoints=270.336M midpoints/sec=386.026M/s
  BM_StdMidpoint<uint64_t, RandRand>_BigO        2.59 N     2.59 N
  BM_StdMidpoint<uint64_t, RandRand>_RMS            3 %        3 %
  <...>
  BM_StdMidpoint<int16_t, RandRand>/262144     591169 ns  591179 ns      1182 bytes_read/iteration=1024k bytes_read/sec=1.65189G/s midpoints=309.854M midpoints/sec=443.426M/s
  BM_StdMidpoint<int16_t, RandRand>_BigO         2.25 N     2.25 N
  BM_StdMidpoint<int16_t, RandRand>_RMS             1 %        1 %
  <...>
  BM_StdMidpoint<uint16_t, RandRand>/262144    591264 ns  591274 ns      1184 bytes_read/iteration=1024k bytes_read/sec=1.65162G/s midpoints=310.378M midpoints/sec=443.354M/s
  BM_StdMidpoint<uint16_t, RandRand>_BigO        2.25 N     2.25 N
  BM_StdMidpoint<uint16_t, RandRand>_RMS            1 %        1 %
  <...>
  BM_StdMidpoint<int8_t, RandRand>/524288     2983669 ns 2983689 ns       235 bytes_read/iteration=1024k bytes_read/sec=335.156M/s midpoints=123.208M midpoints/sec=175.718M/s
  BM_StdMidpoint<int8_t, RandRand>_BigO          5.69 N     5.69 N
  BM_StdMidpoint<int8_t, RandRand>_RMS              0 %        0 %
  <...>
  BM_StdMidpoint<uint8_t, RandRand>/524288    2668398 ns 2668419 ns       262 bytes_read/iteration=1024k bytes_read/sec=374.754M/s midpoints=137.363M midpoints/sec=196.479M/s
  BM_StdMidpoint<uint8_t, RandRand>_BigO         5.09 N     5.09 N
  BM_StdMidpoint<uint8_t, RandRand>_RMS             0 %        0 %
  <...>
  BM_StdMidpoint<uint32_t, ZeroRand>/131072    300887 ns  300887 ns      2331 bytes_read/iteration=1024k bytes_read/sec=3.24561G/s midpoints=305.529M midpoints/sec=435.619M/s
  BM_StdMidpoint<uint32_t, ZeroRand>_BigO        2.29 N     2.29 N
  BM_StdMidpoint<uint32_t, ZeroRand>_RMS            2 %        2 %
  <...>
  BM_StdMidpoint<uint64_t, ZeroRand>/65536     169634 ns  169634 ns      4102 bytes_read/iteration=1024k bytes_read/sec=5.75688G/s midpoints=268.829M midpoints/sec=386.338M/s
  BM_StdMidpoint<uint64_t, ZeroRand>_BigO        2.59 N     2.59 N
  BM_StdMidpoint<uint64_t, ZeroRand>_RMS            3 %        3 %
  <...>
  BM_StdMidpoint<uint16_t, ZeroRand>/262144    592252 ns  592255 ns      1182 bytes_read/iteration=1024k bytes_read/sec=1.64889G/s midpoints=309.854M midpoints/sec=442.62M/s
  BM_StdMidpoint<uint16_t, ZeroRand>_BigO        2.26 N     2.26 N
  BM_StdMidpoint<uint16_t, ZeroRand>_RMS            1 %        1 %
  <...>
  BM_StdMidpoint<uint8_t, ZeroRand>/524288     987295 ns  987309 ns       711 bytes_read/iteration=1024k bytes_read/sec=1012.85M/s midpoints=372.769M midpoints/sec=531.028M/s
  BM_StdMidpoint<uint8_t, ZeroRand>_BigO         1.88 N     1.88 N
  BM_StdMidpoint<uint8_t, ZeroRand>_RMS             1 %        1 %
  RUNNING: ./llvm-cmov-bench-NEW --benchmark_out=/tmp/tmpPvwpfW
  2019-03-06 21:56:58
  Running ./llvm-cmov-bench-NEW
  Run on (8 X 4000 MHz CPU s)
  CPU Caches:
    L1 Data 16K (x8)
    L1 Instruction 64K (x4)
    L2 Unified 2048K (x4)
    L3 Unified 8192K (x1)
  Load Average: 1.17, 1.46, 1.30
  ---------------------------------------------------------------------------------------------
  Benchmark                                      Time        CPU  Iterations UserCounters<...>
  ---------------------------------------------------------------------------------------------
  <...>
  BM_StdMidpoint<int32_t, RandRand>/131072     300878 ns  300880 ns      2324 bytes_read/iteration=1024k bytes_read/sec=3.24569G/s midpoints=304.611M midpoints/sec=435.629M/s
  BM_StdMidpoint<int32_t, RandRand>_BigO         2.29 N     2.29 N
  BM_StdMidpoint<int32_t, RandRand>_RMS             2 %        2 %
  <...>
  BM_StdMidpoint<uint32_t, RandRand>/131072    300231 ns  300226 ns      2330 bytes_read/iteration=1024k bytes_read/sec=3.25276G/s midpoints=305.398M midpoints/sec=436.578M/s
  BM_StdMidpoint<uint32_t, RandRand>_BigO        2.29 N     2.29 N
  BM_StdMidpoint<uint32_t, RandRand>_RMS            2 %        2 %
  <...>
  BM_StdMidpoint<int64_t, RandRand>/65536      170819 ns  170777 ns      4115 bytes_read/iteration=1024k bytes_read/sec=5.71835G/s midpoints=269.681M midpoints/sec=383.752M/s
  BM_StdMidpoint<int64_t, RandRand>_BigO         2.60 N     2.60 N
  BM_StdMidpoint<int64_t, RandRand>_RMS             3 %        3 %
  <...>
  BM_StdMidpoint<uint64_t, RandRand>/65536     171705 ns  171708 ns      4106 bytes_read/iteration=1024k bytes_read/sec=5.68733G/s midpoints=269.091M midpoints/sec=381.671M/s
  BM_StdMidpoint<uint64_t, RandRand>_BigO        2.62 N     2.62 N
  BM_StdMidpoint<uint64_t, RandRand>_RMS            3 %        3 %
  <...>
  BM_StdMidpoint<int16_t, RandRand>/262144     592510 ns  592516 ns      1182 bytes_read/iteration=1024k bytes_read/sec=1.64816G/s midpoints=309.854M midpoints/sec=442.425M/s
  BM_StdMidpoint<int16_t, RandRand>_BigO         2.26 N     2.26 N
  BM_StdMidpoint<int16_t, RandRand>_RMS             1 %        1 %
  <...>
  BM_StdMidpoint<uint16_t, RandRand>/262144    614823 ns  614823 ns      1180 bytes_read/iteration=1024k bytes_read/sec=1.58836G/s midpoints=309.33M midpoints/sec=426.373M/s
  BM_StdMidpoint<uint16_t, RandRand>_BigO        2.33 N     2.33 N
  BM_StdMidpoint<uint16_t, RandRand>_RMS            4 %        4 %
  <...>
  BM_StdMidpoint<int8_t, RandRand>/524288     1073181 ns 1073201 ns       650 bytes_read/iteration=1024k bytes_read/sec=931.791M/s midpoints=340.787M midpoints/sec=488.527M/s
  BM_StdMidpoint<int8_t, RandRand>_BigO          2.05 N     2.05 N
  BM_StdMidpoint<int8_t, RandRand>_RMS              1 %        1 %
  BM_StdMidpoint<uint8_t, RandRand>/524288    1071010 ns 1071020 ns       653 bytes_read/iteration=1024k bytes_read/sec=933.689M/s midpoints=342.36M midpoints/sec=489.522M/s
  BM_StdMidpoint<uint8_t, RandRand>_BigO         2.05 N     2.05 N
  BM_StdMidpoint<uint8_t, RandRand>_RMS             1 %        1 %
  <...>
  BM_StdMidpoint<uint32_t, ZeroRand>/131072    300413 ns  300416 ns      2330 bytes_read/iteration=1024k bytes_read/sec=3.2507G/s midpoints=305.398M midpoints/sec=436.302M/s
  BM_StdMidpoint<uint32_t, ZeroRand>_BigO        2.29 N     2.29 N
  BM_StdMidpoint<uint32_t, ZeroRand>_RMS            2 %        2 %
  <...>
  BM_StdMidpoint<uint64_t, ZeroRand>/65536     169667 ns  169669 ns      4123 bytes_read/iteration=1024k bytes_read/sec=5.75568G/s midpoints=270.205M midpoints/sec=386.257M/s
  BM_StdMidpoint<uint64_t, ZeroRand>_BigO        2.59 N     2.59 N
  BM_StdMidpoint<uint64_t, ZeroRand>_RMS            3 %        3 %
  <...>
  BM_StdMidpoint<uint16_t, ZeroRand>/262144    591396 ns  591404 ns      1184 bytes_read/iteration=1024k bytes_read/sec=1.65126G/s midpoints=310.378M midpoints/sec=443.257M/s
  BM_StdMidpoint<uint16_t, ZeroRand>_BigO        2.26 N     2.26 N
  BM_StdMidpoint<uint16_t, ZeroRand>_RMS            1 %        1 %
  <...>
  BM_StdMidpoint<uint8_t, ZeroRand>/524288    1069421 ns 1069413 ns       655 bytes_read/iteration=1024k bytes_read/sec=935.092M/s midpoints=343.409M midpoints/sec=490.258M/s
  BM_StdMidpoint<uint8_t, ZeroRand>_BigO         2.04 N     2.04 N
  BM_StdMidpoint<uint8_t, ZeroRand>_RMS             0 %        0 %
  Comparing ./llvm-cmov-bench-OLD to ./llvm-cmov-bench-NEW
  Benchmark                                      Time       CPU  Time Old  Time New  CPU Old  CPU New
  ----------------------------------------------------------------------------------------------------
  <...>
  BM_StdMidpoint<int32_t, RandRand>/131072    +0.0016   +0.0016    300398    300878   300404   300880
  <...>
  BM_StdMidpoint<uint32_t, RandRand>/131072   -0.0007   -0.0007    300433    300231   300433   300226
  <...>
  BM_StdMidpoint<int64_t, RandRand>/65536     +0.0057   +0.0054    169857    170819   169858   170777
  <...>
  BM_StdMidpoint<uint64_t, RandRand>/65536    +0.0114   +0.0114    169770    171705   169771   171708
  <...>
  BM_StdMidpoint<int16_t, RandRand>/262144    +0.0023   +0.0023    591169    592510   591179   592516
  <...>
  BM_StdMidpoint<uint16_t, RandRand>/262144   +0.0398   +0.0398    591264    614823   591274   614823
  <...>
  BM_StdMidpoint<int8_t, RandRand>/524288     -0.6403   -0.6403   2983669   1073181  2983689  1073201
  <...>
  BM_StdMidpoint<uint8_t, RandRand>/524288    -0.5986   -0.5986   2668398   1071010  2668419  1071020
  <...>
  BM_StdMidpoint<uint32_t, ZeroRand>/131072   -0.0016   -0.0016    300887    300413   300887   300416
  <...>
  BM_StdMidpoint<uint64_t, ZeroRand>/65536    +0.0002   +0.0002    169634    169667   169634   169669
  <...>
  BM_StdMidpoint<uint16_t, ZeroRand>/262144   -0.0014   -0.0014    592252    591396   592255   591404
  <...>
  BM_StdMidpoint<uint8_t, ZeroRand>/524288    +0.0832   +0.0832    987295   1069421   987309  1069413
  ```

  What can we tell from the benchmark?
  * BM_StdMidpoint<[u]int8_t, RandRand> indeed has the worst performance.
  * All BM_StdMidpoint<uint{8,16,32}_t, ZeroRand> are performant, even the
    8-bit case. That is because there we are computing the midpoint between
    zero and some random number, thus if the branch predictor is in use, it
    is in an optimal situation.
  * Promoting 8-bit CMOV did improve the performance of
    BM_StdMidpoint<[u]int8_t, RandRand>, by -59%..-64%.

  # What about the branch predictor?
  * BM_StdMidpoint<uint8_t, ZeroRand> was faster than
    BM_StdMidpoint<uint{16,32,64}_t, ZeroRand>, which may mean that a
    well-predicted branch is better than cmov.
  * Promoting 8-bit CMOV degraded the performance of
    BM_StdMidpoint<uint8_t, ZeroRand>; cmov is up to +10% worse than a
    well-predicted branch.
  * However, I do not believe this is a concern. If the branch is well
    predicted, then PGO will also say that it is well predicted, and LLVM
    will happily expand cmov back into a branch: https://godbolt.org/z/P5ufig

  # What about partial register stalls?
  I'm not really able to answer that. What I can say is that if the branch
  is unpredictable (if it is predictable, then use PGO and you'll have a
  branch), in ~50% of cases you will have to pay the branch misprediction
  penalty:

  ```
  $ grep -i MispredictPenalty X86Sched*.td
  X86SchedBroadwell.td:     let MispredictPenalty = 16;
  X86SchedHaswell.td:       let MispredictPenalty = 16;
  X86SchedSandyBridge.td:   let MispredictPenalty = 16;
  X86SchedSkylakeClient.td: let MispredictPenalty = 14;
  X86SchedSkylakeServer.td: let MispredictPenalty = 14;
  X86ScheduleBdVer2.td:     let MispredictPenalty = 20; // Minimum branch misdirection penalty.
  X86ScheduleBtVer2.td:     let MispredictPenalty = 14; // Minimum branch misdirection penalty
  X86ScheduleSLM.td:        let MispredictPenalty = 10;
  X86ScheduleZnver1.td:     let MispredictPenalty = 17;
  ```

  ...which can be as small as 10 cycles and as large as 20 cycles. Partial
  register stalls do not seem to be an issue for AMD CPUs. For Intel CPUs,
  they should be around ~5 cycles? Is that actually an issue here? I'm not
  sure.

  In short, I'd say this is an improvement, at least on this
  microbenchmark.

  Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=40965 | PR40965 ]].

  Reviewers: craig.topper, RKSimon, spatel, andreadb, nikic
  Reviewed By: craig.topper, andreadb
  Subscribers: jfb, jdoerfert, llvm-commits, mclow.lists
  Tags: #llvm, #libc
  Differential Revision: https://reviews.llvm.org/D59035
  llvm-svn: 356300
* [AArch64] Turn BIC immediate creation into a DAG combine
  Nikita Popov, 2019-03-15 (2 files, -44/+44)

  Switch BIC immediate creation for vector ANDs from custom lowering to a
  DAG combine, which gives generic DAG combines a chance to apply first. In
  particular this avoids (and x, -1) being turned into a (bic x, 0) instead
  of being eliminated entirely.

  Differential Revision: https://reviews.llvm.org/D59187
  llvm-svn: 356299
* AMDGPU: Fix a SIAnnotateControlFlow issue when there are multiple backedges.
  Changpeng Fang, 2019-03-15 (1 file, -2/+11)

  Summary:
  At the exit of the loop, the compiler uses a register to remember and
  accumulate the number of threads that have already exited. When all
  active threads exit the loop, this register is used to restore the exec
  mask, and the execution continues for the post-loop code.

  When there is a "continue" in the loop, the compiler mistakenly reset the
  register to 0 when the "continue" backedge was taken. This would result
  in some threads not executing the post-loop code as they are supposed to.
  This patch fixes the issue.

  Reviewers: nhaehnle, arsenm
  Differential Revision: https://reviews.llvm.org/D59312
  llvm-svn: 356298
* [X86] Strip the SAE bit from the rounding mode passed to the _RND opcodes. Use TargetConstant to save a conversion in the isel table.
  Craig Topper, 2019-03-15 (3 files, -66/+88)

  The asm parser generates the immediate without the SAE bit, so for
  consistency we should generate the MCInst the same way from CodeGen.
  Since they are now both the same, remove the masking from the printer and
  replace it with an llvm_unreachable.

  Use a target constant since we're rebuilding the node anyway. Then we
  don't have to have isel convert it. Saves about 500 bytes from the isel
  table.

  llvm-svn: 356294
* [SimplifyDemandedVec] Strengthen handling of all-undef lanes (particularly GEPs)
  Philip Reames, 2019-03-15 (3 files, -5/+31)

  A change of two parts:
  1) A generic enhancement for all callers of SimplifyDemandedVectorElts
     (SDVE) to exploit the fact that if all lanes are undef, the result is
     undef.
  2) A GEP-specific piece to strengthen/fix the vector index undef element
     handling, and call into the generic infrastructure when visiting the
     GEP.

  The result is that we replace a vector GEP with at least one undef in
  each lane with undef. We can also do the same for vector intrinsics.
  Once the masked.load patch (D57372) has landed, I'll update to include
  call tests as well.

  Differential Revision: https://reviews.llvm.org/D57468
  llvm-svn: 356293
* [X86][SSE] Fold scalar_to_vector(i64 anyext(x)) -> bitcast(scalar_to_vector(i32 anyext(x)))
  Simon Pilgrim, 2019-03-15 (1 file, -3/+13)

  Reduce the size of an any-extended i64 scalar_to_vector source to i32;
  the any_extend nodes are often introduced by SimplifyDemandedBits.

  llvm-svn: 356292
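  A DAG-combine-style sketch of the fold (simplified; N0, DL, and DAG as
  conventionally named in X86ISelLowering combines):

  ```
  // (v2i64 scalar_to_vector (i64 anyext X:i32))
  //   -> (v2i64 bitcast (v4i32 scalar_to_vector X))
  // The upper bits are undefined either way, so the narrower form is legal.
  if (VT == MVT::v2i64 && N0.getOpcode() == ISD::ANY_EXTEND &&
      N0.getOperand(0).getValueType() == MVT::i32) {
    SDValue V = DAG.getNode(ISD::SCALAR_TO_VECTOR, DL, MVT::v4i32,
                            N0.getOperand(0));
    return DAG.getBitcast(MVT::v2i64, V);
  }
  ```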
* [ValueTracking] Use ConstantRange overflow checks for unsigned add/sub; NFC
  Nikita Popov, 2019-03-15 (1 file, -20/+26)

  Use the methods introduced in rL356276 to implement the
  computeOverflowForUnsigned(Add|Sub) functions in ValueTracking, by
  converting the KnownBits into a ConstantRange.

  This is NFC: the existing KnownBits-based implementation uses the same
  logic as the ConstantRange-based one. This is not the case for the signed
  equivalents, so I'm only changing unsigned here.

  This is in preparation for D59386, which will also intersect the
  computeConstantRange() result into the range determined from KnownBits.

  llvm-svn: 356290
* [AArch64][GlobalISel] Regbankselect: Fix G_BUILD_VECTOR trying to use s16 gpr sources.
  Amara Emerson, 2019-03-15 (1 file, -2/+5)

  Since we can't insert s16 gprs as we don't have 16-bit GPR registers, we
  need to teach RBS to assign them to the FPR bank so our selector works.

  llvm-svn: 356282
* [X86][GlobalISEL] Support lowering aligned unordered atomics
  Philip Reames, 2019-03-15 (1 file, -3/+15)

  The existing lowering code is accidentally correct for unordered atomics
  as far as I can tell. An unordered atomic has no memory ordering, and
  simply requires the actual load or store to be done as a single
  well-aligned instruction. As such, relax the restriction while adding
  tests to ensure the lowering remains correct in the future.

  Differential Revision: https://reviews.llvm.org/D57803
  llvm-svn: 356280
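  A sketch of the relaxed predicate (hypothetical helper; the patch itself
  adjusts the GlobalISel legality checks): an unordered atomic load may
  take the ordinary lowering path when it is naturally aligned.

  ```
  static bool canUseNormalLowering(const LoadInst &LI, const DataLayout &DL) {
    if (!LI.isAtomic())
      return true;                         // plain loads always qualify
    if (LI.getOrdering() != AtomicOrdering::Unordered)
      return false;                        // stronger orderings need more care
    // "Naturally aligned": the alignment covers the full store size.
    return LI.getAlignment() >= DL.getTypeStoreSize(LI.getType());
  }
  ```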