path: root/llvm/lib
Commit message | Author | Age | Files | Lines
...
* [ARM] Fixed an assumption of power-of-2 vector MVT | Tim Renouf | 2019-03-17 | 1 | -6/+6
  I am about to introduce some non-power-of-2 width vector MVTs. This commit fixes a power-of-2 assumption that my forthcoming change would otherwise break, as shown by test/CodeGen/ARM/vcvt_combine.ll and vdiv_combine.ll. Differential Revision: https://reviews.llvm.org/D58927 Change-Id: I56a282e365d3874ab0621e5bdef98a612f702317 llvm-svn: 356341
* [ConstantRange] Add fromKnownBits() method | Nikita Popov | 2019-03-17 | 2 | -11/+27
  Following the suggestion in D59450, I'm moving the code for constructing a ConstantRange from KnownBits out of ValueTracking, which also allows us to test this code independently. I'm adding this method to ConstantRange rather than KnownBits (which would have been a bit nicer API wise) to avoid creating a dependency from Support to IR, where ConstantRange lives. Differential Revision: https://reviews.llvm.org/D59475 llvm-svn: 356339
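For intuition, the unsigned half of the conversion is small: bits known to be one give the smallest consistent value, and anything not known to be zero may be one, giving the largest. A minimal stand-alone sketch of that idea (an 8-bit model with made-up names, not the actual ConstantRange implementation):

```
#include <cassert>
#include <cstdint>

// Hypothetical 8-bit model of building an unsigned range from known bits:
// KnownZero/KnownOne must not conflict, and every consistent value lies in
// [KnownOne, ~KnownZero].
struct KnownBits8 {
  uint8_t Zero; // bits known to be 0
  uint8_t One;  // bits known to be 1
};

struct Range8 {
  uint8_t Lo; // inclusive
  uint8_t Hi; // inclusive
};

Range8 fromKnownBitsUnsigned(KnownBits8 K) {
  assert((K.Zero & K.One) == 0 && "conflicting known bits");
  uint8_t Lo = K.One;                          // all unknown bits set to 0
  uint8_t Hi = static_cast<uint8_t>(~K.Zero);  // all unknown bits set to 1
  return {Lo, Hi};
}

int main() {
  // Low bit known 1, top bit known 0 -> values in [0x01, 0x7F].
  Range8 R = fromKnownBitsUnsigned({0x80, 0x01});
  assert(R.Lo == 0x01 && R.Hi == 0x7F);
}
```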
* [InstCombine] canonicalize rotate right by constant to rotate left | Sanjay Patel | 2019-03-17 | 1 | -7/+20
  This was noted as a backend problem: https://bugs.llvm.org/show_bug.cgi?id=41057 ...and subsequently fixed for x86: rL356121 But we should canonicalize these in IR for the benefit of all targets and improve IR analysis such as CSE. llvm-svn: 356338
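The canonicalization rests on the identity rotr(x, C) == rotl(x, W - C) for a W-bit value; a self-contained check of that identity (plain C++, unrelated to the InstCombine code itself):

```
#include <cassert>
#include <cstdint>

// rotr(x, C) == rotl(x, 32 - C) for 0 < C < 32.
uint32_t rotl32(uint32_t X, unsigned S) { return (X << S) | (X >> (32 - S)); }
uint32_t rotr32(uint32_t X, unsigned S) { return (X >> S) | (X << (32 - S)); }

int main() {
  uint32_t X = 0xDEADBEEF;
  for (unsigned C = 1; C < 32; ++C)
    assert(rotr32(X, C) == rotl32(X, 32 - C));
}
```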
* [ARM] Search backwards for CMP when combining into CBZ | David Green | 2019-03-17 | 1 | -35/+59
  The constant island pass currently only looks at the instruction immediately before a branch for a CMP to fold into a CBZ/CBNZ. This extends it to search backwards for the instruction that defines CPSR. We need to ensure that the register is not overridden between the CMP and the branch. Differential Revision: https://reviews.llvm.org/D59317 llvm-svn: 356336
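A schematic of the search described above, with invented types standing in for MachineInstr and the register bookkeeping (the real pass works on MIR, not on this toy structure):

```
#include <cstddef>
#include <set>
#include <vector>

// Walk backwards from the branch, remembering which registers get written;
// when the CPSR-defining compare is found, only fold it if its source
// register was not clobbered on the way.
struct Inst {
  bool DefinesFlags = false; // e.g. a CMP writing CPSR
  unsigned CmpReg = 0;       // register the compare reads (if DefinesFlags)
  unsigned WrittenReg = 0;   // register this instruction writes (0 = none)
};

const Inst *findFoldableCmp(const std::vector<Inst> &Block, size_t BranchIdx) {
  std::set<unsigned> Clobbered;
  for (size_t I = BranchIdx; I-- > 0;) {
    if (Block[I].DefinesFlags)
      return Clobbered.count(Block[I].CmpReg) ? nullptr : &Block[I];
    if (Block[I].WrittenReg)
      Clobbered.insert(Block[I].WrittenReg);
  }
  return nullptr; // no compare found in this block
}
```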
* [DAGCombine] Fold (x & ~y) | y patterns | Nikita Popov | 2019-03-17 | 1 | -0/+22
  Fold (x & ~y) | y and its four commuted variants to x | y. This pattern can in particular appear when a vselect c, x, -1 is expanded to (x & ~c) | (-1 & c) and combined to (x & ~c) | c. This change has some overlap with D59066, which avoids creating a vselect of this form in the first place during uaddsat expansion. Differential Revision: https://reviews.llvm.org/D59174 llvm-svn: 356333
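The fold itself is a pure bitwise identity: the bits of y are set either way, and where y is clear the left operand contributes exactly x's bits. A brute-force check over small values (illustrative C++ only):

```
#include <cassert>
#include <cstdint>

int main() {
  // (x & ~y) | y == x | y for all x, y.
  for (uint32_t X = 0; X < 256; ++X)
    for (uint32_t Y = 0; Y < 256; ++Y)
      assert(((X & ~Y) | Y) == (X | Y));
}
```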
* [TargetLowering] improve the default expansion of uaddsat/usubsat | Sanjay Patel | 2019-03-17 | 1 | -0/+11
  This is a subset of what was proposed in: D59006 ...and may overlap with test changes from: D59174 ...but it seems like a good general optimization to turn selects into bitwise-logic when possible because we never know exactly what can happen at this stage of DAG combining depending on how the target has defined things. Differential Revision: https://reviews.llvm.org/D59066 llvm-svn: 356332
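One select-free way to expand the unsigned saturating operations is the classic branchless form below; this is only a sketch of the general idea, not necessarily the exact node sequence the patch emits:

```
#include <cassert>
#include <cstdint>

// On unsigned overflow saturate to all-ones; on unsigned underflow saturate
// to zero. Both results are produced with bitwise logic instead of a select.
uint32_t uaddsat(uint32_t X, uint32_t Y) {
  uint32_t Sum = X + Y;
  return Sum | -(uint32_t)(Sum < X); // overflow -> OR with 0xFFFFFFFF
}
uint32_t usubsat(uint32_t X, uint32_t Y) {
  uint32_t Diff = X - Y;
  return Diff & -(uint32_t)(X >= Y); // underflow -> AND with 0
}

int main() {
  assert(uaddsat(0xFFFFFFF0u, 0x100u) == 0xFFFFFFFFu);
  assert(usubsat(5u, 9u) == 0u);
  assert(uaddsat(1u, 2u) == 3u && usubsat(9u, 5u) == 4u);
}
```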
* [RISCV][NFC] Factor out matchRegisterNameHelper in RISCVAsmParser.cpp | Alex Bradbury | 2019-03-17 | 1 | -11/+17
  Contains common logic to match a string to a register name. llvm-svn: 356330
* [RISCV] Fix RISCVAsmParser::ParseRegister and add tests | Alex Bradbury | 2019-03-17 | 1 | -5/+7
  RISCVAsmParser::ParseRegister is called from AsmParser::parseRegisterOrNumber, which in turn is called when processing CFI directives. The RISC-V implementation wasn't setting RegNo, and so was incorrect. This patch addresses that and adds cfi directive tests that demonstrate the fix. A follow-up patch will factor out the register parsing logic shared between ParseRegister and parseRegister. llvm-svn: 356329
* [DAGCombine] combineShuffleOfScalars - handle non-zero SCALAR_TO_VECTOR indices (PR41097) | Simon Pilgrim | 2019-03-16 | 1 | -2/+2
  rL356292 reduces the size of scalar_to_vector if we know the upper bits are undef - which means that shuffles may find they are suddenly referencing scalar_to_vector elements other than zero - so make sure we handle this as undef. llvm-svn: 356327
* [BPF] Add BTF Var and DataSec Support | Yonghong Song | 2019-03-16 | 4 | -41/+208
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Two new kinds, BTF_KIND_VAR and BTF_KIND_DATASEC, are added. BTF_KIND_VAR has the following specification: btf_type.name: var name btf_type.info: type kind btf_type.type: var type // btf_type is followed by one u32 u32: varinfo (currently, only 0 - static, 1 - global allocated in elf sections) Not all globals are supported in this patch. The following globals are supported: . static variables with or without section attributes . global variables with section attributes The inclusion of globals with section attributes is for future potential extraction of key/value type id's from map definition. BTF_KIND_DATASEC has the following specification: btf_type.name: section name associated with variable or one of .data/.bss/.readonly btf_type.info: type kind and vlen for # of variables btf_type.size: 0 #vlen number of the following: u32: id of corresponding BTF_KIND_VAR u32: in-session offset of the var u32: the size of memory var occupied At the time of debug info emission, the data section size is unknown, so the btf_type.size = 0 for BTF_KIND_DATASEC. The loader can patch it during loading time. The in-session offseet of the var is only available for static variables. For global variables, the loader neeeds to assign the global variable symbol value in symbol table to in-section offset. The size of memory is used to specify the amount of the memory a variable occupies. Typically, it equals to the type size, but for certain structures, e.g., struct tt { int a; int b; char c[]; }; static volatile struct tt s2 = {3, 4, "abcdefghi"}; The static variable s2 has size of 20. Note that for BTF_KIND_DATASEC name, the section name does not contain object name. The compiler does have input module name. For example, two cases below: . clang -target bpf -O2 -g -c test.c The compiler knows the input file (module) is test.c and can generate sec name like test.data/test.bss etc. . clang -target bpf -O2 -g -emit-llvm -c test.c -o - | llc -march=bpf -filetype=obj -o test.o The llc compiler has the input file as stdin, and would generate something like stdin.data/stdin.bss etc. which does not really make sense. For any user specificed section name, e.g., static volatile int a __attribute__((section("id1"))); static volatile const int b __attribute__((section("id2"))); The DataSec with name "id1" and "id2" does not contain information whether the section is readonly or not. The loader needs to check the corresponding elf section flags for such information. A simple example: -bash-4.4$ cat t.c int g1; int g2 = 3; const int g3 = 4; static volatile int s1; struct tt { int a; int b; char c[]; }; static volatile struct tt s2 = {3, 4, "abcdefghi"}; static volatile const int s3 = 4; int m __attribute__((section("maps"), used)) = 4; int test() { return g1 + g2 + g3 + s1 + s2.a + s3 + m; } -bash-4.4$ clang -target bpf -O2 -g -S t.c Checking t.s, 4 BTF_KIND_VAR's are generated (s1, s2, s3 and m). 4 BTF_KIND_DATASEC's are generated with names ".data", ".bss", ".rodata" and "maps". Signed-off-by: Yonghong Song <yhs@fb.com> Differential Revision: https://reviews.llvm.org/D59441 llvm-svn: 356326
* [X86][SSE] Constant fold PEXTRB/PEXTRW/EXTRACT_VECTOR_ELT nodes. | Simon Pilgrim | 2019-03-16 | 1 | -17/+26
  Replaces existing i1-only fold. llvm-svn: 356325
* [X86] Add SimplifyDemandedBitsForTargetNode support for PEXTRB/PEXTRW | Simon Pilgrim | 2019-03-16 | 1 | -0/+28
  Improved constant folding for PEXTRB/PEXTRW will be added in a future commit. llvm-svn: 356324
* [WebAssembly] Make rethrow take an except_ref type argument | Heejin Ahn | 2019-03-16 | 5 | -38/+61
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: In the new wasm EH proposal, `rethrow` takes an `except_ref` argument. This change was missing in r352598. This patch adds `llvm.wasm.rethrow.in.catch` intrinsic. This is an intrinsic that's gonna eventually be lowered to wasm `rethrow` instruction, but this intrinsic can appear only within a catchpad or a cleanuppad scope. Also this intrinsic needs to be invokable - otherwise EH pad successor for it will not be correctly generated in clang. This also adds lowering logic for this intrinsic in `SelectionDAGBuilder::visitInvoke`. This routine is basically a specialized and simplified version of `SelectionDAGBuilder::visitTargetIntrinsic`, but we can't use it because if is only for `CallInst`s. This deletes the previous `llvm.wasm.rethrow` intrinsic and related tests, which was meant to be used within a `__cxa_rethrow` library function. Turned out this needs some more logic, so the intrinsic for this purpose will be added later. LateEHPrepare takes a result value of `catch` and inserts it into matching `rethrow` as an argument. `RETHROW_IN_CATCH` is a pseudo instruction that serves as a link between `llvm.wasm.rethrow.in.catch` and the real wasm `rethrow` instruction. To generate a `rethrow` instruction, we need an `except_ref` argument, which is generated from `catch` instruction. But `catch` instrutions are added in LateEHPrepare pass, so we use `RETHROW_IN_CATCH`, which takes no argument, until we are able to correctly lower it to `rethrow` in LateEHPrepare. Reviewers: dschuff Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59352 llvm-svn: 356316
* [WebAssembly] Method order change in LateEHPrepare (NFC) | Heejin Ahn | 2019-03-16 | 1 | -38/+40
  Summary: Currently the order of these methods does not matter, but the following CL needs to have this order changed. Merging the order change and the semantics change within a CL complicates the diff, so submitting the order change first. Reviewers: dschuff Subscribers: sbc100, jgravelle-google, sunfish, jdoerfert, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59342 llvm-svn: 356315
* [WebAssembly] Irreducible control flow rewrite | Heejin Ahn | 2019-03-16 | 1 | -245/+280
  Summary: Rewrite WebAssemblyFixIrreducibleControlFlow to a simpler and cleaner design, which directly computes reachability and other properties itself. This avoids previous complexity and bugs. (The new graph analyses are very similar to how the Relooper algorithm would find loop entries and so forth.) This fixes a few bugs, including where we had a false positive and thought fannkuch was irreducible when it was not, which made us much larger and slower there, and a reverse bug where we missed irreducibility. On fannkuch, we used to be 44% slower than asm2wasm and are now 4% faster. Reviewers: aheejin Subscribers: jdoerfert, mgrang, dschuff, sbc100, jgravelle-google, sunfish, llvm-commits Differential Revision: https://reviews.llvm.org/D58919 Patch by Alon Zakai (kripken) llvm-svn: 356313
* [GlobalISel] Make isel verification checks of vregs run under NDEBUG only. | Amara Emerson | 2019-03-16 | 1 | -4/+4
  llvm-svn: 356309
* [TimePasses] allow -time-passes reporting into a custom stream | Fedor Sergeev | 2019-03-15 | 1 | -1/+10
  The TimePassesHandler object (the time-passes implementation for the new pass manager) gains the ability to report into a stream customizable per instance (per pipeline). The intended use is to specify a separate time-passes output stream for each compilation, by setting up the TimePasses member of StandardInstrumentation during PassBuilder setup. That allows getting independent, non-overlapping pass-times reports for parallel independent compilations (in JIT-like setups). By default it still puts timing reports into the info-output-file stream (created by CreateInfoOutputFile every time a report is requested). A unit test was added for the non-default case; it also revealed that print() did not work as declared - it did not reset the timers, leading to yet another report being printed into the default stream. Fixed print() to actually reset the timers according to what print's comments declared. Reviewed By: philip.pfaffe Differential Revision: https://reviews.llvm.org/D59366 llvm-svn: 356305
* [GlobalISel] Allow MachineIRBuilder to build subregister copies. | Amara Emerson | 2019-03-15 | 2 | -43/+26
  This relaxes some asserts about sizes, and adds an optional subreg parameter to buildCopy(). Also update AArch64 instruction selector to use this in places where we previously used MachineInstrBuilder manually. Differential Revision: https://reviews.llvm.org/D59434 llvm-svn: 356304
* [ARM] Add MachineVerifier logic for some Thumb1 instructions. | Eli Friedman | 2019-03-15 | 1 | -0/+25
  tMOVr and tPUSH/tPOP/tPOP_RET have register constraints which can't be expressed in TableGen, so check them explicitly. I've unfortunately run into issues with both of these recently; hopefully this saves some time for someone else in the future. Differential Revision: https://reviews.llvm.org/D59383 llvm-svn: 356303
* [X86] X86ISelLowering::combineSextInRegCmov(): also handle i8 CMOV's | Roman Lebedev | 2019-03-15 | 1 | -12/+34
  Summary: As noted by @andreadb in https://reviews.llvm.org/D59035#inline-525780 If we have `sext (trunc (cmov C0, C1) to i8)`, we can instead do `cmov (sext (trunc C0 to i8)), (sext (trunc C1 to i8))` Reviewers: craig.topper, andreadb, RKSimon Reviewed By: craig.topper Subscribers: llvm-commits, andreadb Tags: #llvm Differential Revision: https://reviews.llvm.org/D59412 llvm-svn: 356301
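In source-level terms the rewrite just distributes the sign-extension-of-truncation into both arms of the select, which cannot change the selected value; a small C++ check of that equivalence (hypothetical helper names):

```
#include <cassert>
#include <cstdint>

// sext(trunc(cond ? C0 : C1)) == cond ? sext(trunc(C0)) : sext(trunc(C1)):
// only one arm is ever selected, so applying the same conversion to each arm
// yields the same result.
int32_t before(bool Cond, int32_t C0, int32_t C1) {
  return (int32_t)(int8_t)(Cond ? C0 : C1);
}
int32_t after(bool Cond, int32_t C0, int32_t C1) {
  return Cond ? (int32_t)(int8_t)C0 : (int32_t)(int8_t)C1;
}

int main() {
  for (int C : {0, 1, 127, 128, 255, -1, -128, 4096})
    assert(before(true, C, 7) == after(true, C, 7) &&
           before(false, 7, C) == after(false, 7, C));
}
```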
* [X86] Promote i8 CMOV's (PR40965) | Roman Lebedev | 2019-03-15 | 1 | -2/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: @mclow.lists brought up this issue up in IRC, it came up during implementation of libc++ `std::midpoint()` implementation (D59099) https://godbolt.org/z/oLrHBP Currently LLVM X86 backend only promotes i8 CMOV if it came from 2x`trunc`. This differential proposes to always promote i8 CMOV. There are several concerns here: * Is this actually more performant, or is it just the ASM that looks cuter? * Does this result in partial register stalls? * What about branch predictor? # Indeed, performance should be the main point here. Let's look at a simple microbenchmark: {F8412076} ``` #include "benchmark/benchmark.h" #include <algorithm> #include <cmath> #include <cstdint> #include <iterator> #include <limits> #include <random> #include <type_traits> #include <utility> #include <vector> // Future preliminary libc++ code, from Marshall Clow. namespace std { template <class _Tp> __inline _Tp midpoint(_Tp __a, _Tp __b) noexcept { using _Up = typename std::make_unsigned<typename remove_cv<_Tp>::type>::type; int __sign = 1; _Up __m = __a; _Up __M = __b; if (__a > __b) { __sign = -1; __m = __b; __M = __a; } return __a + __sign * _Tp(_Up(__M - __m) >> 1); } } // namespace std template <typename T> std::vector<T> getVectorOfRandomNumbers(size_t count) { std::random_device rd; std::mt19937 gen(rd()); std::uniform_int_distribution<T> dis(std::numeric_limits<T>::min(), std::numeric_limits<T>::max()); std::vector<T> v; v.reserve(count); std::generate_n(std::back_inserter(v), count, [&dis, &gen]() { return dis(gen); }); assert(v.size() == count); return v; } struct RandRand { template <typename T> static std::pair<std::vector<T>, std::vector<T>> Gen(size_t count) { return std::make_pair(getVectorOfRandomNumbers<T>(count), getVectorOfRandomNumbers<T>(count)); } }; struct ZeroRand { template <typename T> static std::pair<std::vector<T>, std::vector<T>> Gen(size_t count) { return std::make_pair(std::vector<T>(count, T(0)), getVectorOfRandomNumbers<T>(count)); } }; template <class T, class Gen> void BM_StdMidpoint(benchmark::State& state) { const size_t Length = state.range(0); const std::pair<std::vector<T>, std::vector<T>> Data = Gen::template Gen<T>(Length); const std::vector<T>& a = Data.first; const std::vector<T>& b = Data.second; assert(a.size() == Length && b.size() == a.size()); benchmark::ClobberMemory(); benchmark::DoNotOptimize(a); benchmark::DoNotOptimize(a.data()); benchmark::DoNotOptimize(b); benchmark::DoNotOptimize(b.data()); for (auto _ : state) { for (size_t i = 0; i < Length; i++) { const auto calculated = std::midpoint(a[i], b[i]); benchmark::DoNotOptimize(calculated); } } state.SetComplexityN(Length); state.counters["midpoints"] = benchmark::Counter(Length, benchmark::Counter::kIsIterationInvariant); state.counters["midpoints/sec"] = benchmark::Counter(Length, 
benchmark::Counter::kIsIterationInvariantRate); const size_t BytesRead = 2 * sizeof(T) * Length; state.counters["bytes_read/iteration"] = benchmark::Counter(BytesRead, benchmark::Counter::kDefaults, benchmark::Counter::OneK::kIs1024); state.counters["bytes_read/sec"] = benchmark::Counter( BytesRead, benchmark::Counter::kIsIterationInvariantRate, benchmark::Counter::OneK::kIs1024); } template <typename T> static void CustomArguments(benchmark::internal::Benchmark* b) { const size_t L2SizeBytes = 2 * 1024 * 1024; // What is the largest range we can check to always fit within given L2 cache? const size_t MaxLen = L2SizeBytes / /*total bufs*/ 2 / /*maximal elt size*/ sizeof(T) / /*safety margin*/ 2; b->RangeMultiplier(2)->Range(1, MaxLen)->Complexity(benchmark::oN); } // Both of the values are random. // The comparison is unpredictable. BENCHMARK_TEMPLATE(BM_StdMidpoint, int32_t, RandRand) ->Apply(CustomArguments<int32_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, uint32_t, RandRand) ->Apply(CustomArguments<uint32_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, int64_t, RandRand) ->Apply(CustomArguments<int64_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, uint64_t, RandRand) ->Apply(CustomArguments<uint64_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, int16_t, RandRand) ->Apply(CustomArguments<int16_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, uint16_t, RandRand) ->Apply(CustomArguments<uint16_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, int8_t, RandRand) ->Apply(CustomArguments<int8_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, uint8_t, RandRand) ->Apply(CustomArguments<uint8_t>); // One value is always zero, and another is bigger or equal than zero. // The comparison is predictable. BENCHMARK_TEMPLATE(BM_StdMidpoint, uint32_t, ZeroRand) ->Apply(CustomArguments<uint32_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, uint64_t, ZeroRand) ->Apply(CustomArguments<uint64_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, uint16_t, ZeroRand) ->Apply(CustomArguments<uint16_t>); BENCHMARK_TEMPLATE(BM_StdMidpoint, uint8_t, ZeroRand) ->Apply(CustomArguments<uint8_t>); ``` ``` $ ~/src/googlebenchmark/tools/compare.py --no-utest benchmarks ./llvm-cmov-bench-OLD ./llvm-cmov-bench-NEW RUNNING: ./llvm-cmov-bench-OLD --benchmark_out=/tmp/tmp5a5qjm 2019-03-06 21:53:31 Running ./llvm-cmov-bench-OLD Run on (8 X 4000 MHz CPU s) CPU Caches: L1 Data 16K (x8) L1 Instruction 64K (x4) L2 Unified 2048K (x4) L3 Unified 8192K (x1) Load Average: 1.78, 1.81, 1.36 ---------------------------------------------------------------------------------------------------- Benchmark Time CPU Iterations UserCounters<...> ---------------------------------------------------------------------------------------------------- <...> BM_StdMidpoint<int32_t, RandRand>/131072 300398 ns 300404 ns 2330 bytes_read/iteration=1024k bytes_read/sec=3.25083G/s midpoints=305.398M midpoints/sec=436.319M/s BM_StdMidpoint<int32_t, RandRand>_BigO 2.29 N 2.29 N BM_StdMidpoint<int32_t, RandRand>_RMS 2 % 2 % <...> BM_StdMidpoint<uint32_t, RandRand>/131072 300433 ns 300433 ns 2330 bytes_read/iteration=1024k bytes_read/sec=3.25052G/s midpoints=305.398M midpoints/sec=436.278M/s BM_StdMidpoint<uint32_t, RandRand>_BigO 2.29 N 2.29 N BM_StdMidpoint<uint32_t, RandRand>_RMS 2 % 2 % <...> BM_StdMidpoint<int64_t, RandRand>/65536 169857 ns 169858 ns 4121 bytes_read/iteration=1024k bytes_read/sec=5.74929G/s midpoints=270.074M midpoints/sec=385.828M/s BM_StdMidpoint<int64_t, RandRand>_BigO 2.59 N 2.59 N BM_StdMidpoint<int64_t, RandRand>_RMS 3 % 3 % <...> BM_StdMidpoint<uint64_t, RandRand>/65536 169770 ns 169771 ns 4125 
bytes_read/iteration=1024k bytes_read/sec=5.75223G/s midpoints=270.336M midpoints/sec=386.026M/s BM_StdMidpoint<uint64_t, RandRand>_BigO 2.59 N 2.59 N BM_StdMidpoint<uint64_t, RandRand>_RMS 3 % 3 % <...> BM_StdMidpoint<int16_t, RandRand>/262144 591169 ns 591179 ns 1182 bytes_read/iteration=1024k bytes_read/sec=1.65189G/s midpoints=309.854M midpoints/sec=443.426M/s BM_StdMidpoint<int16_t, RandRand>_BigO 2.25 N 2.25 N BM_StdMidpoint<int16_t, RandRand>_RMS 1 % 1 % <...> BM_StdMidpoint<uint16_t, RandRand>/262144 591264 ns 591274 ns 1184 bytes_read/iteration=1024k bytes_read/sec=1.65162G/s midpoints=310.378M midpoints/sec=443.354M/s BM_StdMidpoint<uint16_t, RandRand>_BigO 2.25 N 2.25 N BM_StdMidpoint<uint16_t, RandRand>_RMS 1 % 1 % <...> BM_StdMidpoint<int8_t, RandRand>/524288 2983669 ns 2983689 ns 235 bytes_read/iteration=1024k bytes_read/sec=335.156M/s midpoints=123.208M midpoints/sec=175.718M/s BM_StdMidpoint<int8_t, RandRand>_BigO 5.69 N 5.69 N BM_StdMidpoint<int8_t, RandRand>_RMS 0 % 0 % <...> BM_StdMidpoint<uint8_t, RandRand>/524288 2668398 ns 2668419 ns 262 bytes_read/iteration=1024k bytes_read/sec=374.754M/s midpoints=137.363M midpoints/sec=196.479M/s BM_StdMidpoint<uint8_t, RandRand>_BigO 5.09 N 5.09 N BM_StdMidpoint<uint8_t, RandRand>_RMS 0 % 0 % <...> BM_StdMidpoint<uint32_t, ZeroRand>/131072 300887 ns 300887 ns 2331 bytes_read/iteration=1024k bytes_read/sec=3.24561G/s midpoints=305.529M midpoints/sec=435.619M/s BM_StdMidpoint<uint32_t, ZeroRand>_BigO 2.29 N 2.29 N BM_StdMidpoint<uint32_t, ZeroRand>_RMS 2 % 2 % <...> BM_StdMidpoint<uint64_t, ZeroRand>/65536 169634 ns 169634 ns 4102 bytes_read/iteration=1024k bytes_read/sec=5.75688G/s midpoints=268.829M midpoints/sec=386.338M/s BM_StdMidpoint<uint64_t, ZeroRand>_BigO 2.59 N 2.59 N BM_StdMidpoint<uint64_t, ZeroRand>_RMS 3 % 3 % <...> BM_StdMidpoint<uint16_t, ZeroRand>/262144 592252 ns 592255 ns 1182 bytes_read/iteration=1024k bytes_read/sec=1.64889G/s midpoints=309.854M midpoints/sec=442.62M/s BM_StdMidpoint<uint16_t, ZeroRand>_BigO 2.26 N 2.26 N BM_StdMidpoint<uint16_t, ZeroRand>_RMS 1 % 1 % <...> BM_StdMidpoint<uint8_t, ZeroRand>/524288 987295 ns 987309 ns 711 bytes_read/iteration=1024k bytes_read/sec=1012.85M/s midpoints=372.769M midpoints/sec=531.028M/s BM_StdMidpoint<uint8_t, ZeroRand>_BigO 1.88 N 1.88 N BM_StdMidpoint<uint8_t, ZeroRand>_RMS 1 % 1 % RUNNING: ./llvm-cmov-bench-NEW --benchmark_out=/tmp/tmpPvwpfW 2019-03-06 21:56:58 Running ./llvm-cmov-bench-NEW Run on (8 X 4000 MHz CPU s) CPU Caches: L1 Data 16K (x8) L1 Instruction 64K (x4) L2 Unified 2048K (x4) L3 Unified 8192K (x1) Load Average: 1.17, 1.46, 1.30 ---------------------------------------------------------------------------------------------------- Benchmark Time CPU Iterations UserCounters<...> ---------------------------------------------------------------------------------------------------- <...> BM_StdMidpoint<int32_t, RandRand>/131072 300878 ns 300880 ns 2324 bytes_read/iteration=1024k bytes_read/sec=3.24569G/s midpoints=304.611M midpoints/sec=435.629M/s BM_StdMidpoint<int32_t, RandRand>_BigO 2.29 N 2.29 N BM_StdMidpoint<int32_t, RandRand>_RMS 2 % 2 % <...> BM_StdMidpoint<uint32_t, RandRand>/131072 300231 ns 300226 ns 2330 bytes_read/iteration=1024k bytes_read/sec=3.25276G/s midpoints=305.398M midpoints/sec=436.578M/s BM_StdMidpoint<uint32_t, RandRand>_BigO 2.29 N 2.29 N BM_StdMidpoint<uint32_t, RandRand>_RMS 2 % 2 % <...> BM_StdMidpoint<int64_t, RandRand>/65536 170819 ns 170777 ns 4115 bytes_read/iteration=1024k bytes_read/sec=5.71835G/s midpoints=269.681M 
midpoints/sec=383.752M/s BM_StdMidpoint<int64_t, RandRand>_BigO 2.60 N 2.60 N BM_StdMidpoint<int64_t, RandRand>_RMS 3 % 3 % <...> BM_StdMidpoint<uint64_t, RandRand>/65536 171705 ns 171708 ns 4106 bytes_read/iteration=1024k bytes_read/sec=5.68733G/s midpoints=269.091M midpoints/sec=381.671M/s BM_StdMidpoint<uint64_t, RandRand>_BigO 2.62 N 2.62 N BM_StdMidpoint<uint64_t, RandRand>_RMS 3 % 3 % <...> BM_StdMidpoint<int16_t, RandRand>/262144 592510 ns 592516 ns 1182 bytes_read/iteration=1024k bytes_read/sec=1.64816G/s midpoints=309.854M midpoints/sec=442.425M/s BM_StdMidpoint<int16_t, RandRand>_BigO 2.26 N 2.26 N BM_StdMidpoint<int16_t, RandRand>_RMS 1 % 1 % <...> BM_StdMidpoint<uint16_t, RandRand>/262144 614823 ns 614823 ns 1180 bytes_read/iteration=1024k bytes_read/sec=1.58836G/s midpoints=309.33M midpoints/sec=426.373M/s BM_StdMidpoint<uint16_t, RandRand>_BigO 2.33 N 2.33 N BM_StdMidpoint<uint16_t, RandRand>_RMS 4 % 4 % <...> BM_StdMidpoint<int8_t, RandRand>/524288 1073181 ns 1073201 ns 650 bytes_read/iteration=1024k bytes_read/sec=931.791M/s midpoints=340.787M midpoints/sec=488.527M/s BM_StdMidpoint<int8_t, RandRand>_BigO 2.05 N 2.05 N BM_StdMidpoint<int8_t, RandRand>_RMS 1 % 1 % BM_StdMidpoint<uint8_t, RandRand>/524288 1071010 ns 1071020 ns 653 bytes_read/iteration=1024k bytes_read/sec=933.689M/s midpoints=342.36M midpoints/sec=489.522M/s BM_StdMidpoint<uint8_t, RandRand>_BigO 2.05 N 2.05 N BM_StdMidpoint<uint8_t, RandRand>_RMS 1 % 1 % <...> BM_StdMidpoint<uint32_t, ZeroRand>/131072 300413 ns 300416 ns 2330 bytes_read/iteration=1024k bytes_read/sec=3.2507G/s midpoints=305.398M midpoints/sec=436.302M/s BM_StdMidpoint<uint32_t, ZeroRand>_BigO 2.29 N 2.29 N BM_StdMidpoint<uint32_t, ZeroRand>_RMS 2 % 2 % <...> BM_StdMidpoint<uint64_t, ZeroRand>/65536 169667 ns 169669 ns 4123 bytes_read/iteration=1024k bytes_read/sec=5.75568G/s midpoints=270.205M midpoints/sec=386.257M/s BM_StdMidpoint<uint64_t, ZeroRand>_BigO 2.59 N 2.59 N BM_StdMidpoint<uint64_t, ZeroRand>_RMS 3 % 3 % <...> BM_StdMidpoint<uint16_t, ZeroRand>/262144 591396 ns 591404 ns 1184 bytes_read/iteration=1024k bytes_read/sec=1.65126G/s midpoints=310.378M midpoints/sec=443.257M/s BM_StdMidpoint<uint16_t, ZeroRand>_BigO 2.26 N 2.26 N BM_StdMidpoint<uint16_t, ZeroRand>_RMS 1 % 1 % <...> BM_StdMidpoint<uint8_t, ZeroRand>/524288 1069421 ns 1069413 ns 655 bytes_read/iteration=1024k bytes_read/sec=935.092M/s midpoints=343.409M midpoints/sec=490.258M/s BM_StdMidpoint<uint8_t, ZeroRand>_BigO 2.04 N 2.04 N BM_StdMidpoint<uint8_t, ZeroRand>_RMS 0 % 0 % Comparing ./llvm-cmov-bench-OLD to ./llvm-cmov-bench-NEW Benchmark Time CPU Time Old Time New CPU Old CPU New ---------------------------------------------------------------------------------------------------------------------------------------- <...> BM_StdMidpoint<int32_t, RandRand>/131072 +0.0016 +0.0016 300398 300878 300404 300880 <...> BM_StdMidpoint<uint32_t, RandRand>/131072 -0.0007 -0.0007 300433 300231 300433 300226 <...> BM_StdMidpoint<int64_t, RandRand>/65536 +0.0057 +0.0054 169857 170819 169858 170777 <...> BM_StdMidpoint<uint64_t, RandRand>/65536 +0.0114 +0.0114 169770 171705 169771 171708 <...> BM_StdMidpoint<int16_t, RandRand>/262144 +0.0023 +0.0023 591169 592510 591179 592516 <...> BM_StdMidpoint<uint16_t, RandRand>/262144 +0.0398 +0.0398 591264 614823 591274 614823 <...> BM_StdMidpoint<int8_t, RandRand>/524288 -0.6403 -0.6403 2983669 1073181 2983689 1073201 <...> BM_StdMidpoint<uint8_t, RandRand>/524288 -0.5986 -0.5986 2668398 1071010 2668419 1071020 <...> BM_StdMidpoint<uint32_t, 
ZeroRand>/131072 -0.0016 -0.0016 300887 300413 300887 300416 <...> BM_StdMidpoint<uint64_t, ZeroRand>/65536 +0.0002 +0.0002 169634 169667 169634 169669 <...> BM_StdMidpoint<uint16_t, ZeroRand>/262144 -0.0014 -0.0014 592252 591396 592255 591404 <...> BM_StdMidpoint<uint8_t, ZeroRand>/524288 +0.0832 +0.0832 987295 1069421 987309 1069413 ``` What can we tell from the benchmark? * `BM_StdMidpoint<[u]int8_t, RandRand>` indeed has the worst performance. * All `BM_StdMidpoint<uint{8,16,32}_t, ZeroRand>` are all performant, even the 8-bit case. That is because there we are computing mid point between zero and some random number, thus if the branch predictor is in use, it is in optimal situation. * Promoting 8-bit CMOV did improve performance of `BM_StdMidpoint<[u]int8_t, RandRand>`, by -59%..-64%. # What about branch predictor? * `BM_StdMidpoint<uint8_t, ZeroRand>` was faster than `BM_StdMidpoint<uint{16,32,64}_t, ZeroRand>`, which may mean that well-predicted branch is better than `cmov`. * Promoting 8-bit CMOV degraded performance of `BM_StdMidpoint<uint8_t, ZeroRand>`, `cmov` is up to +10% worse than well-predicted branch. * However, i do not believe this is a concern. If the branch is well predicted, then the PGO will also say that it is well predicted, and LLVM will happily expand cmov back into branch: https://godbolt.org/z/P5ufig # What about partial register stalls? I'm not really able to answer that. What i can say is that if the branch is unpredictable (if it is predictable, then use PGO and you'll have branch) in ~50% of cases you will have to pay branch misprediction penalty. ``` $ grep -i MispredictPenalty X86Sched*.td X86SchedBroadwell.td: let MispredictPenalty = 16; X86SchedHaswell.td: let MispredictPenalty = 16; X86SchedSandyBridge.td: let MispredictPenalty = 16; X86SchedSkylakeClient.td: let MispredictPenalty = 14; X86SchedSkylakeServer.td: let MispredictPenalty = 14; X86ScheduleBdVer2.td: let MispredictPenalty = 20; // Minimum branch misdirection penalty. X86ScheduleBtVer2.td: let MispredictPenalty = 14; // Minimum branch misdirection penalty X86ScheduleSLM.td: let MispredictPenalty = 10; X86ScheduleZnver1.td: let MispredictPenalty = 17; ``` .. which it can be as small as 10 cycles and as large as 20 cycles. Partial register stalls do not seem to be an issue for AMD CPU's. For intel CPU's, they should be around ~5 cycles? Is that actually an issue here? I'm not sure. In short, i'd say this is an improvement, at least on this microbenchmark. Fixes [[ https://bugs.llvm.org/show_bug.cgi?id=40965 | PR40965 ]]. Reviewers: craig.topper, RKSimon, spatel, andreadb, nikic Reviewed By: craig.topper, andreadb Subscribers: jfb, jdoerfert, llvm-commits, mclow.lists Tags: #llvm, #libc Differential Revision: https://reviews.llvm.org/D59035 llvm-svn: 356300
* [AArch64] Turn BIC immediate creation into a DAG combine | Nikita Popov | 2019-03-15 | 2 | -44/+44
  Switch BIC immediate creation for vector ANDs from custom lowering to a DAG combine, which gives generic DAG combines a chance to apply first. In particular this avoids (and x, -1) being turned into a (bic x, 0) instead of being eliminated entirely. Differential Revision: https://reviews.llvm.org/D59187 llvm-svn: 356299
* AMDGPU: Fix a SIAnnotateControlFlow issue when there are multiple backedges. | Changpeng Fang | 2019-03-15 | 1 | -2/+11
  Summary: At the exit of the loop, the compiler uses a register to remember and accumulate the number of threads that have already exited. When all active threads exit the loop, this register is used to restore the exec mask, and execution continues with the post-loop code. When there is a "continue" in the loop, the compiler mistakenly reset the register to 0 when the "continue" backedge was taken. This resulted in some threads not executing the post-loop code as they were supposed to. This patch fixes the issue. Reviewers: nhaehnle, arsenm Differential Revision: https://reviews.llvm.org/D59312 llvm-svn: 356298
* [X86] Strip the SAE bit from the rounding mode passed to the _RND opcodes. Use TargetConstant to save a conversion in the isel table. | Craig Topper | 2019-03-15 | 3 | -66/+88
  The asm parser generates the immediate without the SAE bit. So for consistency we should generate the MCInst the same way from CodeGen. Since they are now both the same, remove the masking from the printer and replace with an llvm_unreachable. Use a target constant since we're rebuilding the node anyway. Then we don't have to have isel convert it. Saves about 500 bytes from the isel table. llvm-svn: 356294
* [SimplifyDemandedVec] Strengthen handling all undef lanes (particularly GEPs) | Philip Reames | 2019-03-15 | 3 | -5/+31
  A change of two parts: 1) A generic enhancement for all callers of SDVE to exploit the fact that if all lanes are undef, the result is undef. 2) A GEP-specific piece to strengthen/fix the vector index undef element handling, and call into the generic infrastructure when visiting the GEP. The result is that we replace a vector gep with at least one undef in each lane with an undef. We can also do the same for vector intrinsics. Once the masked.load patch (D57372) has landed, I'll update to include call tests as well. Differential Revision: https://reviews.llvm.org/D57468 llvm-svn: 356293
* [X86][SSE] Fold scalar_to_vector(i64 anyext(x)) -> bitcast(scalar_to_vector(i32 anyext(x))) | Simon Pilgrim | 2019-03-15 | 1 | -3/+13
  Reduce the size of an any-extended i64 scalar_to_vector source to i32 - the any_extend nodes are often introduced by SimplifyDemandedBits. llvm-svn: 356292
* [ValueTracking] Use ConstantRange overflow checks for unsigned add/sub; NFC | Nikita Popov | 2019-03-15 | 1 | -20/+26
  Use the methods introduced in rL356276 to implement the computeOverflowForUnsigned(Add|Sub) functions in ValueTracking, by converting the KnownBits into a ConstantRange. This is NFC: The existing KnownBits based implementation uses the same logic as the ConstantRange based one. This is not the case for the signed equivalents, so I'm only changing unsigned here. This is in preparation for D59386, which will also intersect the computeConstantRange() result into the range determined from KnownBits. llvm-svn: 356290
* [AArch64][GlobalISel] Regbankselect: Fix G_BUILD_VECTOR trying to use s16 gpr sources. | Amara Emerson | 2019-03-15 | 1 | -2/+5
  Since we can't insert s16 gprs as we don't have 16 bit GPR registers, we need to teach RBS to assign them to the FPR bank so our selector works. llvm-svn: 356282
* [X86][GlobalISEL] Support lowering aligned unordered atomics | Philip Reames | 2019-03-15 | 1 | -3/+15
  The existing lowering code is accidentally correct for unordered atomics as far as I can tell. An unordered atomic has no memory ordering, and simply requires the actual load or store to be done as a single well aligned instruction. As such, relax the restriction while adding tests to ensure the lowering remains correct in the future. Differential Revision: https://reviews.llvm.org/D57803 llvm-svn: 356280
* [BPF] handle external global properly | Yonghong Song | 2019-03-15 | 1 | -1/+1
  Previous commit 6bc58e6d3dbd ("[BPF] do not generate unused local/global types") tried to exclude global variables from type generation. The condition is: if (Global.hasExternalLinkage()) continue; This is not right. It also excluded initialized globals. The correct condition (from AssemblyWriter::printGlobal()) is: if (!GV->hasInitializer() && GV->hasExternalLinkage()) Out << "external "; Let us do the same in BTF type generation. Also added a test for it. Signed-off-by: Yonghong Song <yhs@fb.com> llvm-svn: 356279
* [ConstantRange] Add overflow check helpers | Nikita Popov | 2019-03-15 | 1 | -0/+92
  Add functions to ConstantRange that determine whether the unsigned/signed addition/subtraction of two ConstantRanges may/always/never overflows. This will allow checking overflow conditions based on known constant ranges in addition to known bits. I'm implementing these methods on ConstantRange to allow them to be unit tested independently of any ValueTracking machinery. The tests include exhaustive testing on 4-bit ranges, to make sure the result is both conservatively correct and maximally precise. The OverflowResult enum is redeclared on ConstantRange, because I wanted to avoid a dependency in either direction between ValueTracking.h and ConstantRange.h. Differential Revision: https://reviews.llvm.org/D59193 llvm-svn: 356276
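The unsigned-add case reduces to comparing the extreme sums of the two ranges; a stand-alone sketch using plain [min, max] bounds (hypothetical names and a fixed 32-bit width, not the ConstantRange API itself):

```
#include <cstdint>

enum class Overflow { Never, May, Always };

// If even the minimal sum wraps, overflow always happens; if even the maximal
// sum fits, it never happens; otherwise it may happen.
Overflow unsignedAddOverflow(uint32_t AMin, uint32_t AMax,
                             uint32_t BMin, uint32_t BMax) {
  if (AMin > UINT32_MAX - BMin)
    return Overflow::Always;
  if (AMax > UINT32_MAX - BMax)
    return Overflow::May;
  return Overflow::Never;
}

int main() {
  // [3e9, 3e9] + [2e9, 2e9] always overflows; [1, 2] + [3, 4] never does.
  bool OK = unsignedAddOverflow(3000000000u, 3000000000u, 2000000000u,
                                2000000000u) == Overflow::Always &&
            unsignedAddOverflow(1, 2, 3, 4) == Overflow::Never;
  return OK ? 0 : 1;
}
```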
* [SelectionDAG] Add SimplifyDemandedBits handling for ISD::SCALAR_TO_VECTOR | Simon Pilgrim | 2019-03-15 | 1 | -0/+13
  Fixes a lot of constant folding mismatches between i686 and x86_64. llvm-svn: 356273
* [LLVM-C] Expose the "Add Discriminators" Pass To LLVM-C | Robert Widmann | 2019-03-15 | 1 | -0/+3
  Summary: Add bindings to create a wrapped "Add Discriminators" pass. Now that we have debug info support, this is a handy transform to have. Reviewers: whitequark, deadalnix Reviewed By: whitequark Subscribers: dblaikie, aprantl, hiraditya, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D58624 llvm-svn: 356272
* [X86] Add SimplifyDemandedBitsForTargetNode support for PINSRB/PINSRW | Simon Pilgrim | 2019-03-15 | 1 | -5/+42
  llvm-svn: 356270
* [ThinLTO] Restructure AliasSummary to contain ValueInfo of Aliasee | Teresa Johnson | 2019-03-15 | 5 | -39/+24
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Summary: The AliasSummary previously contained the AliaseeGUID, which was only populated when reading the summary from bitcode. This patch changes it to instead hold the ValueInfo of the aliasee, and always populates it. This enables more efficient access to the ValueInfo (specifically in the recent patch r352438 which needed to perform an index hash table lookup using the aliasee GUID). As noted in the comments in AliasSummary, we no longer technically need to keep a pointer to the corresponding aliasee summary, since it could be obtained by walking the list of summaries on the ValueInfo looking for the summary in the same module. However, I am concerned that this would be inefficient when walking through the index during the thin link for various analyses. That can be reevaluated in the future. By always populating this new field, we can remove the guard and special handling for a 0 aliasee GUID when dumping the dot graph of the summary. An additional improvement in this patch is when reading the summaries from LLVM assembly we now set the AliaseeSummary field to the aliasee summary in that same module, which makes it consistent with the behavior when reading the summary from bitcode. Reviewers: pcc, mehdi_amini Subscribers: inglorion, eraman, steven_wu, dexonsmith, arphaman, llvm-commits Differential Revision: https://reviews.llvm.org/D57470 llvm-svn: 356268
* [llvm] Skip over empty line table entries. | Mircea Trofin | 2019-03-15 | 1 | -9/+28
  Summary: This is similar to how addr2line handles consecutive entries with the same address - pick the last one. Reviewers: dblaikie, friss, JDevlieghere Reviewed By: dblaikie Subscribers: eugenis, vitalybuka, echristo, JDevlieghere, probinson, aprantl, hiraditya, rupprecht, jdoerfert, llvm-commits Tags: #llvm, #debug-info Differential Revision: https://reviews.llvm.org/D58952 llvm-svn: 356265
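The "pick the last entry for a given address" rule can be sketched independently of the DWARF machinery; the names below are invented for illustration:

```
#include <cstdint>
#include <map>
#include <vector>

// Walk line-table rows in order and let later duplicates overwrite earlier
// ones, so only the final row per address survives (mirroring addr2line).
struct Row { uint64_t Address; unsigned Line; };

std::map<uint64_t, unsigned> collapseRows(const std::vector<Row> &Rows) {
  std::map<uint64_t, unsigned> LineForAddr;
  for (const Row &R : Rows)
    LineForAddr[R.Address] = R.Line; // last entry wins
  return LineForAddr;
}

int main() {
  auto M = collapseRows({{0x1000, 10}, {0x1000, 0}, {0x1004, 12}});
  return M[0x1000] == 0 ? 0 : 1; // the later entry for 0x1000 is the one kept
}
```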
* [CodeGenPrepare] avoid crashing from replacing a phi twice | Mikael Holmen | 2019-03-15 | 1 | -2/+6
  Summary: This is a fix to bug 41052: https://bugs.llvm.org/show_bug.cgi?id=41052 While trying to optimize a memory instruction in a dead basic block, we end up registering the same phi for replacement twice. This patch avoids registering more than the first replacement candidate for a phi. Patch by: JesperAntonsson Reviewers: skatkov, aprantl Reviewed By: aprantl Subscribers: jdoerfert, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59358 llvm-svn: 356260
* [ARM] Remove EarlyCSE from backend | Sam Parker | 2019-03-15 | 1 | -4/+2
  There is an issue with early CSE hitting an assert, so temporarily remove the pass from the Arm backend. Bug: https://bugs.llvm.org/show_bug.cgi?id=41081 Differential Revision: https://reviews.llvm.org/D59410 llvm-svn: 356259
* [AMDGPU] Fix SGPR fixing through SCC chaining | Michael Liao | 2019-03-15 | 2 | -12/+18
  Summary: - During the fixing of SGPR copying from VGPR, ensure users of SCC are properly propagated, i.e. * only propagate through live def of SCC, * skip the SCC-def inst itself, and * stop the propagation on the other SCC-def inst after checking its SCC-use first. Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits Tags: #llvm Differential Revision: https://reviews.llvm.org/D59362 llvm-svn: 356258
* [LSR] Check for signed overflow in NarrowSearchSpaceByDetectingSupersets. | Florian Hahn | 2019-03-15 | 1 | -1/+3
  We are adding a sign-extended IR value to an int64_t, which can cause signed overflows, as in the attached test case, where we have a formula with BaseOffset = -1 and a constant with numeric_limits<int64_t>::min(). If the addition would overflow, skip the simplification for this formula. Note that the target triple is required to trigger the failure. Reviewers: qcolombet, gilr, kparzysz, efriedma Reviewed By: efriedma Differential Revision: https://reviews.llvm.org/D59211 llvm-svn: 356256
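Checking a signed 64-bit addition for overflow before performing it looks roughly like the following (a generic sketch, not the LSR code):

```
#include <cstdint>
#include <limits>

// Report whether A + B would overflow or underflow int64_t, without
// actually performing the (undefined) addition.
bool addWouldOverflow(int64_t A, int64_t B) {
  if (B > 0)
    return A > std::numeric_limits<int64_t>::max() - B;
  return A < std::numeric_limits<int64_t>::min() - B;
}

int main() {
  // BaseOffset = -1 plus INT64_MIN would underflow, so the fold is skipped.
  return addWouldOverflow(-1, std::numeric_limits<int64_t>::min()) ? 0 : 1;
}
```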
* [X86][SSE] Attempt to convert SSE shift-by-var to shift-by-imm. | Simon Pilgrim | 2019-03-15 | 1 | -1/+12
  Prep work for PR40203. llvm-svn: 356249
* [yaml2obj] Allow explicit setting of p_filesz, p_memsz, and p_offset | James Henderson | 2019-03-15 | 1 | -0/+3
  yaml2obj currently derives the p_filesz, p_memsz, and p_offset values of program headers from their sections. This makes writing tests for certain formats more complex, and sometimes impossible. This patch allows setting these fields explicitly, overriding the default value, when relevant. Reviewed by: jakehehrlich, Higuoxing Differential Revision: https://reviews.llvm.org/D59372 llvm-svn: 356247
* [ARM][ParallelDSP] Disable for big-endian | Sam Parker | 2019-03-15 | 1 | -6/+12
  Bail early when we don't have a preheader and also if the target is big endian because it's written with only little endian in mind! Differential Revision: https://reviews.llvm.org/D59368 llvm-svn: 356243
* [MIPS GlobalISel] Improve selection of constants | Petar Avramovic | 2019-03-15 | 1 | -16/+39
  Certain 32 bit constants can be generated with a single instruction instead of two. Implement materialize32BitImm function for MIPS32. Differential Revision: https://reviews.llvm.org/D59369 llvm-svn: 356238
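The usual MIPS32 recipe the commit refers to can be summarized as follows; this is an illustrative sketch with made-up return values, not the GlobalISel selector code:

```
#include <cstdint>
#include <string>

// One instruction suffices when the value fits a sign-extended 16-bit ADDiu,
// a zero-extended 16-bit ORi, or a LUi whose low half is zero; otherwise the
// general case needs the LUi + ORi pair.
std::string materializeImm32(uint32_t Imm) {
  int32_t S = static_cast<int32_t>(Imm);
  if (S >= -32768 && S <= 32767)
    return "addiu $reg, $zero, imm16"; // sign-extended 16-bit immediate
  if (Imm <= 0xFFFFu)
    return "ori $reg, $zero, imm16";   // zero-extended 16-bit immediate
  if ((Imm & 0xFFFFu) == 0)
    return "lui $reg, imm >> 16";      // low half is zero
  return "lui + ori";                  // general case: two instructions
}
```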
* [BPF] do not generate unused local/global types | Yonghong Song | 2019-03-15 | 1 | -7/+12
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The kernel currently has a limit for # of types to be 64KB and the size of string subsection to be 64KB. A simple bcc tool runqlat.py generates: . the size of ~33KB type section, roughly ~10K types . the size of ~17KB string section The majority type is from the types referenced by local variables in the bpf program. For example, the kernel "task_struct" itself recursively brings in ~900 other types. This patch did the following optimization to avoid generating unused types: . do not generate types for local variables unless they are function arguments. . do not generate types for external globals. If an external global is not used in the program, llvm already removes it from IR, so global variable saving is typical small. For runqlat.py, only one variable "llvm.used" is the external global. The types for locals and external globals can be added back once there is a usage for them. After the above optimization, the runqlat.py generates: . the size of ~1.5KB type section, roughtly 500 types . the size of ~0.7KB string section UPDATE: resubmitted the patch after previous revert with the following fix: use Global.hasExternalLinkage() to test "external" linkage instead of using Global.getInitializer(), which will assert on external variables. Signed-off-by: Yonghong Song <yhs@fb.com> llvm-svn: 356234
* Revert "[BPF] do not generate unused local/global types"Yonghong Song2019-03-151-12/+7
| | | | | | | This reverts commit r356232. Reason: test failure with ASSERT on enabled build. llvm-svn: 356233
* [BPF] do not generate unused local/global types | Yonghong Song | 2019-03-15 | 1 | -7/+12
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The kernel currently has a limit for # of types to be 64KB and the size of string subsection to be 64KB. A simple bcc tool runqlat.py generates: . the size of ~33KB type section, roughly ~10K types . the size of ~17KB string section The majority type is from the types referenced by local variables in the bpf program. For example, the kernel "task_struct" itself recursively brings in ~900 other types. This patch did the following optimization to avoid generating unused types: . do not generate types for local variables unless they are function arguments. . do not generate types for external globals. If an external global is not used in the program, llvm already removes it from IR, so global variable saving is typical small. For runqlat.py, only one variable "llvm.used" is the external global. The types for locals and external globals can be added back once there is a usage for them. After the above optimization, the runqlat.py generates: . the size of ~1.5KB type section, roughtly 500 types . the size of ~0.7KB string section Signed-off-by: Yonghong Song <yhs@fb.com> llvm-svn: 356232
* [WebAssembly] Remove unused load/store patterns that use texternalsym | Sam Clegg | 2019-03-15 | 4 | -234/+5
  Differential Revision: https://reviews.llvm.org/D59395 llvm-svn: 356221
* AMDGPU: Remove intrinsic operand assert | Matt Arsenault | 2019-03-14 | 1 | -5/+1
  Before r355981, this was under LLVM_DEBUG. I don't think the assert is quite right, but this really should be a verifier check. Instcombine should not be asserting on this sort of thing. llvm-svn: 356219
* [CGP] add another bailout for degenerate code (PR41064) | Sanjay Patel | 2019-03-14 | 1 | -1/+5
  This is almost the same as: rL355345 ...and should prevent any potential crashing from examples like: https://bugs.llvm.org/show_bug.cgi?id=41064 ...although the bug was masked by: rL355823 ...and I'm not sure how to repro the problem after that change. llvm-svn: 356218