path: root/llvm/test/CodeGen/AArch64
Commit message   Author   Age   Files   Lines
...
* Revert "AArch64: Fix frame record chain"Logan Chien2019-12-141-29/+0
Breaks aosp-O3-polly-before-vectorizer-unprofitable with the following error message:

void llvm::emitFrameOffset(llvm::MachineBasicBlock &, MachineBasicBlock::iterator, const llvm::DebugLoc &, unsigned int, unsigned int, llvm::StackOffset, const llvm::TargetInstrInfo *, MachineInstr::MIFlag, bool, bool, bool *): Assertion `(DestReg != AArch64::SP || Bytes % 16 == 0) && "SP increment/decrement not 16-byte aligned"' failed.

This reverts commit d4e10e6adb1b629b3fc1b78f7e281fbcec392edb.
* AArch64: Fix frame record chainLogan Chien2019-12-141-0/+29
The commit r369122 may keep the LR and FP registers (aka. the frame record) in the middle of a frame, thus we must add the offsets to ensure the FP register always points to the innermost frame record on the stack.

According to AAPCS64[1], conforming code shall construct a linked list of stack frames that can be traversed with frame records. This commit is also essential to frame-pointer-based stack unwinders (e.g. the stack unwinder in linux-perf-tools).

[1] https://github.com/ARM-software/software-standards/blob/master/abi/aapcs64/aapcs64.rst#the-frame-pointer

Test: llvm-lit ${LLVM_SRC}/test/CodeGen/AArch64/framelayout-frame-record.ll
Test: llvm-lit ${LLVM_SRC}/test/CodeGen/AArch64

Differential Revision: https://reviews.llvm.org/D70800
* [AArch64][test] Fix machine-outliner-size-info.mir after D71168Fangrui Song2019-12-141-0/+1
* [AArch64] add tests for fcvtl2; NFCSanjay Patel2019-12-141-0/+49
* [AArch64] Save FP for leaf functions when disabling frame pointer eliminationFangrui Song2019-12-139-21/+25
The change allows clang -mno-omit-leaf-frame-pointer to disable frame pointer elimination. This behavior matches X86 and Mips, and also GCC AArch64.

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D71168
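For reference, a minimal IR sketch of the kind of leaf function this affects. It assumes the frontend lowers -mno-omit-leaf-frame-pointer to the "frame-pointer"="all" function attribute; that mapping is an assumption here, not something stated in the commit.

  ; leaf.ll - a leaf function (no calls); with "frame-pointer"="all" the
  ; backend is expected to set up x29 even though nothing else needs a frame.
  ; (attribute spelling assumed, see note above)
  define i32 @leaf(i32 %a, i32 %b) "frame-pointer"="all" {
  entry:
    %sum = add i32 %a, %b
    ret i32 %sum
  }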
* [Legalizer] Making artifact combining order-independentRoman Tereshin2019-12-134-19/+16
The legalization algorithm is complicated by two facts:

1) While regular instructions should be possible to legalize in an isolated, per-instruction, context-free manner, legalization artifacts can only be eliminated in pairs, which could be deeply, and ultimately arbitrarily, nested: { [ ( ) ] }, where each parenthesis kind depicts an artifact kind, like extend, unmerge, etc. Such a structure can only be fully eliminated by simple local combines if they are attempted in a particular order (inside out), or alternatively by repeated scans each eliminating only one innermost pair, resulting in O(n^2) complexity.

2) Some artifacts might in fact be regular instructions that could (and sometimes should) be legalized by the target-specific rules. Which means failure to eliminate all artifacts on the first iteration is not a failure; they need to be tried as instructions, which may produce more artifacts, including ones that are in fact regular instructions, resulting in a non-constant number of iterations required to finish the process.

I trust the recently introduced termination condition (no new artifacts were created during as-a-regular-instruction-retrial of artifacts not eliminated on the previous iteration) to be efficient in providing termination, but it only performs the legalization in full if at each step such chains of artifacts are successfully eliminated in full as well. This is currently not guaranteed, as the artifact combines are applied only once and in an arbitrary order that has to do with the order of creation or insertion of artifacts into their worklist, which is no particular order.

In this patch I make a small change to the artifact combiner, making it re-insert into the worklist the immediate (modulo a look-through of copies) artifact users of each vreg that changes its definition due to an artifact combine.

Here the first scan through the artifacts worklist, while not being done in any guaranteed order, only needs to find the innermost pair(s) of artifacts that could be immediately combined out. After that the process follows def-use chains, making them shorter at each step, thus combining everything that can be combined in O(n) time.

Reviewers: volkan, aditya_nandakumar, qcolombet, paquette, aemerson, dsanders

Reviewed By: aditya_nandakumar, paquette

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71448
* [DAGCombiner] fold shift-trunc-shift to shift-mask-trunc (2nd try)Sanjay Patel2019-12-131-2/+1
The initial attempt (rG89633320) botched the logic by reversing the source/dest types. Added x86 tests for additional coverage. The vector tests show a potential improvement (fold vector load instead of broadcasting), but that's a known/existing problem.

This fold is done in IR by instcombine, and we have a special form of it already here in DAGCombiner, but we want the more general transform too: https://rise4fun.com/Alive/3jZm

  Name: general
  Pre: (C1 + zext(C2) < 64)
  %s = lshr i64 %x, C1
  %t = trunc i64 %s to i16
  %r = lshr i16 %t, C2
  =>
  %s2 = lshr i64 %x, C1 + zext(C2)
  %a = and i64 %s2, zext((1 << (16 - C2)) - 1)
  %r = trunc %a to i16

  Name: special
  Pre: C1 == 48
  %s = lshr i64 %x, C1
  %t = trunc i64 %s to i16
  %r = lshr i16 %t, C2
  =>
  %s2 = lshr i64 %x, C1 + zext(C2)
  %r = trunc %s2 to i16

...because D58017 exposes a regression without this fold.
* [PGO][PGSO] Enable size optimizations in code gen / target passes for cold code.Hiroshi Yamauchi2019-12-132-0/+261
Summary: Split off of D67120.

Reviewers: davidxl

Subscribers: hiraditya, asb, rbar, johnrusso, simoncook, sabuasal, niosHD, jrtc27, MaskRay, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, lenary, s.egerton, pzheng, sameer.abuasal, apazos, luismarques, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71288
* [AArch64] Emit PAC/BTI .note.gnu.property flagsMomchil Velikov2019-12-139-0/+185
This patch makes LLVM emit the processor-specific program property types defined in the AArch64 ELF spec: https://developer.arm.com/docs/ihi0056/f/elf-for-the-arm-64-bit-architecture-aarch64-abi-2019q2-documentation

A file containing no functions gets both property flags. Otherwise, a property is set iff all the functions in the file have the corresponding attribute.

Patch by Daniel Kiss and Momchil Velikov.

Differential Revision: https://reviews.llvm.org/D71019
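As a rough illustration of the "all functions must carry the attribute" rule, a hedged IR sketch; the exact attribute spellings ("sign-return-address", "branch-target-enforcement") are assumptions about the frontend-emitted function attributes, not something spelled out in this log.

  ; Both functions carry both attributes (spellings assumed), so the object
  ; file would get the PAC and BTI flags set in .note.gnu.property.
  define void @f() "sign-return-address"="non-leaf" "branch-target-enforcement" {
    ret void
  }
  define void @g() "sign-return-address"="non-leaf" "branch-target-enforcement" {
    ret void
  }
  ; If @g dropped either attribute, the corresponding property flag would be
  ; cleared for the whole file.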
* Recommit "[AArch64][SVE] Implement intrinsics for non-temporal loads & stores"Kerry McLaughlin2019-12-132-0/+183
Updated the pred_load patterns added to AArch64SVEInstrInfo.td by this patch to use reg + imm non-temporal loads to fix previous test failures.

Original commit message:

Adds the following intrinsics:
  - llvm.aarch64.sve.ldnt1
  - llvm.aarch64.sve.stnt1

This patch creates masked loads and stores with the MONonTemporal flag set when used with the intrinsics above.
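A hedged IR sketch of how these intrinsics are typically declared and called; the type mangling and operand order shown here follow the usual SVE intrinsic conventions and are assumptions, not taken from this log.

  ; (declarations assumed; see note above)
  declare <vscale x 4 x i32> @llvm.aarch64.sve.ldnt1.nxv4i32(<vscale x 4 x i1>, i32*)
  declare void @llvm.aarch64.sve.stnt1.nxv4i32(<vscale x 4 x i32>, <vscale x 4 x i1>, i32*)

  ; Non-temporal load and store of a scalable vector of i32, guarded by
  ; predicate %pg; both should lower to the LDNT1W/STNT1W instructions.
  define void @copy_nt(<vscale x 4 x i1> %pg, i32* %src, i32* %dst) {
    %v = call <vscale x 4 x i32> @llvm.aarch64.sve.ldnt1.nxv4i32(<vscale x 4 x i1> %pg, i32* %src)
    call void @llvm.aarch64.sve.stnt1.nxv4i32(<vscale x 4 x i32> %v, <vscale x 4 x i1> %pg, i32* %dst)
    ret void
  }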
* hwasan: add tag_offset DWARF attribute to optimized debug infoEvgenii Stepanov2019-12-121-0/+68
Summary: Support alloca-referencing dbg.value in hwasan instrumentation. Update AsmPrinter to emit DW_AT_LLVM_tag_offset when the location is in loclist format.

Reviewers: pcc

Subscribers: srhines, aprantl, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70753
* [AArch64][SVE] Add integer arithmetic with immediate instructions.Danilo Carvalho Grael2019-12-121-0/+471
Summary:
Add pattern matching for the following instructions:
  - add, sub, subr, sqadd, sqsub, uqadd, uqsub

This patch required complex patterns to match the immediate with optional left shift. I re-used the Select function from the other SVE repo to implement the complex pattern. I plan on doing another patch to also match a constant vector of the same immediate.

Reviewers: sdesmalen, huntergr, rengolin, efriedma, c-rhodes, mgudim, kmclaughlin

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits, amehsan

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71370
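A minimal IR sketch of the kind of input these patterns should catch, assuming the immediate reaches the backend as a splatted constant built with the usual insertelement/shufflevector idiom (an assumption, not quoted from the commit).

  ; add of a scalable vector and a splat of 30, expected to select the
  ; immediate form of SVE ADD rather than materializing the splat.
  define <vscale x 4 x i32> @add_imm(<vscale x 4 x i32> %a) {
    %ins = insertelement <vscale x 4 x i32> undef, i32 30, i32 0
    %splat = shufflevector <vscale x 4 x i32> %ins, <vscale x 4 x i32> undef, <vscale x 4 x i32> zeroinitializer
    %r = add <vscale x 4 x i32> %a, %splat
    ret <vscale x 4 x i32> %r
  }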
* Revert "[DAGCombiner] fold shift-trunc-shift to shift-mask-trunc"Sanjay Patel2019-12-121-1/+2
This reverts commit 8963332c3327daa652ba3e26d35f9109b6991985.

There was a logic bug (typo) in this code, but it wasn't visible in the asm for the tests.
* [DAGCombiner] fold shift-trunc-shift to shift-mask-truncSanjay Patel2019-12-121-2/+1
This fold is done in IR by instcombine, and we have a special form of it already here in DAGCombiner, but we want the more general transform too: https://rise4fun.com/Alive/3jZm

  Name: general
  Pre: (C1 + zext(C2) < 64)
  %s = lshr i64 %x, C1
  %t = trunc i64 %s to i16
  %r = lshr i16 %t, C2
  =>
  %s2 = lshr i64 %x, C1 + zext(C2)
  %a = and i64 %s2, zext((1 << (16 - C2)) - 1)
  %r = trunc %a to i16

  Name: special
  Pre: C1 == 48
  %s = lshr i64 %x, C1
  %t = trunc i64 %s to i16
  %r = lshr i16 %t, C2
  =>
  %s2 = lshr i64 %x, C1 + zext(C2)
  %r = trunc %s2 to i16

...because D58017 exposes a regression without this fold.
* [AArch64][PowerPC] add tests for shift sandwich; NFCSanjay Patel2019-12-121-0/+13
* [AArch64][SVE] Add patterns for scalable vselectCameron McInally2019-12-111-0/+85
This patch matches scalable vector selects to predicated move instructions.

Differential Revision: https://reviews.llvm.org/D71298
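A small IR sketch of the construct this covers: a generic vector select on scalable types (the element widths chosen here are illustrative assumptions).

  ; Expected to lower to a predicated MOV/SEL on SVE instead of failing to
  ; select VSELECT on scalable vectors.
  define <vscale x 4 x i32> @vsel(<vscale x 4 x i1> %p, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b) {
    %r = select <vscale x 4 x i1> %p, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b
    ret <vscale x 4 x i32> %r
  }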
* [AArch64][x86] add tests for possible infinite loops in DAGCombiner; NFCSanjay Patel2019-12-111-2/+30
This is a reduction of a test that failed (infinite looped) with rGd1f0bdf2d2df (subsequently reverted). I've duplicated it for 2 targets to increase coverage - everything down here is wobbly.
* Revert "[SDAG] remove use restriction in isNegatibleForFree() when called ↵Sanjay Patel2019-12-111-18/+0
| | | | | | | from getNegatedExpression()" This reverts commit d1f0bdf2d2df9bdf11ee2ddfff3df50e53f2f042. The patch can cause infinite loops in DAGCombiner.
* Add intrinsics for unary narrowing operationsAndrzej Warzynski2019-12-111-0/+202
Summary:
The following intrinsics for unary narrowing operations are added:
  * @llvm.aarch64.sve.sqxtnb
  * @llvm.aarch64.sve.uqxtnb
  * @llvm.aarch64.sve.sqxtunb
  * @llvm.aarch64.sve.sqxtnt
  * @llvm.aarch64.sve.uqxtnt
  * @llvm.aarch64.sve.sqxtunt

Reviewers: sdesmalen, rengolin, efriedma

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71270
* [AArch64] Be more careful to skip debug operands in LdSt Optimizer.Florian Hahn2019-12-111-3/+39
This fixes crashes with $noreg operands.
* [SDAG] remove use restriction in isNegatibleForFree() when called from getNegatedExpression()Sanjay Patel2019-12-111-0/+18
This is an alternate fix for the bug discussed in D70595. This also includes minimal tests for other in-tree targets to show the problem more generally.

We check the number of uses as a predicate for whether some value is free to negate, but that use count can change as we rewrite the expression in getNegatedExpression(). So something that was marked free to negate during the cost evaluation phase becomes not free to negate during the rewrite phase (or the inverse - something that was not free becomes free). This can lead to a crash/assert because we expect that everything in an expression that is negatible to be handled in the corresponding code within getNegatedExpression().

This patch skips the use check during the rewrite phase. So we determine that some expression isNegatibleForFree (identically to without this patch), but during the rewrite, don't rely on use counts to decide how to create the optimal expression.

Differential Revision: https://reviews.llvm.org/D70975
* [AArch64] Skip debug ops with regsOverlap in AArch64 LD/ST opt.Florian Hahn2019-12-111-0/+49
This fixes a crash when debug instructions are in between 2 stores.
* Revert "[AArch64][SVE] Implement intrinsics for non-temporal loads & stores"Kerry McLaughlin2019-12-112-183/+0
This reverts commit 3f5bf35f868d1e33cd02a5825d33ed4675be8cb1 as it was causing build failures in llvm-clang-x86_64-expensive-checks:

http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-debian/builds/392
http://lab.llvm.org:8011/builders/llvm-clang-x86_64-expensive-checks-ubuntu/builds/1045
* [AArch64] Teach Load/Store optimizer to rename store operands for pairing.Florian Hahn2019-12-116-44/+502
In some cases, we can rename a store operand in order to enable pairing of stores. For store pairs that cannot be merged because the first stored register is defined in between the first and the second store, we try to find a suitable rename register.

First, we check if we can rename the given register:
1. The first store register must be killed at the store, which means we do not have to rename instructions after the first store.
2. We scan backwards from the first store to find the definition of the stored register and check that all uses in between are renamable. Along the way, we collect the minimal register classes of the uses for overlapping (sub/super)registers.

Second, we try to find an available register from the minimal physical register class of the original register. A suitable register must not be
1. defined before FirstMI,
2. defined between the previous definition of the register to rename and FirstMI,
3. a callee saved register.
We use KILL flags to clear defined registers while scanning from the beginning to the end of the block.

This triggers quite often, here are the top changes for MultiSource, SPEC2000, SPEC2006 compiled with -O3 for iOS:

Metric: aarch64-ldst-opt.NumPairCreated

Program                                         base     patch    diff
test-suite...nch/fourinarow/fourinarow.test     2.00     39.00    1850.0%
test-suite...s/ASC_Sequoia/IRSmk/IRSmk.test     46.00    80.00    73.9%
test-suite...chmarks/Olden/power/power.test     70.00    96.00    37.1%
test-suite...cations/hexxagon/hexxagon.test     29.00    39.00    34.5%
test-suite...nchmarks/McCat/05-eks/eks.test     100.00   132.00   32.0%
test-suite.../Trimaran/enc-rc4/enc-rc4.test     46.00    59.00    28.3%
test-suite...T2006/473.astar/473.astar.test     160.00   200.00   25.0%
test-suite.../Trimaran/enc-md5/enc-md5.test     8.00     10.00    25.0%
test-suite...telecomm-gsm/telecomm-gsm.test     113.00   139.00   23.0%
test-suite...ediabench/gsm/toast/toast.test     113.00   139.00   23.0%
test-suite...Source/Benchmarks/sim/sim.test     91.00    111.00   22.0%
test-suite...C/CFP2000/179.art/179.art.test     41.00    49.00    19.5%
test-suite...peg2/mpeg2dec/mpeg2decode.test     245.00   279.00   13.9%
test-suite...marks/Olden/health/health.test     16.00    18.00    12.5%
test-suite...ks/Prolangs-C/cdecl/cdecl.test     90.00    101.00   12.2%
test-suite...fice-ispell/office-ispell.test     91.00    100.00   9.9%
test-suite...oxyApps-C/miniGMG/miniGMG.test     430.00   465.00   8.1%
test-suite...lowfish/security-blowfish.test     39.00    42.00    7.7%
test-suite.../Applications/spiff/spiff.test     42.00    45.00    7.1%
test-suite...arks/mafft/pairlocalalign.test     2473.00  2646.00  7.0%
test-suite.../VersaBench/ecbdes/ecbdes.test     29.00    31.00    6.9%
test-suite...nch/beamformer/beamformer.test     220.00   235.00   6.8%
test-suite...CFP2000/177.mesa/177.mesa.test     2110.00  2252.00  6.7%
test-suite...ve-susan/automotive-susan.test     109.00   116.00   6.4%
test-suite...s-C/unix-smail/unix-smail.test     65.00    69.00    6.2%
test-suite...CI_Purple/SMG2000/smg2000.test     1194.00  1265.00  5.9%
test-suite.../Benchmarks/nbench/nbench.test     472.00   500.00   5.9%
test-suite...oxyApps-C/miniAMR/miniAMR.test     248.00   262.00   5.6%
test-suite...quoia/CrystalMk/CrystalMk.test     18.00    19.00    5.6%
test-suite...rks/tramp3d-v4/tramp3d-v4.test     7331.00  7710.00  5.2%
test-suite.../Benchmarks/Bullet/bullet.test     5651.00  5938.00  5.1%
test-suite...ternal/HMMER/hmmcalibrate.test     750.00   788.00   5.1%
test-suite...T2006/456.hmmer/456.hmmer.test     764.00   802.00   5.0%
test-suite...ications/JM/ldecod/ldecod.test     1028.00  1079.00  5.0%
test-suite...CFP2006/444.namd/444.namd.test     1368.00  1434.00  4.8%
test-suite...marks/7zip/7zip-benchmark.test     4471.00  4685.00  4.8%
test-suite...6/464.h264ref/464.h264ref.test     3122.00  3271.00  4.8%
test-suite...pplications/oggenc/oggenc.test     1497.00  1565.00  4.5%
test-suite...T2000/300.twolf/300.twolf.test     742.00   774.00   4.3%
test-suite.../Prolangs-C/loader/loader.test     24.00    25.00    4.2%
test-suite...0.perlbench/400.perlbench.test     1983.00  2058.00  3.8%
test-suite...ications/JM/lencod/lencod.test     4612.00  4785.00  3.8%
test-suite...yApps-C++/PENNANT/PENNANT.test     995.00   1032.00  3.7%
test-suite...arks/VersaBench/dbms/dbms.test     54.00    56.00    3.7%

Reviewers: efriedma, thegameg, samparker, dmgreen, paquette, evandro

Reviewed By: paquette

Differential Revision: https://reviews.llvm.org/D70450
* [AArch64][SVE] Add DAG combine rules for gather loads and sext/zextAndrzej Warzynski2019-12-116-86/+405
Summary: These changes allow us to support sign-extending gather loads with the existing intrinsics (i.e. @llvm.aarch64.sve.ld1.gather.*).

Reviewers: sdesmalen, huntergr, kmclaughlin, efriedma, rengolin, rovka, dancgr, mgudim

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential revision: https://reviews.llvm.org/D70812
* Revert "Reland [AArch64][MachineOutliner] Return address signing for ↵Oliver Stannard2019-12-1111-987/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | outlined functions" This reverts commit cec2d5c17457722113580251c8a045fa9aca9b1b. Reverting because this is still creating outlined functions with return address signing instructions with mismatches SP values. For example: int *volatile v; void foo(int x) { int a[x]; v = &a[0]; v = &a[0]; v = &a[0]; v = &a[0]; v = &a[0]; v = &a[0]; } void bar(int x) { int a[x]; v = 0; v = &a[0]; v = &a[0]; v = &a[0]; v = &a[0]; v = &a[0]; } This generates these two outlined functions, both of which modify SP between the paciasp and retaa instructions: $ clang --target=aarch64-arm-none-eabi -march=armv8.3-a -c test2.c -o - -S -Oz -mbranch-protection=pac-ret+leaf ... OUTLINED_FUNCTION_0: // @OUTLINED_FUNCTION_0 .cfi_sections .debug_frame .cfi_startproc // %bb.0: paciasp .cfi_negate_ra_state mov w8, w0 lsl x8, x8, #2 add x8, x8, #15 // =15 mov x9, sp and x8, x8, #0x7fffffff0 sub x8, x9, x8 mov x29, sp mov sp, x8 adrp x9, v retaa ... OUTLINED_FUNCTION_1: // @OUTLINED_FUNCTION_1 .cfi_startproc // %bb.0: paciasp .cfi_negate_ra_state str x8, [x9, :lo12:v] str x8, [x9, :lo12:v] str x8, [x9, :lo12:v] str x8, [x9, :lo12:v] str x8, [x9, :lo12:v] mov sp, x29 retaa
* [AArch64][SVE] Implement intrinsics for non-temporal loads & storesKerry McLaughlin2019-12-112-0/+183
Summary:
Adds the following intrinsics:
  - llvm.aarch64.sve.ldnt1
  - llvm.aarch64.sve.stnt1

This patch creates masked loads and stores with the MONonTemporal flag set when used with the intrinsics above.

Reviewers: sdesmalen, paulwalker-arm, dancgr, mgudim, efriedma, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, cfe-commits, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71000
* [AArch64] Fix issues with large arrays on stackKiran Chandramohan2019-12-101-0/+49
Summary:
This patch fixes a few issues when large arrays are allocated on the stack. Currently, clang has inconsistent behaviour: for debug builds there is an assertion failure when the array size on the stack is around 2GB, but there is no assertion when the stack is around 8GB. For release builds there is no assertion; the compilation succeeds but generates incorrect code. The incorrect code generated is due to using int/unsigned int instead of their 64-bit counterparts.

This patch:
1) Removes the assertion in the frame legality check.
2) Converts int/unsigned int in some places to the 64-bit variants. This helps in generating correct code and removes the inconsistent behaviour.
3) Adds a test which runs without optimisations.

Reviewers: sdesmalen, efriedma, fhahn, aemerson

Reviewed By: efriedma

Subscribers: eli.friedman, fpetrogalli, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70496
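A minimal IR sketch of the kind of frame this is about; the concrete size is an illustrative assumption, the point being that its offsets no longer fit in 32-bit arithmetic.

  ; ~3 GiB stack object (size assumed for illustration); computing its frame
  ; offsets must be done with 64-bit arithmetic to avoid wrapping.
  define void @big_frame() {
    %buf = alloca [3221225472 x i8], align 16
    %p = getelementptr inbounds [3221225472 x i8], [3221225472 x i8]* %buf, i64 0, i64 0
    store volatile i8 0, i8* %p
    ret void
  }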
* [AArch64][SVE] Add wide compare immediate patternsCullen Rhodes2019-12-101-0/+404
Summary:
Recognize wide compares where the wide operand is a splat of a scalar value in the appropriate range and convert to the immediate variant of the instruction.

Patch by Graham Hunter

Reviewers: sdesmalen, efriedma, dancgr, rovka, rengolin

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71009
* [AArch64][SVE] Implement SPLAT_VECTOR for i1 vectors.Eli Friedman2019-12-091-0/+40
The generated sequence with whilelo is unintuitive, but it's the best I could come up with given the limited number of SVE instructions that interact with scalar registers. The other sequence I was considering was something like dup+cmpne, but an extra scalar instruction seems better than an extra vector instruction.

Differential Revision: https://reviews.llvm.org/D71160
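For context, a sketch of how a splatted i1 typically reaches SPLAT_VECTOR from IR; the insertelement/shufflevector idiom is the usual way to write a scalable splat, and the element count chosen here is an assumption.

  ; Splat a scalar i1 across a scalable vector; the backend now has a
  ; lowering for this (via the whilelo-based sequence) instead of failing.
  define <vscale x 16 x i1> @splat_i1(i1 %b) {
    %ins = insertelement <vscale x 16 x i1> undef, i1 %b, i32 0
    %sp = shufflevector <vscale x 16 x i1> %ins, <vscale x 16 x i1> undef, <vscale x 16 x i32> zeroinitializer
    ret <vscale x 16 x i1> %sp
  }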
* [PGO][PGSO] Instrument the code gen / target passes.Hiroshi Yamauchi2019-12-092-1/+14
Summary:
Split off of D67120.

Add the profile guided size optimization instrumentation / queries in the code gen or target passes. This doesn't enable the size optimizations in those passes yet as they are currently disabled in shouldOptimizeForSize (for non-IR pass queries).

A second try after reverted D71072.

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71149
* [AArch64][GlobalISel] Add support for selection of vector G_SHL with immediates.Amara Emerson2019-12-062-16/+196
Only implemented for the type combinations already supported for G_SHL.

Differential Revision: https://reviews.llvm.org/D71153
* Revert "[PGO][PGSO] Instrument the code gen / target passes."Hiroshi Yamauchi2019-12-062-14/+1
This reverts commit 9a0b5e14075a1f42a72eedb66fd4fde7985d37ac.

This seems to break buildbots.
* [PGO][PGSO] Instrument the code gen / target passes.Hiroshi Yamauchi2019-12-062-1/+14
Summary:
Split off of D67120.

Add the profile guided size optimization instrumentation / queries in the code gen or target passes. This doesn't enable the size optimizations in those passes yet as they are currently disabled in shouldOptimizeForSize (for non-IR pass queries).

Reviewers: davidxl

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71072
* [MBP] Avoid tail duplication if it can't bring benefitGuozhi Wei2019-12-063-4/+4
Current tail duplication integrated in bb layout is designed to increase the fallthrough from a BB's predecessor to its successor, but we have observed cases where duplication doesn't increase fallthrough, or it brings too much size overhead.

To overcome these two issues, in function canTailDuplicateUnplacedPreds I add two checks:
  - make sure there is at least one duplication in the current work set;
  - the number of duplications should not exceed the number of successors.

The modification in hasBetterLayoutPredecessor fixes a bug that a potential predecessor must be at the bottom of a chain.

Differential Revision: https://reviews.llvm.org/D64376
* [AArch64] Fix a bug with jump table generationCullen Rhodes2019-12-061-0/+83
Summary:
When trying to calculate the offsets for the jump table entries we fail to take into account the block alignment, which could be greater than 4 bytes. This led to cases where the jump table offset was too big to fit in a byte.

Reviewers: t.p.northover, sdesmalen, ostannard

Reviewed By: ostannard

Subscribers: ostannard, kristof.beyls, hiraditya, llvm-commits

Committed on behalf of David Sherwood (david-arm)

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70533
* [AArch64][SVE2] Implement while comparison intrinsicsCullen Rhodes2019-12-061-0/+309
Summary:
Adds the following intrinsics:
  * whilege, whilegt, whilehi, whilehs

Reviewers: sdesmalen, rovka, dancgr, efriedma, rengolin, huntergr

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70909
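A hedged sketch of how one of these would be declared and used in IR; the name mangling and scalar operand types follow the usual shape of the SVE while intrinsics (two scalars producing a predicate) and are assumptions, not taken from this log.

  ; (declaration assumed; see note above)
  declare <vscale x 4 x i1> @llvm.aarch64.sve.whilege.nxv4i1.i32(i32, i32)

  ; Build a predicate from the scalar pair (%a, %b); expected to select the
  ; SVE2 WHILEGE instruction.
  define <vscale x 4 x i1> @make_pred(i32 %a, i32 %b) {
    %p = call <vscale x 4 x i1> @llvm.aarch64.sve.whilege.nxv4i1.i32(i32 %a, i32 %b)
    ret <vscale x 4 x i1> %p
  }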
* [AArch64][SVE] Implement integer compare intrinsicsCullen Rhodes2019-12-062-0/+1594
Summary:
Adds intrinsics for the following:
  * cmphs, cmphi
  * cmpge, cmpgt
  * cmpeq, cmpne
  * cmplt, cmple
  * cmplo, cmpls

Includes a minor change to `TLI.getMemValueType` that fixes a crash due to the scalable flag being dropped.

Reviewers: sdesmalen, efriedma, rengolin, rovka, dancgr, huntergr

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70889
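A hedged IR sketch of one of these compares; the mangled name and the predicate-first operand order are assumptions based on the common SVE intrinsic layout rather than details stated here.

  ; (declaration assumed; see note above)
  declare <vscale x 4 x i1> @llvm.aarch64.sve.cmpeq.nxv4i32(<vscale x 4 x i1>, <vscale x 4 x i32>, <vscale x 4 x i32>)

  ; Predicated element-wise equality compare, expected to select CMPEQ.
  define <vscale x 4 x i1> @cmp(<vscale x 4 x i1> %pg, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b) {
    %r = call <vscale x 4 x i1> @llvm.aarch64.sve.cmpeq.nxv4i32(<vscale x 4 x i1> %pg, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b)
    ret <vscale x 4 x i1> %r
  }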
* [AArch64] Fix MUL/SUB fusingSanne Wouda2019-12-051-0/+72
Summary:
When MUL is the first operand to SUB, we can't use MLS because the accumulator should be negated. Emit a NEG of the accumulator and an MLA instead, similar to what we do for FMUL / FSUB fusing.

Reviewers: dmgreen, SjoerdMeijer, fhahn, Gerolf, mstorsjo, asbirlea

Reviewed By: asbirlea

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D71067
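A small IR sketch of the shape in question (vector types chosen for illustration): MLS subtracts the product from the accumulator, so when the MUL is the minuend the accumulator has to be negated first, hence NEG + MLA.

  ; %m - %c cannot use MLS directly (MLS would compute %c - %a*%b); the
  ; expected lowering is a NEG of %c followed by an MLA.
  define <4 x i32> @mul_sub(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c) {
    %m = mul <4 x i32> %a, %b
    %r = sub <4 x i32> %m, %c
    ret <4 x i32> %r
  }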
* [AArch64][SVE] Integer reduction instructions pattern/intrinsics.Danilo Carvalho Grael2019-12-051-0/+400
Added pattern matching/intrinsics for the following SVE instructions:
  - saddv, uaddv
  - smaxv, sminv, umaxv, uminv
  - orv, eorv, andv
* [AArch64][SVE] Implement element count intrinsicsCullen Rhodes2019-12-051-0/+99
Summary:
Adds intrinsics for the following:
  * cntb
  * cnth
  * cntw
  * cntd
  * cntp

Reviewers: sdesmalen, huntergr, dancgr, rengolin, efriedma, rovka

Reviewed By: efriedma

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70967
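A hedged sketch of how a couple of these might look in IR; the signatures (an i32 pattern immediate for cntb, two predicates for cntp) are assumptions about the intrinsic shapes, not quoted from this log.

  ; (declarations assumed; see note above)
  declare i64 @llvm.aarch64.sve.cntb(i32)
  declare i64 @llvm.aarch64.sve.cntp.nxv4i1(<vscale x 4 x i1>, <vscale x 4 x i1>)

  define i64 @counts(<vscale x 4 x i1> %pg, <vscale x 4 x i1> %p) {
    ; Number of 8-bit elements in a vector (pattern 31 = all elements), plus
    ; the number of active lanes in %p under the governing predicate %pg.
    %nb = call i64 @llvm.aarch64.sve.cntb(i32 31)
    %np = call i64 @llvm.aarch64.sve.cntp.nxv4i1(<vscale x 4 x i1> %pg, <vscale x 4 x i1> %p)
    %r = add i64 %nb, %np
    ret i64 %r
  }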
* [AArch64][SVE] Add intrinsics and patterns for logical predicate instructionsDanilo Carvalho Grael2019-12-043-12/+601
Add intrinsics and patterns for the following logical predicate instructions:
  - and, ands, bic, bics, eor, eors
  - sel
  - orr, orrs, orn, orns, nor, nors, nand, nands
* Reland [AArch64][MachineOutliner] Return address signing for outlined functionsDavid Tellenbach2019-12-0511-0/+987
Summary:
Reland after fixing an ASan failure by stopping outlining early if the constraints for return address signing removed too many outlining candidates.

During AArch64 frame lowering, instructions to enable return address signing are inserted into functions if needed. Functions generated during machine outlining don't run through target frame lowering and hence are missing such instructions.

This patch introduces the following changes:

1. If not all functions that potentially participate in function outlining agree on their return address signing scope and their return address signing key, outlining is disabled for these functions.
2. If not all functions that potentially participate in function outlining agree on their support for v8.3A features, outlining is disabled for these functions.
3. If an outlining candidate would outline instructions that modify sp in a way that invalidates return address signing, outlining is disabled for that particular candidate.
4. If all candidate functions agree on the signing scope, signing key and their support for v8.3 features, the outlined function behaves as if it had the same scope and key attributes and as if it would provide the same v8.3A support as the original functions.

Reviewers: ostannard, paquette

Reviewed By: ostannard

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70635
* [GlobalISel] Fix compiler crash lowering G_LOAD in AArch64.Amara Emerson2019-12-041-0/+22
Patch by Daniel Rodríguez Troitiño.

Differential Revision: https://reviews.llvm.org/D70794
* Revert "Reland [AArch64][MachineOutliner] Return address signing for ↵Sterling Augustine2019-12-0411-987/+0
| | | | | | | | | outlined functions" This reverts commit 02760b750b2ffcc0e2f5d78ecb137c80930c42c3. The original commit is not asan clean. http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-fast/builds/37147/steps/check-llvm%20asan/logs/stdio
* Reland [AArch64][MachineOutliner] Return address signing for outlined functionsDavid Tellenbach2019-12-0411-0/+987
Summary:
Reland after fixing a bug that allowed outlining of SP modifying instructions that invalidated return address signing.

During AArch64 frame lowering, instructions to enable return address signing are inserted into functions if needed. Functions generated during machine outlining don't run through target frame lowering and hence are missing such instructions.

This patch introduces the following changes:

1. If not all functions that potentially participate in function outlining agree on their return address signing scope and their return address signing key, outlining is disabled for these functions.
2. If not all functions that potentially participate in function outlining agree on their support for v8.3A features, outlining is disabled for these functions.
3. If an outlining candidate would outline instructions that modify sp in a way that invalidates return address signing, outlining is disabled for that particular candidate.
4. If all candidate functions agree on the signing scope, signing key and their support for v8.3 features, the outlined function behaves as if it had the same scope and key attributes and as if it would provide the same v8.3A support as the original functions.

Reviewers: ostannard, paquette

Reviewed By: ostannard

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70635
* [SVE][AArch64] Adding patterns for while intrinsics.Mikhail Gudim2019-12-041-0/+309
* [AArch64][SVE] Implement reversal intrinsicsCullen Rhodes2019-12-041-0/+166
Summary:
Adds intrinsics for the following:
  * rbit
  * revb
  * revh
  * revw

Patterns are also defined to map the 'llvm.bswap.*' intrinsic to the SVE revb instruction.

Reviewers: sdesmalen, huntergr, dancgr, rengolin, efriedma, rovka

Reviewed By: sdesmalen

Subscribers: tschuett, kristof.beyls, hiraditya, rkruppe, psnobl, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70960
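A small IR sketch of the generic-bswap-to-revb mapping mentioned above; the element type used here is an illustrative assumption.

  ; (element type assumed for illustration)
  declare <vscale x 8 x i16> @llvm.bswap.nxv8i16(<vscale x 8 x i16>)

  ; Byte-swap each i16 lane of a scalable vector; with these patterns this
  ; should select the SVE REVB instruction.
  define <vscale x 8 x i16> @bswap_lanes(<vscale x 8 x i16> %a) {
    %r = call <vscale x 8 x i16> @llvm.bswap.nxv8i16(<vscale x 8 x i16> %a)
    ret <vscale x 8 x i16> %r
  }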
* [MacroFusion] Limit the max fused number as 2 to reduce the dependencyQingShan Zhang2019-12-041-6/+3
This is the example:

  int foo(int a, int b, int c, int d) {
    return a + b + c + d;
  }

[ASCII dependency graph elided: A, B, C, D are the four inputs; ADD1 = A + B, ADD2 = ADD1 + C, ADD3 = ADD2 + D, with fusion edges between the ADD nodes.]

We would also need to create an artificial edge from ADD1 to A if https://reviews.llvm.org/D69998 is landed. That will force Node A to be scheduled before ADD1 and ADD2. But in fact, it is ok to schedule Node A in between ADD3 and ADD2, as ADD3 and ADD2 are NOT a fusion pair because ADD2 has been matched to ADD1. We are creating these unnecessary dependency edges that override the heuristics.

Differential Revision: https://reviews.llvm.org/D70066
* [AArch64] Fix over-eager fusing of NEON SIMD MUL/ADDSanne Wouda2019-12-032-31/+43
Summary:
The ISel pattern for SIMD MLA is a bit too eager: it replaces the ADD with an MLA even when the MUL cannot be eliminated, e.g. when it has another use. An MLA usually has a higher latency than an ADD (and there are fewer pipes available that can execute it), so trading an MLA for an ADD is not great.

ISel is not taking the number of uses of the MUL result into account, nor any other factors such as the length of the critical path or other resource pressure. The MachineCombiner is able to make these judgments, so this patch ports the ISel pattern for MUL/ADD fusing to the MachineCombiner. Similarly for MUL/SUB -> MLS, as well as the indexed variants.

The change has no impact on SPEC CPU intrate nor fprate.

Reviewers: dmgreen, SjoerdMeijer, fhahn, Gerolf

Subscribers: kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D70673
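A minimal IR sketch of the case the summary describes, with illustrative types: the MUL has a second use, so folding it into an MLA would not remove the MUL and merely replaces a cheap ADD with a more expensive MLA.

  define <4 x i32> @mla_not_profitable(<4 x i32> %a, <4 x i32> %b, <4 x i32> %c, <4 x i32>* %p) {
    %m = mul <4 x i32> %a, %b
    store <4 x i32> %m, <4 x i32>* %p   ; second use keeps the MUL alive
    %r = add <4 x i32> %m, %c           ; better left as ADD than turned into MLA
    ret <4 x i32> %r
  }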