path: root/llvm/test/Transforms/InstCombine/ARM
Commit message | Author | Age | Files | Lines
* [ARM,MVE] Add an InstCombine rule permitting VPNOT. | Simon Tatham | 2019-12-02 | 1 | -0/+94
  Summary:

  If a user writing C code using the ACLE MVE intrinsics generates a
  predicate and then complements it, the resulting IR will use the
  `pred_v2i` IR intrinsic to turn some `<n x i1>` vector into a 16-bit
  integer; complement that integer; and convert back. This will generate
  machine code that moves the predicate out of the `P0` register,
  complements it in an integer GPR, and moves it back in again.

  This InstCombine rule replaces `i2v(~v2i(x))` with a direct complement
  of the original predicate vector, which we can already
  instruction-select as the VPNOT instruction, which complements P0 in
  place.

  Reviewers: ostannard, MarkMurrayARM, dmgreen
  Reviewed By: dmgreen
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70484
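A minimal IR sketch of the pattern this rule targets; the `.v4i1` overload suffix, value names, and function name are illustrative, not taken from the patch (the commit names only the base `pred_v2i`/`pred_i2v` intrinsics):

    declare i32 @llvm.arm.mve.pred.v2i.v4i1(<4 x i1>)
    declare <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32)

    define <4 x i1> @vpnot_sketch(<4 x i1> %pred) {
      ; i2v(~v2i(x)): predicate -> integer, complement, integer -> predicate
      %int = call i32 @llvm.arm.mve.pred.v2i.v4i1(<4 x i1> %pred)
      %not = xor i32 %int, 65535
      %inv = call <4 x i1> @llvm.arm.mve.pred.i2v.v4i1(i32 %not)
      ; the rule folds the three instructions above to a direct complement:
      ;   %inv = xor <4 x i1> %pred, <i1 true, i1 true, i1 true, i1 true>
      ret <4 x i1> %inv
    }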
* [ARM,MVE] Add InstCombine rules for pred_i2v / pred_v2i. | Simon Tatham | 2019-11-18 | 1 | -0/+236
  If you're writing C code using the ACLE MVE intrinsics that passes the
  result of a vcmp as input to a predicated intrinsic, e.g.

      mve_pred16_t pred = vcmpeqq(v1, v2);
      v_out = vaddq_m(v_inactive, v3, v4, pred);

  then clang's codegen for the compare intrinsic will create calls to
  `@llvm.arm.mve.pred.v2i` to convert the output of `icmp` into an
  `mve_pred16_t` integer representation, and then the next intrinsic will
  call `@llvm.arm.mve.pred.i2v` to convert it straight back again. This
  will be visible in the generated code as a `vmrs`/`vmsr` pair that
  moves the predicate value pointlessly out of `p0` and back into it
  again.

  To prevent that, I've added InstCombine rules to remove round trips of
  the form `v2i(i2v(x))` and `i2v(v2i(x))`. Also I've taught InstCombine
  about the known and demanded bits of those intrinsics. As a result, you
  now get just the generated code you wanted:

      vpt.u16 eq, q1, q2
      vaddt.u16 q0, q3, q4

  Reviewers: ostannard, MarkMurrayARM, dmgreen
  Reviewed By: dmgreen
  Subscribers: kristof.beyls, hiraditya, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D70313
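A minimal sketch of one of the round trips being removed, assuming the `.v8i1` overload suffix (only the base intrinsic names appear in the commit message):

    declare i32 @llvm.arm.mve.pred.v2i.v8i1(<8 x i1>)
    declare <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32)

    define <8 x i1> @round_trip_sketch(<8 x i1> %pred) {
      ; i2v(v2i(x)): predicate -> integer -> predicate, a lossless round trip
      %int = call i32 @llvm.arm.mve.pred.v2i.v8i1(<8 x i1> %pred)
      %vec = call <8 x i1> @llvm.arm.mve.pred.i2v.v8i1(i32 %int)
      ; the new rules fold %vec to %pred, so no vmrs/vmsr pair is emitted
      ret <8 x i1> %vec
    }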
* [SimplifyLibCalls] Mark known arguments with nonnull | David Bolvansky | 2019-09-17 | 1 | -2/+2
  Reviewers: efriedma, jdoerfert
  Reviewed By: jdoerfert
  Subscribers: ychen, rsmith, joerg, aaron.ballman, lebedev.ri, uenoku,
  jdoerfert, hfinkel, javed.absar, spatel, dmgreen, llvm-commits
  Differential Revision: https://reviews.llvm.org/D53342
  llvm-svn: 372091
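A hypothetical before/after for this change; the function and the exact attribute placement are illustrative, since the commit message says only that known arguments get `nonnull`:

    declare i32 @memcmp(i8*, i8*, i64)

    define i32 @cmp16(i8* %a, i8* %b) {
      ; a memcmp with nonzero size must read through both pointers, so
      ; SimplifyLibCalls can mark the call-site arguments nonnull:
      %call = call i32 @memcmp(i8* nonnull %a, i8* nonnull %b, i64 16)
      ret i32 %call
    }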
* [SimplifyLibCalls] Add dereferenceable bytes from known callsites | David Bolvansky | 2019-08-13 | 1 | -24/+39
  Summary:

      int mm(char *a, char *b) {
        return memcmp(a,b,16);
      }

  Currently:

      define dso_local i32 @mm(i8* nocapture readonly %a, i8* nocapture readonly %b) local_unnamed_addr #1 {
      entry:
        %call = tail call i32 @memcmp(i8* %a, i8* %b, i64 16)
        ret i32 %call
      }

  After patch:

      define dso_local i32 @mm(i8* nocapture readonly %a, i8* nocapture readonly %b) local_unnamed_addr #1 {
      entry:
        %call = tail call i32 @memcmp(i8* dereferenceable(16) %a, i8* dereferenceable(16) %b, i64 16)
        ret i32 %call
      }

  Reviewers: jdoerfert, efriedma
  Reviewed By: jdoerfert
  Subscribers: javed.absar, spatel, llvm-commits
  Tags: #llvm
  Differential Revision: https://reviews.llvm.org/D66079
  llvm-svn: 368657
* Revert "Temporarily Revert "Add basic loop fusion pass.""Eric Christopher2019-04-179-0/+531
  The reversion apparently deleted the test/Transforms directory. Will be
  re-reverting again.

  llvm-svn: 358552
* Temporarily Revert "Add basic loop fusion pass." | Eric Christopher | 2019-04-17 | 9 | -531/+0
  As it's causing some bot failures (and per request from kbarton).

  This reverts commit r358543/ab70da07286e618016e78247e4a24fcb84077fda.

  llvm-svn: 358546
* [InstCombine, ARM] Convert vld1 to llvm load | Alexandros Lamprineas | 2018-05-31 | 1 | -0/+118
  Convert a vector load intrinsic into an llvm load instruction. This is
  beneficial when the underlying object being addressed comes from a
  constant, since we get constant-folding for free.

  Differential Revision: https://reviews.llvm.org/D46273
  llvm-svn: 333643
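A sketch of the rewrite, assuming this overload of the NEON `vld1` intrinsic (the commit names neither the overload nor the alignment handling):

    declare <2 x i64> @llvm.arm.neon.vld1.v2i64.p0i8(i8*, i32)

    define <2 x i64> @vld1_sketch(i8* %ptr) {
      ; vld1 of 16 bytes; the second argument is the alignment
      %vec = call <2 x i64> @llvm.arm.neon.vld1.v2i64.p0i8(i8* %ptr, i32 8)
      ; InstCombine replaces the call with an ordinary load, which later
      ; constant-folds whenever %ptr addresses a constant:
      ;   %cast = bitcast i8* %ptr to <2 x i64>*
      ;   %vec  = load <2 x i64>, <2 x i64>* %cast, align 8
      ret <2 x i64> %vec
    }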
* [InstCombine, ARM, AArch64] Convert table lookup to shuffle vector | Alexandros Lamprineas | 2018-05-30 | 1 | -0/+35
  Turning a table lookup intrinsic into a shuffle vector instruction can
  be beneficial. If the mask used for the lookup is the constant vector
  {7,6,5,4,3,2,1,0}, then the back-end generates byte reverse
  instructions instead.

  Differential Revision: https://reviews.llvm.org/D46133
  llvm-svn: 333550
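A sketch using the ARM `vtbl1` intrinsic; the intrinsic choice and names are illustrative (the commit also covers the AArch64 equivalents):

    declare <8 x i8> @llvm.arm.neon.vtbl1(<8 x i8>, <8 x i8>)

    define <8 x i8> @tbl_sketch(<8 x i8> %vec) {
      ; table lookup with the constant index vector {7,6,5,4,3,2,1,0}
      %tbl = call <8 x i8> @llvm.arm.neon.vtbl1(<8 x i8> %vec, <8 x i8> <i8 7, i8 6, i8 5, i8 4, i8 3, i8 2, i8 1, i8 0>)
      ; InstCombine turns the call into the equivalent shufflevector, which
      ; the back-end can then select as a byte-reverse instruction:
      ;   %tbl = shufflevector <8 x i8> %vec, <8 x i8> undef, <8 x i32> <i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
      ret <8 x i8> %tbl
    }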
* [InstCombine] Combine XOR and AES instructions on ARM/ARM64. | Chad Rosier | 2018-05-24 | 1 | -0/+43
  The ARM/ARM64 AESE and AESD instructions have a builtin XOR as the
  first step in the instruction. Therefore, if the AES key is zero and
  the AES data was previously XORed, the two operations can be combined
  into a single instruction.

  Differential Revision: https://reviews.llvm.org/D47239
  Patch by Michael Brase!
  llvm-svn: 333193
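A sketch of the fold, assuming the ARM `aese` intrinsic name (AESD is handled the same way; value and function names are illustrative):

    declare <16 x i8> @llvm.arm.neon.aese(<16 x i8>, <16 x i8>)

    define <16 x i8> @aes_sketch(<16 x i8> %data, <16 x i8> %key) {
      ; AESE xors its two operands as its first step, so an explicit xor
      ; feeding an AESE with a zero key operand is redundant
      %xored = xor <16 x i8> %data, %key
      %crypt = call <16 x i8> @llvm.arm.neon.aese(<16 x i8> %xored, <16 x i8> zeroinitializer)
      ; InstCombine folds the pair to:
      ;   %crypt = call <16 x i8> @llvm.arm.neon.aese(<16 x i8> %data, <16 x i8> %key)
      ret <16 x i8> %crypt
    }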
* InstCombine: Move tests that use target intrinsics into subdirectories | Justin Bogner | 2017-05-13 | 4 | -0/+106
  Tests with target intrinsics are inherently target specific, so it
  doesn't actually make sense to run them if we've excluded their target.

  llvm-svn: 302979
* Enable simplify libcalls for ARM PCS | Sam Parker | 2016-09-13 | 2 | -0/+229
  Teach SimplifyLibCalls that it can treat functions annotated with apcs,
  aapcs, or aapcs_vfp like normal C functions if they only take and
  return integer or pointer values, and the target is not iOS.

  Differential Revision: https://reviews.llvm.org/D24453
  llvm-svn: 281322
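A sketch of one fold this enables; the particular libcall and simplification are illustrative, since the commit only changes which calling conventions SimplifyLibCalls accepts:

    @s = constant [4 x i8] c"abc\00"

    declare arm_aapcscc i32 @strcmp(i8*, i8*)

    define arm_aapcscc i32 @cmp_same() {
      ; both arguments are the same pointer, so strcmp must return 0;
      ; with aapcs now treated like the C convention, the fold fires
      %p = getelementptr inbounds [4 x i8], [4 x i8]* @s, i32 0, i32 0
      %res = call arm_aapcscc i32 @strcmp(i8* %p, i8* %p)
      ret i32 %res ; folds to: ret i32 0
    }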