path: root/llvm/test/CodeGen/ARM/shift-i64.ll
Commit message | Author | Age | Files | Lines
* [ARM] Favour PL/MI over GE/LT when possible | David Green | 2019-07-04 | 1 | -12/+9
  The ARM condition code GE means N==V (and LT means N!=V). If the source of
  the flags cannot set V (overflow), such as a cmp against #0, then we can use
  the simpler PL and MI conditions, which only check N. Because PL/MI are
  simpler than GE/LT, other passes like the peephole optimiser have an easier
  time optimising away the redundant CMPs. The exception is the VSEL
  instruction, which cannot take the PL code, so there the transform favours GE.

  Differential Revision: https://reviews.llvm.org/D64160

  llvm-svn: 365117
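  A minimal LLVM IR sketch of the kind of pattern this change targets (the
  function name and compile invocation are illustrative, not taken from the
  test file): a signed compare against zero whose flags come from a CMP #0,
  which can never set the V flag, so the backend is free to predicate on
  MI/PL rather than LT/GE.

    ; Hypothetical reproducer; compile with something like
    ; llc -mtriple=armv7-a (assumed invocation) and inspect the
    ; predicated instructions in the output.
    define i32 @select_on_slt_zero(i32 %x, i32 %a, i32 %b) {
    entry:
      ; The CMP #0 that materialises this compare cannot set V, so the
      ; "slt 0" condition depends only on the N flag and MI/PL suffice.
      %cmp = icmp slt i32 %x, 0
      %sel = select i1 %cmp, i32 %a, i32 %b
      ret i32 %sel
    }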
* [ARM] Added testing for D64160. NFC | David Green | 2019-07-04 | 1 | -48/+50
  Adds some extra vsel testing and regenerates long shift and saturation
  bitop tests.

  llvm-svn: 365116
* [ARM] Expand long shifts for Thumb1 to __aeabi_ calls | Weiming Zhao | 2018-01-24 | 1 | -0/+15
  Summary:
  For long shifts, the inlined version takes about 20 instructions on Thumb1.
  To avoid the code bloat, expand to __aeabi_ calls if the target is Thumb1.

  Reviewers: samparker

  Reviewed By: samparker

  Subscribers: samparker, aemerson, javed.absar, kristof.beyls, llvm-commits

  Differential Revision: https://reviews.llvm.org/D42401

  llvm-svn: 323354
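  A hedged sketch of the sort of function this expansion applies to (the
  function name @shl64_var and the exact triple are assumptions): a variable
  64-bit shift that, on a Thumb1-only target, is expected to become a call to
  the run-time helper __aeabi_llsl instead of a long inline sequence.

    ; Hypothetical test input; for a Thumb1 target such as
    ; -mtriple=thumbv6m-none-eabi (assumed) this should lower to
    ; a libcall: bl __aeabi_llsl
    define i64 @shl64_var(i64 %val, i64 %amt) {
    entry:
      %res = shl i64 %val, %amt
      ret i64 %res
    }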
* [ARM] Make -mcpu=generic schedule for an in-order core (Cortex-A8). | Kristof Beyls | 2017-06-28 | 1 | -1/+1
  The benchmarking summarized in
  http://lists.llvm.org/pipermail/llvm-dev/2017-May/113525.html showed this is
  beneficial for a wide range of cores.

  As is to be expected, quite a few small adaptations are needed to the
  regression tests, as the difference in scheduling results in:
  - quite a few small instruction schedule differences;
  - a few changes in register allocation decisions caused by different
    instruction schedules;
  - a few changes in IfConversion decisions, due to a difference in
    instruction schedule and/or the estimated cost of a branch mispredict.

  llvm-svn: 306514
* ARM: fix CodeGen for 64-bit shifts. | Tim Northover | 2016-11-16 | 1 | -0/+59
  One half of the shifts obviously needed conditional selection based on
  whether the shift amount is more than 32 bits, but leaving the other half as
  the natural shift isn't acceptable either: it is undefined behaviour to shift
  a 32-bit value by more than 31.

  llvm-svn: 287149
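  A hedged sketch of the lowering issue, using a logical right shift as the
  example (the function name and the expansion written in the comments are
  illustrative, not the backend's literal output): each 32-bit half of the
  result needs a select on whether the amount reaches 32, and the cross-half
  term has to be written so that no 32-bit value is ever shifted by 32.

    ; Hypothetical reproducer. A correct expansion of the two result halves
    ; is roughly (assumption, not the exact instruction sequence emitted):
    ;   lo' = amt >= 32 ? hi >> (amt - 32)
    ;                   : (lo >> amt) | (hi << 1 << (31 - amt))
    ;   hi' = amt >= 32 ? 0 : hi >> amt
    ; The "hi << 1 << (31 - amt)" form avoids shifting a 32-bit value by 32
    ; when amt is 0, which would be undefined behaviour.
    define i64 @lshr64_var(i64 %val, i64 %amt) {
    entry:
      %res = lshr i64 %val, %amt
      ret i64 %res
    }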