path: root/llvm/test/CodeGen/ARM/long_shift.ll
Commit message | Author | Date | Files | Lines (-/+)
* [ARM] Make -mcpu=generic schedule for an in-order core (Cortex-A8). | Kristof Beyls | 2017-06-28 | 1 | -8/+8
    The benchmarking summarized in
    http://lists.llvm.org/pipermail/llvm-dev/2017-May/113525.html showed this
    is beneficial for a wide range of cores. As is to be expected, quite a few
    small adaptations are needed to the regression tests, as the difference in
    scheduling results in:
    - Quite a few small instruction schedule differences.
    - A few changes in register allocation decisions caused by different
      instruction schedules.
    - A few changes in IfConversion decisions, due to a difference in
      instruction schedule and/or the estimated cost of a branch mispredict.
    llvm-svn: 306514
* ARM big endian function argument passing | Christian Pirker | 2014-05-08 | 1 | -18/+40
    llvm-svn: 208316
* ARM: fixup more tests to specify the target more explicitly | Saleem Abdulrasool | 2014-04-03 | 1 | -1/+1
    This changes the tests that were targeting ARM EABI to explicitly specify
    the environment rather than relying on the default. This breaks with the
    new Windows on ARM support when running the tests on Windows, where the
    default environment is no longer EABI.
    Take the opportunity to avoid a pointless redirect (helps when trying to
    debug by providing a command line invocation which can be copied and
    pasted) and to remove a few greps in favour of FileCheck.
    llvm-svn: 205541
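
The change described here amounts to pinning the environment in the test's
RUN line. A representative before/after sketch, assuming a plain FileCheck
setup; the exact triple and flags used in long_shift.ll are not shown on this
page:

    ; Before: environment left to the host default
    ; RUN: llc -march=arm %s -o - | FileCheck %s
    ; After: environment spelled out in the triple
    ; RUN: llc -mtriple=arm-eabi %s -o - | FileCheck %s
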
* Revert "Tests: Be less dependent on a specific schedule/regalloc"Matthias Braun2013-10-111-6/+6
| | | | | | | | | This reverts r192454 Apparently FileCheck isn't as smart as I though and does not enforce a topological order between variable defs+uses. llvm-svn: 192472
* Tests: Be less dependent on a specific schedule/regalloc | Matthias Braun | 2013-10-11 | 1 | -6/+6
    llvm-svn: 192454
* Tests: Use CHECK-LABEL where possible | Matthias Braun | 2013-10-10 | 1 | -4/+4
    llvm-svn: 192403
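
CHECK-LABEL pins each group of checks to a single function's output, so a
match cannot leak across function boundaries. A minimal sketch of the
pattern; the function name and the checked mnemonic are assumptions, not the
literal contents of long_shift.ll:

    ; CHECK-LABEL: f0:
    ; CHECK: lsl
    define i64 @f0(i64 %A, i64 %B) {
      %t = shl i64 %A, %B
      ret i64 %t
    }
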
* Add MachineInstrBundle.h and MachineInstrBundle.cpp | Evan Cheng | 2011-12-14 | 1 | -2/+2
    - Add MachineInstrBundle.h and MachineInstrBundle.cpp. This includes a
      function to finalize MI bundles (i.e. add the BUNDLE instruction and
      compute the register def and use lists of the BUNDLE instruction) and a
      pass to unpack bundles.
    - Teach more of the MachineBasicBlock and MachineInstr methods to be
      bundle aware.
    - Switch Thumb2 IT blocks to MI bundles and delete the hazard recognizer
      hack to prevent IT blocks from being broken apart.
    llvm-svn: 146542
* Cmp peephole optimization isn't always safe for signed arithmetic. | Evan Cheng | 2011-03-23 | 1 | -2/+4
        int tries = INT_MAX;
        while (tries > 0) {
            tries--;
        }
    The check should be:
        subs r4, #1
        cmp  r4, #0
        bgt  LBB0_1
    The subs can set the overflow V bit when r4 is INT_MAX+1 (which loop
    canonicalization apparently does in this case). cmp #0 would have cleared
    it while not changing the N and Z bits. Since BGT is dependent on the V
    bit, i.e. (N == V) && !Z, it is not safe to eliminate the cmp #0.
    rdar://9172742
    llvm-svn: 128179
* Properly pseudo-ize MOVCCr and MOVCCs. | Jim Grosbach | 2011-03-10 | 1 | -2/+2
    llvm-svn: 127434
* When we look at instructions to convert to setting the 's' flag, we need to look at more than those which define CPSR. | Bill Wendling | 2010-11-01 | 1 | -2/+0
    You can have this situation:
        (1) subs ...
        (2) sub r6, r5, r4
        (3) movge ...
        (4) cmp r6, 0
        (5) movge ...
    We cannot convert (2) to "subs" because (3) is using the CPSR set by (1).
    There's an analogous situation here:
        (1) sub r1, r2, r3
        (2) sub r4, r5, r6
        (3) cmp r4, ...
        (5) movge ...
        (6) cmp r1, ...
        (7) movge ...
    We cannot convert (1) to "subs" because of the intervening use of CPSR.
    llvm-svn: 117950
* More tests to XFAIL. | Bill Wendling | 2010-11-01 | 1 | -0/+2
    The arm-and-txt-peephole.ll test passes even when the peephole optimizer
    is disabled. That's not good at all.
    llvm-svn: 117905
* Refactor the MOVsr[al]_flag and RRX pseudo-instructions to really be pseudos, and let the ARMExpandPseudoInsts pass fix them up into the real (MOVs) instruction form. | Jim Grosbach | 2010-10-14 | 1 | -1/+1
    llvm-svn: 116534
* Tweak the ARM backend to use the RRX mnemonic instead of the 'mov a, b, rrx' pseudonym. | Jim Grosbach | 2010-10-14 | 1 | -1/+1
    llvm-svn: 116512
* Update tests to handle MC-inst instruction printing of shift operations. | Jim Grosbach | 2010-09-17 | 1 | -3/+3
    The legacy asm printer uses instructions of the form "mov r0, r0, lsl #3",
    while the MC-instruction printer uses the form "lsl r0, r0, #3". The
    latter mnemonic is correct and preferred according to the ARM
    documentation (A8.6.98). The former are pseudo-instructions for the
    latter.
    llvm-svn: 114221
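
In test terms this change amounts to rewriting the expected mnemonics in the
CHECK lines. A sketch of the kind of update involved; the concrete registers
and shift amount are illustrative, not taken from long_shift.ll:

    ; Old expectation, legacy printer spelling:
    ; CHECK: mov r0, r0, lsl #3
    ; New expectation, MC printer spelling (ARM ARM A8.6.98):
    ; CHECK: lsl r0, r0, #3
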
* Update test to match output of optimize compares for ARM. | Bill Wendling | 2010-08-11 | 1 | -4/+2
    llvm-svn: 110765
* Teach EmitLiveInCopies to omit copies for unused virtual registers, and to clean up unused incoming physregs from the live-in list. | Dan Gohman | 2010-06-24 | 1 | -4/+4
    llvm-svn: 106805
* Run codegen dce pass for all targets at all optimization levels. | Evan Cheng | 2010-02-06 | 1 | -4/+4
    Previously it was only run for x86 with fastisel. I've found it to be very
    effective at eliminating some obvious dead code resulting from formal
    parameter lowering, especially when tail call optimization eliminated the
    need for some of the loads from fixed frame objects. It also shrinks a
    number of the tests. A couple of tests no longer make sense and are now
    eliminated.
    llvm-svn: 95493
* Update test to be more explicit about what instruction sequences are expected for each operation. | Jim Grosbach | 2009-10-31 | 1 | -3/+16
    llvm-svn: 85689
* Expand 64-bit logical shift right inline | Jim Grosbach | 2009-10-31 | 1 | -1/+1
    llvm-svn: 85687
* Expand 64-bit arithmetic shift right inline | Jim Grosbach | 2009-10-31 | 1 | -1/+1
    llvm-svn: 85685
* Expand 64-bit left shift inline rather than using the libcall. | Jim Grosbach | 2009-10-31 | 1 | -1/+1
    For now, this is unconditional. Making it still use the libcall when
    optimizing for size would be a good adjustment.
    llvm-svn: 85675
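
Together with the two entries above, this covers inline expansion of all
three 64-bit shift flavours (shl, lshr, ashr). The actual contents of
long_shift.ll are not reproduced on this page, so the following is only an
illustrative sketch of such a test, with an assumed triple, assumed function
names, and deliberately loose checks that the AEABI shift helpers are not
called:

    ; RUN: llc -mtriple=arm-eabi %s -o - | FileCheck %s

    ; CHECK-LABEL: f_shl:
    ; CHECK-NOT: __aeabi_llsl
    define i64 @f_shl(i64 %a, i64 %b) {
      %r = shl i64 %a, %b
      ret i64 %r
    }

    ; CHECK-LABEL: f_lshr:
    ; CHECK-NOT: __aeabi_llsr
    define i64 @f_lshr(i64 %a, i64 %b) {
      %r = lshr i64 %a, %b
      ret i64 %r
    }

    ; CHECK-LABEL: f_ashr:
    ; CHECK-NOT: __aeabi_lasr
    define i64 @f_ashr(i64 %a, i64 %b) {
      %r = ashr i64 %a, %b
      ret i64 %r
    }
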
* Add missing colons for FileCheck. | Benjamin Kramer | 2009-10-31 | 1 | -1/+1
    llvm-svn: 85674
* Convert to FileCheck | Jim Grosbach | 2009-10-31 | 1 | -5/+9
    llvm-svn: 85673
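
Converting a test to FileCheck replaces grep pipelines in the RUN line with
CHECK directives placed next to the IR they verify; the directive needs its
trailing colon, which is what the fix listed just above addressed. An
illustrative before/after, not the literal content of long_shift.ll:

    ; Before: grep over the whole output
    ; RUN: llc < %s -march=arm | grep lsl
    ; After: FileCheck directives
    ; RUN: llc < %s -march=arm | FileCheck %s
    ; CHECK: lsl
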
* Eliminate more uses of llvm-as and llvm-dis. | Dan Gohman | 2009-09-09 | 1 | -1/+1
    llvm-svn: 81293
* Move thumb and thumb2 tests into separate directories. | Evan Cheng | 2009-06-24 | 1 | -1/+0
    llvm-svn: 74068
* Convert tests using "| wc -l | grep ..." to use the count script. | Dan Gohman | 2007-08-15 | 1 | -1/+1
    llvm-svn: 41097
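
The count script asserts an exact number of matching lines and fails on any
mismatch, which the wc/grep idiom did not do reliably. A hedged sketch of the
rewrite; the grepped pattern and expected count are made up for illustration:

    ; Before
    ; RUN: llvm-as < %s | llc -march=arm | grep lsl | wc -l | grep 2
    ; After
    ; RUN: llvm-as < %s | llc -march=arm | grep lsl | count 2
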
* For PR1319: | Reid Spencer | 2007-04-16 | 1 | -5/+5
    Remove && from the end of the lines to prevent tests from throwing run
    lines into the background. Also, clean up places where the same command
    is run multiple times by using a temporary file.
    llvm-svn: 36142
* -march=arm -enable-thumb => -march=thumb | Evan Cheng | 2007-02-23 | 1 | -1/+1
    llvm-svn: 34522
* Changes to support making the shift instructions be true BinaryOperators. | Reid Spencer | 2007-02-02 | 1 | -8/+8
    This feature is needed in order to support shifts of more than 255 bits
    on large integer types. This changes the syntax for llvm assembly to make
    shl, ashr and lshr instructions look like a binary operator:
        shl i32 %X, 1
    instead of
        shl i32 %X, i8 1
    Additionally, this should help a few passes perform additional
    optimizations.
    llvm-svn: 33776
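
For a 64-bit test such as this one, the new syntax means the shift amount is
written in the operand's own type rather than as a fixed i8. A small sketch
under that assumption; the function name and constant are illustrative:

    ; Old form (pre-r33776): %r = shl i64 %a, i8 2
    ; New form: both operands share the instruction's type
    define i64 @shift_example(i64 %a) {
      %r = shl i64 %a, 2
      ret i64 %r
    }
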
* Merge tests. | Evan Cheng | 2007-01-27 | 1 | -7/+28
    llvm-svn: 33560
* ARM test cases contributed by Apple. | Evan Cheng | 2007-01-19 | 1 | -0/+10
    llvm-svn: 33354