| author | Ahmed Bougacha <ahmed.bougacha@gmail.com> | 2017-01-23 21:10:05 +0000 |
|---|---|---|
| committer | Ahmed Bougacha <ahmed.bougacha@gmail.com> | 2017-01-23 21:10:05 +0000 |
| commit | cfb384d39d09e4bf4e79bb0436c25e77f5d4b063 (patch) | |
| tree | 40cb7b270683887df3a6f25f551c7224fc45ce5e /llvm/test/CodeGen | |
| parent | 69af145767490dcf8fed92f6544fb621c2553bca (diff) | |
| download | bcm5719-llvm-cfb384d39d09e4bf4e79bb0436c25e77f5d4b063.tar.gz bcm5719-llvm-cfb384d39d09e4bf4e79bb0436c25e77f5d4b063.zip | |
[AArch64][GlobalISel] Legalize narrow scalar ops again.
Since r279760, we've been marking operations on narrow integer
types that have wider legal equivalents (for instance, G_ADD s8)
as legal. Compared to legalizing these operations, this reduced
the number of extends/truncates required, but it was always a weird
legalization decision made at selection time.
So far, we haven't been able to formalize it in a way that permits the
selector generated from SelectionDAG patterns to be sufficient.
Using a wide instruction (say, s64) when a narrower instruction
exists (s32) would introduce register class incompatibilities (when
one narrow generic instruction is selected to the wider variant, but
another is selected to the narrower variant).
It's also impractical to limit which narrow operations are matched for
which instruction, as restricting "narrow selection" to ranges of types
clashes with potentially incompatible instruction predicates.
Concerns were also raised regarding MIPS64's sign-extended register
assumptions, as well as wrapping behavior.
See discussions in https://reviews.llvm.org/D26878.
Instead, legalize the operations.
Should we ever revert to selecting these narrow operations, we should
try to represent this more accurately: for instance, by separating
a "concrete" type on operations, and an "underlying" type on vregs, we
could move the "this narrow-looking op is really legal" decision to the
legalizer, and let the selector use the "underlying" vreg type only,
which would be guaranteed to map to a register class.
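The legalization chosen here rewrites each narrow op as widen-compute-truncate, as the updated CHECK lines in the tests below show: G_ANYEXT both operands to s32, perform the operation at s32, then G_TRUNC the result back to the narrow type. This is sound for ops like add, sub, mul, and the bitwise ops because the low bits of the result depend only on the low bits of the operands, so the unspecified high bits introduced by G_ANYEXT never leak into the truncated result. A small Python sketch (not part of the patch) that models G_ANYEXT's unspecified high bits as random garbage and checks this property:

```python
import random

WIDE, NARROW = 32, 8
WMASK, NMASK = (1 << WIDE) - 1, (1 << NARROW) - 1

def anyext(x):
    # G_ANYEXT: the low NARROW bits are x, the high bits are unspecified
    # (modeled here as random garbage).
    return (random.getrandbits(WIDE - NARROW) << NARROW) | x

def trunc(x):
    # G_TRUNC: keep only the low NARROW bits.
    return x & NMASK

def widened(op, a, b):
    # Legalized form: anyext the operands, compute at WIDE bits, truncate back.
    return trunc(op(anyext(a), anyext(b)) & WMASK)

ops = {
    "G_ADD": lambda a, b: a + b,
    "G_SUB": lambda a, b: a - b,
    "G_MUL": lambda a, b: a * b,
    "G_AND": lambda a, b: a & b,
    "G_OR":  lambda a, b: a | b,
    "G_XOR": lambda a, b: a ^ b,
}

for name, op in ops.items():
    for _ in range(1000):
        a, b = random.getrandbits(NARROW), random.getrandbits(NARROW)
        # The widened computation must match the direct modular narrow op.
        assert widened(op, a, b) == op(a, b) & NMASK, name
```

Note that shifts are a subtler case: the shift *amount* is consumed as a value, not bitwise, so its high bits cannot be arbitrary garbage in general; the sketch above deliberately covers only the ops where any-extension is unconditionally safe.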
In any case, we should eventually mitigate:
- the performance impact by selecting no-op extract/truncates to COPYs
(which we currently do), and the COPYs to register reuses (which we
don't do yet).
- the compile-time impact by optimizing away extract/truncate sequences
in the legalizer.
llvm-svn: 292827
Diffstat (limited to 'llvm/test/CodeGen')
9 files changed, 37 insertions, 415 deletions
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-instructionselect.mir b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-instructionselect.mir
index 9c7b73052a3..eb7827fd74b 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-instructionselect.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-instructionselect.mir
@@ -8,34 +8,22 @@
 --- |
   target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
 
-  define void @add_s8_gpr() { ret void }
-  define void @add_s16_gpr() { ret void }
   define void @add_s32_gpr() { ret void }
   define void @add_s64_gpr() { ret void }
 
-  define void @sub_s8_gpr() { ret void }
-  define void @sub_s16_gpr() { ret void }
   define void @sub_s32_gpr() { ret void }
   define void @sub_s64_gpr() { ret void }
 
-  define void @or_s1_gpr() { ret void }
-  define void @or_s16_gpr() { ret void }
   define void @or_s32_gpr() { ret void }
   define void @or_s64_gpr() { ret void }
   define void @or_v2s32_fpr() { ret void }
 
-  define void @xor_s8_gpr() { ret void }
-  define void @xor_s16_gpr() { ret void }
   define void @xor_s32_gpr() { ret void }
   define void @xor_s64_gpr() { ret void }
 
-  define void @and_s8_gpr() { ret void }
-  define void @and_s16_gpr() { ret void }
   define void @and_s32_gpr() { ret void }
   define void @and_s64_gpr() { ret void }
 
-  define void @shl_s8_gpr() { ret void }
-  define void @shl_s16_gpr() { ret void }
   define void @shl_s32_gpr() { ret void }
   define void @shl_s64_gpr() { ret void }
 
@@ -45,8 +33,6 @@
   define void @ashr_s32_gpr() { ret void }
   define void @ashr_s64_gpr() { ret void }
 
-  define void @mul_s8_gpr() { ret void }
-  define void @mul_s16_gpr() { ret void }
   define void @mul_s32_gpr() { ret void }
   define void @mul_s64_gpr() { ret void }
 
@@ -157,62 +143,6 @@
 ...
 
 ---
-# CHECK-LABEL: name: add_s8_gpr
-name: add_s8_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = ADDWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s8) = COPY %w0
-    %1(s8) = COPY %w1
-    %2(s8) = G_ADD %0, %1
-...
-
----
-# CHECK-LABEL: name: add_s16_gpr
-name: add_s16_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = ADDWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s16) = COPY %w0
-    %1(s16) = COPY %w1
-    %2(s16) = G_ADD %0, %1
-...
-
----
 # Check that we select a 32-bit GPR G_ADD into ADDWrr on GPR32.
 # Also check that we constrain the register class of the COPY to GPR32.
 # CHECK-LABEL: name: add_s32_gpr
@@ -272,62 +202,6 @@ body: |
 ...
 
 ---
-# CHECK-LABEL: name: sub_s8_gpr
-name: sub_s8_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = SUBWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s8) = COPY %w0
-    %1(s8) = COPY %w1
-    %2(s8) = G_SUB %0, %1
-...
-
----
-# CHECK-LABEL: name: sub_s16_gpr
-name: sub_s16_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = SUBWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s16) = COPY %w0
-    %1(s16) = COPY %w1
-    %2(s16) = G_SUB %0, %1
-...
-
----
 # Same as add_s32_gpr, for G_SUB operations.
 # CHECK-LABEL: name: sub_s32_gpr
 name: sub_s32_gpr
@@ -386,62 +260,6 @@ body: |
 ...
 
 ---
-# CHECK-LABEL: name: or_s1_gpr
-name: or_s1_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = ORRWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s1) = COPY %w0
-    %1(s1) = COPY %w1
-    %2(s1) = G_OR %0, %1
-...
-
----
-# CHECK-LABEL: name: or_s16_gpr
-name: or_s16_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = ORRWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s16) = COPY %w0
-    %1(s16) = COPY %w1
-    %2(s16) = G_OR %0, %1
-...
-
----
 # Same as add_s32_gpr, for G_OR operations.
 # CHECK-LABEL: name: or_s32_gpr
 name: or_s32_gpr
@@ -531,62 +349,6 @@ body: |
 ...
 
 ---
-# CHECK-LABEL: name: xor_s8_gpr
-name: xor_s8_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = EORWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s8) = COPY %w0
-    %1(s8) = COPY %w1
-    %2(s8) = G_XOR %0, %1
-...
-
----
-# CHECK-LABEL: name: xor_s16_gpr
-name: xor_s16_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = EORWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s16) = COPY %w0
-    %1(s16) = COPY %w1
-    %2(s16) = G_XOR %0, %1
-...
-
----
 # Same as add_s32_gpr, for G_XOR operations.
 # CHECK-LABEL: name: xor_s32_gpr
 name: xor_s32_gpr
@@ -645,62 +407,6 @@ body: |
 ...
 
 ---
-# CHECK-LABEL: name: and_s8_gpr
-name: and_s8_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = ANDWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s8) = COPY %w0
-    %1(s8) = COPY %w1
-    %2(s8) = G_AND %0, %1
-...
-
----
-# CHECK-LABEL: name: and_s16_gpr
-name: and_s16_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = ANDWrr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s16) = COPY %w0
-    %1(s16) = COPY %w1
-    %2(s16) = G_AND %0, %1
-...
-
----
 # Same as add_s32_gpr, for G_AND operations.
 # CHECK-LABEL: name: and_s32_gpr
 name: and_s32_gpr
@@ -759,62 +465,6 @@ body: |
 ...
 
 ---
-# CHECK-LABEL: name: shl_s8_gpr
-name: shl_s8_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = LSLVWr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s8) = COPY %w0
-    %1(s8) = COPY %w1
-    %2(s8) = G_SHL %0, %1
-...
-
----
-# CHECK-LABEL: name: shl_s16_gpr
-name: shl_s16_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = LSLVWr %0, %1
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s16) = COPY %w0
-    %1(s16) = COPY %w1
-    %2(s16) = G_SHL %0, %1
-...
-
----
 # Same as add_s32_gpr, for G_SHL operations.
 # CHECK-LABEL: name: shl_s32_gpr
 name: shl_s32_gpr
@@ -989,62 +639,6 @@ body: |
 ...
 
 ---
-# CHECK-LABEL: name: mul_s8_gpr
-name: mul_s8_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = MADDWrrr %0, %1, %wzr
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s8) = COPY %w0
-    %1(s8) = COPY %w1
-    %2(s8) = G_MUL %0, %1
-...
-
----
-# CHECK-LABEL: name: mul_s16_gpr
-name: mul_s16_gpr
-legalized: true
-regBankSelected: true
-
-# CHECK: registers:
-# CHECK-NEXT: - { id: 0, class: gpr32 }
-# CHECK-NEXT: - { id: 1, class: gpr32 }
-# CHECK-NEXT: - { id: 2, class: gpr32 }
-registers:
-  - { id: 0, class: gpr }
-  - { id: 1, class: gpr }
-  - { id: 2, class: gpr }
-
-# CHECK: body:
-# CHECK: %0 = COPY %w0
-# CHECK: %1 = COPY %w1
-# CHECK: %2 = MADDWrrr %0, %1, %wzr
-body: |
-  bb.0:
-    liveins: %w0, %w1
-
-    %0(s16) = COPY %w0
-    %1(s16) = COPY %w1
-    %2(s16) = G_MUL %0, %1
-...
-
----
 # Check that we select s32 GPR G_MUL. This is trickier than other binops because
 # there is only MADDWrrr, and we have to use the WZR physreg.
 # CHECK-LABEL: name: mul_s32_gpr
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-add.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-add.mir
index 252e60c6b2e..679c6b4788f 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-add.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-add.mir
@@ -69,7 +69,10 @@ body: |
   bb.0.entry:
     liveins: %x0, %x1, %x2, %x3
     ; CHECK-LABEL: name: test_scalar_add_small
-    ; CHECK: [[RES:%.*]](s8) = G_ADD %2, %3
+    ; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
+    ; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
+    ; CHECK: [[RES32:%.*]](s32) = G_ADD [[OP0]], [[OP1]]
+    ; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
 
     %0(s64) = COPY %x0
     %1(s64) = COPY %x1
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-and.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-and.mir
index 69459bfacb0..cdd885cb673 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-and.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-and.mir
@@ -22,7 +22,10 @@ body: |
   bb.0.entry:
     liveins: %x0, %x1, %x2, %x3
     ; CHECK-LABEL: name: test_scalar_and_small
-    ; CHECK: %4(s8) = G_AND %2, %3
+    ; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
+    ; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
+    ; CHECK: [[RES32:%.*]](s32) = G_AND [[OP0]], [[OP1]]
+    ; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
 
     %0(s64) = COPY %x0
     %1(s64) = COPY %x1
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-mul.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-mul.mir
index eb642d4b1a7..e56eef0bc4f 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-mul.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-mul.mir
@@ -22,7 +22,10 @@ body: |
   bb.0.entry:
     liveins: %x0, %x1, %x2, %x3
     ; CHECK-LABEL: name: test_scalar_mul_small
-    ; CHECK: %4(s8) = G_MUL %2, %3
+    ; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
+    ; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
+    ; CHECK: [[RES32:%.*]](s32) = G_MUL [[OP0]], [[OP1]]
+    ; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
 
     %0(s64) = COPY %x0
     %1(s64) = COPY %x1
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-or.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-or.mir
index edf10cd411e..802d8ad1989 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-or.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-or.mir
@@ -22,7 +22,10 @@ body: |
   bb.0.entry:
     liveins: %x0, %x1, %x2, %x3
     ; CHECK-LABEL: name: test_scalar_or_small
-    ; CHECK: %4(s8) = G_OR %2, %3
+    ; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
+    ; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
+    ; CHECK: [[RES32:%.*]](s32) = G_OR [[OP0]], [[OP1]]
+    ; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
 
     %0(s64) = COPY %x0
     %1(s64) = COPY %x1
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-rem.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-rem.mir
index e77f3487609..bd8cdf4f1ae 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-rem.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-rem.mir
@@ -45,8 +45,15 @@ body: |
     ; CHECK: [[RHS32:%[0-9]+]](s32) = G_SEXT %7
     ; CHECK: [[QUOT32:%[0-9]+]](s32) = G_SDIV [[LHS32]], [[RHS32]]
     ; CHECK: [[QUOT:%[0-9]+]](s8) = G_TRUNC [[QUOT32]]
-    ; CHECK: [[PROD:%[0-9]+]](s8) = G_MUL [[QUOT]], %7
-    ; CHECK: [[RES:%[0-9]+]](s8) = G_SUB %6, [[PROD]]
+
+    ; CHECK: [[QUOT32_2:%.*]](s32) = G_ANYEXT [[QUOT]](s8)
+    ; CHECK: [[RHS32_2:%.*]](s32) = G_ANYEXT %7(s8)
+    ; CHECK: [[PROD32:%.*]](s32) = G_MUL [[QUOT32_2]], [[RHS32_2]]
+    ; CHECK: [[PROD:%.*]](s8) = G_TRUNC [[PROD32]](s32)
+
+    ; CHECK: [[LHS32_2:%.*]](s32) = G_ANYEXT %6(s8)
+    ; CHECK: [[PROD32_2:%.*]](s32) = G_ANYEXT [[PROD]](s8)
+    ; CHECK: [[RES:%[0-9]+]](s32) = G_SUB [[LHS32_2]], [[PROD32_2]]
     %6(s8) = G_TRUNC %0
     %7(s8) = G_TRUNC %1
     %8(s8) = G_SREM %6, %7
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-shift.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-shift.mir
index 673b23562ff..5d95c5ee2d8 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-shift.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-shift.mir
@@ -39,6 +39,9 @@ body: |
     ; CHECK: %5(s8) = G_TRUNC [[RES32]]
     %5(s8) = G_LSHR %2, %3
 
-    ; CHECK: %6(s8) = G_SHL %2, %3
+    ; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
+    ; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
+    ; CHECK: [[RES32:%.*]](s32) = G_SHL [[OP0]], [[OP1]]
+    ; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
     %6(s8) = G_SHL %2, %3
 ...
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-sub.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-sub.mir
index e5403cb73c3..6652d2b4d7a 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-sub.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-sub.mir
@@ -22,7 +22,10 @@ body: |
   bb.0.entry:
     liveins: %x0, %x1, %x2, %x3
     ; CHECK-LABEL: name: test_scalar_sub_small
-    ; CHECK: [[RES:%.*]](s8) = G_SUB %2, %3
+    ; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
+    ; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
+    ; CHECK: [[RES32:%.*]](s32) = G_SUB [[OP0]], [[OP1]]
+    ; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
 
     %0(s64) = COPY %x0
     %1(s64) = COPY %x1
diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-xor.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-xor.mir
index 919e674965c..a2f3c8ea3b1 100644
--- a/llvm/test/CodeGen/AArch64/GlobalISel/legalize-xor.mir
+++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalize-xor.mir
@@ -22,7 +22,10 @@ body: |
   bb.0.entry:
     liveins: %x0, %x1, %x2, %x3
     ; CHECK-LABEL: name: test_scalar_xor_small
-    ; CHECK: %4(s8) = G_XOR %2, %3
+    ; CHECK: [[OP0:%.*]](s32) = G_ANYEXT %2(s8)
+    ; CHECK: [[OP1:%.*]](s32) = G_ANYEXT %3(s8)
+    ; CHECK: [[RES32:%.*]](s32) = G_XOR [[OP0]], [[OP1]]
+    ; CHECK: [[RES:%.*]](s8) = G_TRUNC [[RES32]](s32)
 
     %0(s64) = COPY %x0
     %1(s64) = COPY %x1

