author     Craig Topper <craig.topper@intel.com>  2018-03-14 17:57:19 +0000
committer  Craig Topper <craig.topper@intel.com>  2018-03-14 17:57:19 +0000
commit     9c098ed819b87a95a57d00bdacf7c7a2a2989bbf (patch)
tree       cd6d8938e24cc5c100044ef008dfe55aa06c2435 /llvm/test/CodeGen
parent     fe3aae2a7655d33cd005a3e067db6912dcdc658b (diff)
[X86] Add back fast-isel code for handling i8 shifts.
I removed this in r316797 because the coverage report showed no coverage and I thought the auto-generated table should have handled it. I now see that there is code that bypasses the table when the shift amount is out of bounds.
This adds the code back. We'll codegen out-of-bounds i8 shifts as effectively (amount & 0x1f). The 0x1f mask is a quirk of x86: shift amounts are always masked to 5 bits (except for 64-bit shifts). So if the masked value is still out of bounds, the result will be 0.
Fixes PR36731.
llvm-svn: 327540
Diffstat (limited to 'llvm/test/CodeGen')
-rw-r--r--  llvm/test/CodeGen/X86/fast-isel-shift.ll | 12
1 file changed, 12 insertions, 0 deletions
diff --git a/llvm/test/CodeGen/X86/fast-isel-shift.ll b/llvm/test/CodeGen/X86/fast-isel-shift.ll
index 3699b7ba4bf..4dc56f351f5 100644
--- a/llvm/test/CodeGen/X86/fast-isel-shift.ll
+++ b/llvm/test/CodeGen/X86/fast-isel-shift.ll
@@ -381,3 +381,15 @@ define i64 @ashr_imm4_i64(i64 %a) {
   %c = ashr i64 %a, 4
   ret i64 %c
 }
+
+; Make sure we don't crash on out of bounds i8 shifts.
+define i8 @PR36731(i8 %a) {
+; CHECK-LABEL: PR36731:
+; CHECK: ## %bb.0:
+; CHECK-NEXT: movb $255, %cl
+; CHECK-NEXT: shlb %cl, %dil
+; CHECK-NEXT: movl %edi, %eax
+; CHECK-NEXT: retq
+  %b = shl i8 %a, -1
+  ret i8 %b
+}