| author | Chih-Hung Hsieh <chh@google.com> | 2015-12-14 22:08:36 +0000 |
|---|---|---|
| committer | Chih-Hung Hsieh <chh@google.com> | 2015-12-14 22:08:36 +0000 |
| commit | 7993e18e804dbc02f9b6a1bdaefd1193abe0e095 (patch) | |
| tree | 621daf0529d81170b5486c3a6a7d326199468677 /llvm/test/CodeGen/X86/fp128-compare.ll | |
| parent | f801290a91521e321887c82b23ef29fdf7389fc6 (diff) | |
| download | bcm5719-llvm-7993e18e804dbc02f9b6a1bdaefd1193abe0e095.tar.gz bcm5719-llvm-7993e18e804dbc02f9b6a1bdaefd1193abe0e095.zip | |
[X86] Part 2 to fix x86-64 fp128 calling convention.
Part 1 was submitted in http://reviews.llvm.org/D15134.
Changes in this part:
* X86RegisterInfo.td, X86RecognizableInstr.cpp: Add FR128 register class.
* X86CallingConv.td: Pass f128 values in XMM registers or on stack.
* X86InstrCompiler.td, X86InstrInfo.td, X86InstrSSE.td:
Add instruction selection patterns for f128.
* X86ISelLowering.cpp:
When the target has MMX registers, configure MVT::f128 in FR128RegClass,
with the TypeSoftenFloat action, and custom actions for some opcodes.
Add missing MVT::f128 cases in places that handle f32, f64, or vector types.
Add a TODO comment about supporting the f128 type in inline assembly code.
* SelectionDAGBuilder.cpp:
Fix infinite loop when f128 type can have
VT == TLI.getTypeToTransformTo(Ctx, VT).
* Add unit tests for x86-64 fp128 type.
Differential Revision: http://reviews.llvm.org/D11438
llvm-svn: 255558
Diffstat (limited to 'llvm/test/CodeGen/X86/fp128-compare.ll')
| -rw-r--r-- | llvm/test/CodeGen/X86/fp128-compare.ll | 96 |
1 file changed, 96 insertions, 0 deletions
```diff
diff --git a/llvm/test/CodeGen/X86/fp128-compare.ll b/llvm/test/CodeGen/X86/fp128-compare.ll
new file mode 100644
index 00000000000..b5d4fbe1b74
--- /dev/null
+++ b/llvm/test/CodeGen/X86/fp128-compare.ll
@@ -0,0 +1,96 @@
+; RUN: llc < %s -O2 -mtriple=x86_64-linux-android -mattr=+mmx | FileCheck %s
+; RUN: llc < %s -O2 -mtriple=x86_64-linux-gnu -mattr=+mmx | FileCheck %s
+
+define i32 @TestComp128GT(fp128 %d1, fp128 %d2) {
+entry:
+  %cmp = fcmp ogt fp128 %d1, %d2
+  %conv = zext i1 %cmp to i32
+  ret i32 %conv
+; CHECK-LABEL: TestComp128GT:
+; CHECK: callq __gttf2
+; CHECK: setg %al
+; CHECK: movzbl %al, %eax
+; CHECK: retq
+}
+
+define i32 @TestComp128GE(fp128 %d1, fp128 %d2) {
+entry:
+  %cmp = fcmp oge fp128 %d1, %d2
+  %conv = zext i1 %cmp to i32
+  ret i32 %conv
+; CHECK-LABEL: TestComp128GE:
+; CHECK: callq __getf2
+; CHECK: testl %eax, %eax
+; CHECK: setns %al
+; CHECK: movzbl %al, %eax
+; CHECK: retq
+}
+
+define i32 @TestComp128LT(fp128 %d1, fp128 %d2) {
+entry:
+  %cmp = fcmp olt fp128 %d1, %d2
+  %conv = zext i1 %cmp to i32
+  ret i32 %conv
+; CHECK-LABEL: TestComp128LT:
+; CHECK: callq __lttf2
+; CHECK-NEXT: shrl $31, %eax
+; CHECK: retq
+;
+; The 'shrl' is a special optimization in llvm to combine
+; the effect of 'fcmp olt' and 'zext'. The main purpose is
+; to test soften call to __lttf2.
+}
+
+define i32 @TestComp128LE(fp128 %d1, fp128 %d2) {
+entry:
+  %cmp = fcmp ole fp128 %d1, %d2
+  %conv = zext i1 %cmp to i32
+  ret i32 %conv
+; CHECK-LABEL: TestComp128LE:
+; CHECK: callq __letf2
+; CHECK-NEXT: testl %eax, %eax
+; CHECK: setle %al
+; CHECK: movzbl %al, %eax
+; CHECK: retq
+}
+
+define i32 @TestComp128EQ(fp128 %d1, fp128 %d2) {
+entry:
+  %cmp = fcmp oeq fp128 %d1, %d2
+  %conv = zext i1 %cmp to i32
+  ret i32 %conv
+; CHECK-LABEL: TestComp128EQ:
+; CHECK: callq __eqtf2
+; CHECK-NEXT: testl %eax, %eax
+; CHECK: sete %al
+; CHECK: movzbl %al, %eax
+; CHECK: retq
+}
+
+define i32 @TestComp128NE(fp128 %d1, fp128 %d2) {
+entry:
+  %cmp = fcmp une fp128 %d1, %d2
+  %conv = zext i1 %cmp to i32
+  ret i32 %conv
+; CHECK-LABEL: TestComp128NE:
+; CHECK: callq __netf2
+; CHECK-NEXT: testl %eax, %eax
+; CHECK: setne %al
+; CHECK: movzbl %al, %eax
+; CHECK: retq
+}
+
+define fp128 @TestMax(fp128 %x, fp128 %y) {
+entry:
+  %cmp = fcmp ogt fp128 %x, %y
+  %cond = select i1 %cmp, fp128 %x, fp128 %y
+  ret fp128 %cond
+; CHECK-LABEL: TestMax:
+; CHECK: movaps %xmm1
+; CHECK: movaps %xmm0
+; CHECK: callq __gttf2
+; CHECK: movaps {{.*}}, %xmm0
+; CHECK: testl %eax, %eax
+; CHECK: movaps {{.*}}, %xmm0
+; CHECK: retq
+}
```

