| field | value | date |
|---|---|---|
| author | Alexei Starovoitov <alexei.starovoitov@gmail.com> | 2017-04-13 22:24:13 +0000 |
| committer | Alexei Starovoitov <alexei.starovoitov@gmail.com> | 2017-04-13 22:24:13 +0000 |
| commit | 56db14516450f980e69034f7d8a7be1e1f133c52 (patch) | |
| tree | f1bfc66959ebdc5a60b7b5407349684f6129727d /llvm/test/CodeGen | |
| parent | e6185b70e93b16350b3cc4e82e2a809ce41dcd9b (diff) | |
[bpf] Fix memory offset check for loads and stores
If the offset cannot fit into the instruction, an addition to the
pointer is emitted before the actual access. However, BPF load/store
offsets are 16-bit, while LLVM, for the purposes of this check,
treated them as 32-bit.
This causes the following program:
int bpf_prog1(void *ign)
{
        volatile unsigned long t = 0x8983984739ull;
        return *(unsigned long *)((0xffffffff8fff0002ull) + t);
}
to generate the following (wrong) code:
0: 18 01 00 00 39 47 98 83 00 00 00 00 89 00 00 00
r1 = 590618314553ll
2: 7b 1a f8 ff 00 00 00 00 *(u64 *)(r10 - 8) = r1
3: 79 a1 f8 ff 00 00 00 00 r1 = *(u64 *)(r10 - 8)
4: 79 10 02 00 00 00 00 00 r0 = *(u64 *)(r1 + 2)
5: 95 00 00 00 00 00 00 00 exit
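The root cause is a width mismatch: 0xffffffff8fff0002 sign-extends to the 64-bit value -1879113726, which passes a 32-bit range check, but the BPF load/store encoding only has a 16-bit offset field, so the encoder silently keeps the low 16 bits, producing the bogus `r1 + 2` at instruction 4 above. A minimal standalone C++ sketch of just that arithmetic (not LLVM code; the variable names are illustrative):

```c++
#include <cstdint>
#include <cstdio>

int main() {
  // 0xffffffff8fff0002 sign-extends to this 64-bit value.
  int64_t addend = (int64_t)0xffffffff8fff0002ull; // -1879113726
  // The old check only asked whether the addend fits in 32 bits...
  bool fits32 = addend >= INT32_MIN && addend <= INT32_MAX; // true
  // ...but the BPF load/store offset field is 16 bits wide, so
  // encoding the addend there keeps only the low 16 bits.
  int16_t encoded = (int16_t)addend; // 2
  printf("fits32=%d encoded=%d\n", fits32, encoded);
  return 0;
}
```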
Fix it by changing the offset check to 16-bit.
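A minimal sketch of the shape of that check, assuming it sits in the BPF backend's address selection (BPFISelDAGToDAG.cpp); the helper function here is hypothetical and the surrounding SelectionDAG plumbing is omitted, but `llvm::isInt<>` is the real LLVM utility from MathExtras.h:

```c++
#include "llvm/Support/MathExtras.h"
#include <cstdint>

// Hypothetical helper: decide whether a constant addend can be folded
// into the 16-bit offset field of a BPF load/store, or must instead be
// materialized as a separate pointer addition (r1 += imm) first.
static bool canFoldIntoOffset(int64_t Addend) {
  // Before the fix, a check of this shape used isInt<32>(Addend),
  // accepting values that fit in 32 bits but overflow the 16-bit
  // encoding; isInt<16> matches what the instruction can hold.
  return llvm::isInt<16>(Addend);
}
```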
Patch by Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Differential Revision: https://reviews.llvm.org/D32055
llvm-svn: 300269
Diffstat (limited to 'llvm/test/CodeGen')
| mode | file | lines added |
|---|---|---|
| -rw-r--r-- | llvm/test/CodeGen/BPF/mem_offset.ll | 17 |
1 file changed, 17 insertions, 0 deletions
diff --git a/llvm/test/CodeGen/BPF/mem_offset.ll b/llvm/test/CodeGen/BPF/mem_offset.ll
new file mode 100644
index 00000000000..2b86e44ae59
--- /dev/null
+++ b/llvm/test/CodeGen/BPF/mem_offset.ll
@@ -0,0 +1,17 @@
+; RUN: llc -march=bpfel -show-mc-encoding < %s | FileCheck %s
+
+; Function Attrs: nounwind
+define i32 @bpf_prog1(i8* nocapture readnone) local_unnamed_addr #0 {
+; CHECK: r1 += -1879113726 # encoding: [0x07,0x01,0x00,0x00,0x02,0x00,0xff,0x8f]
+; CHECK: r0 = *(u64 *)(r1 + 0) # encoding: [0x79,0x10,0x00,0x00,0x00,0x00,0x00,0x00]
+  %2 = alloca i64, align 8
+  %3 = bitcast i64* %2 to i8*
+  store volatile i64 590618314553, i64* %2, align 8
+  %4 = load volatile i64, i64* %2, align 8
+  %5 = add i64 %4, -1879113726
+  %6 = inttoptr i64 %5 to i64*
+  %7 = load i64, i64* %6, align 8
+  %8 = trunc i64 %7 to i32
+  ret i32 %8
+}
+

