| author    | Dylan McKay <me@dylanmckay.io>                                  | 2017-12-09 06:45:36 +0000 |
|-----------|-----------------------------------------------------------------|---------------------------|
| committer | Dylan McKay <me@dylanmckay.io>                                  | 2017-12-09 06:45:36 +0000 |
| commit    | 80463fe64dec84e92764f9a796870f3be404455d (patch)                |                           |
| tree      | 2d6e20d9dacd0d09b2c39bac665e5eb02c9bd79c /llvm/test/CodeGen/AVR |                           |
| parent    | aae5b6907988f2ef48f242a3fc6e02761a967fd8 (diff)                 |                           |
Relax unaligned access assertion when type is byte aligned
Summary:
This relaxes an assertion inside SelectionDAGBuilder which is overly
restrictive on targets that have no concept of alignment (such as AVR).
On these architectures, all types are aligned to 8 bits.
After this change, LLVM will only assert that accesses are aligned on
targets which actually require alignment.
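To make the intent concrete, here is a minimal, self-contained sketch of the relaxed policy (not the actual SelectionDAGBuilder code from this patch): an under-aligned atomic access is only rejected when the target actually requires natural alignment. The `TargetInfo` struct, the `RequiresAlignedAtomics` flag, and the `atomicAccessIsLegal` helper are illustrative names invented for this example.

```cpp
#include <cstdio>

// Hypothetical, standalone model of a target's alignment requirements.
struct TargetInfo {
  bool RequiresAlignedAtomics; // false for byte-aligned targets such as AVR
};

// Returns true if an atomic access of AccessBits with the given alignment
// (in bytes) is acceptable for this target.
bool atomicAccessIsLegal(const TargetInfo &TI, unsigned AccessBits,
                         unsigned AlignBytes) {
  unsigned NaturalAlignBytes = AccessBits / 8; // e.g. 2 for an i16 access
  if (AlignBytes >= NaturalAlignBytes)
    return true;                       // naturally aligned: always fine
  return !TI.RequiresAlignedAtomics;   // under-aligned: only fine if the
                                       // target tolerates it (the relaxation)
}

int main() {
  TargetInfo AVR{/*RequiresAlignedAtomics=*/false};
  TargetInfo Strict{/*RequiresAlignedAtomics=*/true};

  // An i16 atomic access with `align 1`, as in the new AVR test below.
  std::printf("byte-aligned target:  %s\n",
              atomicAccessIsLegal(AVR, 16, 1) ? "ok" : "assert");
  std::printf("strict target:        %s\n",
              atomicAccessIsLegal(Strict, 16, 1) ? "ok" : "assert");
  return 0;
}
```

Under these assumptions, a byte-aligned target such as AVR accepts the i16 `align 1` access exercised by the new test, while a target that insists on natural alignment still rejects it.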
This patch follows from a discussion on llvm-dev a few months ago:
http://llvm.1065342.n5.nabble.com/llvm-dev-Unaligned-atomic-load-store-td112815.html
Reviewers: bogner, nemanjai, joerg, efriedma
Reviewed By: efriedma
Subscribers: efriedma, cactus, llvm-commits
Differential Revision: https://reviews.llvm.org/D39946
llvm-svn: 320243
Diffstat (limited to 'llvm/test/CodeGen/AVR')
-rw-r--r-- | llvm/test/CodeGen/AVR/unaligned-atomic-loads.ll | 19 |
1 file changed, 19 insertions, 0 deletions
diff --git a/llvm/test/CodeGen/AVR/unaligned-atomic-loads.ll b/llvm/test/CodeGen/AVR/unaligned-atomic-loads.ll
new file mode 100644
index 00000000000..db1ab33fa88
--- /dev/null
+++ b/llvm/test/CodeGen/AVR/unaligned-atomic-loads.ll
@@ -0,0 +1,19 @@
+; RUN: llc -mattr=addsubiw < %s -march=avr | FileCheck %s
+
+; This verifies that the middle end can handle an unaligned atomic load.
+;
+; In the past, the SelectionDAGBuilder would always hit an assertion
+; when lowering unaligned atomic loads and stores.
+
+%AtomicI16 = type { %CellI16, [0 x i8] }
+%CellI16 = type { i16, [0 x i8] }
+
+; CHECK-LABEL: foo
+; CHECK: ret
+define void @foo(%AtomicI16* %self) {
+start:
+  %a = getelementptr inbounds %AtomicI16, %AtomicI16* %self, i16 0, i32 0, i32 0
+  load atomic i16, i16* %a seq_cst, align 1
+  ret void
+}
+