path: root/clang/lib/Sema
author    Philip Reames <listmail@philipreames.com>  2017-02-14 01:38:31 +0000
committer Philip Reames <listmail@philipreames.com>  2017-02-14 01:38:31 +0000
commit    b2bca7e309848db08237d6d4a5955a5d5d8b337e (patch)
tree      9eaafc7b9e2f02c954ba38d38ee9c1bdf317f783 /clang/lib/Sema
parent    e2fa5492b20e06c1dcc356f6242c45e7705c1c48 (diff)
download  bcm5719-llvm-b2bca7e309848db08237d6d4a5955a5d5d8b337e.tar.gz
          bcm5719-llvm-b2bca7e309848db08237d6d4a5955a5d5d8b337e.zip
[LICM] Make store promotion work in the face of unordered atomics
Extend our store promotion code to deal with unordered atomic accesses.
Ordered atomics continue to be unhandled.

Most of the change is straightforward; the only complicated bit is the
reasoning around mixing atomic and non-atomic memory accesses. Rather
than trying to reason about the complex semantics in these cases, I
simply disallowed promotion when both atomic and non-atomic accesses
are present. This is conservatively correct.

It seems really tempting to just promote all accesses to atomic, but
the original accesses might have been conditional. Since we can't lower
an arbitrary atomic type, it might not be safe to promote all accesses
to atomic. Consider a loop like the following:

  while (b) {
    load i128 ...
    if (can lower i128 atomic)
      store atomic i128 ...
    else
      store i128 ...
  }

It could be that there's no race on the location, and thus the code is
perfectly well defined even if we can't lower an i128 atomically. It's
not clear we need to be this conservative - arguably the program above
is broken, since it can't be lowered unless the branch is folded - but
I didn't want to have to fix any fallout which might result.

Differential Revision: https://reviews.llvm.org/D15592

llvm-svn: 295015
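For context, a minimal illustrative sketch (not taken from the patch;
the function name and loop shape are hypothetical) of the kind of loop
this change lets LICM promote: every access to %p inside the loop is an
unordered atomic of a type the target can lower, so the location can be
kept in a register and written back with a single unordered atomic
store after the loop.

  ; Hypothetical example of a now-promotable loop. All accesses to %p
  ; are unordered atomics of a lowerable type (i32), and no plain
  ; (non-atomic) access to %p appears, so the mixing restriction above
  ; does not apply.
  define void @count(i32* %p, i32 %n) {
  entry:
    br label %loop
  loop:
    %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
    %v = load atomic i32, i32* %p unordered, align 4
    %v.next = add i32 %v, 1
    store atomic i32 %v.next, i32* %p unordered, align 4
    %i.next = add i32 %i, 1
    %cmp = icmp slt i32 %i.next, %n
    br i1 %cmp, label %loop, label %exit
  exit:
    ret void
  }

If either the load or the store of %p were non-atomic, this patch would
conservatively skip promotion for the location, per the rule described
above.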
Diffstat (limited to 'clang/lib/Sema')
0 files changed, 0 insertions, 0 deletions