path: root/clang/test/CodeGen/c11atomics.c
* _Atomic of empty struct shouldn't assert (JF Bastien, 2018-05-09; 1 file changed, +14/-0)
  Summary: An _Atomic of an empty struct is pretty silly. In general we just widen empty
  structs to hold a byte's worth of storage, and we represent size and alignment as 0
  internally and let LLVM figure out what to do. For _Atomic it's a bit different: the
  memory model mandates that concrete effects occur when atomic operations occur, so in
  most cases actual instructions need to get emitted. It's really not worth trying to
  optimize empty struct atomics by figuring out e.g. that a fence would do, even though
  sane compilers should optimize atomics. Further, wg21.link/p0528 will fix C++20 atomics
  with padding bits so that cmpxchg on them works, which means that we'll likely need to
  do the zero-init song and dance for empty atomic structs anyways (and I think we
  shouldn't special-case this behavior to C++20 because prior standards are just broken).

  This patch therefore makes a minor change to r176658 "Promote atomic type sizes up to a
  power of two": if the width of the atomic's value type is 0, just use 1 byte for the
  width and leave alignment as-is (since it should never be zero, and over-aligned
  zero-width structs are weird but fine).

  This fixes an assertion: (NumBits >= MIN_INT_BITS && "bitwidth too small"), function
  get, file ../lib/IR/Type.cpp, line 241. It seems like this has run into other
  assertions before (namely the unreachable Kind check in ImpCastExprToType), but I
  haven't reproduced that issue with tip-of-tree.

  <rdar://problem/39678063>

  Reviewers: arphaman, rjmccall
  Subscribers: aheejin, cfe-commits
  Differential Revision: https://reviews.llvm.org/D46613
  llvm-svn: 331845
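  A minimal sketch of the construct this guards against (types and function names are
  illustrative, not taken from the commit's test):

      /* Field-less structs are a GNU extension in C; sizeof(struct Empty) is 0 there. */
      struct Empty {};

      struct Empty load_empty(_Atomic(struct Empty) *p) {
        return *p;   /* atomic load of a zero-width type: width is bumped to 1 byte */
      }

      void store_empty(_Atomic(struct Empty) *p, struct Empty v) {
        *p = v;      /* atomic store likewise uses the 1-byte width */
      }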
* Change memcpy/memmove/memset to have dest and source alignment attributes. (Daniel Neilson, 2018-01-28; 1 file changed, +10/-10)
  Summary: This change is step three in the series of changes to remove the alignment
  argument from memcpy/memmove/memset in favour of alignment attributes.

  Steps:
  Step 1) Remove alignment parameter and create alignment parameter attributes for
          memcpy/memmove/memset. (rL322965, rC322964, rL322963)
  Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
          source and dest alignments. (rL323597)
  Step 3) Update Clang to use the new IRBuilder API.
  Step 4) Update Polly to use the new IRBuilder API.
  Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder
          API, and those that use MemIntrinsicInst::[get|set]Alignment() to use
          getDestAlignment() and getSourceAlignment() instead.
  Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
          MemIntrinsicInst::[get|set]Alignment() methods.

  Reference:
  http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  Reviewers: rjmccall
  Subscribers: jyknight, nemanjai, nhaehnle, javed.absar, sbc100, aheejin, kbarton,
  fedor.sergeev, cfe-commits
  Differential Revision: https://reviews.llvm.org/D41677
  llvm-svn: 323617
* Change memcpy/memmove/memset to have dest and source alignment attributes (Step 1). (Daniel Neilson, 2018-01-19; 1 file changed, +16/-16)
  Summary: Upstream LLVM is changing the prototypes of the @llvm.memcpy/memmove/memset
  intrinsics. This change updates the Clang tests accordingly.

  The @llvm.memcpy/memmove/memset intrinsics currently have an explicit argument which is
  required to be a constant integer. It represents the alignment of the dest (and
  source), and so must be the minimum of the actual alignment of the two.

  This change removes the alignment argument in favour of placing the alignment attribute
  on the source and destination pointers of the memory intrinsic call. For example, code
  which used to read:

      call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)

  will now read:

      call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)

  At this time the source and destination alignments must be the same (Step 1). Step 2 of
  the change, to be landed shortly, will relax that constraint and allow the source and
  destination to have different alignments.

  llvm-svn: 322964
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-16/+16
  This reverts commit r253512. This likely broke the bots in:
  http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202
  http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787
  llvm-svn: 253542
* Change memcpy/memset/memmove to have dest and source alignments. (Pete Cooper, 2015-11-18; 1 file changed, +16/-16)
  This is a follow-on from a similar LLVM commit: r253511.

  Note, this was reviewed (and more details are in)
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument which is required to be
  a constant integer. It represents the alignment of the source and dest, and so must be
  the minimum of those.

  This change allows source and dest to each have their own alignments by using the
  alignment attribute on their arguments. The alignment argument itself is removed.

  The only code change to clang is hidden in CGBuilder.h, which now passes both dest and
  source alignment to IRBuilder, instead of taking the minimum of dest and source
  alignments.

  Reviewed by Hal Finkel.
  llvm-svn: 253512
* Atomics: support __c11_* calls on _Atomic struct types. (Tim Northover, 2015-11-09; 1 file changed, +106/-10)
  When a struct's size is not a power of 2, the corresponding _Atomic() type is promoted
  to the nearest power of 2. We already correctly handled normal C++ expressions of this
  form, but direct calls to the __c11_atomic_whatever builtins ended up performing dodgy
  operations on the smaller non-atomic types (e.g. memcpy'ing too much). Later
  optimisations removed this as undefined behaviour.

  This patch converts EmitAtomicExpr to allocate its temporaries at the full atomic
  width, sidestepping the issue.

  llvm-svn: 252507
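  A hedged reduction of the failure mode (the struct layout and names are assumed, not
  copied from the test file):

      typedef struct { short x[3]; } PS;   /* 6 bytes; the _Atomic form is padded to 8 */

      void copy(_Atomic(PS) *addr, PS *out) {
        /* Before the fix the temporary was only sizeof(PS) bytes, so the full-width
         * atomic access could touch memory past it; it is now allocated at the
         * promoted atomic width. */
        *out = __c11_atomic_load(addr, __ATOMIC_SEQ_CST);
      }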
* Compute and preserve alignment more faithfully in IR-generation. (John McCall, 2015-09-08; 1 file changed, +8/-8)
  Introduce an Address type to bundle a pointer value with an alignment. Introduce APIs
  on CGBuilderTy to work with Address values. Change core APIs on CGF/CGM to traffic in
  Address where appropriate. Require alignments to be non-zero. Update a ton of code to
  compute and propagate alignment information.

  As part of this, I've promoted CGBuiltin's EmitPointerWithAlignment helper function to
  CGF and made use of it in a number of places in the expression emitter.

  The end result is that we should now be significantly more correct when performing
  operations on objects that are locally known to be under-aligned. Since alignment is
  not reliably tracked in the type system, there are inherent limits to this, but at
  least we are no longer confused by standard operations like derived-to-base conversions
  and array-to-pointer decay.

  I've also fixed a large number of bugs where we were applying the complete-object
  alignment to a pointer instead of the non-virtual alignment, although most of these
  were hidden by the very conservative approach we took with member alignment.

  Also, because IRGen now reliably asserts on zero alignments, we should no longer be
  subject to an absurd but frustrating recurring bug where an incomplete type would
  report a zero alignment and then we'd naively do an alignmentAtOffset on it and emit
  code using an alignment equal to the largest power-of-two factor of the offset.

  We should also now be emitting much more aggressive alignment attributes in the
  presence of over-alignment. In particular, field access now uses alignmentAtOffset
  instead of min.

  Several times in this patch, I had to change the existing code-generation pattern in
  order to more effectively use the Address APIs. For the most part, this seems to be a
  strict improvement, like doing pointer arithmetic with GEPs instead of ptrtoint. That
  said, I've tried very hard not to change semantics, but it is likely that I've failed
  in a few places, for which I apologize.

  ABIArgInfo now always carries the assumed alignment of indirect and indirect byval
  arguments. In order to cut down on what was already a dauntingly large patch, I changed
  the code to never set align attributes in the IR on non-byval indirect arguments. That
  is, we still generate code which assumes that indirect arguments have the given
  alignment, but we don't express this information to the backend except where it's
  semantically required (i.e. on byvals). This is likely a minor regression for those
  targets that did provide this information, but it'll be trivial to add it back in a
  later patch.

  I partially punted on applying this work to CGBuiltin. Please do not add more uses of
  the CreateDefaultAligned{Load,Store} APIs; they will be going away eventually.

  llvm-svn: 246985
* [CodeGen] Remove atomic sugar from record types in isSafeToConvert (David Majnemer, 2015-06-29; 1 file changed, +18/-2)
  We failed to see that we should have deferred the creation of a type which references
  a type currently under construction because of atomic sugar.

  This fixes PR23985.
  llvm-svn: 240989
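  A hedged reduction of the kind of cycle involved (not the exact PR23985 reproducer):

      /* While struct Node is still being laid out, converting the _Atomic pointer
       * member must not try to complete struct Node again. */
      struct Node {
        _Atomic(struct Node *) next;
      };

      struct Node *advance(struct Node *n) {
        return __c11_atomic_load(&n->next, __ATOMIC_RELAXED);
      }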
* Update Clang tests to handle explicitly typed load changes in LLVM. (David Blaikie, 2015-02-27; 1 file changed, +20/-20)
  llvm-svn: 230795
* Update Clang tests to handle explicitly typed gep changes in LLVM. (David Blaikie, 2015-02-27; 1 file changed, +32/-32)
  llvm-svn: 230783
* Bugfix for Codegen of atomic load/store/other ops. (Alexey Bataev, 2014-12-15; 1 file changed, +7/-7)
  Currently clang fires assertions on x86-64 for any atomic operation on long double
  operands. This patch fixes codegen for such operations.

  Differential Revision: http://reviews.llvm.org/D6499
  llvm-svn: 224230
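  A sketch of the kind of operation the fix covers (variable and function names are
  illustrative, not taken from the test):

      _Atomic long double total;

      long double bump(void) {
        return total += 1.0L;   /* compound assignment on an atomic long double */
      }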
* PR18097: Support initializing an _Atomic(T) from an object of C++ class type T or a class derived from T. (Richard Smith, 2014-07-31; 1 file changed, +5/-0)
  We already supported this when initializing _Atomic(T) from T for most (and maybe all)
  other reasonable values of T.
  llvm-svn: 214390
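  A sketch of the plain-C analogue that a C test like this one can exercise (the
  derived-class case is C++-only; the names here are illustrative):

      typedef struct { int x, y; } Pair;

      void init_pair(Pair p) {
        _Atomic(Pair) ap = p;   /* copy-initialize an atomic aggregate from a plain value */
        (void)ap;
      }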
* CHECK-LABEL-ify some code gen tests to improve diagnostic experience when tests fail. (Stephen Lin, 2013-08-15; 1 file changed, +1/-1)
  llvm-svn: 188447
* clang/test/CodeGen/c11atomics.c: Fix testcase for -Asserts since r186054. (NAKAMURA Takumi, 2013-07-11; 1 file changed, +1/-1)
  llvm-svn: 186055
* Simplify atomic load/store IRGen. (Eli Friedman, 2013-07-11; 1 file changed, +20/-5)
  Also fixes a couple of minor bugs along the way; see testcases.
  llvm-svn: 186049
* Emit native implementations of atomic operations on FreeBSD/armv6. (Ed Schouten, 2013-06-15; 1 file changed, +1/-1)
  Just like on Linux, FreeBSD/armv6 assumes the system supports ldrex/strex
  unconditionally; these instructions are also used by the kernel. We can therefore
  enable support for them, like we do on Linux.

  While there, change one of the unit tests to explicitly test against armv5 instead of
  armv7, as it actually tests whether libcalls are emitted.

  llvm-svn: 184040
* Promote atomic type sizes up to a power of two, capped by MaxAtomicPromoteWidth. (John McCall, 2013-03-07; 1 file changed, +207/-0)
  Fix a ton of terrible bugs with _Atomic types and (non-intrinsic-mediated) loads and
  stores thereto.
  llvm-svn: 176658
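  A sketch of the promotion this describes (the widened size is target-dependent, so the
  figure in the comment is an assumption about typical targets):

      struct S3 { char c[3]; };   /* sizeof(struct S3) == 3 */
      _Atomic(struct S3) as3;     /* storage typically widened to 4 bytes */

      _Static_assert(sizeof(as3) >= sizeof(struct S3), "atomic storage never shrinks");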
* Remove unused check from test. (David Chisnall, 2013-03-03; 1 file changed, +0/-2)
  llvm-svn: 176422
* Improve C11 atomics support: (David Chisnall, 2013-03-03; 1 file changed, +139/-0)
  - Generate atomicrmw operations in most of the cases when it's sensible to do so.
  - Don't crash in several common cases (and hopefully don't crash in more of them).
  - Add some better tests.

  We now generate significantly better code for things like:

      _Atomic(int) x;
      ...
      x++;

  On MIPS, this now generates a 4-instruction ll/sc loop, where previously it generated
  about 30 instructions in two nested loops. On x86-64, we generate a single lock incl,
  instead of a lock cmpxchgl loop (one instruction instead of ten).

  llvm-svn: 176420
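  A sketch matching the example above (the IR in the comment is illustrative of what is
  now emitted, not copied from the test's CHECK lines):

      _Atomic(int) counter;

      void tick(void) {
        counter++;   /* roughly: atomicrmw add i32* @counter, i32 1 seq_cst */
      }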