path: root/clang/test/CodeGen/packed-nest-unpacked.c
* Change memcpy/memmove/memset to have dest and source alignment attributes.
  Daniel Neilson | 2018-01-28 | 1 file, -1/+1

  Summary:
  This change is step three in the series of changes to remove the alignment
  argument from memcpy/memmove/memset in favour of alignment attributes.

  Steps:
  Step 1) Remove alignment parameter and create alignment parameter
          attributes for memcpy/memmove/memset. ( rL322965, rC322964,
          rL322963 )
  Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with
          differing source and dest alignments. ( rL323597 )
  Step 3) Update Clang to use the new IRBuilder API.
  Step 4) Update Polly to use the new IRBuilder API.
  Step 5) Update LLVM passes that create memcpy/memmove calls to use the new
          IRBuilder API, and those that use
          MemIntrinsicInst::[get|set]Alignment() to use getDestAlignment()
          and getSourceAlignment() instead.
  Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and
          the MemIntrinsicInst::[get|set]Alignment() methods.

  Reference
    http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
    http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  Reviewers: rjmccall
  Subscribers: jyknight, nemanjai, nhaehnle, javed.absar, sbc100, aheejin,
               kbarton, fedor.sergeev, cfe-commits
  Differential Revision: https://reviews.llvm.org/D41677

  llvm-svn: 323617
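  Why separate alignments matter for this particular test: copying a struct
  out of a packed object gives the source and destination different natural
  alignments. A hedged sketch in the spirit of packed-nest-unpacked.c (the
  struct layout and IR below are illustrative assumptions, not quoted from
  the test):

    struct X { int x[6]; };
    struct Y { char c; struct X x; } __attribute__((packed));
    struct Y g;

    struct X get(void) {
      /* The source g.x sits at byte offset 1 of a packed struct (align 1);
         the destination return slot is an ordinary struct X (align 4).
         With attributes, the copy can express both, roughly:
           call void @llvm.memcpy.p0i8.p0i8.i64(i8* align 4 %dst,
                                                i8* align 1 %src,
                                                i64 24, i1 false)
         whereas the old form had to pass the single minimum, i32 1.  */
      return g.x;
    }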
* Change memcpy/memmove/memset to have dest and source alignment attributes
  (Step 1).
  Daniel Neilson | 2018-01-19 | 1 file, -5/+5

  Summary:
  Upstream LLVM is changing the prototypes of the @llvm.memcpy/memmove/memset
  intrinsics. This change updates the Clang tests accordingly.

  The @llvm.memcpy/memmove/memset intrinsics currently have an explicit
  argument which is required to be a constant integer. It represents the
  alignment of the dest (and source), and so must be the minimum of the
  actual alignment of the two. This change removes the alignment argument in
  favour of placing the alignment attribute on the source and destination
  pointers of the memory intrinsic call. For example, code which used to
  read:

    call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 100, i32 4, i1 false)

  will now read:

    call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 4 %dest, i8* align 4 %src, i32 100, i1 false)

  At this time the source and destination alignments must be the same
  (Step 1). Step 2 of the change, to be landed shortly, will relax that
  constraint and allow the source and destination to have different
  alignments.

  llvm-svn: 322964
* Revert "Change memcpy/memset/memmove to have dest and source alignments."Pete Cooper2015-11-191-5/+5
| | | | | | | | | | This reverts commit r253512. This likely broke the bots in: http://lab.llvm.org:8011/builders/clang-ppc64-elf-linux2/builds/20202 http://bb.pgr.jp/builders/clang-3stage-i686-linux/builds/3787 llvm-svn: 253542
* Change memcpy/memset/memmove to have dest and source alignments.
  Pete Cooper | 2015-11-18 | 1 file, -5/+5

  This is a follow-on from a similar LLVM commit: r253511.

  Note, this was reviewed (and more details are in)
  http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

  These intrinsics currently have an explicit alignment argument which is
  required to be a constant integer. It represents the alignment of the
  source and dest, and so must be the minimum of those. This change allows
  source and dest to each have their own alignments by using the alignment
  attribute on their arguments. The alignment argument itself is removed.

  The only code change to clang is hidden in CGBuilder.h, which now passes
  both dest and source alignment to IRBuilder instead of taking the minimum
  of the two.

  Reviewed by Hal Finkel.

  llvm-svn: 253512
* Respect alignment of nested bitfields
  Ulrich Weigand | 2015-07-10 | 1 file, -1/+30

  tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:

    struct XBitfield {
      unsigned b1 : 10;
      unsigned b2 : 12;
      unsigned b3 : 10;
    };
    struct YBitfield {
      char x;
      struct XBitfield y;
    } __attribute((packed));
    struct YBitfield gbitfield;

    unsigned test7() {
      // CHECK: @test7
      // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
      return gbitfield.y.b2;
    }

  The "align 4" is actually wrong. Accessing all of "gbitfield.y" as a
  single i32 is of course possible, but that still doesn't make it 4-byte
  aligned, as it remains packed at offset 1 in the surrounding gbitfield
  object.

  This alignment was changed by commit r169489, which also introduced
  changes to bitfield access code in CGExpr.cpp. Code before that change
  used to take into account *both* the alignment of the field to be accessed
  within the current struct, *and* the alignment of that outer struct
  itself; this logic was removed by the above commit.

  Neglecting to consider both values can cause incorrect code to be
  generated (I've seen an unaligned access crash on SystemZ due to this
  bug).

  In order to always use the best-known alignment value, this patch removes
  the CGBitFieldInfo::StorageAlignment member and replaces it with a
  StorageOffset member specifying the offset from the start of the
  surrounding struct to the bitfield's underlying storage. This offset can
  then be combined with the best-known alignment for a bitfield access
  lvalue to determine the alignment to use when accessing the bitfield's
  storage.

  Differential Revision: http://reviews.llvm.org/D11034

  llvm-svn: 241916
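  For reference, the corrected access should reflect the true alignment of
  the storage at byte offset 1 of the packed object, i.e. align 1. A hedged
  sketch of what the updated CHECK line presumably looks like after this
  commit (assumed, not quoted from the patched test):

    unsigned test7() {
      // CHECK: @test7
      // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 1
      return gbitfield.y.b2;
    }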
* Test case updates for explicit type parameter to the gep operator
  David Blaikie | 2015-03-13 | 1 file, -7/+7

  llvm-svn: 232187
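  The underlying IR change made the pointee type an explicit first operand
  of the gep constant expression. For a global like the one in this test,
  the shape of the change is as follows (the operands are illustrative):

    ; before
    getelementptr inbounds (%struct.Y* @g, i32 0, i32 1)
    ; after
    getelementptr inbounds (%struct.Y, %struct.Y* @g, i32 0, i32 1)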
* Update Clang tests to handle explicitly typed load changes in LLVM.
  David Blaikie | 2015-02-27 | 1 file, -2/+2

  llvm-svn: 230795
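  Analogous to the gep change above: the load instruction gained an explicit
  result type as its first operand (example values illustrative):

    %v = load i32* %p        ; before
    %v = load i32, i32* %p   ; after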
* Complete Rewrite of CGRecordLayoutBuilder
  Warren Hunt | 2014-02-21 | 1 file, -1/+1

  CGRecordLayoutBuilder was aging, complex, multi-pass, and shows signs of
  existing before ASTRecordLayoutBuilder. It redundantly performed many
  layout operations that are now performed by ASTRecordLayoutBuilder and
  asserted that the results were the same. With the addition of support for
  the MS-ABI, such as placement of vbptrs, vtordisps, different bitfield
  layout and a variety of other features, CGRecordLayoutBuilder was growing
  unwieldy in its redundancy.

  This patch re-architects CGRecordLayoutBuilder to not perform any
  redundant layout but rather, as directly as possible, lower an
  ASTRecordLayout to an llvm::Type. The new architecture is significantly
  smaller and simpler than the old CGRecordLayoutBuilder and contains fewer
  ABI-specific code paths. It's also one pass.

  The architecture of the new system is described in the comments. For the
  most part, the new system simply takes all of the fields and bases from an
  ASTRecordLayout, sorts them, inserts padding and dumps a record. Bitfields,
  unions and primary virtual bases make this process a bit more complicated.
  See the inline comments.

  In addition, this patch updates a few lit tests due to the fact that the
  new system computes more accurate llvm types than CGRecordLayoutBuilder.
  Each change is commented individually in the review.

  Differential Revision: http://llvm-reviews.chandlerc.com/D2795

  llvm-svn: 201907
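  To make "more accurate llvm types" concrete for this test: a packed record
  should lower to a packed LLVM struct type, so nested members land at their
  real byte offsets. A hedged sketch (layout assumed from the test's name,
  not quoted from it):

    /* C source: */
    struct X { int x[6]; };
    struct Y { char c; struct X x; } __attribute__((packed));

    /* Plausible lowered types; the <{ ... }> syntax denotes a packed LLVM
       struct, placing struct X at byte offset 1 with no padding:
         %struct.X = type { [6 x i32] }
         %struct.Y = type <{ i8, %struct.X }>                             */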
* Fix a FIXME in a testcase about packed structs and calls I left around
  while fixing a related bug.
  Eli Friedman | 2013-06-11 | 1 file, -1/+1

  The fix here was simpler than I thought it would be.

  Fixes <rdar://problem/10530444>.

  llvm-svn: 183718
* Rework the bitfield access IR generation to address PR13619 and generally
  support the C++11 memory model requirements for bitfield accesses by
  relying more heavily on LLVM's memory model.
  Chandler Carruth | 2012-12-06 | 1 file, -1/+1

  The primary change this introduces is to move from a manually aligned and
  strided access pattern across the bits of the bitfield to a much simpler
  lump access of all bits in the bitfield followed by math to extract the
  bits relevant for the particular field.

  This simplifies the code significantly, but relies on LLVM intelligently
  lowering these integers.

  I have tested LLVM's lowering both synthetically and in benchmarks. The
  lowering appears to be functional, and there are no really significant
  performance regressions. Different code patterns accessing bitfields will
  vary in how this impacts them. The only real regressions I'm seeing are a
  few patterns where the LLVM code generation for loads that feed directly
  into a mask operation doesn't take advantage of the x86 ability to do a
  smaller load and a cheap zero-extension. This doesn't regress any
  benchmark in the nightly test suite on my box past the noise threshold,
  but my box is quite noisy. I'll be watching the LNT numbers, and will look
  into further improvements to the LLVM lowering as needed.

  llvm-svn: 169489
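  In C terms, the "lump access plus math" scheme loads the whole storage
  unit once and extracts a field with shifts and masks. A hedged sketch
  assuming a little-endian target and a hypothetical field layout (not taken
  from this test):

    #include <string.h>

    struct S { unsigned a : 10; unsigned b : 12; unsigned c : 10; };

    /* Roughly what the generated code now does to read s->b. */
    unsigned read_b(const struct S *s) {
      unsigned word;
      memcpy(&word, s, sizeof word);  /* one lump load of all 32 bits */
      return (word >> 10) & 0xFFFu;   /* skip a's 10 bits, keep b's 12 */
    }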
* Propagate lvalue alignment into bitfields. Per report on cfe-dev.
  Eli Friedman | 2012-06-27 | 1 file, -0/+18

  llvm-svn: 159295
* Attempt to fix test.
  Eli Friedman | 2012-04-17 | 1 file, -1/+1

  llvm-svn: 154897
* Make sure EmitMoveFromReturnSlot is passing the correct alignment to
  EmitFinalDestCopy (and thus pass EmitAggregateCopy the correct alignment).
  Chad Rosier | 2012-04-17 | 1 file, -0/+8

  rdar://11220251

  llvm-svn: 154883
* Propagate alignment on lvalues through EmitLValueForField. PR12395.
  Eli Friedman | 2012-04-16 | 1 file, -1/+9

  llvm-svn: 154789
* Make EmitAggregateCopy take an alignment argument. Make EmitFinalDestCopy
  pass in the correct alignment when known.
  Eli Friedman | 2011-12-05 | 1 file, -0/+31

  The test includes a FIXME for a related case involving calls; it's a bit
  more complicated to fix because the RValue class doesn't keep track of
  alignment.

  <rdar://problem/10463337>

  llvm-svn: 145862
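  The case this guards against is an aggregate copy whose source is
  under-aligned because it lives inside a packed struct. A hedged sketch in
  the spirit of the test (names and layout assumed, not quoted from it):

    struct X { int x[6]; };
    struct Y { char c; struct X x; } __attribute__((packed));
    struct Y g;

    void test(struct X *out) {
      /* EmitAggregateCopy must be told the source's true alignment (1,
         since g.x starts at byte offset 1 of a packed object), not
         struct X's natural alignment (4); otherwise the backend may emit
         aligned accesses that fault on strict-alignment targets.  */
      *out = g.x;
    }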