Previously the X86 backend would look for the sret attribute and handle
this for us. inalloca takes that all away, so we have to do the return
ourselves now.
llvm-svn: 202097
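Illustration (not from the commit itself): a sketch of a signature that exercises this path when targeting the MS C++ ABI, where a by-value argument with a non-trivial copy constructor forces the argument pack into inalloca, so the callee must complete the sret return on its own.

    struct NonTrivial {
      NonTrivial(const NonTrivial &other);  // non-trivial copy ctor
      int x;
    };
    struct Big { int data[16]; };           // returned indirectly (sret)
    Big makeBig(NonTrivial arg);            // arg list becomes an inalloca pack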

on correctly handled block layout IRGen.
// rdar://16111839
llvm-svn: 202063

llvm-svn: 202059

We still don't use the PGO data to set branch weights for these loops, but at
least this keeps the compiler from crashing. <rdar://problem/16137778>
llvm-svn: 202002

that the optimizer does not duplicate code.
Patch thanks to Marcello Maggioni!
llvm-svn: 201941

MSVC allows extra-qualification on member functions: it lets you repeat
the class name on the method.
llvm-svn: 201918
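A minimal example of the extension (assuming an MS-compatibility mode such as -fms-extensions):

    struct Widget {
      void Widget::draw();  // extra-qualification; OK for MSVC, ill-formed in ISO C++
    };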

An unused variable is removed, and construction order is changed to
match declaration order.
llvm-svn: 201914

CGRecordLayoutBuilder was aging, complex, multi-pass, and showed signs of
existing before ASTRecordLayoutBuilder. It redundantly performed many
layout operations that are now performed by ASTRecordLayoutBuilder and
asserted that the results were the same. With the addition of support
for the MS-ABI, such as placement of vbptrs, vtordisps, different
bitfield layout and a variety of other features, CGRecordLayoutBuilder
was growing unwieldy in its redundancy.
This patch re-architects CGRecordLayoutBuilder to not perform any
redundant layout but rather, as directly as possible, lower an
ASTRecordLayout to an llvm::Type. The new architecture is significantly
smaller and simpler than the old CGRecordLayoutBuilder and contains fewer
ABI-specific code paths. It also runs in a single pass.
The architecture of the new system is described in the comments. For the
most part, the new system simply takes all of the fields and bases from
an ASTRecordLayout, sorts them, inserts padding and dumps a record.
Bitfields, unions and primary virtual bases make this process a bit more
complicated. See the inline comments.
In addition, this patch updates a few lit tests because the new system
computes more accurate LLVM types than the old CGRecordLayoutBuilder.
Each change is commented individually in the review.
Differential Revision: http://llvm-reviews.chandlerc.com/D2795
llvm-svn: 201907
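For flavor, an illustrative record and a guess at the kind of LLVM type the lowering produces; the exact type is target-dependent and this is only a sketch:

    // Illustrative union: likely lowered to a wrapper struct around the
    // member with the strictest alignment, e.g. %union.U = type { double },
    // with the int member accessed through the same storage.
    union U {
      int i;
      double d;
    };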

Because GCC incorrectly defines _mm_prefetch to take anything that casts
to void*, people have started relying on that behavior. The previous patch
that made _mm_prefetch actually take a const char * broke compatibility
with existing code. This update to the patch leaves the macro that
defines _mm_prefetch with the (void*) cast when _MSC_VER is not defined.
llvm-svn: 201901
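Putting this together with the macro quoted under r201775 below, the resulting shim presumably looks roughly like this (a sketch, not the verbatim header):

    /* Keep the permissive GCC-style macro except under MSVC, where the
       strictly-typed built-in is exposed directly. */
    #ifndef _MSC_VER
    #define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))
    #endif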

llvm-svn: 201849

This extends the intrinsic lookup table format slightly, and adds
entries for use by the shared ARM/AArch64 definitions. The benefit is
currently smaller than for the SISD intrinsics (there's more custom
code implementing this set), but a few lines are saved and there's
scope for future expansion.
llvm-svn: 201848

This extracts the table-driven intrinsic lookup phase into a separate
function, to be used by EmitCommonNeonBuiltinExpr soon.
It also simplifies the logic used in that lookup, since VectorCastArgN
and ScalarArgN were actually identical.
llvm-svn: 201847

introduce CLANG_TABLEGEN_TARGETS.
This does the following:
- clang_tablegen() adds each tblgen'd target to the global property CLANG_TABLEGEN_TARGETS as a list.
- The list of targets is added to LLVM_COMMON_DEPENDS.
- All clang libraries and targets depend on the generated headers.
You might expect this to be a regression, but in practice little is lost:
- Almost all clang libraries depend on tblgen'd files and clang-tblgen.
- clang-tblgen may cause a short stall, but it doesn't force an unconditional rebuild.
- Each library's dependencies on tblgen'd files vary with the structure of the headers, which made it hard to track and update *really optimal* dependencies.
Dependencies on intrinsics_gen and ClangSACheckers are left as DEPENDS.
llvm-svn: 201842

llvm-svn: 201837

Virtual methods expect 'this' to point to the vfptr containing the
virtual method, and this extends to virtual member pointer thunks. The
relevant vfptr is always at offset zero on entry to the thunk, so no
'this' adjustment is needed.
Previously we would not include the vfptr adjustment in the member
pointer, and we'd look at the vfptr offset when loading from the vftable
in the thunk.
Fixes PR18917.
llvm-svn: 201835
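A sketch of the scenario (hypothetical hierarchy): a member pointer to a virtual method reached through a non-primary vfptr.

    struct A { virtual void f(); };
    struct B { virtual void g(); };
    struct C : A, B { void g() override; };

    // The thunk behind 'mp' is entered with 'this' already pointing at the
    // B-in-C vfptr (offset zero on entry), so the A-to-B adjustment must be
    // baked into the member pointer itself; the thunk performs none.
    void (C::*mp)() = &C::g;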

inheritance with an incomplete class type
The MS ABI requires that we determine the vbptr offset if we have a
virtual inheritance model. Instead, raise an error pointing to the
diagnostic when this happens.
This fixes PR18583.
Differential Revision: http://llvm-reviews.chandlerc.com/D2842
llvm-svn: 201824
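A hypothetical reproducer, using the MSVC-style pragma to force the virtual-inheritance member-pointer model on an incomplete class:

    #pragma pointers_to_members(full_generality, virtual_inheritance)
    struct Incomplete;        // no layout yet, so no vbptr offset to be found
    int Incomplete::*mp;      // previously a crash, now a proper error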

llvm-svn: 201791

This breaks backwards compatibility with existing code. Previously, this
was defined as
#define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))
which basically accepts any pointer. Changing this to char* simply
breaks a lot of existing code. I have tried changing char* to
"const void*", which seems to be the right thing: per the Intel
specification, this should work on basically any pointer. However,
apparently this breaks Windows compatibility (because of a conflicting
declaration in windows.h).
So, we probably need to #ifdef this based on whether clang is compiling
for Windows. According to Chandler, this might be done by introducing an
additional symbol to a fake type in BuiltinsX86.def and then conditioning
the type expansion on the platform.
llvm-svn: 201775

llvm-svn: 201739

This patch adds several built-ins that are required for MS
compatibility. _mm_prefetch must be a built-in because it takes a
compile-time constant argument, and our prior approach of using a #define
around the existing built-in doesn't work in the presence of a
re-declaration of _mm_prefetch. The others can be obtained by including
the Windows system headers. If a user includes the Windows system headers
but not intrin.h, they still need to work and therefore must be
built-ins, because we don't get a chance to implement them in intrin.h in
this case.
llvm-svn: 201734

This fixes one immediate bug where an expression with side-effects
could be emitted twice during a NEON call.
It also prepares the way for folding CodeGen for many of the SISD
intrinsics into a table, reducing code size and hopefully increasing
performance eventually ("binary search + few switch cases" should be
better than "lots of switch cases").
llvm-svn: 201667

These instructions (well, the f32 ones) are supported on 32-bit ARMv8, not just
AArch64. Now that the arm_neon.td refactoring is complete, adding them is
surprisingly simple.
rdar://problem/16035743
llvm-svn: 201661

Clang never produces a linker private object, so this code is dead.
llvm-svn: 201627

Summary:
Generally the vector deleting dtor, which we model as a vtable thunk,
takes care of non-virtual adjustment and delegates to the other
destructor variants. The other non-complete destructor variants assume
that 'this' on entry points to the virtual base subobject that first
declared the virtual destructor.
We need to change the adjustment in both the prologue and the vdtor call
setup.
Reviewers: timurrrr
CC: cfe-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2821
llvm-svn: 201612
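An illustrative hierarchy for the adjustment described above (names invented):

    struct Base { virtual ~Base(); };      // first declares the virtual dtor
    struct Derived : virtual Base {
      ~Derived() override;
    };
    // Sketch: Derived's vector deleting dtor (a vtable thunk) performs the
    // non-virtual adjustment, then delegates; the non-complete destructor
    // variants assume 'this' points at the virtual Base subobject.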

is already built.
No functional change intended.
llvm-svn: 201602

Previously, we made one traversal of the AST prior to codegen to assign
counters to the ASTs and then propagated the count values during codegen. This
patch now adds a separate AST traversal prior to codegen for the
-fprofile-instr-use option to propagate the count values. The counts are then
saved in a map from which they can be retrieved during codegen.
This new approach has several advantages:
1. It gets rid of a lot of extra PGO-related code that had previously been
added to codegen.
2. It fixes a serious bug. My original implementation (which was mailed to the
list but never committed) used 3 counters for every loop. Justin improved it to
move 2 of those counters into the less-frequently executed breaks and continues,
but that turned out to produce wrong count values in some cases. The solution
requires visiting a loop body before the condition so that the count for the
condition properly includes the break and continue counts. Changing codegen to
visit a loop body first would be a fairly invasive change, but with a separate
AST traversal, it is easy to control the order of traversal. I've added a
testcase (provided by Justin) to make sure this works correctly.
3. It reduces the instrumentation overhead, cutting the number of counters for
a loop from 3 to 1. We no longer need dedicated counters for breaks and
continues, since we can just use the propagated count values when visiting
breaks and continues.
To make this work, I needed to make a change to the way we count case
statements, going back to my original approach of not including the fall-through
in the counter values. This was necessary because there isn't always an AST node
that can be used to record the fall-through count. Now case statements are
handled the same as default statements, with the fall-through paths branching
over the counter increments. While I was at it, I also went back to using this
approach for do-loops -- omitting the fall-through count into the loop body
simplifies some of the calculations and makes them behave the same as other
loops. Whenever we start using this instrumentation for coverage, we'll need
to add the fall-through counts into the counter values.
llvm-svn: 201528
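A rough sketch of the single-counter scheme in hand-written form (not actual clang output):

    extern unsigned long Counters[1];      // a single counter per loop

    void countedLoop(int n, bool (*bail)(int)) {
      for (int i = 0; i < n; ++i) {
        ++Counters[0];                     // incremented in the body only
        if (bail(i)) break;                // no dedicated break counter;
      }                                    // its count is propagated instead
    }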

llvm-svn: 201527

llvm-svn: 201526

llvm-svn: 201470

When a function has a single counter, we will offset the pointer by 1 when
parsing the next function. If a function has multiple counters, we are
okay after skipping the rest of the counters.
llvm-svn: 201456

properties by fixing shouldBindAsLValue to accept arrays
(like record types) because we always manipulate
them in memory. Patch suggested by John McCall.
// rdar://15610943
llvm-svn: 201428

This changes ARM to use @llvm.fabs for floating-point vabs. Patterns
already existed in the backend, and it might help mid-end phases since
it's more likely to be understood than @llvm.arm.neon.vabs.
llvm-svn: 201313

The s64/u64 vcvt conversion operations are actually pretty much identical to
the s32/u32 ones in implementation, and can be shared with just one extra
variable.
llvm-svn: 201145

The XCore target ABI requires const data that is externally visible
to be handled differently if it has C-language linkage rather than
C++-language linkage.
llvm-svn: 201142

According to the AAPCS, we can split structs between GPRs and the stack,
except for when an argument has already been allocated on the stack. This
can occur when a large number of floating-point arguments fill up the VFP
registers, and are allocated on the stack before the general-purpose argument
registers are full.
llvm-svn: 201137
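A sketch of a signature that triggers the rule (details illustrative):

    struct S { int a, b, c, d, e; };       // 20 bytes: would need splitting
    // d1-d8 fill the VFP d-registers; d9 is already on the stack, so 's'
    // may no longer be split across r0-r3 and memory: it is passed
    // entirely on the stack under the corrected rule.
    void f(double d1, double d2, double d3, double d4, double d5,
           double d6, double d7, double d8, double d9, S s);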

This option has the following effects:
* It adds the sspstrong IR attribute to each function within the CU.
* It defines the macro __SSP_STRONG__ with the value of 2.
Differential Revision: http://llvm-reviews.chandlerc.com/D2717
llvm-svn: 201120
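Since the macro is defined with the value 2, the mode is detectable from source; a minimal check:

    #if defined(__SSP_STRONG__) && __SSP_STRONG__ == 2
    // This translation unit was compiled with -fstack-protector-strong.
    #endif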

Now that both ARM backends use the same implementation for vshll operations,
the code can be shared. This is also a necessary LLVM/Clang interface update.
llvm-svn: 201094

Now that the backend supports the natural LLVM IR, we can shamelessly steal the
AArch64 front-end code to implement the vshrn intrinsic on 32-bit ARM.
llvm-svn: 201086

unique them and permits the implementation of dynamic_cast (and
anything else which knows it's working with a complete class
type) to compare their addresses directly.
rdar://16005328
llvm-svn: 201020

gross, and increasingly replaced through other mechanisms.
llvm-svn: 201011

An HFA is defined as a struct containing floating-point values of the
same machine type. In the 32-bit ABI, double and long double have the
same machine type, so a struct with a mixture of these types must be an
HFA (assuming it meets the other criteria).
llvm-svn: 200971
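A concrete instance of the rule (sketch):

    // Both members share the same machine type in the 32-bit ABI
    // discussed above, so this mixed struct still qualifies as an HFA.
    struct MixedHFA {
      double d;
      long double ld;
    };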

The approach is similar to the existing inline-asm reporting, just more
general.
<rdar://problem/15886278>
llvm-svn: 200931

llvm-svn: 200893

llvm-svn: 200891

We collect a maximal function count among all functions in the PGO data file.
For functions that are hot, we set their InlineHint attribute. For functions
that are cold, we set their Cold attribute.
We currently treat functions with >= 30% of the maximal function count as hot
and functions with <= 1% of the maximal function count as cold.
These two numbers are from preliminary tuning on SPEC.
This commit should not affect non-PGO builds and should boost performance on
instrumentation based PGO.
llvm-svn: 200874
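Restated as code (a sketch; names invented, overflow ignored):

    #include <cstdint>

    // >= 30% of the maximal count => hot; <= 1% of it => cold.
    bool isHot(std::uint64_t Count, std::uint64_t MaxCount) {
      return Count * 10 >= MaxCount * 3;
    }
    bool isCold(std::uint64_t Count, std::uint64_t MaxCount) {
      return Count * 100 <= MaxCount;
    }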

llvm-svn: 200861

Member pointers are mangled as they would be represented at runtime.
They can be a single integer literal, a single decl, or a tuple with some
more numbers tossed in. With Clang today, most of those numbers will be
zero because we reject pointers to members of virtual bases.
This change required moving VTableContextBase ownership from
CodeGenVTables to ASTContext, because mangling now depends on vtable
layout.
I also hoisted the inheritance model helpers up to be inline static
methods of MSInheritanceAttr. This makes the AST code that deals with
member pointers much more readable.
MSVC doesn't appear to have stable manglings of null member pointers:
- Null data memptrs in function templates have a mangling collision with
the first field of a non-polymorphic single inheritance class.
- The mangling of null data memptrs changes if you add casts.
- Large null function memptrs in class templates crash MSVC.
Clang uses the class template mangling for null data memptrs and the
function template mangling for null function memptrs to deal with this.
Reviewers: majnemer
Differential Revision: http://llvm-reviews.chandlerc.com/D2695
llvm-svn: 200857
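The three representations in source form (illustrative):

    struct S { int x; void f(); };

    int S::*dm = &S::x;        // runtime value: a field offset (an integer)
    void (S::*fm)() = &S::f;   // single inheritance: just the function decl
    // With multiple or virtual inheritance the representation widens to a
    // tuple (decl plus this-adjustment and vbtable fields), and the extra
    // numbers show up in the mangling too.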

not-yet-completed templated types. getTypeSize() needs a complete type.
rdar://problem/15931354
llvm-svn: 200797

Now that the back-end intrinsics are more regular, there's no need for the
special handling these got in the front-end, so they can be moved to
EmitCommonNeonBuiltinExpr.
llvm-svn: 200769

llvm-svn: 200711