Commit message log
llvm-svn: 227079
and 256 bit vectors of dwords and qwords.
llvm-svn: 227075
clang currently calls MarkVTableUsed() for classes that get their virtual
methods called or that participate in a dynamic_cast. This is unnecessary,
since CodeGen only emits vtables when it generates constructor, destructor, and
vtt code. (*)
Note that Sema::MarkVTableUsed() doesn't cause the emission of a vtable.
Its main user-visible effect is that it instantiates virtual member functions
of template classes, to make sure that if codegen decides to write a vtable
all the entries in the vtable are defined.
While this shouldn't change the behavior of codegen (other than being faster),
it does make clang more permissive: virtual methods of templates (in particular
destructors) end up being instantiated less often. In particular, classes that
have members that are smart pointers to incomplete types will now get their
implicit virtual destructor instantiated less frequently. For example, this
used to fail to compile but now compiles:
template <typename T> struct OwnPtr {
  ~OwnPtr() { static_assert((sizeof(T) > 0), "TypeMustBeComplete"); }
};
class ScriptLoader;
struct Base { virtual ~Base(); };
struct Sub : public Base {
  virtual void someFun() const {}
  OwnPtr<ScriptLoader> m_loader;
};
void f(Sub *s) { s->someFun(); }
The more permissive behavior matches both gcc (where this is not often
observable, since in practice most things with virtual methods have a key
function, and Sema::DefineUsedVTables() skips vtables for classes with key
functions) and cl (which is my motivation for this change) – this fixes
PR20337. See this issue and the review thread for some discussions about
optimizations.
This is similar to r213109 in spirit. r225761 was a prerequisite for this
change.
Various tests relied on "a->f()" marking a's vtable as used (in the sema
sense); these tests now just construct an 'a' on the stack instead. This forces
instantiation of the implicit constructor, which will mark the vtable as used.
(*) The exception is -fapple-kext mode: In this mode, qualified calls to
virtual functions (`a->Base::f()`) still go through the vtable, and since the
vtable pointer stored in the object doesn't point to Base's vtable, codegen needs
to reference Base's vtable directly. To keep this working, the vtable is still
referenced for virtual calls in apple kext mode.
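As a rough sketch (not from the original commit message), the qualified-call
pattern the kext caveat refers to looks like this:
  struct Base { virtual void f(); };
  struct Derived : Base { void f() override; };
  void g(Derived *d) {
    // Under -fapple-kext this qualified call is still dispatched through
    // Base's vtable, so Base's vtable must remain directly referenced even
    // though d's vtable pointer is Derived's.
    d->Base::f();
  }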
llvm-svn: 227073
llvm-svn: 227067
llvm-svn: 227066
This patch allows clang to have llvm reserve the x18
platform register on AArch64. FreeBSD will use this in the kernel for
per-cpu data, but has no need to reserve this register in userland, so the
kernel build will need this flag to reserve it.
This uses llvm r226664 to allow this register to be reserved.
Patch by Andrew Turner.
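As a rough sketch of the kernel-side use described above (not from the patch;
the struct and helper names here are made up), reserving x18 lets the kernel
park its per-cpu base pointer there and read it back safely, since compiled
code will never clobber the register:
  struct pcpu { unsigned long pc_cpuid; };  // hypothetical per-CPU record
  static inline struct pcpu *get_pcpu(void) {
    struct pcpu *p;
    __asm__ __volatile__("mov %0, x18" : "=r"(p));  // read the reserved register
    return p;
  }
  unsigned long current_cpu_id(void) { return get_pcpu()->pc_cpuid; }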
llvm-svn: 227062
llvm-svn: 227052
llvm-svn: 227037
the start of the whole expression
llvm-svn: 227028
expression line info
This causes things like assignment to refer to the '=' rather than the
LHS when attributing the store instruction, for example.
There were essentially 3 options for this:
* The beginning of an expression (this was the behavior prior to this
commit). This meant that stepping through subexpressions would bounce
around from subexpressions back to the start of the outer expression,
etc. (e.g., x + y + z would go x, y, x, z, x, where the repeated 'x's are
where the actual additions occurred).
* The end of an expression. This seems to be what GCC does /mostly/, and
certainly does for function calls. This has the advantage that
progress is always 'forwards' (never jumping backwards - except for
independent subexpressions if they're evaluated in interesting orders,
etc). "x + y + z" would go "x y z" with the additions occurring at y
and z after the respective loads.
The problem with this is that the user would still have to think
fairly hard about precedence to realize which subexpression is being
evaluated or which operator overload is being called in, say, an asan
backtrace.
* The preferred location or 'exprloc'. In this case you get sort of what
you'd expect, though it's a bit confusing in its own way due to going
'backwards'. In this case the locations would be: "x y + z +" in
lovely postfix arithmetic order. But this does mean that if the op+
were an operator overload, say, and in a backtrace, the backtrace will
point to the exact '+' that's being called, not to the end of one of
its operands.
(actually the operator overload case doesn't work yet for other reasons,
but that's being fixed - but this at least gets scalar/complex
assignments and other plain operators right)
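A small example of the first point above (my own sketch, not from the commit):
  int arr[16];
  void set(int i, int v) {
    arr[i] = v;  // the store's debug location now points at the '=' rather
                 // than at the start of the LHS expression 'arr[i]'
  }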
llvm-svn: 227027
llvm-svn: 227026
llvm-svn: 227024
llvm-svn: 227023
same-type object.
Only the first two items for now, changing Sections 8.5.4 [dcl.init.list] paragraph 3 and 13.3.1.7 [over.match.list] paragraph 1,
so that defining class objects and character arrays using uniform initialization syntax is actually treated as list initialization
instead of, as before, being treated as aggregate initialization.
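A minimal illustration (my own example, assuming this is the same-type-object /
character-array rule the truncated subject refers to):
  struct Agg { int i; };
  Agg a = {1};
  Agg b{a};            // single element of the same type: treated as a copy,
                       // not as aggregate initialization of the member 'i'
  char str[4]{"abc"};  // character array initialized from a braced string literal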
llvm-svn: 227022
llvm-svn: 227015
and only update the original list on a valid argument list. When checking an
individual expression template argument, and conversions are required, update
the expression in the template argument. Since template arguments are
speculatively checked, the copying of the template argument list prevents
updating the template arguments when the list does not match the template.
Additionally, clean up the integer checking code in the template diffing code.
The code performs unnecessary conversions from APSInt to APInt.
Fixes PR21758.
This essentially reverts r224770 and recommits r224667 and r224668 with extra
changes to prevent the template instantiation problems seen in PR22006.
A test to catch the discovered problem is also added.
llvm-svn: 226983
llvm-svn: 226982
encountered any definition for the class; this happens when the definition is
added by an update record that is not yet loaded. In such a case, eagerly pick
the original parent of the member as the canonical definition of the class
rather than muddling through with the canonical declaration (the latter can
lead to us failing to merge properly later if the canonical definition turns
out to be some other declaration).
llvm-svn: 226977
llvm-svn: 226968
differentiate inline callsites.
llvm-svn: 226955
converting to property-dot syntax for setters.
rdar://19381786
llvm-svn: 226944
We didn't properly mark all of an AnnotatedLine's children as finalized
and thus would reformat the same tokens in different branches of #if/#else
sequences, leading to invalid replacements.
llvm-svn: 226930
receiver type is not valid for property-dot syntax use.
rdar://19381786
llvm-svn: 226927
llvm-svn: 226925
Before:
*a = b *c;
After:
*a = b * c;
llvm-svn: 226923
The driver currently accepts but ignores the -fno-signed-zeros flag.
This patch passes the flag through and enables 'nsz' fast-math-flag
generation in IR.
The existing OpenCL flag for the same functionality is made into an
alias here. It may be removed in a subsequent patch.
This should resolve bug 20870 ( http://llvm.org/bugs/show_bug.cgi?id=20870 );
patches for the optimizer were checked in at:
http://llvm.org/viewvc/llvm-project?view=revision&revision=225050
http://llvm.org/viewvc/llvm-project?view=revision&revision=224583
Differential Revision: http://reviews.llvm.org/D6873
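A minimal sketch of the effect (my own example, not from the patch): compiled
with the flag, the floating-point add below is expected to carry the 'nsz'
fast-math flag in the emitted IR, roughly "%add = fadd nsz float %x, %y".
  // clang -O1 -fno-signed-zeros -S -emit-llvm sum.cpp
  float sum(float x, float y) {
    return x + y;
  }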
llvm-svn: 226915
http://reviews.llvm.org/D7090
Patch by Gábor Horváth!
llvm-svn: 226914
llvm-svn: 226908
In ItaniumCXXABI::EmitCXXDestructors we first emit the base destructor
and then try to emit the complete one as an alias.
If the base destructor ends up calling the complete destructor, the GD for the
complete one will be in the list of deferred decls by the time we replace
it with an alias and delete the original GV.
llvm-svn: 226896
produce diagnostics with source locations before the diagnostics system is
ready for them.
llvm-svn: 226882
Differential Revision: http://reviews.llvm.org/D7127
llvm-svn: 226877
Previously, Clang would fail to warn on:
int n = x + foo ? 1 : 2;
when foo is a pointer.
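A self-contained version of the case (my own sketch); with this change Clang
diagnoses the unparenthesized form:
  int f(int x, int *foo) {
    int n = x + foo ? 1 : 2;    // now warns: '+' binds tighter, so the whole
                                // pointer sum 'x + foo' is the condition
    int m = x + (foo ? 1 : 2);  // likely what was intended
    return n + m;
  }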
llvm-svn: 226870
llvm-svn: 226865
really help. Improve diagnostics.
llvm-svn: 226863
llvm-svn: 226813
Differential Revision: http://reviews.llvm.org/D7006
llvm-svn: 226795
Reviewers: kcc, samsonov, petarj, eugenis
llvm-svn: 226790
"omp atomic read [seq_cst]" accepts expressions "v=x;". In this patch we perform
an atomic load of "x" (using builtin atomic loading instructions or a call to
"atomic_load()" for simple lvalues and "kmpc_atomic_start();load
<x>;kmpc_atomic_end();" for other lvalues), convert the result of loading to
the type of "v" (using EmitScalarConversion() for simple types and
EmitComplexToScalarConversion() for conversions from complex to scalar), and then
store the result in "v".
Differential Revision: http://reviews.llvm.org/D6431
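A small example of the construct being lowered (my own sketch, not from the
patch): the load of "x" is atomic, and its result is converted to the type of
"v" before the store.
  int x;     // shared; read atomically
  double v;  // the int result is converted to double after the atomic load
  void reader() {
  #pragma omp atomic read seq_cst
    v = x;
  }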
llvm-svn: 226788
Need to add initialization of AtomicInfo::EvaluationKind field
llvm-svn: 226787
"omp atomic read [seq_cst]" accepts expressions "v=x;". In this patch we perform
an atomic load of "x" (using builtin atomic loading instructions or a call to
"atomic_load()" for simple lvalues and "kmpc_atomic_start();load
<x>;kmpc_atomic_end();" for other lvalues), convert the result of loading to
the type of "v" (using EmitScalarConversion() for simple types and
EmitComplexToScalarConversion() for conversions from complex to scalar), and then
store the result in "v".
Differential Revision: http://reviews.llvm.org/D6431
llvm-svn: 226786
The accidentally modified file SemaType.cpp must be restored to its original state.
llvm-svn: 226785
"omp atomic read [seq_cst]" accepts expressions "v=x;". In this patch we perform
an atomic load of "x" (using builtin atomic loading instructions or a call to
"atomic_load()" for simple lvalues and "kmpc_atomic_start();load
<x>;kmpc_atomic_end();" for other lvalues), convert the result of loading to
the type of "v" (using EmitScalarConversion() for simple types and
EmitComplexToScalarConversion() for conversions from complex to scalar), and then
store the result in "v".
Differential Revision: http://reviews.llvm.org/D6431
llvm-svn: 226784
record, and that class declaration is not the canonical definition of the
class, be sure to add the class to the list of classes that are consulted when
we look up a special member in the canonical definition.
llvm-svn: 226778
Minor optimization of code like __try { ... } __except(1) { ... }.
llvm-svn: 226766
on top of a local declaration of the same entity, we still need to remember
that we loaded the first one or we may fail to merge the second one properly.
llvm-svn: 226765
We don't emit any coverage mapping for uncovered functions that come
from system headers, but we were creating a GlobalVariable with each
of their names. This is wasteful since the linker will need to dead
strip the unused symbols, and it can lead to issues when merging
coverage with other TUs that do have coverage for those functions.
llvm-svn: 226764
load the definition data from the declaration itself. In that case, merge
properly; don't assume the prior definition is the same as our own.
llvm-svn: 226761
The lowering looks a lot like normal EH lowering, with the exception
that the exceptions are caught by executing filter expression code
instead of matching typeinfo globals. The filter expressions are
outlined into functions which are used in landingpad clauses where
typeinfo would normally go.
Major aspects that still need work:
- Non-call exceptions in __try bodies won't work yet. The plan is to
outline the __try block in the frontend to keep things simple.
- Filter expressions cannot use local variables until capturing is
implemented.
- __finally blocks will not run after exceptions. Fixing this requires
work in the LLVM SEH preparation pass.
The IR lowering looks like this:
// C code:
bool safe_div(int n, int d, int *r) {
  __try {
    *r = normal_div(n, d);
  } __except(_exception_code() == EXCEPTION_INT_DIVIDE_BY_ZERO) {
    return false;
  }
  return true;
}

; LLVM IR:
define i32 @filter(i8* %e, i8* %fp) {
  %ehptrs = bitcast i8* %e to i32**
  %ehrec = load i32** %ehptrs
  %code = load i32* %ehrec
  %matches = icmp eq i32 %code, i32 u0xC0000094
  %matches.i32 = zext i1 %matches to i32
  ret i32 %matches.i32
}

define i1 zeroext @safe_div(i32 %n, i32 %d, i32* %r) {
  %rr = invoke i32 @normal_div(i32 %n, i32 %d)
          to label %normal unwind to label %lpad
normal:
  store i32 %rr, i32* %r
  ret i1 1
lpad:
  %ehvals = landingpad {i8*, i32} personality i32 (...)* @__C_specific_handler
            catch i8* bitcast (i32 (i8*, i8*)* @filter to i8*)
  %ehptr = extractvalue {i8*, i32} %ehvals, i32 0
  %sel = extractvalue {i8*, i32} %ehvals, i32 1
  %filter_sel = call i32 @llvm.eh.seh.typeid.for(i8* bitcast (i32 (i8*, i8*)* @filter to i8*))
  %matches = icmp eq i32 %sel, %filter_sel
  br i1 %matches, label %eh.except, label %eh.resume
eh.except:
  ret i1 false
eh.resume:
  resume
}
Reviewers: rjmccall, rsmith, majnemer
Differential Revision: http://reviews.llvm.org/D5607
llvm-svn: 226760
llvm-svn: 226756
Currently we emit DeferredDeclsToEmit in reverse order. This patch changes that.
The advantages of the change are:
* The output order is a bit closer to the source order. The change to
test/CodeGenCXX/pod-member-memcpys.cpp is a good example.
* If we decide to defer more, it will not cause changes as large in the
testcases as it would without this patch.
llvm-svn: 226751