Although this does cut the number of traces recomputed by ~10% for the
test case mentioned in http://reviews.llvm.org/D10460, it doesn't
make a dent in the overall performance. For that example we need to be more
selective about which traces we invalidate.
llvm-svn: 241393
llvm-svn: 241392
This is needed for COFF linkers to distinguish between weak external aliases
and regular symbols with LLVM weak linkage, which are represented as strong
symbols in COFF.
llvm-svn: 241389
llvm-svn: 241387
Requested by Eugene Rozenfeld of the LLILC team, this feature allows JIT
clients to skip relocations for selected external symbols by returning ~0ULL
from their symbol resolver. If this value is returned for a given symbol,
RuntimeDyld will skip all relocations for that symbol. The client will be
responsible for applying the skipped relocations manually before the code
is executed.
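A minimal sketch of how a client might opt a symbol out of relocation, assuming the classic RTDyldMemoryManager::getSymbolAddress hook (the resolver interface varies across LLVM versions, and the symbol name checked here is hypothetical):
  #include "llvm/ExecutionEngine/SectionMemoryManager.h"
  #include <cstdint>
  #include <string>

  class ManualRelocMemoryManager : public llvm::SectionMemoryManager {
  public:
    uint64_t getSymbolAddress(const std::string &Name) override {
      // Returning ~0ULL asks RuntimeDyld to skip every relocation that
      // references this symbol; the client patches them manually later.
      if (Name == "externally_patched_symbol") // hypothetical symbol name
        return ~0ULL;
      return llvm::SectionMemoryManager::getSymbolAddress(Name);
    }
  };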
llvm-svn: 241383
llvm-svn: 241381
llvm-svn: 241380
SHT_NOBITS sections do not have content in an object file. Now the yaml2obj
tool does not accept a `Content` field for such sections, and the obj2yaml
tool does not attempt to read the section content from a file.
Restore r241350 and r241352.
llvm-svn: 241377
llvm-svn: 241375
llvm-svn: 241374
llvm-svn: 241373
Is anyone using those?
llvm-svn: 241372
Summary:
Looking at r241279, I noticed that UpgradedIntrinsics only gets written
to in the following code:
  if (UpgradeIntrinsicFunction(&F, NewFn))
    UpgradedIntrinsics[&F] = NewFn;
Looking through UpgradeIntrinsicFunction, we either return false or NewFn
is set to a function different from our source.
This patch pulls the F != NewFn check into UpgradeIntrinsicFunction as an
assert and removes the check from callers of UpgradeIntrinsicFunction.
Reviewers: rafael, chandlerc
Subscribers: llvm-commits-list
Differential Revision: http://reviews.llvm.org/D10915
llvm-svn: 241369
| |
It can fail trying to get the section on ELF and COFF. This makes sure the
error is handled.
llvm-svn: 241366
llvm-svn: 241365
In MachO the value of the symbol is always the address, so we can use the
simpler function.
llvm-svn: 241364
r241350 broke lld tests.
r241352 depends on r241350.
Original messages:
"[ELFYAML] Fix handling SHT_NOBITS sections by obj2yaml/yaml2obj tools"
"[ELFYAML] Make the Size field for .bss section optional"
llvm-svn: 241354
It's a common case to have a zero-size .bss section in an object file.
llvm-svn: 241352
SHT_NOBITS sections do not have content in an object file. Now the yaml2obj
tool does not accept a `Content` field for such sections, and the obj2yaml
tool does not attempt to read the section content from a file.
llvm-svn: 241350
llvm-svn: 241345
Add support for v2i8/v2i16 to v2f64 by using a sign extension to v2i32 before conversion to v2f64.
Differential Revision: http://reviews.llvm.org/D10589
llvm-svn: 241325
llvm-svn: 241324
This patch adds support for sign extension of sub-128-bit vectors, such as to v2i32. It concatenates with UNDEF subvectors up to 128 bits, performs the sign extension (i.e. as v4i32) and then extracts the target subvector.
Patch 1/2 of D10589 - the second patch covers the conversion of v2i8/v2i16 to v2f64.
llvm-svn: 241323
The assertion in getCopyFromPartsVector assumed that the vector 'part' must
match the type of the argument (arguments are potentially split into multiple
parts). However, in some cases the targets return a 'part' of the right size
but with a different type. We already handle this case correctly later on
and generate a bitcast. This commit just makes sure that we are actually
checking the property that we care about.
llvm-svn: 241312
for the arrays.
llvm-svn: 241308
This commit changes normal isel and fast isel to read the user-defined trap
function name from function attribute "trap-func-name" attached to llvm.trap or
llvm.debugtrap instead of from TargetOptions::TrapFuncName. This is needed to
use clang's command line option "-ftrap-function" for LTO and enable changing
the trap function name on a per-call-site basis.
Out-of-tree projects currently using TargetOptions::TrapFuncName to specify the
trap function name should attach attribute "trap-func-name" to the call sites
of llvm.trap and llvm.debugtrap instead.
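For out-of-tree code, a rough sketch of attaching the attribute, assuming the LLVM 3.7-era attribute API (AttrBuilder/AttributeSet; newer releases use AttributeList) and a hypothetical trap handler name:
  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Instructions.h"

  // Attach "trap-func-name" to a freshly created call to llvm.trap or
  // llvm.debugtrap. Note: setAttributes replaces whatever attributes the
  // call site already has, which is fine for a brand-new call.
  void setTrapFuncName(llvm::CallInst *TrapCall, llvm::StringRef Name) {
    llvm::AttrBuilder B;
    B.addAttribute("trap-func-name", Name); // e.g. "my_trap_handler"
    TrapCall->setAttributes(llvm::AttributeSet::get(
        TrapCall->getContext(), llvm::AttributeSet::FunctionIndex, B));
  }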
rdar://problem/21225723
Differential Revision: http://reviews.llvm.org/D10832
llvm-svn: 241305
This change is needed later when I make changes to attach string function
attributes to llvm.trap and llvm.debugtrap.
llvm-svn: 241304
llvm-svn: 241302
llvm-svn: 241301
This function can really fail since the string table offset can be out of
bounds.
Using ErrorOr makes sure the error is checked.
Hopefully a lot of the boilerplate code in tools/* can go away once we have
a diagnostic manager in Object.
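As an illustration of the pattern this moves to (the function name here is hypothetical, not the actual Object API), an ErrorOr return forces callers to check for failure before using the value:
  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/ErrorOr.h"
  #include <cstdint>
  #include <system_error>

  // Hypothetical stand-in for a string-table lookup that can now report
  // an out-of-bounds offset instead of returning garbage.
  llvm::ErrorOr<llvm::StringRef> lookupName(uint32_t Offset,
                                            llvm::StringRef StrTab) {
    if (Offset >= StrTab.size())
      return std::make_error_code(std::errc::invalid_argument);
    return StrTab.drop_front(Offset).split('\0').first;
  }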
llvm-svn: 241297
In r241285, I removed the SUBREG_TO_REG restriction from VSX swap
removal, determining that this was overly conservative. We have
another form of the same restriction in that we check for the presence
of implicit subregs in vector operations. As with SUBREG_TO_REG for
partial register conversions, an implicit subreg is safe in and of
itself, provided no other operation makes a lane-sensitive assumption
about the result. This patch removes that restriction, by removing
the HasImplicitSubreg flag and all code that relies on it.
I've added a test case that fails to optimize before this patch is
applied, and optimizes properly with the patch. Test based on a
report from Anton Blanchard.
llvm-svn: 241290
With a previous patch, the VSX swap optimization is able to recognize
the doubleword load-splat idiom that can be implemented using lxvdsx.
However, that does not cover a doubleword splat where the source is a
register. We can implement this using xxspltd (a special form of
xxpermdi). This patch teaches the swap optimization pass about this
idiom.
As a prerequisite, it also permits swap optimization to succeed for
all forms of SUBREG_TO_REG. Previously we were conservative and only
allowed SUBREG_TO_REG when it copied a full register. However, on
reflection any form of SUBREG_TO_REG is safe in and of itself, so long
as an unsafe operation is not performed on its result. In particular,
a widening SUBREG_TO_REG often occurs as an input to a doubleword
splat idiom, particularly in auto-vectorized code.
The doubleword splat idiom is an XXPERMDI operation where both source
registers are identical, and the selection mask is either 0 (splat the
first element) or 3 (splat the second element). To determine whether
the registers are identical, we use the existing mechanism for looking
through "copy-like" operations. That mechanism has a side effect of
marking the XXPERMDI operation as using a physical register, which
would invalidate its presence in a swap-optimized region. This is
correct for the form of XXPERMDI that performs a swap and hence would
be removed, but is not what we want for a doubleword-splat variety of
XXPERMDI. Therefore we reset the physical-register flag on the
XXPERMDI when it represents a splat.
A simple test case is added to verify that we generate the splat and
that we also remove the xxswapd instructions that would otherwise be
associated with the load and store of another operand.
llvm-svn: 241285
llvm-svn: 241284
When trying to upgrade @llvm.x86.sse2.psrl.dq while parsing a module,
BitcodeReader adds the function to its worklist twice, resulting in a
crash when accessing it the second time.
This patch replaces the worklist vector with a map.
Patch by Philip Pfaffe.
llvm-svn: 241281
Patch by Philip Pfaffe!
llvm-svn: 241279
This patch changes clang's linkage against dbghelp.dll from static (resolved at
load time) to on demand (resolved at the first use of the required functions).
Clang uses dbghelp.dll only in minor use cases, chiefly crash handling and
plugin loading. The dbghelp.dll library can be absent from the system, in which
case clang would fail to load. With lazy loading of dbghelp.dll, clang can work
even if dbghelp.dll is not available.
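A rough sketch of the on-demand pattern (illustrative only, not the actual patch; the wrapper name is made up):
  #include <windows.h>

  typedef BOOL(WINAPI *SymInitializeFn)(HANDLE, PCSTR, BOOL);

  // Resolve SymInitialize only when it is first needed, so the process
  // still starts even if dbghelp.dll is missing.
  static SymInitializeFn getSymInitialize() {
    static SymInitializeFn Fn = [] {
      HMODULE DbgHelp = ::LoadLibraryA("dbghelp.dll");
      return DbgHelp ? reinterpret_cast<SymInitializeFn>(
                           ::GetProcAddress(DbgHelp, "SymInitialize"))
                     : nullptr;
    }();
    return Fn; // nullptr means DbgHelp is unavailable; callers must check
  }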
Differential Revision: http://reviews.llvm.org/D10737
llvm-svn: 241271
llvm-svn: 241268
llvm-svn: 241265
The code responsible for shl folding in the DAGCombiner incorrectly assumed that all constants fit in 64 bits. This patch simply changes the way the values are compared.
It was reverted previously because of problems with comparing an APInt against a raw uint64_t; that has been fixed/changed with r241204.
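A sketch of the pitfall, with hypothetical names:
  #include "llvm/ADT/APInt.h"

  bool shiftAmountIsOutOfRange(const llvm::APInt &ShAmt, unsigned BitWidth) {
    // Comparing through the APInt API is safe for constants wider than
    // 64 bits; ShAmt.getZExtValue() >= BitWidth would assert instead.
    return ShAmt.uge(BitWidth);
  }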
llvm-svn: 241254
By default, the GraphWriter code assumes that the generic file open
program (`open` on Apple, `xdg-open` on other systems) can wait on the
forked process to complete. When the fork ends, the code would delete
the temporary dot files created, and return.
On GNU/Linux, the xdg-open program does not have a "wait for your fork
to complete before dying" option. So the behaviour was that xdg-open
would launch a process, quickly die itself, and then the GraphWriter
code would think it's OK to quickly delete all the temporary files.
Once the temporary files were deleted, the dot viewers would get very
upset, and often give you weird errors.
This change only waits on the generic open program on Apple platforms.
Elsewhere, we don't wait on the process, and hence we don't try to
clean up the temporary files.
llvm-svn: 241250
Summary: Rename some methods to make Statepoint look more like CallSite.
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10756
llvm-svn: 241235
This checks subtarget feature compatibility for inlining by verifying
that the callee's features are a strict subset of the caller's. The CPU is
included as part of the subtarget, which we can obtain from the incoming
functions, since the backend treats CPUs as feature sets.
This allows us to inline things like:
  int foo() { return baz(); }
  int __attribute__((target("sse4.2"))) bar() {
    return foo();
  }
so that generic code can be inlined into specialized functions.
llvm-svn: 241221
at the attributes on a function to determine whether or not to allow
inlining.
llvm-svn: 241220
Summary:
* Add 64-bit address space feature.
* Rename SIMD feature to SIMD128.
* Handle single-thread model with an IR pass (same way ARM does).
* Rename the generic processor to MVP, to follow the design's lead.
* Add bleeding-edge processors, with all features included.
* Fix a few DEBUG_TYPE to match other backends.
Test Plan: ninja check
Reviewers: sunfish
Subscribers: jfb, llvm-commits
Differential Revision: http://reviews.llvm.org/D10880
llvm-svn: 241211
TwoAddressInstructionPass stops after a successful commute, but a three-address
conversion might still be beneficial in some cases.
Consider:
  int foo(int a, int b) {
    return a + b;
  }
Before this commit, we emit:
  addl %esi, %edi
  movl %edi, %eax
  ret
After this commit, we try the three-address conversion:
  leal (%rsi,%rdi), %eax
  ret
Patch by Volkan Keles <vkeles@apple.com>!
Differential Revision: http://reviews.llvm.org/D10851
llvm-svn: 241206
This is mostly an NFC that improves code readability (instead of saving the
old terminator, generating a new one in front of it, and deleting the old one,
we just call a function). However, it additionally copies the debug location
from the old instruction to the replacement, which helps PR23837.
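A minimal sketch of the helper-based pattern (illustrative; BB and Dest are assumed to come from surrounding code):
  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/Transforms/Utils/BasicBlockUtils.h"

  // Swap a block's terminator for an unconditional branch; the helper
  // splices the new instruction in place of the old one and deletes it.
  void makeUnconditional(llvm::BasicBlock *BB, llvm::BasicBlock *Dest) {
    llvm::ReplaceInstWithInst(BB->getTerminator(),
                              llvm::BranchInst::Create(Dest));
  }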
llvm-svn: 241197
All file formats only need 16 bits right now, which is enough to fit
into the padding alongside the other fields.
This reduces the size of MCSymbol to 24-bytes on a 64-bit system. The
layout is now
0 | class llvm::MCSymbol
0 | class llvm::PointerIntPair SectionOrFragmentAndHasName
0 | intptr_t Value
| [sizeof=8, dsize=8, align=8
| nvsize=8, nvalign=8]
8 | unsigned int IsTemporary
8 | unsigned int IsRedefinable
8 | unsigned int IsUsed
8 | _Bool IsRegistered
8 | unsigned int IsExternal
8 | unsigned int IsPrivateExtern
8 | unsigned int Kind
9 | unsigned int IsUsedInReloc
9 | unsigned int SymbolContents
9 | unsigned int CommonAlignLog2
10 | uint32_t Flags
12 | uint32_t Index
16 | union
16 | uint64_t Offset
16 | uint64_t CommonSize
16 | const class llvm::MCExpr * Value
| [sizeof=8, dsize=8, align=8
| nvsize=8, nvalign=8]
| [sizeof=24, dsize=24, align=8
| nvsize=24, nvalign=8]
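Illustrative sketch of the packing idea (not MCSymbol's real declaration): narrow bit-fields placed after a pointer-sized member can live in bytes that would otherwise be padding:
  #include <cstdint>

  struct PackedSymbol {
    void *SectionOrFragment;      // bytes 0-7
    unsigned IsTemporary : 1;     // the bit-fields below pack together,
    unsigned IsRedefinable : 1;   // filling bytes 8-11 instead of each
    unsigned Kind : 2;            // forcing a new word
    unsigned CommonAlignLog2 : 5;
    uint32_t Flags : 16;          // 16 bits are enough for all file formats
    uint32_t Index;               // bytes 12-15
    uint64_t Offset;              // bytes 16-23
  };
  // With the Itanium C++ ABI on a 64-bit target this is 24 bytes.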
llvm-svn: 241196
llvm-svn: 241193
Summary:
According to PTX ISA:
For convenience, ld, st, and cvt instructions permit source and destination data operands to be wider than the instruction-type size, so that narrow values may be loaded, stored, and converted using regular-width registers. For example, 8-bit or 16-bit values may be held directly in 32-bit or 64-bit registers when being loaded, stored, or converted to other types and sizes. The operand type checking rules are relaxed for bit-size and integer (signed and unsigned) instruction types; floating-point instruction types still require that the operand type-size matches exactly, unless the operand is of bit-size type.
So the ISA does not support extending loads or truncating stores for floating-point values. This is reflected in the code by setting the loadext/truncstore actions to Expand for floating-point types, but vectors of floating-point values are not taken care of.
As a result, a load of a vector of floats followed by an fp_extend may be combined by the DAGCombiner into an extload, and the extload may be lowered to NVPTXISD::LoadV2 with extension information. However, NVPTXISD::LoadV2 does not perform the extension, and no extending instructions are inserted. The result is PTX instructions with mismatched types, such as
  ld.v2.f32 {%fd3, %fd4}, [%rd2]
This patch adds the correct actions for vectors of floats, so the DAGCombiner does not create extending loads and correct code is generated.
Patched by Gang Hu.
Test Plan: Test case attached.
Reviewers: jingyue
Reviewed By: jingyue
Subscribers: llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D10876
llvm-svn: 241191
Given that alignments are always powers of 2, just encode the log2 of the alignment.
This matches how we encode alignment on IR GlobalValue's for example.
This compresses the CommonAlign member down to 5 bits which allows it
to pack better with the surrounding fields.
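An illustrative sketch of the encoding (not the actual MCSymbol accessors):
  #include "llvm/Support/MathExtras.h"
  #include <cassert>
  #include <cstdint>

  // Any power-of-2 alignment is stored as its log2, which fits in 5 bits.
  static unsigned encodeCommonAlign(uint64_t Align) {
    assert(Align && llvm::isPowerOf2_64(Align) && "not a power of 2");
    return llvm::Log2_64(Align); // e.g. 16 -> 4
  }

  static uint64_t decodeCommonAlign(unsigned Log2) {
    return uint64_t(1) << Log2;  // e.g. 4 -> 16
  }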
Reviewed by Duncan Exon Smith.
llvm-svn: 241189