Commit messages
llvm-svn: 271823
A basic block could contain:
  %cp = cleanuppad []
  cleanupret from %cp unwind to caller
This basic block is empty and is thus a candidate for removal. However,
there can be other uses of %cp outside of this basic block. This is
only possible in unreachable blocks.
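For illustration, a hypothetical fragment (not from the patch) where the pad block looks removable but %cp has a second user in unreachable code:

  bb:                        ; empty, a candidate for removal
    %cp = cleanuppad []
    cleanupret from %cp unwind to caller

  dead:                      ; unreachable, but still a user of %cp
    cleanupret from %cp unwind to caller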
Make our transform more correct by checking that the pad has a single
user before removing the BB.
This fixes PR28005.
llvm-svn: 271816
This is step 1 of an unknown number of steps towards fixing PR28001:
https://llvm.org/bugs/show_bug.cgi?id=28001
llvm-svn: 271810
llvm-svn: 271809
llvm-svn: 271808
llvm-svn: 271806
llvm-svn: 271805
Allows XOP to vectorize BITREVERSE; other targets will follow as their cost models improve.
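For reference, a minimal sketch of the kind of vector bitreverse call a vectorizer would now emit for XOP (the wrapper function is hypothetical; the intrinsic is the generic llvm.bitreverse):

  declare <2 x i64> @llvm.bitreverse.v2i64(<2 x i64>)

  define <2 x i64> @rev_v2i64(<2 x i64> %a) {
    %r = call <2 x i64> @llvm.bitreverse.v2i64(<2 x i64> %a)
    ret <2 x i64> %r
  }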
llvm-svn: 271803
Windows Itanium is nearly identical to windows-msvc (MS ABI for C, Itanium
ABI for C++). Enable TLS support for the target, similar to the MSVC model.
llvm-svn: 271797
The AVX2 v16i16 shift lowering works by unpacking to 2 x v8i32, performing the shift and then truncating the result.
The unpacking is used to place the values in the upper 16 bits so that we can correctly sign-extend for SRA shifts. Unfortunately we weren't ensuring that the lower 16 bits were zero, which SHL needs in order to shift in zero bits.
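A minimal IR example (hypothetical, not from the patch) of a variable shift that takes this lowering path:

  define <16 x i16> @shl_v16i16(<16 x i16> %a, <16 x i16> %b) {
    %r = shl <16 x i16> %a, %b
    ret <16 x i16> %r
  }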
llvm-svn: 271796
Add the MMX implementation to the SimplifyDemandedUseBits SSE/AVX MOVMSK support added in D19614.
Requires a minor tweak, as llvm.x86.mmx.pmovmskb takes an x86_mmx argument - so we have to be explicit about the implied v8i8 vector type.
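For context, a minimal sketch of a call to the intrinsic (the wrapper function is hypothetical; I believe the declaration below matches the in-tree definition):

  declare i32 @llvm.x86.mmx.pmovmskb(x86_mmx)

  define i32 @msk(x86_mmx %v) {
    %m = call i32 @llvm.x86.mmx.pmovmskb(x86_mmx %v)
    ret i32 %m
  }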
llvm-svn: 271789
Original commit message:
[sancov] Run sancov tests on more platforms
The only tests that need to be run on Linux are the ones that use C++
demangling. I'm assuming they will fail on Mac, since __cxa_demangle
there won't handle the non-double-underscore prefixed mangled names.
llvm-svn: 271763
and/or tests aren't working on Windows currently.
There seems to be some problem with quoting the file paths. I don't
understand the test structure here or the code well enough to try to
come up with a way to correctly handle paths with backslashes in them,
and this has kept the Windows builds failing for 7 hours now, so I'm
reverting the whole thing to bring them back to life. Sorry for the
disruption; a couple of these were bug fixes anyway that can be folded
into a fresh commit.
Reverts the following patches:
r271756: Clean up the way we create the input filenames buffer (NFC)
r271748: Fix use-after-free from discarded MemoryBuffer (NFC)
r271710: Fix option description (NFC)
r271709: Add option to ingest filepaths from a file
llvm-svn: 271760
llvm-svn: 271753
This is allowed (though rarely used) and useful for keeping your tests
short.
llvm-svn: 271752
llvm-svn: 271745
Summary:
Adds an option -esan-assume-intra-cache-line which causes esan to assume
that a single memory access touches just one cache line, even if it is not
aligned, for better performance at a potential accuracy cost. Experiments
show that the performance difference can be 2x or more, and accuracy loss
is typically negligible, so we turn this on by default. This currently
applies just to the working set tool.
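A hypothetical invocation that opts out of the new default for an accuracy-sensitive run (the -fsanitize spelling is my assumption about the driver flag; the pass option name is from this patch):

  clang -fsanitize=efficiency-working-set -mllvm -esan-assume-intra-cache-line=0 test.c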
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20978
llvm-svn: 271743
llvm-svn: 271738
Differential Revision: http://reviews.llvm.org/D20945
llvm-svn: 271736
Differential Revision: http://reviews.llvm.org/D20648
llvm-svn: 271728
Summary:
Previously we would try to load PDBs for every PE executable we tried to
symbolize. If that failed, we would fall back to DWARF. If there wasn't
any DWARF, we'd print mostly useless symbol information using the export
table.
With this change, we only try to load PDBs for executables that claim to
have them. If that fails, we can now print an error rather than falling
back silently. This should make it a lot easier to diagnose and fix
common symbolization issues, such as not having DIA or not having a PDB.
Reviewers: zturner, eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20982
llvm-svn: 271725
llvm-svn: 271718
This is very similar to r271677, but for extracts from i32 with the SIGN_EXTEND
acting on an arithmetic shift.
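A hypothetical IR fragment with the shape being matched (not taken from the patch):

  define i64 @extract_i32(i32 %x) {
    %sh = ashr i32 %x, 3
    %ext = sext i32 %sh to i64
    ret i64 %ext
  }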
llvm-svn: 271717
Differential Revision: http://reviews.llvm.org/D20980
llvm-svn: 271709
Under emscripten, C code can take the address of a function implemented
in JavaScript (which is exposed via an import in wasm). Because imports
do not have a linear memory address in wasm, we need to generate a thunk
to be the target of the indirect call; it calls the import directly.
To make this possible, LLVM needs to emit the type signatures for these
functions, because they may never be called directly or referred to
anywhere other than where the address is taken.
This uses a new .s directive (.functype) which specifies the signature.
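As an illustration, in present-day LLVM the directive is written as below; the exact syntax at the time of this commit may have differed, and the function name is hypothetical:

  .functype js_callback (i32, i32) -> (i32)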
Differential Revision: http://reviews.llvm.org/D20891
Re-apply r271599 but instead of bailing with an error when a declared
function has multiple returns, replace it with a pointer argument. Also
add the test case I forgot to 'git add' last time around.
llvm-svn: 271703
Copied from test/CodeGen/X86
llvm-svn: 271698
The only tests that need to be run on Linux are the ones that use C++
demangling. I'm assuming they will fail on Mac, since __cxa_demangle
there won't handle the non-double-underscore prefixed mangled names.
llvm-svn: 271695
This re-applies r271611, and hopefully the bots won't break this time.
Although ld64 always outputs linkedit data in the same order, it isn't actually required to. This change makes yaml2obj resilient when the offsets appear in arbitrary order.
llvm-svn: 271687
This only translates data members for now. Translating overloaded
methods is complicated, so I stopped short of doing that.
Reviewers: aaboud
Differential Revision: http://reviews.llvm.org/D20924
llvm-svn: 271680
in more instructions than the library call.
Differential Revision: http://reviews.llvm.org/D20958
llvm-svn: 271678
We were assuming all SBFX-like operations would have the shl/asr form, but often
when the field being extracted is an i8 or i16, we end up with a
SIGN_EXTEND_INREG acting on a shift instead.
This is a port of r213754 from ARM to AArch64.
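A hypothetical example of the i8 case (not from the patch); the trunc/sext pair becomes a SIGN_EXTEND_INREG of the shift during DAG construction:

  define i32 @sbfx_i8(i32 %x) {
    %sh = lshr i32 %x, 4
    %t = trunc i32 %sh to i8
    %e = sext i8 %t to i32
    ret i32 %e
  }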
llvm-svn: 271677
There was concern that creating bitcasts for the simpler potential select pattern:

  define <2 x i64> @vecBitcastOp1(<4 x i1> %cmp, <2 x i64> %a) {
    %a2 = add <2 x i64> %a, %a
    %sext = sext <4 x i1> %cmp to <4 x i32>
    %bc = bitcast <4 x i32> %sext to <2 x i64>
    %and = and <2 x i64> %a2, %bc
    ret <2 x i64> %and
  }

might lead to worse code for some targets, so this patch is matching the larger
patterns seen in the test cases.

The motivating example for this patch is this IR produced via SSE intrinsics in C:

  define <2 x i64> @gibson(<2 x i64> %a, <2 x i64> %b) {
    %t0 = bitcast <2 x i64> %a to <4 x i32>
    %t1 = bitcast <2 x i64> %b to <4 x i32>
    %cmp = icmp sgt <4 x i32> %t0, %t1
    %sext = sext <4 x i1> %cmp to <4 x i32>
    %t2 = bitcast <4 x i32> %sext to <2 x i64>
    %and = and <2 x i64> %t2, %a
    %neg = xor <4 x i32> %sext, <i32 -1, i32 -1, i32 -1, i32 -1>
    %neg2 = bitcast <4 x i32> %neg to <2 x i64>
    %and2 = and <2 x i64> %neg2, %b
    %or = or <2 x i64> %and, %and2
    ret <2 x i64> %or
  }

For an AVX target, this is currently:

  vpcmpgtd %xmm1, %xmm0, %xmm2
  vpand %xmm0, %xmm2, %xmm0
  vpandn %xmm1, %xmm2, %xmm1
  vpor %xmm1, %xmm0, %xmm0
  retq

With this patch, it becomes:

  vpmaxsd %xmm1, %xmm0, %xmm0

Differential Revision: http://reviews.llvm.org/D20774
llvm-svn: 271676
Test added as per discussion in http://reviews.llvm.org/D20588.
The macro is just a demonstration, useless in practice.
Coding style fixes.
Differential Revision: http://reviews.llvm.org/D20797
llvm-svn: 271675
llvm-svn: 271674
llvm-svn: 271673
new instruction to ARM and AArch64 targets and several system registers.
Patch by: Roger Ferrer Ibanez and Oliver Stannard
Differential Revision: http://reviews.llvm.org/D20282
llvm-svn: 271670
llvm-svn: 271668
modifiers.
Summary: Depends on D20625
Reviewers: tstellarAMD, vpykhtin, artem.tamazov
Subscribers: arsenm, kzhuravl
Differential Revision: http://reviews.llvm.org/D20674
llvm-svn: 271662
These currently all lower to regular loads; generic nontemporal load support will be added in a future patch.
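For reference, a minimal sketch (hypothetical, not from the patch) of a nontemporal vector load in IR:

  define <4 x float> @ntload(<4 x float>* %p) {
    %v = load <4 x float>, <4 x float>* %p, align 16, !nontemporal !0
    ret <4 x float> %v
  }

  !0 = !{i32 1}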
llvm-svn: 271659
llvm-svn: 271656
_dpp and _sdwa suffixes in mnemonics.
Summary:
Added custom converters for SDWA instructions to support optional operands and modifiers.
Support for the _dpp and _sdwa suffixes allows forcing the DPP or SDWA encoding for an instruction.
Reviewers: tstellarAMD, vpykhtin, artem.tamazov
Subscribers: arsenm, kzhuravl
Differential Revision: http://reviews.llvm.org/D20625
llvm-svn: 271655
types
llvm-svn: 271654
Summary: They aren't necessary since llvm-objdump can auto-detect the architecture.
Reviewers: sdardis
Subscribers: jfb, dsanders, llvm-commits, sdardis
Differential Revision: http://reviews.llvm.org/D20904
llvm-svn: 271653
vector types
llvm-svn: 271651
llvm-svn: 271646
256-bit vector types
llvm-svn: 271645
Summary:
N32 support will follow in a later patch since the symbol version of 'la'
incorrectly believes N32 to have 64-bit pointers and rejects it early.
This fixes the three incorrectly expanded 'la' macros found in bionic.
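For context, a hypothetical O32 example of the macro; the expansion shown is the typical symbol-address sequence, and the exact output depends on the symbol and ABI:

  la    $4, symbol
  # typically expands to:
  lui   $4, %hi(symbol)
  addiu $4, $4, %lo(symbol)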
Reviewers: sdardis
Subscribers: dsanders, llvm-commits, sdardis
Differential Revision: http://reviews.llvm.org/D20820
llvm-svn: 271644
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.
The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions); an auto-upgrade path is added to convert old calls.
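As a sketch of the post-change shape, assuming the mask simply becomes a same-width integer vector (the exact declaration is my assumption based on the description above):

  declare <2 x double> @llvm.x86.xop.vpermil2pd(<2 x double>, <2 x double>, <2 x i64>, i8)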
Mask decoding/target shuffle support will be added in future patches.
Differential Revision: http://reviews.llvm.org/D20049
llvm-svn: 271633
When printing line information and file checksums, we were printing
the file offset field from the struct header. This teaches
llvm-pdbdump how to turn those numbers into the filename. In the
case of file checksums, this is done by looking in the global
string table. In the case of line contributions, this is done
by indexing into the file names buffer of the DBI stream. Why
they use a different technique I don't know.
llvm-svn: 271630
the VEX encoded ones.
llvm-svn: 271629