llvm-svn: 310253
|
X86InterleavedAccess (VF16 stride 4).
This patch expands the support of lowerInterleavedStore to 16x8i stride 4.
LLVM creates suboptimal shuffle code-gen for AVX2. Overall, this patch is a specific fix for the pattern (stride = 4, VF = 16), and we plan to include more patterns in the future.
The goal of the patch is to optimize the following sequence:
At the end of the computation, we have ymm2, ymm0, ymm12 and ymm3, each holding
16 chars:
c0, c1, ..., c16
m0, m1, ..., m16
y0, y1, ..., y16
k0, k1, ..., k16
And these need to be transposed/interleaved and stored like so:
c0 m0 y0 k0 c1 m1 y1 k1 c2 m2 y2 k2 c3 m3 y3 k3 ....
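For reference, a minimal C++ sketch of the equivalent scalar store pattern (the function and parameter names are illustrative, not from the patch; the lowering itself operates on shuffles rather than a scalar loop):

  #include <cstddef>
  #include <cstdint>

  // Interleave four 16-byte planes (e.g. C/M/Y/K channels) into one
  // array-of-structs buffer: c0 m0 y0 k0 c1 m1 y1 k1 ...
  // A vectorized form of this loop is the stride-4, VF=16 interleaved
  // store that the new lowering targets.
  void interleaveStore(const uint8_t c[16], const uint8_t m[16],
                       const uint8_t y[16], const uint8_t k[16],
                       uint8_t out[64]) {
    for (size_t i = 0; i < 16; ++i) {
      out[4 * i + 0] = c[i];
      out[4 * i + 1] = m[i];
      out[4 * i + 2] = y[i];
      out[4 * i + 3] = k[i];
    }
  }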
Differential Revision: https://reviews.llvm.org/D35829
llvm-svn: 310252
|
See bug 32621: https://bugs.llvm.org//show_bug.cgi?id=32621
Reviewers: vpykhtin, SamWot, arsenm
Differential Revision: https://reviews.llvm.org/D35902
llvm-svn: 310251
|
This patch addresses two issues with assembly and disassembly for VMRS/VMSR:
1. Currently, VMRS/VMSR instructions accessing fpsid, mvfr{0-2} and fpexc are
accepted for non-ARMv8-A targets.
2. All VMRS/VMSR instructions accept writing to/reading from PC and SP, whereas
only ARMv7-A and ARMv8-A should be allowed to access SP, and none should access PC.
This patch addresses those issues and adds tests for these cases.
Differential Revision: https://reviews.llvm.org/D36306
llvm-svn: 310243
|
llvm-svn: 310242
|
--asan-force-dynamic-shadow
Fails with "Instruction does not dominate all uses!"
llvm-svn: 310241
|
The NewNodesMustHaveLegalTypes flag is set to false at the beginning of CodeGenAndEmitDAG, and set to true after legalizing types.
But before calling CodeGenAndEmitDAG we build the DAG for the basic block.
So for the first basic block NewNodesMustHaveLegalTypes would be 'false' during the SDAG building, and for all other basic blocks it would be 'true'.
This patch sets the flag to false before building the SDAG for each basic block.
Differential Revision: https://reviews.llvm.org/D33435
llvm-svn: 310239
|
While here, rename `i` to `Rank` as the latter is more
self-explanatory (and this code also uses `I` two lines below to
identify an Instruction).
llvm-svn: 310238
|
This commit rearranges the checks to avoid calls to getRank()
when not needed (e.g. when RHS == LHS).
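In spirit, the rearrangement looks like the following standalone sketch (the rank table and the rankedBefore helper are stand-ins; only getRank is named after the actual code):

  #include <map>
  #include <string>

  // Stand-in rank table; in the real pass getRank() computes ranks for
  // llvm::Value operands.
  static std::map<std::string, unsigned> RankMap;
  static unsigned getRank(const std::string &V) { return RankMap[V]; }

  // Check the cheap equality case first so getRank() is only queried when
  // the answer can actually depend on it.
  bool rankedBefore(const std::string &LHS, const std::string &RHS) {
    if (LHS == RHS)
      return false;            // identical operands: no rank lookups needed
    return getRank(LHS) < getRank(RHS);
  }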
llvm-svn: 310237
|
Summary: This is all handled by SimplifyDemandedBits.
Reviewers: spatel, davide
Reviewed By: davide
Subscribers: davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D36382
llvm-svn: 310234
|
llvm-svn: 310233
|
C) ^ signmask -> (X + C + signmask)' for vector splats.
llvm-svn: 310232
|
Differential Revision: https://reviews.llvm.org/D36365
llvm-svn: 310223
|
llvm-svn: 310217
|
We can convert any select-of-constants to math ops:
http://rise4fun.com/Alive/d7d
For this patch, I'm enhancing an existing x86 transform that uses fake multiplies
(they always become shl/lea) to avoid cmov or branching. The current code misses
cases where we have a negative constant and a positive constant, so this is just
trying to plug that hole.
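A rough source-level illustration of the select-to-math idea (the constants are made up, and the real transform operates on DAG nodes rather than C++):

  #include <cassert>

  // Rewrite a select between two constants as add + multiply of the i1
  // condition; the multiply-by-constant later becomes shl/lea instead of
  // a cmov or a branch.
  int selectConst(bool cond) {
    int branchy = cond ? 7 : -3;       // select-of-constants form
    int mathy = -3 + int(cond) * 10;   // C2 + zext(cond) * (C1 - C2)
    assert(branchy == mathy);
    return mathy;
  }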
The DAGCombiner diff prevents us from hitting a terrible inefficiency: we can start
with a select in IR, create a select DAG node, convert it into a sext, convert it
back into a select, and then lower it to sext machine code.
Some notes about the test diffs:
1. 2010-08-04-MaskedSignedCompare.ll - We were creating control flow that didn't exist in the IR.
2. memcmp.ll - Choosing -1 or 1 is the case that got me looking at this again. I
think we could avoid the push/pop in some cases if we used 'movzbl %al' instead of an xor on
a different reg? That's a post-DAG problem though.
3. mul-constant-result.ll - The trade-off between sbb+not vs. setne+neg could be addressed if
that's a regression, but I think those would always be nearly equivalent.
4. pr22338.ll and sext-i1.ll - These tests have undef operands, so I don't think we actually care about these diffs.
5. sbb.ll - This shows a win for what I think is a common case: choose -1 or 0.
6. select.ll - There's another borderline case here: cmp+sbb+or vs. test+set+lea? Also, sbb+not vs. setae+neg shows up again.
7. select_const.ll - These are motivating cases for the enhancement; replace cmov with cheaper ops.
Assembly differences between movzbl and xor to avoid a partial reg stall are caused later by the X86 Fixup SetCC pass.
Differential Revision: https://reviews.llvm.org/D35340
llvm-svn: 310208
|
llvm-svn: 310202
|
The patch from r310028 fixed things to work with the new
`LLVMTargetMachine` constructor that came in on r309911.
However, the fix was partial since an object of type
`CodeModel::Model` must be passed to `LLVMTargetMachine`
(not one of `Optional<CodeModel::Model>`).
This patch fixes the problem in the same fashion that r309911
did for other machines: by checking if the passed optional
code model has a value and using `CodeModel::Small` if not.
llvm-svn: 310200
|
vectors.
llvm-svn: 310195
|
Summary:
On older processors this instruction encoding is treated as a NOP.
MSVC doesn't disable intrinsics based on features the way clang/gcc does. Because the PAUSE instruction encoding doesn't crash older processors, some software out there uses these intrinsics without checking for SSE2.
This change also seems to be consistent with gcc behavior.
Fixes PR34079
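As an aside, a typical spin-wait use of the PAUSE intrinsic, purely for illustration (not taken from the patch or its tests):

  #include <atomic>
  #include <immintrin.h>

  // Spin-wait loop issuing PAUSE each iteration. On processors that predate
  // SSE2 the PAUSE encoding simply executes as a NOP, which is why accepting
  // the intrinsic without the sse2 feature is safe.
  void spinWait(const std::atomic<bool> &ready) {
    while (!ready.load(std::memory_order_acquire))
      _mm_pause();
  }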
Reviewers: RKSimon, zvi
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D36361
llvm-svn: 310190
|
llvm-svn: 310186
|
shifts to handle vector splats too.
llvm-svn: 310185
|
Unfortunately, it looks like there are some other missed optimizations in the generated code for some of these cases. I'll try to look at some of those next.
llvm-svn: 310184
|
different opcodes, NFCI.
Differential Revision: https://reviews.llvm.org/D35769
llvm-svn: 310183
|
In addition to moving the shift transforms over, we may want to
detect too-wide rotate patterns here (PR34046).
llvm-svn: 310181
|
Summary: Thanks everyone involved in fixing the outstanding issues.
Reviewers: rovka, MatzeB, efriedma
Reviewed By: MatzeB
Subscribers: aemerson, javed.absar, llvm-commits, kristof.beyls
Differential Revision: https://reviews.llvm.org/D36153
llvm-svn: 310180
|
llvm-svn: 310174
|
No functionality change intended.
llvm-svn: 310173
|
Kernels aren't callable, so they don't have a call preserved mask.
llvm-svn: 310172
|
After the previous series of patches, this is now trivial and deletes
a pretty astonishing amount of complexity. This has been a long time
coming, as the move toward a PO sequence of RefSCCs started eroding the
underlying use cases for this half of the data structure.
Among the biggest advantages here is that now there aren't two
independent data structures that need to stay in sync.
Some of my profiling has also indicated that updating the parent sets
was among the most expensive parts of the lazy call graph. Eliminating
it wholesale is likely to be a nice win in terms of compile time.
Last but not least, I had discussed with some folks previously keeping
it around for asserts and other correctness checking, but once the
fundamentals of the parent and child checking were implemented without
the parent sets, their value in correctness checking was tiny and nowhere
near worth the cost of the complexity required to keep everything
up-to-date.
llvm-svn: 310171
|
isDescendantOf methods on RefSCCs in terms of the forward edges rather
than the parent sets.
This is technically slower, but probably not interestingly slower, and
all of these routines were already so expensive that they're guarded
behind both !NDEBUG and EXPENSIVE_CHECKS.
This removes another non-critical usage of parent sets.
I've also added some comments to try and help clarify to any potential
users the costs of these routines. They're mostly useful for debugging,
asserts, or other queries.
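Conceptually, the reimplementation is just a forward walk, along the lines of this generic sketch (standalone types, not the actual LazyCallGraph code):

  #include <set>
  #include <vector>

  struct Node {
    std::vector<Node *> Edges;   // forward (outgoing) reference edges
  };

  // Search the forward edges from Src for Dst. This is O(nodes + edges),
  // which is why such queries are reserved for asserts and expensive checks.
  bool isAncestorOf(Node *Src, Node *Dst) {
    std::set<Node *> Visited;
    std::vector<Node *> Worklist(Src->Edges.begin(), Src->Edges.end());
    while (!Worklist.empty()) {
      Node *N = Worklist.back();
      Worklist.pop_back();
      if (!Visited.insert(N).second)
        continue;                // already seen, avoid re-walking cycles
      if (N == Dst)
        return true;
      Worklist.insert(Worklist.end(), N->Edges.begin(), N->Edges.end());
    }
    return false;
  }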
llvm-svn: 310170
|
walk over the parent set.
When removing a single function from the call graph, we previously would
walk the entire RefSCC's parent set and then walk every outgoing edge
just to find the ones to remove. In addition to this being quite high
complexity in theory, it is also the last fundamental use of the parent
sets.
With this change, when we remove a function we transform the node
containing it to be recognizably "dead" and then teach the edge
iterators to recognize edges to such nodes and skip them the same way
they skip null edges.
We can't move fully to using "dead" nodes -- when disconnecting two live
nodes we need to null out the edge. But the complexity this adds to the
edge sequence isn't too bad and the simplification of lazily handling
this seems like a significant win.
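The iterator-side trick is, in spirit, the following generic sketch (simplified standalone types, not the real edge iterator):

  #include <vector>

  struct Node {
    bool Dead = false;   // set when the function the node wraps is removed
  };

  // Forward iterator over an edge list that transparently skips both null
  // slots and edges whose target node has been marked dead.
  class EdgeIter {
    using It = std::vector<Node *>::const_iterator;
    It Cur, End;
    void skipInvalid() {
      while (Cur != End && (*Cur == nullptr || (*Cur)->Dead))
        ++Cur;
    }

  public:
    EdgeIter(It Begin, It E) : Cur(Begin), End(E) { skipInvalid(); }
    Node *operator*() const { return *Cur; }
    EdgeIter &operator++() {
      ++Cur;
      skipInvalid();
      return *this;
    }
    bool operator==(const EdgeIter &RHS) const { return Cur == RHS.Cur; }
    bool operator!=(const EdgeIter &RHS) const { return Cur != RHS.Cur; }
  };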
llvm-svn: 310169
|
Add memory synchronization semantics to LSE Atomics.
The memory semantics feature will be added in a subsequent patch.
In this patch, several corrections were added to the existing LSE Atomics
implementation, based on the ARM Errata D11904 from 05/12/2017.
Patch by: steleman
Differential Revision: https://reviews.llvm.org/D35319
llvm-svn: 310167
|
The definition of 'false' here was already pretty vague and debatable,
and I'm about to add another potential 'false' that would actually make
much more sense in a bool operator. Especially given how rarely this is
used, a nicely named method seems better.
llvm-svn: 310165
|
structures, actually null out the graph pointers as well. We won't ever
update these, and we certainly shouldn't be calling any methods on them,
so it seems good to defensively nuke them.
llvm-svn: 310164
|
pointers in node objects, just walk the map from function to node.
It doesn't have stable ordering, but works just as well and is much
simpler. We don't need ordering when just updating internal pointers.
llvm-svn: 310163
|
pointers.
This is completely unnecessary as we have a trivial list of RefSCCs now
that we can walk.
llvm-svn: 310162
|
merging RefSCCs.
The logic to directly use the reference edges is simpler and not
substantially slower (despite the comments to the contrary) because this
is not actually an especially hot part of LCG in practice.
llvm-svn: 310161
|
type to the 'select' type, do it after shifting right instead of just bailing.
Previously we were always trying to emit the zext or truncate before any shift. This meant that if the 'and' mask was larger than the size of the truncate, we would skip the transformation.
Now we shift the result of the 'and' right first, leaving the bit within the range of the truncate.
This matches what we are doing in foldSelectICmpAndOr for the same problem.
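A source-level picture of the case this unblocks (the mask and constants are made up for illustration):

  #include <cassert>
  #include <cstdint>

  // The 'and' mask (0x100) does not fit in the narrow i8 result type, so
  // truncating before shifting would lose the interesting bit. Shifting
  // right first moves the bit into range; only then do we truncate.
  uint8_t foldExample(uint32_t x) {
    uint8_t selectForm = (x & 0x100) ? 4 : 0;         // select on a masked bit
    uint8_t shiftForm = (uint8_t)((x & 0x100) >> 6);  // shift right, then trunc
    assert(selectForm == shiftForm);
    return shiftForm;
  }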
llvm-svn: 310159
|
Summary:
Direct calls to dllimport functions are very common on Windows. We should
add them to the -O0 fast path.
Reviewers: rafael
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D36197
llvm-svn: 310152
|
to implement -exit_on_src_pos
llvm-svn: 310151
|
captured at run-time
llvm-svn: 310148
|
This extends the native reader to enable llvm-pdbutil to list the enums in a
PDB and it includes a simple test. It does not yet list the values in the
enumerations, which requires an actual implementation of
NativeEnumSymbol::FindChildren.
To exercise this code, use a command like:
llvm-pdbutil pretty -native -enums foo.pdb
Differential Revision: https://reviews.llvm.org/D35738
llvm-svn: 310144
|
Name: narrow_sub
%sub = sub i32 C1, %x
%r = trunc i32 %sub to i8
=>
%xn = trunc i32 %x to i8
%narrowC = trunc i32 C1 to i8
%r = sub i8 %narrowC, %xn
Name: narrow_add
%add = add i32 %x, C1
%r = trunc i32 %add to i8
=>
%xn = trunc i32 %x to i8
%narrowC = trunc i32 C1 to i8
%r = add i8 %xn, %narrowC
Name: narrow_mul
%mul = mul i32 %x, C1
%r = trunc i32 %mul to i8
=>
%xn = trunc i32 %x to i8
%narrowC = trunc i32 C1 to i8
%r = mul i8 %xn, %narrowC
http://rise4fun.com/Alive/QpS
This doesn't solve PR34046 (failure to recognize rotate):
https://bugs.llvm.org/show_bug.cgi?id=34046
...but it reduces an extra complication in the description examples
to a form that we can more easily match.
llvm-svn: 310141
|
Summary:
Tools like clang that use RemoveFileOnSignal on their output files
weren't actually able to clean up their outputs before this change. Now
the call to llvm::sys::fs::remove succeeds and the temporary file is
deleted. This is a stop-gap to fix clang before implementing the
solution outlined in PR34070.
Reviewers: davide
Subscribers: llvm-commits, hiraditya
Differential Revision: https://reviews.llvm.org/D36337
llvm-svn: 310137
|
NFC.
llvm-svn: 310129
|
llvm-svn: 310126
|
llvm-svn: 310123
|
llvm-svn: 310118
|
Pushes the sext onto the operands of a Sub if NSW is present.
Also adds support for propagating the nowrap flags of the
llvm.ssub.with.overflow intrinsic during analysis.
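The identity being exploited, stated at the source level (illustrative only; it holds precisely because the nsw flag guarantees the narrow subtraction does not overflow):

  #include <cassert>
  #include <cstdint>

  // With no signed wrap on (a - b), sign-extending the difference equals
  // subtracting the sign-extended operands, so the sext can be pushed onto
  // the operands of the sub.
  int64_t sextOfSub(int32_t a, int32_t b) {
    // Precondition (the nsw guarantee): a - b does not overflow int32_t.
    int64_t narrowThenExtend = (int64_t)(a - b);
    int64_t extendThenSub = (int64_t)a - (int64_t)b;
    assert(narrowThenExtend == extendThenSub);
    return extendThenSub;
  }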
Differential Revision: https://reviews.llvm.org/D35256
llvm-svn: 310117
|
Its sole purpose was to avoid spreading around ifdefs related to
building global-isel. Since r309990, GlobalISel is not optional anymore;
thus, we can get rid of this mechanism altogether.
NFC.
llvm-svn: 310115