No functionality change intended, maybe a tiny performance improvement.
llvm-svn: 270997
This just makes it a bit more clear that we don't intend to use a
deleted node for anything here.
llvm-svn: 270931
llvm-svn: 270749
llvm-svn: 270747
llvm-svn: 270738
LegalizeIntegerTypes does not have a way to expand multiplications for large
integer types (i.e. larger than twice the native bit width). There's no
standard runtime call to use in that case, and so we'd just assert.
Unfortunately, as it turns out, it is possible to hit this case from
standard-ish C code in rare cases. A particular case a user ran into yesterday
involved an __int128 induction variable and a loop with a quadratic (not
linear) recurrence which triggered some backend logic using SCEVExpander. In
this case, the BinomialCoefficient code in SCEV generates some i129 variables,
which get widened to i256. At a high level, this is not actually good (i.e. the
underlying optimization, PPCLoopPreIncPrep, should not be transforming the loop
in question for performance reasons), but regardless, the backend shouldn't
crash because of cost-modeling issues in the optimizer.
This is a straightforward implementation of the multiplication expansion, based
on the algorithm in Hacker's Delight. I validated it against the code for the
mul256b function from http://locklessinc.com/articles/256bit_arithmetic/ using
random inputs. There should be no functional change for previously-working code
(the new expansion code only replaces an assert).
Fixes PR19797.
llvm-svn: 270720
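As an illustration of the expansion the message describes, here is a minimal standalone C++ sketch of word-by-word multiplication with carry propagation, in the spirit of the Hacker's Delight algorithm; the actual LegalizeIntegerTypes code works on the legalized half-width parts of the wide integer rather than on a limb vector.

#include <cstdint>
#include <vector>

// Schoolbook multiplication of two N-limb unsigned integers (little-endian
// 32-bit limbs), keeping only the low N limbs of the product.
std::vector<uint32_t> mulLowParts(const std::vector<uint32_t> &A,
                                  const std::vector<uint32_t> &B) {
  size_t N = A.size();                  // assumes A.size() == B.size()
  std::vector<uint32_t> R(N, 0);
  for (size_t i = 0; i < N; ++i) {
    uint64_t Carry = 0;
    for (size_t j = 0; i + j < N; ++j) {
      uint64_t Cur = (uint64_t)A[i] * B[j] + R[i + j] + Carry;
      R[i + j] = (uint32_t)Cur;         // low 32 bits of the partial sum
      Carry = Cur >> 32;                // high 32 bits carry into the next limb
    }
  }
  return R;
}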
llvm-svn: 270190
There are at least two places (DAGCombiner, X86ISelLowering) where this could be used instead
of ad hoc, watered-down code that tries to match a power-of-2 pattern.
Differential Revision: http://reviews.llvm.org/D20439
llvm-svn: 270073
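For reference, the classic single-bit test that such ad hoc power-of-2 matching reduces to, as a minimal standalone C++ sketch:

#include <cstdint>

// A value with exactly one bit set is non-zero, and clearing its lowest set
// bit leaves zero.
bool isPowerOfTwo(uint64_t X) {
  return X != 0 && (X & (X - 1)) == 0;
}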
llvm-svn: 270007
When processing inline asm that contains errors, make sure we can recover
gracefully by creating an UNDEF SDValue for the inline asm statement before
returning from SelectionDAGBuilder::visitInlineAsm. This is necessary for
consumers that don't exit on the first error that is emitted (e.g. clang)
and that would assert later on.
Fixes PR24071.
Patch by Diana Picus.
llvm-svn: 269811
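A rough sketch of the recovery idea, with simplified names that are not the literal SelectionDAGBuilder code: once the bad asm has been diagnosed, map the call to an UNDEF of its result type so later consumers see a well-formed value instead of asserting.

// Hypothetical shape of the error path; HadError, Call, TLI, and DAG stand in
// for the surrounding SelectionDAGBuilder state.
if (HadError && !Call.getType()->isVoidTy()) {
  EVT ResultVT = TLI.getValueType(DAG.getDataLayout(), Call.getType());
  setValue(&Call, DAG.getUNDEF(ResultVT)); // consumers get a well-formed node
  return;                                  // skip building the INLINEASM node
}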
Allow two users of the condition if the other user
is also a min/max select, i.e.:
%c = icmp slt i32 %x, %y
%min = select i1 %c, i32 %x, i32 %y
%max = select i1 %c, i32 %y, i32 %x
llvm-svn: 269699
llvm-svn: 269685
and BITREVERSE stages
For BITREVERSE, bit shifting/masking every bit in a vector element is a very lengthy procedure.
If the input vector type is a whole number of bytes wide, we can split this into a BSWAP shuffle stage (to reverse at the byte level) followed by a BITREVERSE stage applied to each byte. Most vector-capable targets can BSWAP efficiently using shuffles, resulting in a considerable reduction in instructions.
With this patch, targets only need to provide a target-specific vXi8 BITREVERSE implementation to reverse most legal vector types efficiently.
Differential Revision: http://reviews.llvm.org/D19978
llvm-svn: 269290
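To illustrate the two-stage split on a scalar (the vector form does the byte reversal with a shuffle instead), a small standalone C++ sketch:

#include <cstdint>

// Stage 1 reverses the bits within each byte; stage 2 is the BSWAP that
// reverses the byte order, giving a full 32-bit bit reversal.
uint32_t bitreverse32(uint32_t X) {
  X = ((X & 0x55555555u) << 1) | ((X >> 1) & 0x55555555u); // swap adjacent bits
  X = ((X & 0x33333333u) << 2) | ((X >> 2) & 0x33333333u); // swap bit pairs
  X = ((X & 0x0F0F0F0Fu) << 4) | ((X >> 4) & 0x0F0F0F0Fu); // swap nibbles
  return (X << 24) | ((X & 0xFF00u) << 8) |                // BSWAP stage
         ((X >> 8) & 0xFF00u) | (X >> 24);
}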
It's awkward to force callers of SelectNodeTo to figure out whether
the node was morphed or CSE'd. Update uses here instead of requiring
callers to (sometimes) do it.
llvm-svn: 269235
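A sketch of the caller-side pattern this removes the need for (hypothetical target ISel code, not any specific backend): SelectNodeTo can return an existing CSE'd node rather than morphing N in place, and callers previously had to notice that and redirect uses themselves.

SDNode *New = CurDAG->SelectNodeTo(N, TargetOpc, VT, Ops);
if (New != N)           // the node was CSE'd rather than morphed in place
  ReplaceUses(N, New);  // previously easy to forget; now done for the caller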
llvm-svn: 269206
This means SelectCode unconditionally returns nullptr now. I'll follow
up with a change to make that return void as well, but it seems best
to keep that one very mechanical.
This is part of the work to have Select return void instead of an
SDNode *, which is in turn part of llvm.org/pr26808.
llvm-svn: 269136
Currently, SelectionDAG assumes 8/16-bit cmpxchg returns either a sign
extended result, or a zero extended result. SystemZ takes a third
option by returning junk in the high bits (rotated contents of the other
bytes in the memory word). In that case, don't use Assert*ext, and
zero-extend the result ourselves if a comparison is needed.
Differential Revision: http://reviews.llvm.org/D19800
llvm-svn: 269075
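To illustrate the general point on a scalar (not SystemZ codegen itself): if only the low 8 bits of the widened cmpxchg result are meaningful, the high bits have to be masked off before a full-width comparison.

#include <cstdint>

// The loaded byte sits in the low 8 bits of a 32-bit value; the upper bits may
// be junk rather than a sign or zero extension, so mask before comparing.
bool byteEquals(uint32_t RawResult, uint8_t Expected) {
  uint32_t Zext = RawResult & 0xFFu;  // explicit zero-extension of the result
  return Zext == Expected;
}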
After looking at D19087 again, it occurred to me that we can do better. If we consolidate
the valueHasExactlyOneBitSet() transforms, we won't incur extra overhead from calling it a
2nd time, and we can shrink SimplifySetCC() a bit. No functional change intended.
Differential Revision: http://reviews.llvm.org/D20050
llvm-svn: 268932
llvm-svn: 268867
Added bitreverse creation testing
llvm-svn: 268865
For the sake of minimalism, this patch is x86 only, but I think that at least
PPC, ARM, AArch64, and Sparc probably want to do this too.
We might want to generalize the hook and pattern recognition for a target like
PPC that has a full assortment of negated logic ops (orc, nand).
Note that http://reviews.llvm.org/D18842 will cause this transform to trigger
more often.
For reference, this relates to:
https://llvm.org/bugs/show_bug.cgi?id=27105
https://llvm.org/bugs/show_bug.cgi?id=27202
https://llvm.org/bugs/show_bug.cgi?id=27203
https://llvm.org/bugs/show_bug.cgi?id=27328
Differential Revision: http://reviews.llvm.org/D19087
llvm-svn: 268858
Relying on the caller to clean up after we've replaced all uses of a
node won't work when we've migrated to the `void Select(...)` API.
llvm-svn: 268774
If we don't, values that aren't precisely representable in f16 could
be used as-is in a promoted f32 operation, which would produce
incorrect results.
AArch64 had the correct behavior; add a focused test.
Fixes http://llvm.org/PR26871
llvm-svn: 268700
This is a step towards removing the rampant undefined behaviour in
SelectionDAG, which is a part of llvm.org/PR26808.
We rename SelectionDAGISel::Select to SelectImpl and update targets to
match, and then change Select to return void and consolidate the
sketchy behaviour we're trying to get away from there.
Next, we'll update backends to implement `void Select(...)` instead of
SelectImpl and eventually drop the base Select implementation.
llvm-svn: 268693
This opcode never happens in practice, and yet the logic we have in
place to handle it would be undefined behaviour if we ever executed
it. Remove it rather than trying to refactor code that's never
reached.
llvm-svn: 268692
llvm-svn: 268564
Some vector bit operations are promoted instead of having custom lowering. This patch changes the isOperationLegalOrCustom tests for vector AND/OR operations to use a new TLI helper isOperationLegalOrCustomOrPromote instead, allowing the SSE implementations to stay on the SIMD unit.
Differential Revision: http://reviews.llvm.org/D19805
llvm-svn: 268561
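Roughly the shape such a helper needs (a sketch, not necessarily the exact in-tree definition): accept any legalize action that still yields usable code for the operation.

bool isOperationLegalOrCustomOrPromote(unsigned Op, EVT VT) const {
  LegalizeAction Action = getOperationAction(Op, VT);
  return (VT == MVT::Other || isTypeLegal(VT)) &&
         (Action == Legal || Action == Custom || Action == Promote);
}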
llvm-svn: 268526
Vector bit operations are typically promoted instead of having custom lowering. This patch changes the isOperationLegalOrCustom tests for vector AND/OR operations to use isOperationLegalOrPromote instead, allowing the SSE implementations to stay on the SIMD unit.
Differential Revision: http://reviews.llvm.org/D19805
llvm-svn: 268504
implicitly indicate the number of result VTs. This shaves about 16K off the X86 matching table, taking it down to about 470K.
Overall this reduces the llc binary size with all in-tree targets by about 40K.
llvm-svn: 268365
Summary:
When SelectionDAG performs CSE it is possible that the context's source
location is different from that of the selected node. This can lead to
incorrect line number records. We update the debug location to the
one that occurs earlier in the instruction sequence.
This fixes PR21006.
Reviewers: echristo, sdmitrouk
Subscribers: jevinskie, asl, llvm-commits
Differential Revision: http://reviews.llvm.org/D12094
llvm-svn: 268323
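A sketch of the policy with a hypothetical helper (not the literal CSE code): when an equivalent node already exists, keep whichever source location comes first in the instruction sequence, so line numbers do not jump forward.

void updateLocOnCSE(SDNode *Existing, const SDLoc &NewLoc) {
  // Prefer the earlier location; IR order approximates source order here.
  if (NewLoc.getIROrder() < Existing->getIROrder())
    Existing->setDebugLoc(NewLoc.getDebugLoc());
}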
rather suboptimal.
llvm-svn: 268211
to optimize table size. Shaves about 12K off the X86 matcher table.
llvm-svn: 268209
Differential Revision: http://reviews.llvm.org/D19775
llvm-svn: 268195
llvm-svn: 268094
Instead of SelectionDAG::getConstant directly to make it more obvious that we're creating target constants.
llvm-svn: 268074
the cmake build to enable them.
Summary:
Historically, we had a switch in the Makefiles for turning on "expensive
checks". This has never been ported to the cmake build, but the
(dead-ish) code is still around.
This will also make it easier to turn it on in buildbots.
Reviewers: chandlerc
Subscribers: jyknight, mzolotukhin, RKSimon, gberry, llvm-commits
Differential Revision: http://reviews.llvm.org/D19723
llvm-svn: 268050
llvm-svn: 267745
llvm-svn: 267723
the pattern is matched.
Differential revision: http://reviews.llvm.org/D14840
llvm-svn: 267649
Differential Revision: http://reviews.llvm.org/D17176
llvm-svn: 267606
visitAND, when folding and (load), forgets to check which output of
an indexed load is involved, happily folding the updated address
output on the following testcase:
target datalayout = "e-m:e-i64:64-n32:64"
target triple = "powerpc64le-unknown-linux-gnu"
%typ = type { i32, i32 }
define signext i32 @_Z8access_pP1Tc(%typ* %p, i8 zeroext %type) {
%b = getelementptr inbounds %typ, %typ* %p, i64 0, i32 1
%1 = load i32, i32* %b, align 4
%2 = ptrtoint i32* %b to i64
%3 = and i64 %2, -35184372088833
%4 = inttoptr i64 %3 to i32*
%_msld = load i32, i32* %4, align 4
%zzz = add i32 %1, %_msld
ret i32 %zzz
}
Fix this by checking ResNo.
I've found a few more places that currently neglect to check for
indexed load, and tightened them up as well, but I don't have test
cases for them. In fact, they might not be triggerable at all,
at least with current targets. Still, better safe than sorry.
Differential Revision: http://reviews.llvm.org/D19202
llvm-svn: 267420
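A sketch of the kind of guard this adds (not the literal DAGCombiner diff): only treat the operand as loaded data if it is result 0 of the load, since result 1 of a pre/post-indexed load is the updated address.

if (isa<LoadSDNode>(N0) && N0.getResNo() != 0)
  return SDValue();  // N0 is the updated-address result of an indexed load
// ... the existing and(load) folding continues for the data result ...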
The original patch caused crashes because it could derefence a null pointer
for SelectionDAGTargetInfo for targets that do not define it.
Evaluates fmul+fadd -> fmadd combines and similar code sequences in the
machine combiner. It adds support for float and double similar to the existing
integer implementation. The key features are:
- DAGCombiner checks whether it should combine greedily or let the machine
combiner do the evaluation. This is only supported on ARM64.
- It gives preference to throughput over latency: the heuristic used is
to always combine in loops. The target decides whether the machine
combiner should optimize for throughput or latency.
- Support for fmadd, f(n)msub, fmla, fmls patterns
- On by default at -O3 with -ffast-math
llvm-svn: 267328
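For context, a standalone C++ illustration of the source pattern being targeted: a multiply feeding an add that fast-math allows to become a single fused multiply-add.

#include <cmath>

double muladd(double A, double B, double C) {
  return A * B + C;          // candidate for fmadd under fast-math
}

double muladd_fused(double A, double B, double C) {
  return std::fma(A, B, C);  // the fused form the combiner aims to produce
}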
ctlz_zero_undef(X) -> ctlz(X). InstCombine already does this for IR, and X86 pattern-matches this during isel.
A follow-up commit will remove the X86 patterns to allow this to be tested.
llvm-svn: 267325
select to detect if the input is zero to return the original size instead of the extended size. Instead, just set the first bit in the zero-extended part.
llvm-svn: 267280
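A standalone C++ illustration of the trick on a scalar: when an i8 count-trailing-zeros is computed in an i32, setting bit 8 of the widened value makes the wide cttz return 8 (the original bit width) for a zero input, so no compare-and-select is needed.

#include <cstdint>

unsigned cttz8(uint8_t X) {
  uint32_t Wide = (uint32_t)X | (1u << 8); // first bit of the zero-extended part
  return (unsigned)__builtin_ctz(Wide);    // Wide is never zero, so this is defined
}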
If the target allows the alignment, this should be OK.
llvm-svn: 267217
If the target allows the alignment, this should still be OK.
llvm-svn: 267209
It introduced buildbot failures on clang-cmake-mips, clang-ppc64le-linux, among others.
llvm-svn: 267127
Evaluates fmul+fadd -> fmadd combines and similar code sequences in the
machine combiner. It adds support for float and double similar to the existing
integer implementation. The key features are:
- DAGCombiner checks whether it should combine greedily or let the machine
combiner do the evaluation. This is only supported on ARM64.
- It gives preference to throughput over latency: the heuristic used is
to always combine in loops. The target decides whether the machine
combiner should optimize for throughput or latency.
- Support for fmadd, f(n)msub, fmla, fmls patterns
- On by default at -O3 with -ffast-math
llvm-svn: 267098
This is not called if the store is custom lowered. Move it to
a utility function so targets can easily expand unaligned
accesses when custom lowering.
llvm-svn: 267029