| Commit message | Author | Age | Files | Lines |
| |
The intention is for these to be counterparts to ISD::FMINNUM/FMAXNUM,
differing only in how NaNs are treated. FMINNUM returns the non-NaN
input (when given one NaN and one non-NaN operand); FMINNAN returns the
NaN input instead.
This patch includes support for scalarizing, widening and splitting
vectors, but not expansion or softening. The reason is that these
should never be needed - FMINNAN nodes are only going to be created
in one place (SDAGBuilder::visitSelect) and there we'll check if the
node is legal or custom. I could preemptively add expand and soften
code, but I'm fairly opposed to adding code I can't test. It's bad
enough I can't create tests with this patch, but at least this code
will be exercised by the ARM and AArch64 backends fairly shortly.
llvm-svn: 244581
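A minimal sketch of the two NaN policies described above, written as plain
C++ helper functions rather than ISD nodes (the names fminnum_like and
fminnan_like are illustrative only):

#include <cmath>

// FMINNUM-style: when exactly one operand is NaN, return the non-NaN operand.
float fminnum_like(float a, float b) {
  if (std::isnan(a)) return b;
  if (std::isnan(b)) return a;
  return a < b ? a : b;
}

// FMINNAN-style: when either operand is NaN, return the NaN operand.
float fminnan_like(float a, float b) {
  if (std::isnan(a)) return a;
  if (std::isnan(b)) return b;
  return a < b ? a : b;
}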
| |
The select pattern recognition in ValueTracking (as used by InstCombine
and SelectionDAGBuilder) only knew about integer patterns. This teaches
it about floating-point minimum and maximum operations.
matchSelectPattern() has been extended to return a struct containing the
existing Flavor and a new enum defining the pattern's behavior when
given one NaN operand.
minnum() (following C's fmin() semantics) is defined to return the
non-NaN operand in this case, but the idiomatic C "a < b ? a : b" can
return the NaN operand, since the comparison is false whenever a NaN is
involved and the select then yields b.
ARM and AArch64, at least, have different instructions for these two cases.
llvm-svn: 244580
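As an illustration of why the NaN behavior matters (a sketch, not the
patch's code): the idiomatic select always yields its second operand when a
NaN is involved, because the comparison is false.

#include <cmath>

float idiomatic_min(float a, float b) { return a < b ? a : b; }

// idiomatic_min(1.0f, NAN) -> NAN   (fminf would return 1.0f here)
// idiomatic_min(NAN, 1.0f) -> 1.0f  (same result as fminf in this case)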
| |
frame setup instruction.
This commit ensures that the stack map lowering code in FastISel adds an
appropriate number of immediate operands to the frame setup instruction.
The previous code added just one immediate operand, which was fine for a target
like AArch64, but on X86 the ADJCALLSTACKDOWN64 instruction needs two explicit
operands, so the machine verifier reported an error.
Reviewers: Juergen Ributzka
Differential Revision: http://reviews.llvm.org/D11853
llvm-svn: 244508
| |
llvm-svn: 244402
| |
around a DataLayout interface in favor of directly querying DataLayout.
This wrapper specifically helped handle the case where there was no
DataLayout, but LLVM now requires one, simplifying all of this. I've
updated callers to directly query DataLayout. This in turn exposed
a bunch of places where we should have DataLayout readily available but
didn't, which I've fixed. That in turn exposed that we were passing
DataLayout around in a bunch of arguments rather than making it readily
available, so I've also fixed that.
No functionality changed.
llvm-svn: 244189
| |
We can't propagate FMF partially without breaking DAG-level CSE. We either need to
relax CSE to account for mismatched FMF as a temporary work-around or fully propagate
FMF throughout the DAG.
Surprisingly, there are no existing regression tests for this, but here's an example:
define float @fmf(float %a, float %b) {
%mul1 = fmul fast float %a, %b
%nega = fsub fast float 0.0, %a
%mul2 = fmul fast float %nega, %b
%abx2 = fsub fast float %mul1, %mul2
ret float %abx2
}
$ llc -o - badflags.ll -march=x86-64 -mattr=fma -enable-unsafe-fp-math -enable-fmf-dag=0
...
vmulss %xmm1, %xmm0, %xmm0
vaddss %xmm0, %xmm0, %xmm0
retq
$ llc -o - badflags.ll -march=x86-64 -mattr=fma -enable-unsafe-fp-math -enable-fmf-dag=1
...
vmulss %xmm1, %xmm0, %xmm2
vfmadd213ss %xmm2, %xmm1, %xmm0 <--- failed to recognize that (a * b) was already calculated
retq
llvm-svn: 244053
| |
Create wrapper methods in the Function class for the OptimizeForSize and MinSize
attributes. We want to hide the logic of "or'ing" them together when optimizing
just for size (-Os).
Currently, we are not consistent about this and rely on a front-end to always set
OptimizeForSize (-Os) if MinSize (-Oz) is on. Accordingly, there are 18 FIXMEs here
marking changes that should be made in follow-on patches with regression tests.
This patch is NFC-intended: it just replaces existing direct accesses of the attributes
by the equivalent wrapper call.
Differential Revision: http://reviews.llvm.org/D11734
llvm-svn: 243994
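A minimal sketch of the kind of wrapper being described, written as free
functions over the usual Function/Attribute APIs; the actual method names
and placement are defined by the patch itself:

#include "llvm/IR/Function.h"
using namespace llvm;

// -Oz should imply -Os behaviour, so the "or" of the two attributes is
// hidden behind one query.
static bool hasMinSize(const Function &F) {
  return F.hasFnAttribute(Attribute::MinSize);
}
static bool hasOptSize(const Function &F) {
  return F.hasFnAttribute(Attribute::OptimizeForSize) || hasMinSize(F);
}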
| |
On the code path in ExpandUnalignedLoad which expands an unaligned vector/fp
value in terms of a legal integer load of the same size, the ChainResult needs
to be the chain result of the integer load.
No in-tree test case is currently available.
Patch by Jan Hranac!
llvm-svn: 243956
| |
We used to legalize it as if it were any other binary operation. It's not,
because it accepts mismatched operand types. Because of that, we used
to hit various asserts and miscompiles.
Specialize the vector legalizations so that, in the worst case, we unroll, or,
when possible, just legalize the operand that needs legalization.
Scalarization isn't covered, because I can't think of a target where
some but not all of the 1-element vector types are to be scalarized.
llvm-svn: 243924
| |
llvm-svn: 243838
| |
llvm-svn: 243837
| |
byte level
The XformToShuffleWithZero method currently checks AND masks at the per-lane level for all-one and all-zero constants and attempts to convert them to legal shuffle clear masks.
This patch generalises XformToShuffleWithZero, splitting and checking the sub-lanes of the constants down to the byte level to see if any legal shuffle clear masks are possible. This allows a lot of masks (often from legalization or truncation) to be folded into existing shuffle patterns and removes a lot of constant mask loading.
There are a few examples of poor shuffle lowering that are exposed by this patch that will be cleaned up in future patches (e.g. merging shuffles that are separated by bitcasts, x86 legalized v8i8 zero extension uses PMOVZX+AND+AND instead of AND+PMOVZX, etc.)
Differential Revision: http://reviews.llvm.org/D11518
llvm-svn: 243831
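A conceptual sketch of the byte-level test this generalisation relies on (not
the LLVM implementation): an AND is expressible as a shuffle clear mask
exactly when every byte of the mask constant is either all-ones (keep the
byte) or all-zeros (take the byte from a zero vector).

#include <cstdint>
#include <optional>
#include <vector>

// Returns shuffle indices over the concatenation (input, zero vector),
// or nullopt if the mask is not representable as a clear mask.
std::optional<std::vector<int>>
maskToShuffleClearIndices(const std::vector<uint8_t> &MaskBytes) {
  std::vector<int> Indices;
  for (size_t I = 0; I != MaskBytes.size(); ++I) {
    if (MaskBytes[I] == 0xFF)
      Indices.push_back(int(I));                    // keep byte I of the input
    else if (MaskBytes[I] == 0x00)
      Indices.push_back(int(I + MaskBytes.size())); // take byte I of the zero vector
    else
      return std::nullopt;                          // partial byte: not a clear mask
  }
  return Indices;
}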
| |
Remove the fake `DW_TAG_auto_variable` and `DW_TAG_arg_variable` tags,
using `DW_TAG_variable` in their place. Stop exposing the `tag:` field at
all in the assembly format for `DILocalVariable`.
Most of the testcase updates were generated by the following sed script:
find test/ -name "*.ll" -o -name "*.mir" |
xargs grep -l 'DILocalVariable' |
xargs sed -i '' \
-e 's/tag: DW_TAG_arg_variable, //' \
-e 's/tag: DW_TAG_auto_variable, //'
There were only a handful of tests in `test/Assembly` that I needed to
update by hand.
(Note: a follow-up could change `DILocalVariable::DILocalVariable()` to
set the tag to `DW_TAG_formal_parameter` instead of `DW_TAG_variable`
(as appropriate), instead of having that logic magically in the backend
in `DbgVariable`. I've added a FIXME to that effect.)
llvm-svn: 243774
| |
This introduces new instructions necessary to implement MSVC-compatible
exception handling support. Most of the middle-end and all of the
back-end have yet to be audited or updated to take them into account.
Differential Revision: http://reviews.llvm.org/D11097
llvm-svn: 243766
| |
Fixing MinSize attribute handling was discussed in D11363.
This is a prerequisite patch to doing that.
The handling of OptSize when lowering mem* functions was broken
on Darwin because it wants to ignore -Os for these cases, but the
existing logic also made it ignore -Oz (MinSize).
The Linux change demonstrates a widespread problem. The backend
doesn't usually recognize the MinSize attribute by itself; it
assumes that if the MinSize attribute exists, then the OptSize
attribute must also exist.
Fixing this more generally will be a follow-on patch or two.
Differential Revision: http://reviews.llvm.org/D11568
llvm-svn: 243693
| |
This uncovered latent bugs previously:
http://reviews.llvm.org/D10403
...but it's time to try again because internal tests aren't finding more.
If time passes and no other bugs are reported, we can remove this cl::opt.
llvm-svn: 243687
| |
Making allowableAlignment() more accessible was suggested as a predecessor patch
for D10662, so I've pulled it into TargetLowering. This lets us remove 4 instances
of duplicate logic in LegalizeDAG.
There's a subtle functional change in the implementation: the existing
allowableAlignment() code was using getPrefTypeAlignment() when checking
alignment with the DataLayout and assumed that was fast. In this implementation,
we use getABITypeAlignment() and assume that is fast. See the TODO comment or the
discussion in the Phab review for future improvements in this implementation
(don't use the data layout at all).
There are no regression test changes from this difference, and I'm not sure how to
expose it via a test. I think we actually do want to provide the 'Fast' param when
checking this from DAGCombiner::MergeConsecutiveStores(). Ie, we shouldn't merge
stores if the new stores are not going to be fast. But that change will require
fixing allowsMisalignedMemoryAccess() overrides as noted in D10662.
Differential Revision: http://reviews.llvm.org/D10905
llvm-svn: 243549
| |
Summary:
As added initially, statepoints required their call targets to be a
constant pointer null if ``numPatchBytes`` was non-zero. This turns out
to be a problem ergonomically, since there is no way to mark patchable
statepoints as calling a (readable) symbolic value.
This change removes the restriction of requiring ``null`` call targets
for patchable statepoints, and changes PlaceSafepoints to maintain the
symbolic call target through its transformation.
Reviewers: reames, swaroop.sridhar
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11550
llvm-svn: 243502
| |
(PR24141)
PR24141: https://llvm.org/bugs/show_bug.cgi?id=24141
contains a test case where we have duplicate entries in a node's uses() list.
After r241826, we use CombineTo() to delete dead nodes when combining the uses into
reciprocal multiplies, but this fails if we encounter the just-deleted node again in
the list.
The solution in this patch is to not add duplicate entries to the list of users that
we will subsequently iterate over. For the test case, this avoids triggering the
combine divisors logic entirely because there really is only one user of the divisor.
Differential Revision: http://reviews.llvm.org/D11345
llvm-svn: 243500
| |
threshold
This fix was suggested as part of D11345 and is part of fixing PR24141.
With this change, we can avoid walking the uses of a divisor node if the target
doesn't want the combineRepeatedFPDivisors transform in the first place.
Other than that, no functional change is intended.
Differential Revision: http://reviews.llvm.org/D11531
llvm-svn: 243498
| |
The 'common' section TLS is not implemented; current C/C++ TLS variables
are not placed in the common section anyway. DWARF debug info to get the
address of TLS variables is not generated yet.
clang and driver changes in http://reviews.llvm.org/D10524
Added -femulated-tls flag to select the emulated TLS model,
which will be used for old targets like Android that do not
support ELF TLS models.
Added TargetLowering::LowerToTLSEmulatedModel as a target-independent
function to convert a SDNode of TLS variable address to a function call
to __emutls_get_address.
Added calls to LowerToTLSEmulatedModel for TLSModel::Emulated in
lib/Target/*/*ISelLowering.cpp. Although all targets supporting ELF TLS models
are enhanced, the emulated TLS model has been tested only on Android ELF targets.
Modified AsmPrinter.cpp to print the emutls_v.* and emutls_t.* variables for
emulated TLS variables.
Modified DwarfCompileUnit.cpp to skip some DIEs for emulated TLS variables.
TODO: Add proper DIE for emulated TLS variables.
Added new unit tests with emulated TLS.
Differential Revision: http://reviews.llvm.org/D10522
llvm-svn: 243438
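Roughly, the emulated TLS lowering described above turns every access to a
TLS variable into a runtime call. A hand-written C++ approximation follows;
the control-variable layout and the stand-in symbol name are assumptions here,
since the real compiler-emitted name "__emutls_v.x" contains a dot.

// Provided by the emulated-TLS runtime (declaration shown for illustration).
extern "C" void *__emutls_get_address(void *ControlVariable);

// Stand-in for the compiler-emitted control variable "__emutls_v.x".
extern char emutls_v_x[];

// Was simply "return x;" under native ELF TLS.
int readTlsX() {
  return *static_cast<int *>(__emutls_get_address(emutls_v_x));
}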
| |
Summary:
The previous way of overriding it was relying on calling "setDefault"
on the global registry, which implies global mutable state.
Reviewers: echristo, atrick
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11538
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 243388
| |
llvm-svn: 243293
| |
through APIs that are no longer necessary now that the update API has
been removed.
This will make changes to the AA interfaces significantly less
disruptive (I hope). Either way, it seems like a really nice cleanup.
llvm-svn: 242882
| |
We don't bitcast the UNDEFs - that is done in visitVECTOR_SHUFFLE - and the getValueType should come from the operand's SDValue, not the SDNode.
llvm-svn: 242640
| |
llvm-svn: 242636
| |
This was increasing the instruction count if the fadd has multiple uses.
llvm-svn: 242498
| |
llvm.eh.sjlj.setjmp was used as part of the SjLj exception handling
style but is also used in clang to implement __builtin_setjmp. The ARM
backend needs to output additional dispatch tables for the SjLj
exception handling style; these tables, however, can't be emitted if
llvm.eh.sjlj.setjmp is simply used for __builtin_setjmp and no actual
landing pad blocks exist.
To solve this issue a new llvm.eh.sjlj.setup_dispatch intrinsic is
introduced which is used instead of llvm.eh.sjlj.setjmp in the SjLj
exception handling lowering, so we can differentiate between the case
where we actually need to setup a dispatch table and the case where we
just need the __builtin_setjmp semantic.
Differential Revision: http://reviews.llvm.org/D9313
llvm-svn: 242481
| |
difference operation
This adds new intrinsics "*absdiff" for absolute difference ops to facilitate efficient code generation for "sum of absolute differences" operation.
The patch also introduces the corresponding SDNodes and basic legalization support. Sanity of the generated code is tested on X86.
This is 1st of the three patches.
Patch by Shahid Asghar-ahmad!
llvm-svn: 242409
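For context, this is the kind of "sum of absolute differences" kernel the new
intrinsics are meant to help generate efficient code for (plain reference C++,
not using the intrinsics themselves):

#include <cstddef>
#include <cstdint>
#include <cstdlib>

uint32_t sumOfAbsoluteDifferences(const uint8_t *A, const uint8_t *B, size_t N) {
  uint32_t Sum = 0;
  for (size_t I = 0; I != N; ++I)
    Sum += uint32_t(std::abs(int(A[I]) - int(B[I])));  // per-element absolute difference
  return Sum;
}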
| |
Summary:
This change is part of a series of commits dedicated to have a single
DataLayout during compilation by using always the one owned by the
module.
This patch is quite boring overall, except for some ugliness in
AsmPrinter, which has a getDataLayout function but has some clients
that use it without a Module (llvm-dsymutil, llvm-dwarfdump), so
some methods take a DataLayout as a parameter.
Reviewers: echristo
Subscribers: yaron.keren, rafael, llvm-commits, jholewinski
Differential Revision: http://reviews.llvm.org/D11090
From: Mehdi Amini <mehdi.amini@apple.com>
llvm-svn: 242386
| |
This new wrapper pass is useful when we want to do branch probability analysis conditionally (e.g. only in PGO mode) but don't want to add one more pass dependency.
http://reviews.llvm.org/D11241
llvm-svn: 242349
| |
Nadolskiy)
The current implementation handles unordered comparisons poorly in soft-float mode.
Consider (a ULE b), i.e. a <= b, which is also true if either operand is NaN. It is lowered (in general) to (__ledf2(a, b) <= 0 || __unorddf2(a, b) != 0). We can do a better job by lowering it to (__gtdf2(a, b) <= 0).
The same replacement works for the other unordered comparisons (ult, ugt, uge): in general, we call the same library function as for the complementary ordered comparison but negate the comparison against zero.
Differential Revision: http://reviews.llvm.org/D10804
llvm-svn: 242280
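A sketch of the equivalence being used (the __gtdf2 declaration is shown only
for illustration): an unordered comparison is the negation of the complementary
ordered comparison, so a single libcall with a negated zero-comparison suffices.

// Soft-float comparison helper: returns a value > 0 iff both inputs are
// ordered and a > b (declaration shown for illustration).
extern "C" int __gtdf2(double a, double b);

// a ULE b  <=>  !(a > b)  <=>  __gtdf2(a, b) <= 0
bool unorderedLessEqual(double a, double b) {
  return __gtdf2(a, b) <= 0;
}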
| |
The unsigned opcode argument here was the result of BinaryOperator->getOpcode().
That returns a BinaryOps enum which is more accurate than passing around an
unsigned.
llvm-svn: 242265
| |
This code was checking whether we are an ICmpInst or an FCmpInst and then
hitting unreachable if we are neither. We must be one or the other, so use a
cast<> on the FCmpInst case to assert that. This avoids the unreachable while
still catching an error if another subclass of CmpInst is ever added.
llvm-svn: 242264
| |
llvm-svn: 242263
| |
The code here was doing exactly what is already in getAnyExtOrTrunc().
Just use that method instead.
llvm-svn: 242261
| |
The code here was doing exactly what is already in getZExtOrTrunc().
Just use that method instead.
llvm-svn: 242260
| |
The calls here were both to getStoreSizeInBits() which multiplies by 8.
We then immediately divided by 8. Calling getStoreSize() returns the
values we need without the extra arithmetic.
llvm-svn: 242254
| |
llvm-svn: 242249
| |
SelectionDAG already had begin/end methods for iterating over all
the nodes, but didn't define an iterator_range for use in range-based
for loops.
This adds such a method and uses it in some of the eligible places
throughout the backends.
llvm-svn: 242212
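A small usage sketch of the foreach-style iteration this enables (the accessor
name allnodes() is an assumption here; see the patch for the actual interface):

#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

static unsigned countDAGNodes(SelectionDAG &DAG) {
  unsigned Count = 0;
  for (SDNode &Node : DAG.allnodes()) { // iterate all nodes without begin()/end()
    (void)Node;
    ++Count;
  }
  return Count;
}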
| |
llvm-svn: 242210
| |
The simplify_type specialisation allows us to cast directly from an
SDValue to an SDNode subclass, so we don't need to pass an SDNode*
to cast<>.
llvm-svn: 242209
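For example (a sketch that assumes the value is known to be a load node),
the cast can take the SDValue directly:

#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

static LoadSDNode *asLoad(SDValue V) {
  // Previously this had to be written as cast<LoadSDNode>(V.getNode()).
  return cast<LoadSDNode>(V);
}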
| |
llvm-svn: 242208
| |
Accidental commit, needs review first.
This reverts commit r242107.
llvm-svn: 242108
| |
- Factor out code to query and modify the sign bit of a floating-point
value as an integer. This also works if none of the target's integer
types is big enough to hold all bits of the floating-point value.
- Legalize FABS(x) as FCOPYSIGN(x, 0.0) if FCOPYSIGN is available,
otherwise perform bit manipulation on the sign bit. The previous code
used "x >u 0 ? x : -x" which is incorrect for x being -0.0! It also
takes 34 instructions on ARM Cortex-M4. With this patch we only
require 5:
vldr d0, LCPI0_0
vmov r2, r3, d0
lsrs r2, r3, #31
bfi r1, r2, #31, #1
bx lr
(This could be further improved if the compiler would recognize that
r2, r3 is zero).
- Only lower FCOPYSIGN(x, y) = sign(x) ? -FABS(x) : FABS(x) if FABS is
available otherwise perform bit manipulation on the sign bit.
- Perform the sign(x) test by masking out the sign bit and comparing
with 0 rather than shifting the sign bit to the highest position and
testing for "<s 0". For x86 copysignl (on 80-bit values) this gets us:
testl $32768, %eax
rather than:
shlq $48, %rax
sets %al
testb %al, %al
llvm-svn: 242107
| |
It had accidentally accepted a symbol+offset value (and emitted
incorrect code for it, keeping only the offset part) instead of
properly reporting the constraint as invalid.
Differential Revision: http://reviews.llvm.org/D11039
llvm-svn: 242040
| |
The motivation is to allow GatherAllAliases / FindBetterChain
to not give up on dependent loads of a pointer from constant memory.
This is important for AMDGPU, because most loads are pointers
derived from a load of a kernel argument from constant memory.
llvm-svn: 241948
| |
If our two inputs have known top-zero bit counts M and N, we trivially
know that the output cannot have any bits set in the top (min(M, N)-1)
bits, since nothing could carry past that point.
llvm-svn: 241927
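A worked instance of the bound stated above: with 32-bit inputs whose top 8
bits are known zero (M = N = 8), each value is below 2^24, so their sum is
below 2^25 and at least min(M, N) - 1 = 7 top bits of the result are zero.

#include <cassert>
#include <cstdint>

int main() {
  uint32_t A = 0x00FFFFFF, B = 0x00FFFFFF; // top 8 bits known zero in both
  uint32_t Sum = A + B;                    // 0x01FFFFFE, fits in 25 bits
  assert((Sum >> 25) == 0);                // so the top 7 bits are zero
  return 0;
}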
| |
This reverts commits r241888-r241891; I didn't mean to commit them.
llvm-svn: 241893
| |
Summary:
This introduces new instructions necessary to implement MSVC-compatible
exception handling support. Most of the middle-end and all of the
back-end have yet to be audited or updated to take them into account.
Reviewers: rnk, JosephTremoulet, reames, nlewycky, rjmccall
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11041
llvm-svn: 241888