| Commit message | Author | Age | Files | Lines |
They just check for certain opcodes, and the opcode enums are available in Instruction.h.
llvm-svn: 298237
|
Summary:
ConstantRange class currently has a method getSetSize, which is mostly used to
compare set sizes of two constant ranges (there is only one spot where it's used
in a slightly different scenario). This patch introduces setSizeSmallerThanOf
method, which does such comparison in a more efficient way. In the original
method we have to extend our types to (BitWidth+1), which can result in using the
slow case of APInt, extra memory allocations, etc.
The change is not supposed to alter any functionality, but it slightly improves
compile time. Here are the compile-time improvements I observed on CTMark:
* tramp3d-v4 -2.02%
* pairlocalalign -1.82%
* lencod -1.67%
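A minimal sketch of the comparison idea, illustrative of what setSizeSmallerThanOf does (helper name and exact checks are not the upstream implementation):

    #include "llvm/IR/ConstantRange.h"
    #include <cassert>
    using namespace llvm;

    // Compare set sizes without zero-extending to BitWidth+1 bits.
    static bool setSizeSmallerThan(const ConstantRange &A, const ConstantRange &B) {
      assert(A.getBitWidth() == B.getBitWidth() && "bit widths must match");
      // A full set has size 2^BitWidth; nothing is strictly larger than that.
      if (B.isFullSet())
        return !A.isFullSet();
      if (A.isFullSet())
        return false;
      // For non-full sets, Upper - Lower fits in BitWidth bits, so a plain
      // unsigned compare in the original width avoids the slow APInt path.
      return (A.getUpper() - A.getLower()).ult(B.getUpper() - B.getLower());
    }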
Reviewers: sanjoy, atrick, pete
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31104
llvm-svn: 298236
|
were transitively depending on it. NFC
llvm-svn: 298235
|
llvm-svn: 298234
|
initSlowCase and other init methods.
I'm not sure if zeroing VAL before writing pVal is really necessary, but at least one other place in the code did it.
But by taking the store out of line, this reduces the opt binary by about 20k on my local x86-64 build.
llvm-svn: 298233
|
Summary: This IDom check seems unnecessary. In this case, a node in the dominator tree should always be the IDom of its immediate children.
Reviewers: hfinkel, majnemer, dberlin
Reviewed By: dberlin
Subscribers: dberlin, davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D26954
llvm-svn: 298232
|
llvm-svn: 298231
|
instead of isel
Summary:
Currently we handle these intrinsics at isel with special patterns. But as they just map to normal logic operations, we should just handle them at lowering. This will expose them to DAG combine optimizations. Right now the kor-sequence test generates a bunch of regclass copies between GR16 and VK16 that the peephole optimizer and/or register coalescing are removing to keep everything in the mask domain. By handling the logic op intrinsics earlier, these copies become bitcasts in the DAG and get removed by DAG combine, which seems more robust.
This should help enable my plan to stop copying between K registers and GR8/GR16. The peephole optimizer can't remove a chain of copies between K and GR32 with insert_subreg/extract_subreg present in the chain, so the kor-sequence test breaks. But this patch should dodge the problem entirely.
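Roughly, handling one of these intrinsics at lowering looks like the sketch below (simplified; the exact intrinsic enum and value types are illustrative):

    // In the X86 INTRINSIC_WO_CHAIN lowering switch (sketch):
    case Intrinsic::x86_avx512_kand_w: {
      SDLoc DL(Op);
      // Move the i16 operands into the mask domain, emit a plain AND, and move
      // the result back; DAG combine can now see through the logic op.
      SDValue LHS = DAG.getBitcast(MVT::v16i1, Op.getOperand(1));
      SDValue RHS = DAG.getBitcast(MVT::v16i1, Op.getOperand(2));
      SDValue And = DAG.getNode(ISD::AND, DL, MVT::v16i1, LHS, RHS);
      return DAG.getBitcast(MVT::i16, And);
    }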
Reviewers: zvi, delena, RKSimon, igorb
Reviewed By: igorb
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31056
llvm-svn: 298228
|
Most of our constant folding code assumes that an fp-to-int conversion will target an integer of 128 bits or less, calling APFloat::convertToInteger with only a uint64_t[2] of raw bits for the result.
Fuzz testing (PR24662) showed that we don't handle other cases at all, resulting in stack overflows and all sorts of crashes.
This patch uses the APSInt version of APFloat::convertToInteger instead to better handle such cases.
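The safer pattern is roughly the following sketch (FPVal, DestTy and IsSigned are assumed names from the folding context):

    // Let APSInt carry the full destination width instead of assuming the
    // result fits in a uint64_t[2] buffer.
    APSInt IntVal(DestTy->getIntegerBitWidth(), /*IsUnsigned=*/!IsSigned);
    bool IsExact = false;
    APFloat::opStatus Status =
        FPVal.convertToInteger(IntVal, APFloat::rmTowardZero, &IsExact);
    if (Status != APFloat::opOK && Status != APFloat::opInexact)
      return nullptr; // give up on folding instead of overflowing a buffer
    return ConstantInt::get(DestTy, IntVal);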
Differential Revision: https://reviews.llvm.org/D31074
llvm-svn: 298226
|
labels". NFCI.
llvm-svn: 298225
|
Folding instructions when selecting can cause them to become dead.
Don't select these dead instructions (if they don't have other side
effects, and don't define physical registers).
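In the selection loop this amounts to something like the sketch below (MI, MRI and the surrounding loop come from the instruction-select pass; isTriviallyDead is the GlobalISel utility from llvm/CodeGen/GlobalISel/Utils.h):

    // Instructions made dead by earlier folding are erased instead of selected;
    // isTriviallyDead also checks for side effects and physreg defs.
    if (isTriviallyDead(MI, MRI)) {
      MI.eraseFromParent();
      continue;
    }
    // ...otherwise fall through to InstructionSelector::select(MI).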
Preserve existing tests by adding COPYs.
In some tests, the G_CONSTANT vregs never get constrained to a class:
the only use of the vreg was folded into another instruction, so the
G_CONSTANT, now dead, never gets selected.
llvm-svn: 298224
|
llvm-svn: 298221
|
Left out AA in jumpthreading SimplifyPartiallyRedundantLoad
llvm-svn: 298219
|
Summary:
When SimplifyPartiallyRedundantLoad sees a load whose pointer is a phi,
try to phi-translate the pointer into the incoming values in the predecessors
before we search for available loads.
This needs https://reviews.llvm.org/D30524
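A rough sketch of the phi-translation step (LoadedPtr, LoadBB and the search call are illustrative names):

    // If the loaded pointer is a PHI defined in the load's block, translate it
    // to the incoming pointer of each predecessor and search there.
    if (auto *PN = dyn_cast<PHINode>(LoadedPtr)) {
      if (PN->getParent() == LoadBB) {
        for (BasicBlock *Pred : predecessors(LoadBB)) {
          Value *PtrInPred = PN->getIncomingValueForBlock(Pred);
          // Look for an available load/store of PtrInPred in Pred, e.g. via
          // FindAvailablePtrLoadStore(PtrInPred, ..., Pred, ...).
        }
      }
    }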
Reviewers: davide, sanjoy, efriedma, dberlin, rengolin
Reviewed By: dberlin
Subscribers: junbuml, llvm-commits
Differential Revision: https://reviews.llvm.org/D30543
llvm-svn: 298217
|
Summary:
Extract FindAvailablePtrLoadStore out of FindAvailableLoadedValue.
Prepare for an upcoming change which will do phi translation for loads on a
phi pointer in jump threading's SimplifyPartiallyRedundantLoad.
This is in preparation for https://reviews.llvm.org/D30543
Reviewers: efriedma, sanjoy, davide, dberlin
Reviewed By: davide
Subscribers: junbuml, davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D30524
llvm-svn: 298216
|
Summary:
I found that stripDebugInfo was still leaving significant amounts of
debug info because !llvm.loop metadata still contained DILocations after stripping.
The support for stripping debug info from !llvm.loop added in r293377 only
removes a single DILocation. Enhance that to remove all DILocations from
!llvm.loop.
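The enhancement boils down to something like this sketch (the helper name is illustrative):

    // Rebuild the !llvm.loop node with every DILocation operand removed,
    // not just the first one.
    static MDNode *stripLoopLocations(MDNode *LoopID) {
      SmallVector<Metadata *, 4> Ops;
      Ops.push_back(nullptr); // placeholder for the self-reference at operand 0
      for (unsigned I = 1, E = LoopID->getNumOperands(); I != E; ++I) {
        Metadata *Op = LoopID->getOperand(I);
        if (!isa<DILocation>(Op))
          Ops.push_back(Op);
      }
      MDNode *NewID = MDNode::getDistinct(LoopID->getContext(), Ops);
      NewID->replaceOperandWith(0, NewID); // restore the self-reference
      return NewID;
    }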
Reviewers: hfinkel, aprantl, dsanders
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31117
llvm-svn: 298213
|
The MIR printer dumps a string that describes the register mask of a function.
A static predefined list of register masks matches a static list of strings.
However, when the register mask is not from the static predefined list, there is no descriptor string and the printer fails.
This patch adds support for printing and dumping custom register masks.
Also, the list of callee-saved registers (describing the registers that must be preserved for the caller) might be dynamic.
As such, this data needs to be dumped and parsed back into the MachineRegisterInfo.
Differential Revision: https://reviews.llvm.org/D30971
llvm-svn: 298207
|
getLowBitsSet/getHighBitsSet.
llvm-svn: 298204
|
Summary:
The reverse of an arbitrary bitpattern is also an arbitrary
bitpattern.
Reviewers: trentxintong, arsenm, majnemer
Reviewed By: majnemer
Subscribers: majnemer, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D31118
llvm-svn: 298201
|
the predicateuser set each time we mark it touched
llvm-svn: 298199
|
llvm-svn: 298198
|
iterateOnFunction
Summary:
iterateOnFunction creates a ReversePostOrderTraversal object which does a post order traversal in its constructor and stores the results in an internal vector. Iteration over it just reads from the internal vector in reverse order.
The GVN code seems to be unaware of this: it iterates over the ReversePostOrderTraversal object and makes a copy of the vector into a local vector. (I think at one point in time we used a DFS here instead, which would have required the local vector.)
The net effect of this is that we have two vectors containing the basic block list. As I didn't want to expose the implementation detail of ReversePostOrderTraversal's constructor to GVN, I've changed the code to do an explicit post order traversal storing into the local vector and then iterate over it in reverse.
I've also removed the reserve(256) since ReversePostOrderTraversal wasn't doing that. I can add it back if we think it's important, though it seemed weird that it wasn't based on the size of the function.
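The new iteration scheme is essentially the sketch below (visitBlock stands in for GVN's per-block work):

    #include "llvm/ADT/PostOrderIterator.h"
    #include "llvm/ADT/STLExtras.h"

    // Build the block order once with an explicit post-order walk...
    SmallVector<BasicBlock *, 32> PostOrder;
    for (BasicBlock *BB : post_order(&F))
      PostOrder.push_back(BB);
    // ...then walk it backwards, i.e. in reverse post order, without the extra
    // copy that iterating a ReversePostOrderTraversal object used to make.
    for (BasicBlock *BB : llvm::reverse(PostOrder))
      visitBlock(BB);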
Reviewers: davide, anemet, dberlin
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31084
llvm-svn: 298191
|
The code assigned to KnownZero, but later code unconditionally assigned over it. I'm pretty sure the later code can handle the same cases, and more, equally well.
llvm-svn: 298190
|
issues, subsuming previous verifier.
llvm-svn: 298188
|
whether the incoming block was reachable instead of whether the specific edge was reachable
llvm-svn: 298187
|
Let targets specialize the pass with the register class so we can get a
parameterless default constructor and can put the pass into the pass
registry to enable testing with -run-pass=.
llvm-svn: 298184
|
Normalize ExeDepsFix, execution-fix, ExecutionDependencyFix and
ExecutionDepsFix to the last one.
llvm-svn: 298183
|
llvm-svn: 298182
|
getSignBit which will malloc if the bit width is larger than 64.
llvm-svn: 298180
|
Reviewers: mkuper, rnk
Subscribers: mehdi_amini, jyknight, aemerson, llvm-commits, rengolin
Differential Revision: https://reviews.llvm.org/D27050
llvm-svn: 298179
|
llvm-svn: 298178
|
Go back to the pre-r231309 behavior and reduce the timeout from 8 to ~1.5
min now that we have (a) the PCMCache mechanism (r298165) and (b) a timeout
that doesn't cause a failure but actually builds the module (r298175).
rdar://problem/30297862
llvm-svn: 298176
|
This is a direct port of the HSAILAliasAnalysis pass, just cleaned up for
style and renamed.
Differential Revision: https://reviews.llvm.org/D31103
llvm-svn: 298172
|
When InstCombine calls into SimplifyLibCalls and it creates putchar calls, we don't infer the attributes. And since SimplifyLibCalls doesn't use InstCombine's IRBuilder, the calls don't end up in the worklist on this iteration of InstCombine. So they get picked up on the next iteration, where they cause an IR change. This of course causes InstCombine to run another iteration.
So this patch just gets the attributes right the first time. We already did this for puts and some other libcalls.
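The gist, as a sketch in the current API spelling (variable names are illustrative):

    // Give the putchar declaration its attributes when it is first created, so
    // the new call does not look different on the next InstCombine iteration.
    Module *M = B.GetInsertBlock()->getModule();
    FunctionCallee PutChar =
        M->getOrInsertFunction("putchar", B.getInt32Ty(), B.getInt32Ty());
    if (auto *F = dyn_cast<Function>(PutChar.getCallee()))
      F->setDoesNotThrow(); // infer attributes up front, e.g. nounwind
    Value *IntChar = B.CreateIntCast(Char, B.getInt32Ty(), /*isSigned=*/true);
    B.CreateCall(PutChar, IntChar, "putchar");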
Differential Revision: https://reviews.llvm.org/D31094
llvm-svn: 298171
|
This commit adds the necessary target hooks for outlining in AArch64. It also
refactors the switch statement used in `getMemOpBaseRegImmOfsWidth` into a
more general function, `getMemOpInfo`. This allows the outliner to share that
code without copying and pasting it.
The AArch64 outliner can be run using -mllvm -enable-machine-outliner, as with
the X86-64 outliner.
The test for this pass verifies that the outliner does, in fact, outline
functions, fixes up the stack accesses properly, and can correctly generate a
tail call. In the future, this test should be replaced with a MIR test, so that
we can properly test immediate offset overflows in fixed-up instructions.
llvm-svn: 298162
|
Use a const pointer in the trip count and trip multiple calculations.
Patch by Huihui Zhang <huihuiz@codeaurora.org>
llvm-svn: 298161
|
Use a combination of !associated, comdat, @llvm.compiler.used and
custom sections to allow dead stripping of globals and their asan
metadata. Sometimes.
Currently this works on LLD, which supports SHF_LINK_ORDER with
sh_link pointing to the associated section.
This also works on BFD, which seems to treat comdats as
all-or-nothing with respect to linker GC. There is a weird quirk
where the "first" global in each link is never GC-ed because of the
section symbols.
At this moment it does not work on Gold (as in the globals are never
stripped).
Differential Revision: https://reviews.llvm.org/D30121
llvm-svn: 298158
|
This is an ELF-specific thing that adds SHF_LINK_ORDER to the global's section
pointing to the metadata argument's section. The effect of that is a reverse dependency
between sections for the linker GC.
!associated does not change the behavior of global-dce. The global
may also need to be added to llvm.compiler.used.
Since SHF_LINK_ORDER is per-section, !associated effectively enables
fdata-sections for the affected globals, the same as comdats do.
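From C++, attaching the metadata might look roughly like this (DescribedGV and MetadataGV are assumed names; appendToCompilerUsed is from llvm/Transforms/Utils/ModuleUtils.h):

    // Associate the metadata global with the global it describes; the backend
    // then emits its section with SHF_LINK_ORDER pointing at the associated
    // global's section.
    Metadata *TargetMD = ValueAsMetadata::get(DescribedGV);
    MetadataGV->setMetadata(LLVMContext::MD_associated,
                            MDNode::get(M.getContext(), TargetMD));
    // !associated alone does not keep the global alive, so also pin it.
    appendToCompilerUsed(M, {MetadataGV});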
Differential Revision: https://reviews.llvm.org/D29104
llvm-svn: 298157
|
Handle TokenFactors more aggressively in
SDValue::reachesChainWithoutSideEffects. This isn't really a
very effective change anymore because of other changes to
chain handling, but it's a cheap check, and the expanded
comments are still useful.
It might be possible to loosen the hasOneUse() requirement with a
deeper analysis, but a naive implementation of that check would be
expensive.
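In rough form the TokenFactor handling reads like this (a sketch of the logic inside SDValue::reachesChainWithoutSideEffects, not a verbatim copy):

    // Inputs to a TokenFactor happen in parallel. If Dest is a direct operand
    // with a single use, the TokenFactor can be serialized with Dest last, so
    // we reach it without side effects; otherwise require that every operand
    // of the TokenFactor reaches Dest.
    if (getOpcode() == ISD::TokenFactor) {
      if (is_contained((*this)->ops(), Dest) && Dest.hasOneUse())
        return true;
      return llvm::all_of((*this)->ops(), [&](SDValue Op) {
        return Op.reachesChainWithoutSideEffects(Dest, Depth - 1);
      });
    }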
Differential Revision: https://reviews.llvm.org/D29845
llvm-svn: 298156
|
llvm-svn: 298127
|
Fixes bug 32248.
llvm-svn: 298125
|
Patch by John Harvey!
llvm-svn: 298122
|
If the loop condition was an i1 phi with a constantexpr input, this
would add a loop intrinsic fed by a phi dependent on a call to
if.break in the same block. Insert the call in the loop header.
llvm-svn: 298121
|
llvm-svn: 298120
|
Move backend internal intrinsics along with the rest of the
normal intrinsics, and use the Intrinsic::getDeclaration
API instead of manually constructing the type list.
It's surprising this was working before. fdiv.fast had
the wrong number of parameters. The control flow intrinsic
declaration attributes were not being applied, and
their types were inconsistent. The actual IR use types
did not match the declaration, and were closer to the
types used for the patterns. The brcond lowering
was changing the types, so introduce new nodes for those.
llvm-svn: 298119
|
llvm-svn: 298118
|
Summary:
Use this code pattern when RAX is live, instead of emitting up to 2
billion adjustments:
pushq %rax
movabsq +-$Offset+-8, %rax
addq %rsp, %rax
xchg %rax, (%rsp)
movq (%rsp), %rsp
Try to clean this code up a bit while I'm here. In particular, hoist the
logic that handles the entire adjustment with `movabsq $imm, %rax` out
of the loop.
This negates the offset in the prologue and uses ADD because X86 only
has a two operand subtract which always subtracts from the destination
register, which can no longer be RSP.
Fixes PR31962
Reviewers: majnemer, sdardis
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D30052
llvm-svn: 298116
|
Summary:
Instead of just looking for a load which is mergeable with Ext to form ExtLoad, try to promote Exts as long as the cost is acceptable. This change is not an NFC, as it continues promoting Exts even after finding a load during promotion; the change in arm64-codegen-prepare-extload.ll described in 2.b shows such a case.
This change was motivated from D26524. Based on this change, I will move the transformation performed in aarch64-type-promotion into CGP.
Reviewers: jmolloy, qcolombet, mcrosier, javed.absar
Reviewed By: qcolombet
Subscribers: rengolin, llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D27853
llvm-svn: 298114
|
This patch annotates value-site profile data onto memory intrinsics.
Differential Revision: http://reviews.llvm.org/D31002
llvm-svn: 298110
|
llvm-svn: 298108
|