LLVM commit log (commit messages only; the author, age, files, and lines columns of the original listing are omitted).
if the top level addition in (D + (C-D + x + ...)) could be proven to
not wrap, where the choice of D also maximizes the number of trailing
zeroes of (C-D + x + ...), ensuring homogeneous behaviour of the
transformation and better canonicalization of such expressions.
This enables better canonicalization of expressions like
1 + zext(5 + 20 * %x + 24 * %y) and
zext(6 + 20 * %x + 24 * %y)
which get both transformed to
2 + zext(4 + 20 * %x + 24 * %y)
This pattern is common in address arithmetic, and the transformation
makes it easier for passes like LoadStoreVectorizer to prove that 2 or
more memory accesses are consecutive and optimize (vectorize) them.
Reviewed By: mzolotukhin
Differential Revision: https://reviews.llvm.org/D48853
llvm-svn: 337859
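As a rough illustration (a hypothetical C++ snippet, not taken from the patch), the kind of address arithmetic this canonicalization helps with looks like the following; after the change both loads expose the same zext(4 + ...) term plus a small constant, which is what lets LoadStoreVectorizer see them as adjacent.

#include <cstdint>

// Two byte loads whose 64-bit addresses SCEV models as
// 1 + zext(5 + 20*x + 24*y) and zext(6 + 20*x + 24*y).
uint32_t load_pair(const uint8_t *base, uint32_t x, uint32_t y) {
  uint32_t idx = 20 * x + 24 * y;          // 32-bit index arithmetic
  uint8_t a = base[uint64_t(idx + 5) + 1]; // 1 + zext(5 + 20*x + 24*y)
  uint8_t b = base[uint64_t(idx + 6)];     // zext(6 + 20*x + 24*y)
  return a + (uint32_t(b) << 8);
}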
like 31, don't generate a negate of a subtract that we'll never optimize.
We generated a subtract for the power of 2 minus one and then negated the result. The negate can be optimized away by swapping the subtract operands, but DAG combine doesn't know how to do that, and we don't add any of the new nodes to the worklist anyway.
This patch makes us explicitly emit the swapped subtract.
llvm-svn: 337858
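The identity being used, as a minimal C++ sketch (hypothetical helper, not from the patch):

#include <cstdint>

uint32_t mul31(uint32_t x) {
  // x * 31 == (x << 5) - x. Emitting the subtract in this operand order needs
  // no negate; the other order, x - (x << 5), would have to be negated.
  return (x << 5) - x;
}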
minus 2.
Use a left shift and 2 subtracts like we do for 30. Move this out from behind the slow lea check since it doesn't even use an LEA.
Use this for multiply by 14 as well.
llvm-svn: 337856
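For reference, the shift-plus-two-subtracts sequences mentioned above, as a minimal sketch (hypothetical helpers):

#include <cstdint>

uint32_t mul30(uint32_t x) { return (x << 5) - x - x; } // 32x - 2x == 30x
uint32_t mul14(uint32_t x) { return (x << 4) - x - x; } // 16x - 2x == 14x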
The new lowering can be done in 2 LEAs. The old code took 1 LEA, 1 shift, and 1 sub.
llvm-svn: 337851
This pulls the OutlinedFunction remark out into its own function to make
the code a bit easier to read.
llvm-svn: 337849
Just some gardening here.
Similar to how we moved call information into Candidates, this moves outlined
frame information into OutlinedFunction. This allows us to remove
TargetCostInfo entirely.
Anywhere where we returned a TargetCostInfo struct, we now return an
OutlinedFunction. This establishes OutlinedFunctions as more of a general
repeated sequence, and Candidates as occurrences of those repeated sequences.
llvm-svn: 337848
34169)
When building with LTO, builtin functions that are defined but whose calls have not been inserted yet, get internalized. The Global Dead Code Elimination phase in the new LTO implementation then removes these function definitions. Later optimizations add calls to those functions, and the linker then dies complaining that there are no definitions. This CL fixes the new LTO implementation to check if a function is builtin, and if so, to not internalize (and later DCE) the function. As part of this fix I needed to move the RuntimeLibcalls.{def,h} files from the CodeGen subdirectory to the IR subdirectory. I have updated all the files that accessed those two files to access their new location.
Fixes PR34169
Patch by Caroline Tice!
Differential Revision: https://reviews.llvm.org/D49434
llvm-svn: 337847
TCRETURNr* and fix latent bugs with register class updates.
Summary:
Enabling this fully exposes a latent bug in the instruction folding: we
never update the register constraints for the register operands when
fusing a load into another operation. The fused form could, in theory,
have different register constraints on its operands. And in fact,
TCRETURNm* needs its memory operands to use tailcall compatible
registers.
I've updated the folding code to re-constrain all the registers after
they are mapped onto their new instruction.
However, we still can't enable folding in the general case from
TCRETURNr* to TCRETURNm* because doing so may require more registers to
be available during the tail call. If the call itself uses all but one
register, and the folded load would require both a base and index
register, there will not be enough registers to allocate the tail call.
It would be better, IMO, to teach the register allocator to *unfold*
TCRETURNm* when it runs out of registers (or specifically check the
number of registers available during the TCRETURNr*) but I'm not going
to try and solve that for now. Instead, I've just blocked the forward
folding from r -> m, leaving LLVM free to unfold from m -> r as that
doesn't introduce new register pressure constraints.
The downside is that I don't have anything that will directly exercise
this. Instead, I will be immediately using this in my SLH patch. =/
Still worse, without allowing the TCRETURNr* -> TCRETURNm* fold, I don't
have any tests that demonstrate the failure to update the memory operand
register constraints. This patch still seems correct, but I'm nervous
about the degree of testing due to this.
Suggestions?
Reviewers: craig.topper
Subscribers: sanjoy, mcrosier, hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D49717
llvm-svn: 337845
When we inline a function with a min-legal-vector-width attribute we need to make sure the caller also ends up with at least that vector width.
This patch is necessary to make always_inline functions like intrinsics propagate their min-legal-vector-width. Though nothing uses min-legal-vector-width yet.
A future patch will add heuristics to prevent inlining when there are vector width mismatches. But that code would need to be in inline cost analysis, which is separate from the code added here.
Differential Revision: https://reviews.llvm.org/D49162
llvm-svn: 337844
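A hedged sketch of where the attribute comes from: this assumes clang's __min_vector_width__ attribute is what lowers to the IR-level "min-legal-vector-width" function attribute, and the code below is illustrative rather than taken from the patch.

// Hypothetical always_inline callee that requires 512-bit legal vectors.
__attribute__((always_inline, __min_vector_width__(512)))
static inline int wide_helper(int x) { return x + 1; }

int caller(int x) {
  // After inlining wide_helper, caller must end up with a
  // min-legal-vector-width of at least 512 as well, which is what the
  // inliner change above ensures.
  return wide_helper(x);
}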
Before this, TCI contained all the call information for each Candidate.
This moves that information onto the Candidates. As a result, each Candidate
can now supply how it ought to be called. Thus, Candidates will be able to,
say, call the same function in cheaper ways when possible. This also removes
that information from TCI, since it's no longer used there.
A follow-up patch for the AArch64 outliner will demonstrate this.
llvm-svn: 337840
Having the missed remark code in the middle of `findCandidates` made the
function hard to follow. This yanks that out into a new function,
`emitNotOutliningCheaperRemark`.
llvm-svn: 337839
Just some simple gardening to improve clarity.
Before, we had something along the lines of
1) Create a std::vector of Candidates
2) Create an OutlinedFunction
3) Create a std::vector of pointers to Candidates
4) Copy those over to the OutlinedFunction and the Candidate list
Now, OutlinedFunctions create the Candidate pointers. They're still copied
over to the main list of Candidates, but it makes it a bit clearer what's
going on.
llvm-svn: 337838
This patch uses SCEV to avoid inserting some bounds checks when they are not needed. This slightly improves the performance of code compiled with the bounds check sanitizer.
Differential Revision: https://reviews.llvm.org/D49602
llvm-svn: 337830
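A hedged example of a check this can remove (assuming the pass is the one behind clang's -fsanitize=local-bounds; the function below is made up):

int sum16() {
  int a[16];
  for (int i = 0; i < 16; ++i)
    a[i] = i;   // each store would normally get an object-size bounds check
  int s = 0;
  for (int i = 0; i < 16; ++i)
    s += a[i];  // SCEV can prove i stays in [0, 16), so these accesses are
                // provably in bounds and the checks can be skipped
  return s;
}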
This is a workaround and it would be better to fix this generally, but
doing it generally is quite tricky. See D48541 and PR38117.
Doing it in PredicateInfo directly allows us to use the type address to
differentiate different unnamed types, because neither the created
declarations nor the ssa_copy calls should be visible after
PredicateInfo got destroyed.
Reviewers: efriedma, davide
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D49126
llvm-svn: 337828
For the final DTPREL addition, rather than a lui/daddiu/daddu triple,
LLVM was erroneously emitting a daddiu/daddiu pair, treating the %dtprel_hi
as if it were a %dtprel_lo, since Mips::Hi expands unshifted for Sym64.
Instead, use a new TlsHi node and, although unnecessary due to the exact
structure of the nodes emitted, use TlsHi for local exec too to prevent
future bugs. Also garbage-collect the unused TprelLo and TlsGd nodes,
and TprelHi since its functionality is provided by the new common TlsHi node.
Patch by James Clarke.
Differential revision: https://reviews.llvm.org/D49259
llvm-svn: 337827
helper and restructure the post-load hardening to use this.
This isn't as trivial as I would have liked because the post-load
hardening used a trick that only works for it where it swapped in
a temporary register to the load rather than replacing anything.
However, there is a simple way to do this without that trick that allows
this to easily reuse a friendly API for hardening a value in a register.
That API will in turn be usable in subsequent patches.
This also technically changes the position at which we insert the subreg
extraction for the predicate state, but that never resulted in an actual
instruction and so tests don't change at all.
llvm-svn: 337825
be more accurate and understandable.
llvm-svn: 337822
ARM Stage 2 builders have been suspiciously broken since the pass was
committed. Disabling to hopefully fix the bots and give me time to
debug.
llvm-svn: 337821
SmallVectorTemplateCommon wants to know the address of the first element
so it can detect whether it's in "small size" mode.
The old implementation split the small array, creating the storage for
the first element in SmallVectorTemplateCommon, and pulling the rest
into SmallVectorStorage where we know the size of the array. This
bloats a size-0 SmallVector by the larger of sizeof(void*) and sizeof(T),
and we're not even using the storage.
The new implementation leaves the full small storage to
SmallVectorStorage. To calculate the offset of the first element in
SmallVectorTemplateCommon, we just need to know how far to jump, which
we can calculate out-of-band. One subtlety is that we need
SmallVectorStorage to be properly aligned even when the size is 0, to be
sure that (for large alignments) we actually have the padding and it's
well defined to do the pointer math.
llvm-svn: 337820
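A condensed C++ sketch of the out-of-band offset computation described above; the names are illustrative stand-ins rather than the exact LLVM ones.

#include <cstddef>

struct VectorBase {                  // head fields only, no element storage
  void *BeginX;
  unsigned Size = 0, Capacity = 0;
};

// Never instantiated: it only mirrors the layout "head, then the first
// suitably aligned element" so offsetof can measure the jump.
template <class T> struct AlignmentAndSize {
  alignas(VectorBase) char Base[sizeof(VectorBase)];
  alignas(T) char FirstEl[sizeof(T)];
};

template <class T> void *getFirstEl(VectorBase *V) {
  // The small storage, when present, starts at this fixed offset past the
  // head, so the common base class never owns any element storage itself.
  return reinterpret_cast<char *>(V) + offsetof(AlignmentAndSize<T>, FirstEl);
}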
This reverts commit b454fa1b4079b6c0a5b1565982d16516385838d7.
llvm-svn: 337812
There are two forms for label debug information in DWARF format.
1. Labels in a non-inlined function:
     DW_TAG_label
       DW_AT_name
       DW_AT_decl_file
       DW_AT_decl_line
       DW_AT_low_pc
2. Labels in an inlined function:
     DW_TAG_label
       DW_AT_abstract_origin
       DW_AT_low_pc
We will collect label information from DBG_LABEL. Before every DBG_LABEL,
we will generate a temporary symbol to denote the location of the label.
The symbol could be used to get DW_AT_low_pc afterwards. So, we create a
mapping between 'inlined label' and DBG_LABEL MachineInstr in DebugHandlerBase.
The DBG_LABEL in the mapping is used to query the symbol before it.
The AbstractLabels in DwarfCompileUnit is used to process labels in inlined
functions.
We also keep a mapping between scope and labels in DwarfFile to help to
generate correct tree structure of DIEs.
Differential Revision: https://reviews.llvm.org/D45556
Patch by Hsiangkai Wang.
llvm-svn: 337799
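A small made-up source example of the labels these DIEs describe (assuming the front end emits the corresponding llvm.dbg.label for the source label when compiling with -g):

int clamp_nonnegative(int x) {
  if (x < 0)
    goto fail;
  return x;
fail: // described by a DW_TAG_label DIE with DW_AT_name, DW_AT_decl_file,
      // DW_AT_decl_line and DW_AT_low_pc; if the function is inlined, the
      // label instead gets DW_AT_abstract_origin plus DW_AT_low_pc.
  return 0;
}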
Reviewers: arsenm
Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D49601
llvm-svn: 337798
Summary:
We were marking G_EXTRACT operations unsupported if the output type
was larger than the input type. I don't see how this could ever actually
happen, so I dropped the constraint. Doing this makes it possible to
reuse the same legality code for G_INSERT.
Reviewers: arsenm
Reviewed By: arsenm
Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, rovka, kristof.beyls, dstuttard, tpr, t-tye, llvm-commits
Differential Revision: https://reviews.llvm.org/D49600
llvm-svn: 337794
This new JIT event listener supports generating profiling data for
the linux 'perf' profiling tool, allowing it to generate function and
instruction level profiles.
Currently this functionality is not enabled by default, but must be
enabled with LLVM_USE_PERF=yes. Given that the listener has no
dependencies, it might be sensible to enable by default once the
initial issues have been shaken out.
I followed existing precedent in registering the listener by default
in lli. Should there be a decision to enable this by default on linux,
that should probably be changed.
Please note that until https://reviews.llvm.org/D47343 is resolved,
using this functionality with mcjit rather than orcjit will not
reliably work.
Disregarding the previous comment, here's an example:
$ cat /tmp/expensive_loop.c
#include <stdbool.h>
#include <stdint.h>

bool stupid_isprime(uint64_t num)
{
  if (num == 2)
    return true;
  if (num < 1 || num % 2 == 0)
    return false;
  for(uint64_t i = 3; i < num / 2; i+= 2) {
    if (num % i == 0)
      return false;
  }
  return true;
}

int main(int argc, char **argv)
{
  int numprimes = 0;
  for (uint64_t num = argc; num < 100000; num++)
  {
    if (stupid_isprime(num))
      numprimes++;
  }
  return numprimes;
}

$ clang -ggdb -S -c -emit-llvm /tmp/expensive_loop.c -o /tmp/expensive_loop.ll
$ perf record -o perf.data -g -k 1 ./bin/lli -jit-kind=mcjit /tmp/expensive_loop.ll 1
$ perf inject --jit -i perf.data -o perf.jit.data
$ perf report -i perf.jit.data
- 92.59% lli jitted-5881-2.so [.] stupid_isprime
stupid_isprime
main
llvm::MCJIT::runFunction
llvm::ExecutionEngine::runFunctionAsMain
main
__libc_start_main
0x4bf6258d4c544155
+ 0.85% lli ld-2.27.so [.] do_lookup_x
And line-level annotations also work:
│ for(uint64_t i = 3; i < num / 2; i+= 2) {
│1 30: movq $0x3,-0x18(%rbp)
0.03 │1 38: mov -0x18(%rbp),%rax
0.03 │ mov -0x10(%rbp),%rcx
│ shr $0x1,%rcx
3.63 │ ┌──cmp %rcx,%rax
│ ├──jae 6f
│ │ if (num % i == 0)
0.03 │ │ mov -0x10(%rbp),%rax
│ │ xor %edx,%edx
89.00 │ │ divq -0x18(%rbp)
│ │ cmp $0x0,%rdx
0.22 │ │↓ jne 5f
│ │ return false;
│ │ movb $0x0,-0x1(%rbp)
│ │↓ jmp 73
│ │ }
3.22 │1 5f:│↓ jmp 61
│ │ for(uint64_t i = 3; i < num / 2; i+= 2) {
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D44892
llvm-svn: 337789
This is in preparation for extracting this into a re-usable utility in
this code.
llvm-svn: 337785
This code was really nasty, had several bugs in it originally, and
wasn't carrying its weight. While on Zen we have all 4 ports available
for SHRX, on all of the Intel parts with Agner's tables, SHRX can only
execute on 2 ports, giving it 1/2 the throughput of OR.
Worse, all too often this pattern required two SHRX instructions in
a chain, hurting the critical path by a lot.
Even if we end up needing to save/restore EFLAGS, that is no longer so
bad. We pay for a uop to save the flag, but we very likely get fusion
when it is used by forming a test/jCC pair or something similar. In
practice, I don't expect the SHRX to be a significant savings here, so
I'd like to avoid the complex code required. We can always resurrect
this if/when someone has a specific performance issue addressed by it.
llvm-svn: 337781
Summary: SmallVector's elements are moved when resizing, causing a use-after-free.
Reviewers: probinson, dblaikie
Subscribers: JDevlieghere, llvm-commits
Differential Revision: https://reviews.llvm.org/D49702
llvm-svn: 337772
llvm-svn: 337770
handling tables of lists.
The intent is to use it for location list tables as well. Change is almost NFC with the exception
of the spelling of some strings used during dumping (all lowercase now).
Reviewer: JDevlieghere
Differential Revision: https://reviews.llvm.org/D49500
llvm-svn: 337763
Summary:
Similar to what lld already does for dllimport symbols which are
prefaced with __imp_ (see lld patch r240620), strip off the __imp_
prefix in LTO. Otherwise we can get 2 separate GlobalResolution for
a single symbol, the dllimport declaration, and the definition, which
leads to incorrect LTO handling.
Fixes PR38105.
Reviewers: pcc
Subscribers: mehdi_amini, inglorion, steven_wu, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D49138
llvm-svn: 337762
We really should set *status to memory_alloc_failure, but we need to refactor
the demangler a bit to properly propagate the failure up the stack. Until then,
it's better to explicitly terminate than rely on a null dereference crash.
rdar://31240372
llvm-svn: 337759
This actually has nothing to do with the associative comdat sections
that aren't supported by GNU binutils ld.
Clarify the comments from SVN r335918 and use a separate flag for it.
Differential Revision: https://reviews.llvm.org/D49645
llvm-svn: 337757
Since SVN r335286, the .xdata sections are produced without an attached
symbol, which requires using a different syntax when printing assembly
output.
Instead of the usual syntax of '.section <name>,"dr",discard,<symbol>',
use '.section <name>,"dr"' + '.linkonce discard' (which is what GCC
uses for all assembly output).
This fixes PR38254.
Differential Revision: https://reviews.llvm.org/D49651
llvm-svn: 337756
This matches the structure used on X86 and ARM. This requires
a little bit of duplication of the parts that are equal in both
AArch64 COFF variants though.
Before SVN r335286, these classes didn't add anything that MCAsmInfoCOFF
didn't, but now they do.
This makes AArch64 match X86 in how comdat is used for float constants
for MinGW.
Differential Revision: https://reviews.llvm.org/D49637
llvm-svn: 337755
This avoids approx. 2 x 10^5 DenseMap insertions in both non-debug and
debug -O2 builds of the sqlite3 amalgamation.
llvm-svn: 337751
Summary:
Without this change, the WholeProgramDevirt pass, which requires the
TargetLibraryInfo, will construct one from the default triple.
Fixes PR38139.
Reviewers: pcc
Subscribers: mehdi_amini, inglorion, steven_wu, dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D49278
llvm-svn: 337750
This patch makes debug counters keep track of the total number of times
we've called `shouldExecute` for each counter, so it's easier to build
automated tooling on top of these.
A patch to print these counts is coming soon.
Patch by Zhizhou Yang!
Differential Revision: https://reviews.llvm.org/D49560
llvm-svn: 337748
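For context, typical debug-counter usage looks roughly like this (a sketch based on llvm/Support/DebugCounter.h; the counter name and pass are made up). The change above means every one of these shouldExecute calls is now tallied per counter.

#include "llvm/Support/DebugCounter.h"

using namespace llvm;

DEBUG_COUNTER(MyTransformCounter, "my-pass-transform",
              "Controls which transformations of the pass are performed");

bool tryTransform() {
  // Each query is now counted, so tooling can read back how often the
  // counter was consulted, not just how it was configured.
  if (!DebugCounter::shouldExecute(MyTransformCounter))
    return false;
  // ... perform the transformation ...
  return true;
}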
Summary:
Check if the parent basic block and caller exists
before calling CS.getCaller when constant folding
strip.invariant.group intrinsic.
This avoids a crash when the function containing the intrinsic
is being inlined. The instruction is checked for any simplification
but has not yet been added to a basic block.
Reviewers: Prazek, rsmith, efriedma
Reviewed By: efriedma
Subscribers: eraman, llvm-commits
Differential Revision: https://reviews.llvm.org/D49690
llvm-svn: 337742
models"
Don't try to generate large PIC code for non-ELF targets. Neither COFF
nor MachO have relocations for large position independent code, and
users have been using "large PIC" code models to JIT 64-bit code for a
while now. With this change, if they are generating ELF code, their
JITed code will truly be PIC, but if they target MachO or COFF, it will
contain 64-bit immediates that directly reference external symbols. For
a JIT, that's perfectly fine.
llvm-svn: 337740
RegScavenger::unprocess walks backward, so it should undo the effects
of defs before undoing effects of kills. Previously it did things in
the opposite order, leaving a register apparently unused (dead) in the
case where an instruction both used (killed) and defined a register.
Differential Revision: https://reviews.llvm.org/D42200
llvm-svn: 337735
Instead of comparing names, compare positions in the parent module.
llvm-svn: 337723
llvm-svn: 337720
llvm-svn: 337714
Summary:
OpChain has subclasses, so add a virtual destructor.
This fixes an issue when deleting subclasses of OpChain (see MatchSMLAD() specifically) in r337701.
Reviewers: javed.absar
Subscribers: llvm-commits, SjoerdMeijer, samparker
Differential Revision: https://reviews.llvm.org/D49681
llvm-svn: 337713
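The underlying C++ rule, as a generic sketch (the types here are illustrative, not the actual ARMParallelDSP classes):

#include <vector>

struct Chain {
  virtual ~Chain() = default; // without 'virtual', the delete below would
                              // only run Chain's destructor
};

struct BinChain : Chain {
  std::vector<int> LHS, RHS;  // members whose destructors must run
};

void destroy(Chain *C) {
  delete C;                   // well-defined only because ~Chain is virtual
}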
Fix double-free.
llvm-svn: 337711
Attempt to fix the leak introduced in r337687 and make sanitizer
buildbots green again.
llvm-svn: 337709
scalarizeVectorLoad creates MERGE_VALUES nodes which are immediately
decomposed in expandLoad. Elide the node in these cases.
llvm-svn: 337708
In preparing to allow ARMParallelDSP pass to parallelise more than
smlads, I've restructured some elements:
- The ParallelMAC struct has been renamed to BinOpChain.
- The BinOpChain struct holds two value lists: LHS and RHS, as well
as inheriting from the OpChain base class.
- The OpChain struct holds all the values of the represented chain
and has had the memory locations functionality inserted into it.
- ParallelMACList becomes OpChainList and it now holds pointers
instead of objects.
Differential Revision: https://reviews.llvm.org/D49020
llvm-svn: 337701
Two minor issues: The new MCD SchedWrite name does not contain "Unit" like
all the others, so a check is needed. Also, print "LSU" instead of "LS".
Review: Ulrich Weigand
llvm-svn: 337700
Differential Revision: https://reviews.llvm.org/D48809
llvm-svn: 337698