Commit message
this is now the default.
llvm-svn: 49667
memory intrinsic expansion code.
llvm-svn: 49666
stack tracebacks on Darwin x86-64 won't work by default;
nevertheless, everybody but me thinks this is a good idea.
llvm-svn: 49663
llvm-svn: 49661
llvm-svn: 49657
llvm-svn: 49652
Matthijs Kooijman
llvm-svn: 49648
llvm-svn: 49646
much simpler than in LegalizeDAG because calls are
not yet expanded into call sequences: that happens
after type legalization has finished.
llvm-svn: 49634
llvm-svn: 49617
llvm-svn: 49616
which is significantly more efficient.
llvm-svn: 49614
in its maps. Add some sanity checks that catch
this kind of thing. Hopefully these can be
removed one day (once all problems are fixed!)
but for the moment it seems wise to have them in.
llvm-svn: 49612
parts. Fixes PR1643
llvm-svn: 49611
llvm-svn: 49610
llvm-svn: 49606
the result IRBuilder. Patch by Dominic Hamon.
llvm-svn: 49604
not # of operands as an input.
llvm-svn: 49599
llvm-svn: 49593
optimized x86-64 (and x86) calls so that they work (... at least for
my test cases).
Should fix the following problems:
Problem 1: When I introduced the optimized handling of arguments for
tail-called functions (using a sequence of copyto/copyfrom virtual
registers instead of always lowering to the top of the stack), I did
not handle byval arguments correctly, e.g. they did not work at all :).
Problem 2: On x86-64, after the arguments of the tail-called function
are moved to their registers (which include ESI/RSI etc.), tail call
optimization performs byval lowering, which causes the xSI, xDI, and
xCX registers to be overwritten. This patch handles that by moving the
arguments to virtual registers first; after the byval lowering, the
arguments are moved from those virtual registers back to RSI/RDI/RCX.
llvm-svn: 49584
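For illustration, a minimal C++ sketch of the situation the fix targets; it is
not taken from the patch, and the names (Big, consume, forward) are invented.
A struct passed by value becomes a byval pointer argument, and a call in tail
position is a tail-call candidate, so lowering the byval copy must not clobber
registers that already hold outgoing arguments:

  #include <cstdio>

  // A struct large enough that, on x86-64, it is passed indirectly as a
  // "byval" pointer argument rather than in registers.
  struct Big {
    long data[8];
  };

  // Hypothetical callee; in the scenario above it would use a calling
  // convention and flag that guarantee tail calls (fastcc with
  // -tailcallopt in LLVM of this era).
  long consume(Big b, long x) {
    return b.data[0] + x;
  }

  // The call below is in tail position, so tail call optimization turns it
  // into a jump. Lowering the byval copy of 'b' uses xSI/xDI/xCX, the same
  // registers that may already hold the outgoing argument 'x' -- hence the
  // fix of staging arguments in virtual registers first.
  long forward(Big b, long x) {
    return consume(b, x + 1);
  }

  int main() {
    Big b = {{1, 2, 3, 4, 5, 6, 7, 8}};
    std::printf("%ld\n", forward(b, 41));
    return 0;
  }

Without guaranteed tail calls the call above is just an ordinary call; the
clobbering problem only appears once the byval copy and the tail-call
argument setup overlap.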
llvm-svn: 49583
on any current target and aren't optimized in DAGCombiner. Instead
of using intermediate nodes, expand the operations, choosing between
simple loads/stores, target-specific code, and library calls,
immediately.
Previously, the code to emit optimized code for these operations
was only used at initial SelectionDAG construction time; now it is
used at all times. This fixes some cases where rep;movs was being
used for small copies where simple loads/stores would be better.
This also cleans up code that checks for alignments less than 4;
let the targets make that decision instead of doing it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.
Also, this fixes a bug that resulted in the use of rep;stos for
memsets of 0 with non-constant memory size when the alignment was
at least 4. It's better to use the library in this case, which
can be significantly faster when the size is large.
This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.
llvm-svn: 49572
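A rough C++ sketch of the strategy choice described above, with invented names
and thresholds (chooseMemsetLowering, the 16-byte cutoff, targetHasFastBlockOps);
the real decision is made during memory-intrinsic expansion and goes through
target hooks rather than these toy rules:

  #include <cstddef>
  #include <optional>

  // Toy model of the strategy choice; not the actual SelectionDAG code.
  enum class MemOpLowering {
    SimpleLoadsStores,  // unroll into a handful of scalar loads/stores
    TargetSpecific,     // e.g. rep;movs / rep;stos on x86
    LibraryCall         // fall back to memcpy/memset in libc
  };

  // 'size' is empty when the byte count is not a compile-time constant.
  MemOpLowering chooseMemsetLowering(std::optional<std::size_t> size,
                                     unsigned align,
                                     bool targetHasFastBlockOps) {
    (void)align;  // in the real code the target hook sees the alignment;
                  // the point of the change is not to veto align < 4 here

    // Small constant sizes are best expanded inline.
    if (size && *size <= 16)
      return MemOpLowering::SimpleLoadsStores;

    // Unknown size: previously rep;stos was emitted when align >= 4; the
    // library call is usually faster once the size turns out to be large.
    if (!size)
      return MemOpLowering::LibraryCall;

    // Large constant sizes: let the target decide (x86 may use rep;movs
    // even for low alignment).
    return targetHasFastBlockOps ? MemOpLowering::TargetSpecific
                                 : MemOpLowering::LibraryCall;
  }

The design point is in the last two branches: unknown sizes go to the library
even when the alignment looks good, and alignment-based vetoes are left to the
target rather than to target-independent code.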
8-byte-aligned data.
llvm-svn: 49571
llvm-svn: 49569
llvm-svn: 49568
llvm-svn: 49566
llvm-svn: 49548
backtracking.
llvm-svn: 49544
implicit_def instead of a copy.
llvm-svn: 49543
the uses when the live interval is being spilled.
llvm-svn: 49542
llvm-svn: 49540
llvm-svn: 49538
cannot be built with GNAT GPL 2006, only with
GNAT GPL 2005.
llvm-svn: 49529
llvm-svn: 49524
llvm-svn: 49517
of calls and less aggressive with non-readnone calls.
llvm-svn: 49516
llvm-svn: 49514
llvm-svn: 49513
llvm-svn: 49512
llvm-svn: 49504
llvm-svn: 49502
wrong order.
llvm-svn: 49499
llvm-svn: 49496
in addition to integer expressions. Rewrite GetOrEnforceKnownAlignment
as a ComputeMaskedBits problem, moving all of its special alignment
knowledge to ComputeMaskedBits as low-zero-bits knowledge.
Also, teach ComputeMaskedBits a few basic things about Mul and PHI
instructions.
This improves ComputeMaskedBits-based simplifications in a few cases,
but more noticeably it significantly improves instcombine's alignment
detection for loads, stores, and memory intrinsics.
llvm-svn: 49492
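A toy C++ model of the low-zero-bits view of alignment, with invented helper
names; it is not the ComputeMaskedBits interface itself, only the arithmetic
behind treating alignment as known-zero low bits and propagating it through
Mul and PHI:

  #include <algorithm>
  #include <cstdint>

  // Number of known-zero low bits of a compile-time-known value; k zero
  // low bits means the value is a multiple of 2^k, i.e. 2^k-aligned.
  unsigned knownLowZeroBits(std::uint64_t v) {
    if (v == 0) return 64;
    unsigned n = 0;
    while ((v & 1) == 0) { v >>= 1; ++n; }
    return n;
  }

  // Mul: a multiple of 2^a times a multiple of 2^b is a multiple of 2^(a+b),
  // so the known-zero low bits of the operands add up.
  unsigned knownLowZeroBitsOfMul(unsigned lhsZeros, unsigned rhsZeros) {
    return std::min(lhsZeros + rhsZeros, 64u);
  }

  // PHI: only bits known zero on every incoming path stay known, so take
  // the minimum over the incoming values.
  unsigned knownLowZeroBitsOfPhi(unsigned zerosA, unsigned zerosB) {
    return std::min(zerosA, zerosB);
  }

  // GetOrEnforceKnownAlignment-style conclusion for a pointer expression.
  std::uint64_t alignmentFromLowZeroBits(unsigned zeros) {
    return std::uint64_t{1} << std::min(zeros, 63u);
  }

With eight known-zero low bits, for example, a pointer expression is known to
be at least 256-byte aligned.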
them all.
llvm-svn: 49491
llvm-svn: 49469
llvm-svn: 49466
llvm-svn: 49465
MOVZQI2PQIrr. This would be better handled as a DAG combine
(with the goal of eliminating the bitconvert), but I don't know
how to do that safely. Thoughts welcome.
llvm-svn: 49463
def : Pat<((v2f64 (vector_shuffle immAllZerosV_bc,
^
llvm-svn: 49462