since a caller uses preserved registers across the call.
llvm-svn: 175043
assembly.
llvm-svn: 175036
llvm-svn: 175035
llvm-svn: 175034
RegisterCoalescer used to depend on LiveDebugVariables. LDV removes DBG_VALUEs
without emitting them at the end.
We fix this by removing LDV from RegisterCoalescer. Also add an assertion to
make sure we call emitDebugValues if DBG_VALUEs are removed in
runOnMachineFunction.
rdar://problem/13183203
Reviewed by Andy & Jakob
llvm-svn: 175023
llvm-svn: 174992
This is complicated by backward labels (e.g., 0b can be both a backward label
and a binary zero). The current implementation assumes [0-9]b is always a
label, so it's possible for 0b and 1b not to be interpreted correctly in
MS-style inline assembly. However, this is relatively simple to fix in the
inline assembly itself (i.e., drop the [bB] suffix).
This patch also limits backward labels to [0-9]b, so that only 0b and 1b are
ambiguous.
Part of rdar://12470373
llvm-svn: 174983
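To illustrate the ambiguity described above (a hypothetical MS-style example, not taken from the commit's test cases): in MASM syntax a trailing b can be a binary radix suffix, while a lexer that treats [0-9]b as a backward label reference reads the same token as a label. Dropping the suffix, as suggested above, sidesteps the problem. Assumptions: MSVC __asm block syntax, x86 only.

/* Hypothetical MS-style inline assembly (MSVC __asm syntax, x86 only).
 * In MASM, "0b" can denote the binary literal zero, but a lexer that treats
 * [0-9]b as a backward label reference may read it as "label 0, backwards".
 * Writing the literal without the radix suffix avoids the ambiguity. */
void zero_eax(void) {
    __asm {
        mov eax, 0      /* unambiguous; "mov eax, 0b" could be mis-lexed */
    }
}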
option "generate-dwarf-pubnames" to control it, set to "false" by default.
llvm-svn: 174981
llvm-svn: 174979
Patch by: Kevin Schoedel
llvm-svn: 174974
instructions.
llvm-svn: 174973
DAGCombiner::ReduceLoadWidth was converting (trunc i32 (shl i64 v, 32))
into (shl i32 v, 32), an over-wide shift that folds to undef. To prevent
this, check the shift count against the final result size.
Patch by: Kevin Schoedel
Reviewed by: Nadav Rotem
llvm-svn: 174972
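A small C illustration of why the original fold is wrong (demonstration code, not the DAGCombiner change itself): truncating a 64-bit value that was shifted left by 32 is well defined and always yields 0, whereas shifting a 32-bit value by 32 is undefined, so the two expressions are not equivalent.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t v = 0x12345678u;
    /* (trunc i32 (shl i64 v, 32)): well defined, always 0 */
    uint32_t narrow_after_wide_shift = (uint32_t)(v << 32);
    printf("%u\n", narrow_after_wide_shift);
    /* (shl i32 v, 32): the shift count equals the bit width, so this would be
     * undefined behaviour in C and an undef result in LLVM IR:
     *   uint32_t bad = (uint32_t)v << 32;
     */
    return 0;
}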
Vectors were being manually scalarized by the backend. Instead,
let the target-independent code do all of the work. The manual
scalarization was from a time before good target-independent support
for scalarization in LLVM. However, this forces us to handle vector loads
and stores specially, since we can turn them into PTX instructions that
produce/consume multiple operands.
llvm-svn: 174968
llvm-svn: 174959
llvm-svn: 174954
A reverse shuffle is lowered to a vrev and possibly a vext instruction (in the
quad-word case).
radar://13171406
llvm-svn: 174933
Lower reverse shuffles to a vrev64 and a vext instruction instead of the default
legalization of storing and loading to the stack. This is important because we
generate reverse shuffles in the loop vectorizer when we reverse store to an
array.
uint8_t Arr[N];
for (i = 0; i < N; ++i)
  Arr[N - i - 1] = ...
radar://13171760
llvm-svn: 174929
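For reference, the two-instruction pattern mentioned above can also be written by hand with NEON intrinsics (a sketch for the 16 x i8 quad-word case; the actual lowering happens in the backend, not via intrinsics, and the helper name is illustrative): vrev64 reverses the bytes within each 64-bit half, and vext then swaps the two halves.

#include <arm_neon.h>

/* Reverse all 16 bytes of a quad-word vector using the vrev64 + vext pattern.
 * vrev64q_u8 reverses the bytes inside each 64-bit half; vextq_u8(r, r, 8)
 * then swaps the two halves, yielding the fully reversed vector. */
static inline uint8x16_t reverse_bytes_q(uint8x16_t v) {
    uint8x16_t halves_reversed = vrev64q_u8(v);            /* VREV64.8 */
    return vextq_u8(halves_reversed, halves_reversed, 8);  /* VEXT.8 #8 */
}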
Part of rdar://12470373
llvm-svn: 174926
bitcast X to ...
llvm-svn: 174905
- variety of compare instructions,
- loops with no preheader,
- arbitrary lower and upper bounds.
llvm-svn: 174904
llvm-svn: 174903
* Added a file with test cases for i386 Intel syntax.
llvm-svn: 174900
is not valid in this case, and was causing incorrect optimizations.
llvm-svn: 174896
llvm-svn: 174891
This allows llvm-dwarfdump to handle the relocations needed, at least
for LLVM-produced code.
llvm-svn: 174874
This broke on Windows, presumably due to interleaving of output streams.
llvm-svn: 174873
This gives a DiagnosticType to all AsmOperands in sight. This replaces all
"invalid operand" diagnostics with something more specific. The messages given
should still be sufficiently vague that they're not usually actively misleading
when LLVM guesses your instruction incorrectly.
llvm-svn: 174871
llvm-svn: 174865
llvm-svn: 174864
Handle chains in which the same offset is used for both loads and
stores to the same array.
Fixes rdar://11410078.
llvm-svn: 174789
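A minimal C loop exhibiting the pattern described above (a hypothetical example, not the commit's test case): the same offset is used both to load from and to store to the same array.

void increment(int *a, int n) {
    int i;
    for (i = 0; i < n; ++i)
        a[i] = a[i] + 1;   /* the load and the store use the same offset into a[] */
}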
line table entries in assembly.
llvm-svn: 174785
same, so we put an indicator in the comment field when we think we are
emitting the 16-bit version. The difference is important for the direct object
emitter, as well as for other passes which need an accurate count of program
size. There will be other similar putbacks to this for various
instructions.
llvm-svn: 174747
Previously, even when a pre-increment load or store was generated,
we often needed to keep a copy of the original base register for use
with other offsets. If all of these offsets are constants (including
the offset which was combined into the addressing mode), then this is
clearly unnecessary. This change adjusts these other offsets to use the
new incremented address.
llvm-svn: 174746
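The rewrite can be pictured in plain C terms (a sketch under the assumption that all offsets are constants; the real change operates on the selection DAG, not on source code, and the function names are illustrative): once the base is updated by the pre-increment access, the remaining constant offsets are re-expressed against the new base so the original base register no longer has to stay live.

/* Before: the second access is addressed from the ORIGINAL base, which must
 * therefore stay live alongside the updated pointer. */
long before(long *base) {
    long a = *(base + 1);   /* could become a pre-increment load */
    long b = *(base + 3);   /* still uses the original base */
    return a + b;
}

/* After: the second offset is adjusted to the incremented address, so only the
 * updated pointer is needed. */
long after(long *base) {
    base += 1;              /* pre-increment load updates the base */
    long a = *base;
    long b = *(base + 2);   /* 3 - 1, relative to the new base */
    return a + b;
}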
Aside from the question of whether we report a warning or an error when we
can't satisfy a requested stack object alignment, the current implementation
of this is not good. We're not providing any source location in the diagnostics
and the current warning is not connected to any warning group so you can't
control it. We could improve the source location somewhat, but we can do a
much better job if this check is implemented in the front-end, so let's do that
instead. <rdar://problem/13127907>
llvm-svn: 174741
Thanks to help from Nadav and Hal, I have a more reasonable (and even
correct!) approach. This specifically penalizes the insertelement
and extractelement operations for the performance hit that will occur
on PowerPC processors.
llvm-svn: 174725
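In C terms (using the GCC/Clang vector extension; this is only meant to show which operations the cost change targets, and the function name is illustrative), lane reads and writes like these map to extractelement and insertelement in the IR:

typedef double v2df __attribute__((vector_size(16)));

double touch_lanes(v2df v, double x) {
    double e = v[1];   /* extractelement: move a lane out to a scalar */
    v[0] = x;          /* insertelement:  move a scalar into a lane   */
    return e + v[0];
}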
isn't using the default calling convention. However, if the transformation is
from a call to inline IR, then the calling convention doesn't matter.
rdar://13157990
llvm-svn: 174724
Adds a function to target transform info to query for the cost of address
computation. The cost model analysis pass now also queries this interface.
The code in LoopVectorize adds the cost of address computation as part of the
memory instruction cost calculation. Only there do we know whether the
instruction will be scalarized or not.
Increase the penalty for inserting into D registers on Swift. This becomes
necessary because we now always assume that address computation has a cost,
and a value of three is closer to the real cost on that architecture.
radar://13097204
llvm-svn: 174713
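A rough sketch of the accounting described above (illustrative names only, not the actual target transform info interface or the LoopVectorize code): when the vectorizer knows a memory instruction will be scalarized, each lane pays for its own address computation; otherwise the cost is charged once.

/* Illustrative cost accounting only. */
unsigned memory_op_cost(unsigned vector_width, int will_be_scalarized,
                        unsigned addr_cost, unsigned access_cost) {
    if (will_be_scalarized)
        return vector_width * (addr_cost + access_cost); /* per-lane address + access */
    return addr_cost + access_cost;                      /* one vector access */
}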
and provide build instructions
llvm-svn: 174711
allowed size for the instruction. This code uses RegScavenger to fix this.
We sometimes need two registers for Mips16, so we must handle things
differently from how the register scavenger is normally used.
llvm-svn: 174696
included."
This reverts commit 3854a5d90fee52af1065edbed34521fff6cdc18d.
This causes a clang unit test to hang: vtable-available-externally.cpp.
llvm-svn: 174692
llvm-svn: 174675
Change the
original JALR instruction with one register operand to be a pseudo-instruction.
llvm-svn: 174657
llvm-svn: 174650
llvm-svn: 174639
Vector selects are cheap on NEON. They get lowered to a vbsl instruction.
radar://13158753
llvm-svn: 174631
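For illustration (a sketch, not part of the change above; the helper name is made up): the NEON bitwise-select intrinsic exposes the vbsl instruction that vector selects are lowered to. Each result lane takes bits from the first value where the mask is set and from the second elsewhere.

#include <arm_neon.h>

/* Per-lane select via VBSL: result = (mask & a) | (~mask & b). */
static inline int32x4_t select_i32(uint32x4_t mask, int32x4_t a, int32x4_t b) {
    return vbslq_s32(mask, a, b);
}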
These instructions compare two floating point values and return an
integer true (-1) or false (0) value.
When compiling code generated by the Mesa GLSL frontend, the SET*_DX10
instructions save us four instructions for most branch decisions that
use floating-point comparisons.
llvm-svn: 174609
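The DX10 comparison semantics described above, written out as plain C for clarity (an illustration of one variant, not the backend pattern; the function name is made up): the result is the integer -1 (all bits set) when the comparison holds and 0 otherwise.

/* DX10-style "set on greater-or-equal": -1 for true, 0 for false. */
int setge_dx10(float a, float b) {
    return (a >= b) ? -1 : 0;
}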
All of the le and lt variants are unsupported.
llvm-svn: 174608
llvm-svn: 174607
llvm-svn: 174591
llvm-svn: 174588