It is OK for an alias live range to overlap if there is a copy to or from the
physical register. CoalescerPair can work out if the copy is coalescable
independently of the alias.
This means that we can join with the actual destination interval instead of
using the getOrigDstReg() hack. It is no longer necessary to merge clobber
ranges into subregisters.
llvm-svn: 107695
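A rough sketch, in C with stand-in types rather than the real CoalescerPair/LiveInterval classes, of the rule described above: an overlap on an alias of the destination physreg is tolerable only when every value in the overlapping part is defined by a copy to or from that physical register.

  #include <stdbool.h>

  /* Stand-in type, not LLVM's data structures: each value in the overlapping
   * part of the alias's live range records whether its defining instruction
   * is a copy to or from the physical register being joined. */
  struct OverlapValue { bool def_is_copy_of_physreg; };

  /* The join can proceed only if every overlapping value comes from such a
   * copy; whether that copy is itself coalescable is decided separately
   * (by CoalescerPair in the real code). */
  static bool overlap_is_joinable(const struct OverlapValue *vals, int n) {
      for (int i = 0; i < n; ++i)
          if (!vals[i].def_is_copy_of_physreg)
              return false;
      return true;
  }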
the block before calling the expansion hook. And don't
put EFLAGS in an MBB's live-in list twice.
llvm-svn: 107691

llvm-svn: 107684

llvm-svn: 107678

if profitable.
llvm-svn: 107673

llvm-svn: 107670

llvm-svn: 107668

which do not depend on SelectionDAG.
llvm-svn: 107666

from getPhysicalRegisterRegClass.
llvm-svn: 107660

section means that it is used only during program load and can be discarded
afterwards. This way *only* debug sections can be discarded, but not the
opposite. This looks like a copy-and-pasto from the ELF code, since there it
contains the reverse flag ('alloc').
llvm-svn: 107658

llvm-svn: 107657

llvm-svn: 107656

the pseudo instruction is not at the end of the block.
llvm-svn: 107655
registers. Split out testcases per architecture and OS now.
Patch from Nelson Elhage.
llvm-svn: 107640

llvm-svn: 107637

llvm-svn: 107625

llvm-svn: 107622

llvm-svn: 107615

llvm-svn: 107613

llvm-svn: 107612

llvm-svn: 107610

v2f32 is illegal on x86.
llvm-svn: 107609

llvm-svn: 107602
the example in the testcase, we now generate:
_test1:                             ## @test1
        movss   4(%esp), %xmm0
        addss   8(%esp), %xmm0
        movl    12(%esp), %eax
        movss   %xmm0, (%eax)
        ret
instead of:
_test1:                             ## @test1
        subl    $20, %esp
        movl    24(%esp), %eax
        movq    %mm0, (%esp)
        movq    %mm0, 8(%esp)
        movss   (%esp), %xmm0
        addss   12(%esp), %xmm0
        movss   %xmm0, (%eax)
        addl    $20, %esp
        ret
v2f32 support did not work reliably because most of the X86
backend didn't know it was legal. It was apparently only added
to support returning source-level v2f32 values in MMX registers
in x86-32 mode. If ABI compatibility is important on this
GCC-extended-vector type for some reason, then the frontend
should generate IR that returns v2i32 instead of v2f32. However,
we generally don't try very hard to be ABI compatible on GCC
extended vectors.
llvm-svn: 107601
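For reference, the kind of source-level v2f32 value the message refers to can be written in C with the GCC vector extension; this is a hypothetical illustration, not the commit's actual testcase.

  /* Hypothetical illustration of a GCC-extended-vector v2f32 value; not the
   * actual testcase from this commit. */
  typedef float v2f32 __attribute__((vector_size(8)));

  /* Returning a v2f32 is what used to be lowered through MMX registers in
   * x86-32 mode; the commits around here stop treating plain v2f32 as a
   * legal type in the X86 backend. */
  v2f32 add2(v2f32 a, v2f32 b) {
      return a + b;   /* element-wise add of the two floats */
  }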
v2f32 as legal in 32-bit mode. It is just as terrible there,
but I just care about x86-64 and no one claims it is valuable
in 64-bit mode.
llvm-svn: 107600

llvm-svn: 107599

ensures remat'ed loads from fixed slots have the right alignments.
llvm-svn: 107591

llvm-svn: 107585

(SDNPMemOperand). This way, when they're morphed, the memory operands will
be copied as well.
llvm-svn: 107583

llvm-svn: 107581

llvm-svn: 107569

llvm-svn: 107565

llvm-svn: 107560

llvm-svn: 107558

llvm-svn: 107556

llvm-svn: 107552
slots so it's always false.
llvm-svn: 107550

llvm-svn: 107549

This code is transitional; it will soon be possible to eliminate
isExtractSubreg, isInsertSubreg, and isMoveInstr in most places.
llvm-svn: 107547

llvm-svn: 107540

llvm-svn: 107537

The COPY instruction is intended to replace the target-specific copy
instructions for virtual registers as well as the EXTRACT_SUBREG and
INSERT_SUBREG instructions in MachineFunctions. It won't be used in a
selection DAG.
COPY is lowered to native register copies by LowerSubregs.
llvm-svn: 107529
- Fix VEX prefix to be emitted with 3 bytes whenever VEX_5M
  represents a REX-equivalent two-byte leading opcode.
llvm-svn: 107523
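Roughly, the underlying constraint is that the compact two-byte VEX prefix (C5) can only encode the single-byte 0F opcode map with VEX.W clear and no REX.X/REX.B register extensions; anything else needs the three-byte form (C4). A sketch of that decision with illustrative names, not the actual encoder code:

  #include <stdbool.h>

  /* Illustrative opcode-map values; not LLVM's VEX_5M encoding constants. */
  enum OpcodeMap { MAP_0F, MAP_0F38, MAP_0F3A };

  /* The two-byte VEX form implies map 0F, VEX.W = 0, and no X/B extension
   * bits, so a two-byte leading opcode map (0F 38 / 0F 3A), W = 1, or use
   * of the REX.X/REX.B bits forces the three-byte form. */
  static bool needs_three_byte_vex(enum OpcodeMap map, bool vex_w,
                                   bool rex_x, bool rex_b) {
      return map != MAP_0F || vex_w || rex_x || rex_b;
  }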
new basic blocks, and if used as a function argument, that can cause call
frame setup/destroy pairs to be split across a basic block boundary. That
prevents us from doing a simple assertion to check that the pairs match and
alloc/dealloc the same amount of space. Modify the assertion to only check
the amount allocated when there are matching pairs in the same basic block.
rdar://8022442
llvm-svn: 107517
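A minimal sketch of the relaxed check being described, using stand-in types rather than LLVM's MachineInstr representation: the amounts are only compared when a setup and its destroy appear in the same basic block.

  #include <assert.h>

  /* Stand-in instruction record: kind plus the stack adjustment amount. */
  enum Kind { FRAME_SETUP, FRAME_DESTROY, OTHER };
  struct Instr { enum Kind kind; long amount; };

  /* Assert that matching setup/destroy pairs within one block adjust the
   * stack by the same amount; pairs split across blocks go unchecked. */
  static void check_block(const struct Instr *mi, int n) {
      long pending = -1;              /* amount from an open setup, if any */
      for (int i = 0; i < n; ++i) {
          if (mi[i].kind == FRAME_SETUP) {
              pending = mi[i].amount;
          } else if (mi[i].kind == FRAME_DESTROY) {
              if (pending >= 0)       /* the pair is in this block */
                  assert(mi[i].amount == pending);
              pending = -1;
          }
      }
  }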
llvm-svn: 107516

llvm-svn: 107513

- X86 unfolding should check if the instruction being unfolded has
  memoperands. If there are no memoperands, then it must assume conservative
  alignment. If this would introduce an expensive SSE unaligned load / store,
  then unfoldMemoryOperand etc. should not unfold the instruction.
llvm-svn: 107509
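A small sketch of that policy, with illustrative names and thresholds rather than the real unfoldMemoryOperand interface: no memoperands means worst-case alignment, and if that would turn the access into an unaligned 16-byte SSE load/store, the unfold is refused.

  #include <stdbool.h>

  /* Illustrative helper, not LLVM's API: decide whether unfolding a folded
   * memory operand is acceptable given what we know about its alignment. */
  static bool ok_to_unfold(bool has_memoperand, unsigned known_align,
                           unsigned required_align /* e.g. 16 for SSE */) {
      /* With no memoperand we must assume conservative (minimal) alignment. */
      unsigned align = has_memoperand ? known_align : 1;
      /* Refuse if this would introduce an expensive unaligned SSE access. */
      return align >= required_align;
  }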
PrologEpilog code, and use it to determine whether the asm forces stack
alignment or not. gcc consistently does not do this for GCC-style asms;
Apple gcc inconsistently sometimes does it for asm blocks. There is no
convenient place to put a bit in either the SDNode or the MachineInstr
form, so I've added an extra operand to each; unlovely, but it does allow
for expansion for more bits, should we need it. PR 5125. Some existing
testcases are affected.
The operand lists of the SDNode and MachineInstr forms are indexed with
awesome mnemonics, like "2"; I may fix this someday, but not now. I'm not
making it any worse. If anyone is inspired I think you can find all the
right places from this patch.
llvm-svn: 107506
llvm-svn: 107505

llvm-svn: 107503