LLVM commit messages
llvm-svn: 44261
for Darwin PPC, but it's not fully working yet.
llvm-svn: 44258
llvm-svn: 44240
llvm-svn: 44204
llvm-svn: 44183
llvm-svn: 44128
applied
to all targets uses GOT-relative offsets for PIC (Alpha?)
llvm-svn: 44108
in favour of teaching CCAssignToStack that size 0 and/or align
0 means to use the ABI values. This seems a neater solution.
It is safe since no legal value type has size 0.
llvm-svn: 44107
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update that information.
llvm-svn: 44104
llvm-svn: 44057
to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.
llvm-svn: 44056
llvm-svn: 44048
llvm-svn: 44045
adjustment fields, and an optional flag. If there is a "dynamic_stackalloc" in
the code, make sure that it's bracketed by CALLSEQ_START and CALLSEQ_END. If
not, then there is the potential for the stack to be changed while it is
being used by another instruction (such as a call).
This can only result in tears...
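A hedged C99 illustration (not from the commit; the function names are made up) of source that produces a dynamic_stackalloc inside a function that also makes a call:

/* The VLA 'buf' becomes a dynamic stack allocation; the call to consume()
   is bracketed by CALLSEQ_START/CALLSEQ_END, so the allocation must not
   adjust the stack while the call's outgoing-argument area is in use. */
static void consume(int *p, int n) { p[0] = n; }

void caller(int n) {
    int buf[n];          /* dynamic stack allocation (C99 VLA)   */
    consume(buf, n);     /* call with its own stack adjustment   */
}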
llvm-svn: 44037
should fix
some regressions on ppc nightly tests.
llvm-svn: 44029
Fixed some AsmPrinter issues
Added handling for the GLOBAL_OFFSET_TABLE node.
llvm-svn: 44024
Target maintainers: please check that the instructions for your target are correctly marked.
llvm-svn: 44012
llvm-svn: 43998
This makes DwarfRegNum accept a list of numbers instead.
Added three different "flavours", but only lightly tested on x86-32/linux.
Please check other subtargets if possible.
llvm-svn: 43997
dealing with types whose size & alignment are
different on different subtargets. Use it for x86 f80.
llvm-svn: 43988
llvm-svn: 43978
llvm-svn: 43955
llvm-svn: 43954
llvm-svn: 43950
Then:
call "L1$pb"
"L1$pb":
popl %eax
...
LBB1_1: # entry
imull $4, %ecx, %ecx
leal LJTI1_0-"L1$pb"(%eax), %edx
addl LJTI1_0-"L1$pb"(%ecx,%eax), %edx
jmpl *%edx
.align 2
.set L1_0_set_3,LBB1_3-LJTI1_0
.set L1_0_set_2,LBB1_2-LJTI1_0
.set L1_0_set_5,LBB1_5-LJTI1_0
.set L1_0_set_4,LBB1_4-LJTI1_0
LJTI1_0:
.long L1_0_set_3
.long L1_0_set_2
Now:
call "L1$pb"
"L1$pb":
popl %eax
...
LBB1_1: # entry
addl LJTI1_0-"L1$pb"(%eax,%ecx,4), %eax
jmpl *%eax
.align 2
.set L1_0_set_3,LBB1_3-"L1$pb"
.set L1_0_set_2,LBB1_2-"L1$pb"
.set L1_0_set_5,LBB1_5-"L1$pb"
.set L1_0_set_4,LBB1_4-"L1$pb"
LJTI1_0:
.long L1_0_set_3
.long L1_0_set_2
llvm-svn: 43924
llvm-svn: 43918
llvm-svn: 43892
Would somebody not on Darwin please make sure this
doesn't break anything. Exception handling failures
would be the most likely symptom.
llvm-svn: 43844
Much improvement in exception handling.
llvm-svn: 43794
llvm-svn: 43749
Thanks for the suggestions, Bill :-)
llvm-svn: 43742
static __thread struct {
int a;
int b;
} teste = {0, 0};
llvm-svn: 43722
less than 16. This is a temporary solution until dynamic stack alignment is
implemented.
llvm-svn: 43703
Removed all macro code for PIC (goodbye "la").
Support was tested with the shootout benchmarks.
llvm-svn: 43697
should only affect x86 when using long double. Now
12/16 bytes are output for long double globals (the
exact amount depends on the alignment). This brings
globals in line with the rest of LLVM: the space
reserved for an object is now always the ABI size.
One tricky point is that only 10 bytes should be
output for long double if it is a field in a packed
struct, which is the reason for the additional
argument to EmitGlobalConstant.
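For illustration (hedged; the variable name is made up), a C global like this one is what the change affects:

/* On x86 this long double global is now emitted with its full ABI size,
   12 or 16 bytes depending on the alignment of long double; only a field
   of a packed struct is still emitted as the bare 10 data bytes. */
long double g = 0.0L;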
llvm-svn: 43688
Evan, please review this.
llvm-svn: 43680
llvm-svn: 43676
regs on x86-64.
llvm-svn: 43669
llvm-svn: 43646
llvm-svn: 43642
llvm-svn: 43630
The meaning of getTypeSize was not clear - clarifying it is important
now that we have x86 long double and arbitrary precision integers.
The issue with long double is that it requires 80 bits, and this is
not a multiple of its alignment. This gives a primitive type for
which getTypeSize differed from getABITypeSize. For arbitrary precision
integers it is even worse: there is the minimum number of bits needed to
hold the type (eg: 36 for an i36), the maximum number of bits that will
be overwritten when storing the type (40 bits for i36) and the ABI size
(i.e. the storage size rounded up to a multiple of the alignment; 64 bits
for i36).
This patch removes getTypeSize (not really - it is still there but
deprecated to allow for a gradual transition). Instead there is:
(1) getTypeSizeInBits - a number of bits that suffices to hold all
values of the type. For a primitive type, this is the minimum number
of bits. For an i36 this is 36 bits. For x86 long double it is 80.
This corresponds to gcc's TYPE_PRECISION.
(2) getTypeStoreSizeInBits - the maximum number of bits that is
written when storing the type (or read when reading it). For an
i36 this is 40 bits, for an x86 long double it is 80 bits. This
is the size alias analysis is interested in (getTypeStoreSize
returns the number of bytes). There doesn't seem to be anything
corresponding to this in gcc.
(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded
up to a multiple of the alignment. For an i36 this is 64, for an
x86 long double this is 96 or 128 depending on the OS. This is the
spacing between consecutive elements when you form an array out of
this type (getABITypeSize returns the number of bytes). This is
TYPE_SIZE in gcc.
Since successive elements in a SequentialType (arrays, pointers
and vectors) need to be aligned, the spacing between them will be
given by getABITypeSize. This means that the size of an array
is the length times getABITypeSize. It also means that GEP
computations need to use getABITypeSize when computing offsets.
Furthermore, if an alloca allocates several elements at once then
these too need to be aligned, so the size of the alloca has to be
the number of elements multiplied by getABITypeSize. Logically
speaking this doesn't have to be the case when allocating just
one element, but it is simpler to also use getABITypeSize in this
case. So alloca's and mallocs should use getABITypeSize. Finally,
since gcc's only notion of size is that given by getABITypeSize, if
you want to output assembler etc the same as gcc then getABITypeSize
is the size you want.
Since a store will overwrite no more than getTypeStoreSize bytes,
and a read will read no more than that many bytes, this is the
notion of size appropriate for alias analysis calculations.
In this patch I have corrected all type size uses except some of
those in ScalarReplAggregates, lib/Codegen, lib/Target (the hard
cases). I will get around to auditing these too at some point,
but I could do with some help.
Finally, I made one change which I think wise but others might
consider pointless and suboptimal: in an unpacked struct the
amount of space allocated for a field is now given by the ABI
size rather than getTypeStoreSize. I did this because every
other place that reserves memory for a type (eg: alloca) now
uses getABITypeSize, and I didn't want to make an exception
for unpacked structs, i.e. I did it to make things more uniform.
This only affects structs containing long doubles and arbitrary
precision integers. If someone wants to pack these types more
tightly they can always use a packed struct.
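A hedged C sketch of the arithmetic relating the three sizes (the helper names are illustrative, not the LLVM API, and the alignments are assumptions):

#include <stdio.h>

/* Round 'bits' up to the next multiple of 'multiple'. */
static unsigned round_up(unsigned bits, unsigned multiple) {
    return ((bits + multiple - 1) / multiple) * multiple;
}

/* Maximum bits touched when storing the value: whole bytes. */
static unsigned store_size_in_bits(unsigned value_bits) {
    return round_up(value_bits, 8);
}

/* Store size rounded up to the ABI alignment: the spacing of array elements. */
static unsigned abi_size_in_bits(unsigned value_bits, unsigned abi_align_bytes) {
    return round_up(store_size_in_bits(value_bits), abi_align_bytes * 8);
}

int main(void) {
    /* i36, assuming an 8-byte ABI alignment: 36 / 40 / 64 bits. */
    printf("i36: %u %u %u\n", 36u, store_size_in_bits(36), abi_size_in_bits(36, 8));
    /* x86 f80: 80 / 80 bits, and 96 or 128 depending on the OS's alignment. */
    printf("f80: %u %u %u or %u\n", 80u, store_size_in_bits(80),
           abi_size_in_bits(80, 4), abi_size_in_bits(80, 16));
    return 0;
}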
llvm-svn: 43620
llvm-svn: 43609
getMaxInlineSizeThreshold
and by restructuring the X86 version.
Now I just have to move this to a common place :-)
llvm-svn: 43554
Now both subtargets define getMaxInlineSizeThreshold and the expansion uses it.
This should not change generated code.
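A hedged sketch of the decision this hook drives (only the name getMaxInlineSizeThreshold comes from the commit; the rest is illustrative):

/* Expand a memcpy/memset inline only when its size is a known constant no
   larger than the subtarget's threshold; otherwise emit a library call. */
int should_expand_inline(int size_is_constant, unsigned long size,
                         unsigned long max_inline_size_threshold) {
    return size_is_constant && size <= max_inline_size_threshold;
}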
llvm-svn: 43552
llvm-svn: 43535
CVTTPD2PI, CVTTPS2PI, CVTPI2PD, CVTPI2PS.
llvm-svn: 43523
llvm-svn: 43500
llvm-svn: 43488
transformation. Previously, it was restricted to loads with a single use. Now
the restriction is loosened by allowing setcc uses to be
"extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq).
llvm-svn: 43465