| author | Reid Kleckner <rnk@google.com> | 2018-06-25 18:16:27 +0000 |
|---|---|---|
| committer | Reid Kleckner <rnk@google.com> | 2018-06-25 18:16:27 +0000 |
| commit | 88fee5fdbc2a4769f1f9428e4d4017815aef1849 (patch) | |
| tree | 6ad5e61da7fd03725892c5a429d610b025a2bfd9 /llvm/lib/CodeGen/MachineFrameInfo.cpp | |
| parent | b8128476470836c8aeecdf4c2eea25077c36eea1 (diff) | |
| download | bcm5719-llvm-88fee5fdbc2a4769f1f9428e4d4017815aef1849.tar.gz bcm5719-llvm-88fee5fdbc2a4769f1f9428e4d4017815aef1849.zip | |
Re-land r335297 "[X86] Implement more of x86-64 large and medium PIC code models"
The large code model allows code and data segments to exceed 2GB, which
means that some symbol references may require a displacement that cannot
be encoded as a displacement from RIP. The large PIC model even relaxes
the assumption that the GOT itself is within 2GB of all code. Therefore,
we need a special code sequence to materialize the GOT base:
  .LtmpN:
    leaq .LtmpN(%rip), %rbx
    movabsq $_GLOBAL_OFFSET_TABLE_-.LtmpN, %rax # Scratch
    addq %rax, %rbx # GOT base reg
From there, non-local references go through the GOT base register
instead of using PC-relative loads. Local references typically use
GOTOFF symbols, like this:
    movq extern_gv@GOT(%rbx), %rax
    movq local_gv@GOTOFF(%rbx), %rax
All calls end up being indirect:
    movabsq $local_fn@GOTOFF, %rax
    addq %rbx, %rax
    callq *%rax
The medium code model retains the assumption that the code segment is
less than 2GB, so calls are once again direct, and RIP-relative loads
can be used to access the GOT. Materializing the GOT is easy:
    leaq _GLOBAL_OFFSET_TABLE_(%rip), %rbx # GOT base reg
DSO-local data accesses will use it:
    movq local_gv@GOTOFF(%rbx), %rax
Non-local data accesses will use RIP-relative addressing, which means we
may not always need to materialize the GOT base:
    movq extern_gv@GOTPCREL(%rip), %rax
Direct calls are basically the same as in the small code model: they
use direct, PC-relative addressing, and the PLT is used for calls to
non-local functions.
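As a rough sketch of what that looks like (extern_fn is a placeholder
name, not taken from the patch), medium-model calls come out roughly
as:
    callq local_fn      # direct, PC-relative call to a DSO-local function
    callq extern_fn@PLT # non-local call routed through the PLT stub
The PLT stub performs the GOT-based indirection, so the call site
itself stays PC-relative.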
This patch adds reasonably comprehensive testing of LEA, but there are
lots of interesting folding opportunities that are unimplemented.
I restricted the MCJIT/eh-lg-pic.ll test to Linux, since the large PIC
code model is not implemented for MachO yet.
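As a rough way to reproduce the sequences above (these command lines
are illustrative, not taken from the patch, and test.ll is a
placeholder input module), the new lowering can be exercised by
combining llc's PIC relocation model with the large or medium code
model:
    llc -mtriple=x86_64-unknown-linux-gnu -relocation-model=pic -code-model=large -o - test.ll
    llc -mtriple=x86_64-unknown-linux-gnu -relocation-model=pic -code-model=medium -o - test.ll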
Differential Revision: https://reviews.llvm.org/D47211
llvm-svn: 335508
Diffstat (limited to 'llvm/lib/CodeGen/MachineFrameInfo.cpp')
0 files changed, 0 insertions, 0 deletions

