author:    Reid Kleckner <rnk@google.com>  2018-06-21 21:55:08 +0000
committer: Reid Kleckner <rnk@google.com>  2018-06-21 21:55:08 +0000
commit:    247fe6aeab964e31cb03beba3cf3f6a2cb972c31
tree:      8dd9dfd25d4b8bd3e969dfb8030c2260e4f57716 /llvm/utils/UpdateTestChecks/asm.py
parent:    307c2eb94fcf943d30346c5a24a44cbdf00ac024
[X86] Implement more of x86-64 large and medium PIC code models
Summary:
The large code model allows code and data segments to exceed 2GB, which
means that some symbol references may need an offset that cannot be
encoded as a 32-bit RIP-relative displacement. The large PIC model even
relaxes the assumption that the GOT itself is within 2GB of all code.
Therefore, we need a special code sequence to materialize the GOT base:
  .LtmpN:
    leaq    .LtmpN(%rip), %rbx
    movabsq $_GLOBAL_OFFSET_TABLE_-.LtmpN, %rax  # Scratch
    addq    %rax, %rbx                           # GOT base reg
From there, non-local references go through the GOT base register
instead of using PC-relative loads. Local references typically use
GOTOFF symbols, like this:
    movq extern_gv@GOT(%rbx), %rax
    movq local_gv@GOTOFF(%rbx), %rax
All calls end up being indirect:
    movabsq $local_fn@GOTOFF, %rax
    addq    %rbx, %rax
    callq   *%rax
The medium code model retains the assumption that the code segment is
less than 2GB, so calls are once again direct, and RIP-relative
addressing can be used to access the GOT. Materializing the GOT base is easy:
    leaq _GLOBAL_OFFSET_TABLE_(%rip), %rbx  # GOT base reg
DSO-local data accesses will use it:
    movq local_gv@GOTOFF(%rbx), %rax
Non-local data accesses will use RIP-relative addressing, which means we
may not always need to materialize the GOT base:
    movq extern_gv@GOTPCREL(%rip), %rax
Direct calls are essentially the same as in the small code model: they
use direct, PC-relative addressing, and calls to non-local functions go
through the PLT.
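For example, reusing local_fn from above (extern_fn is a hypothetical
non-local function, named to parallel extern_gv; the @PLT suffix is the
standard ELF assembler syntax for a call routed through the PLT):
    callq local_fn
    callq extern_fn@PLT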
This patch adds reasonably comprehensive testing of LEA, but many
interesting folding opportunities remain unimplemented.
Reviewers: chandlerc, echristo
Subscribers: hiraditya, llvm-commits
Differential Revision: https://reviews.llvm.org/D47211
llvm-svn: 335297
Diffstat (limited to 'llvm/utils/UpdateTestChecks/asm.py')
 llvm/utils/UpdateTestChecks/asm.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/llvm/utils/UpdateTestChecks/asm.py b/llvm/utils/UpdateTestChecks/asm.py
index 6bdaaa46ca5..0ece3e131f5 100644
--- a/llvm/utils/UpdateTestChecks/asm.py
+++ b/llvm/utils/UpdateTestChecks/asm.py
@@ -110,8 +110,9 @@ def scrub_asm_x86(asm, args):
   asm = SCRUB_X86_SPILL_RELOAD_RE.sub(r'{{[-0-9]+}}(%\1{{[sb]}}p)\2', asm)
   # Generically match the stack offset of a memory operand.
   asm = SCRUB_X86_SP_RE.sub(r'{{[0-9]+}}(%\1)', asm)
-  # Generically match a RIP-relative memory operand.
-  asm = SCRUB_X86_RIP_RE.sub(r'{{.*}}(%rip)', asm)
+  if getattr(args, 'x86_scrub_rip', False):
+    # Generically match a RIP-relative memory operand.
+    asm = SCRUB_X86_RIP_RE.sub(r'{{.*}}(%rip)', asm)
   # Generically match a LCP symbol.
   asm = SCRUB_X86_LCP_RE.sub(r'{{\.LCPI.*}}', asm)
   if getattr(args, 'extra_scrub', False):
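To make the effect of the gate concrete, here is a minimal standalone
sketch of the substitution (SCRUB_X86_RIP_RE below is a rough stand-in
for the real pattern in asm.py, and scrub_rip is a hypothetical helper,
not part of the script):

    import re

    # Rough stand-in for asm.py's SCRUB_X86_RIP_RE; the real pattern may differ.
    SCRUB_X86_RIP_RE = re.compile(r'[-.\w@]+\(%rip\)')

    def scrub_rip(asm, x86_scrub_rip):
        # When scrubbing is enabled, replace each RIP-relative memory operand
        # with a generic FileCheck pattern so tests do not pin symbol names.
        if x86_scrub_rip:
            asm = SCRUB_X86_RIP_RE.sub(r'{{.*}}(%rip)', asm)
        return asm

    print(scrub_rip('movq extern_gv@GOTPCREL(%rip), %rax', x86_scrub_rip=True))
    # -> movq {{.*}}(%rip), %rax
    print(scrub_rip('movq extern_gv@GOTPCREL(%rip), %rax', x86_scrub_rip=False))
    # -> movq extern_gv@GOTPCREL(%rip), %rax

With scrubbing disabled, generated tests can check the exact @GOT,
@GOTOFF, and @GOTPCREL operands shown above, which appears to be the
point of making the scrub conditional.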