author     Sean Christopherson <sean.j.christopherson@intel.com>   2019-01-23 14:39:24 -0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>         2019-03-23 20:11:40 +0100
commit     f88f29f81d5921b03f60bb9d707b2a377e0875c4
tree       93effdac697c275fd0c2fbac6ebd655c9f9a2453 /arch/x86
parent     64d259a7064214e8b6cc4f807a71987ee73235d2
KVM: nVMX: Apply addr size mask to effective address for VMX instructions
commit 8570f9e881e3fde98801bb3a47eef84dd934d405 upstream.
The address size of an instruction affects the effective address, not
the virtual/linear address. The final address may still be truncated,
e.g. to 32-bits outside of long mode, but that happens irrespective of
the address size, e.g. a 32-bit address size can yield a 64-bit virtual
address when using FS/GS with a non-zero base.
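For illustration only, a minimal standalone C sketch of the address calculation described above; the names (seg_base, off, addr_size_bits, long_mode) are hypothetical and this is not the kernel code itself:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustration: the address-size prefix truncates the *effective* address
 * (base + index*scale + displacement); the segment base is added afterwards
 * to form the linear address.  In 64-bit mode the linear address is not
 * truncated again, so a 32-bit address size plus a non-zero FS/GS base can
 * still produce a 64-bit linear address.
 */
static uint64_t linear_address(uint64_t seg_base, uint64_t off,
                               int addr_size_bits, int long_mode)
{
        /* Truncate the effective address to the instruction's address size. */
        if (addr_size_bits == 32)
                off &= 0xffffffffull;
        else if (addr_size_bits == 16)
                off &= 0xffffull;

        /* Outside long mode the linear address itself is 32 bits wide. */
        if (long_mode)
                return seg_base + off;
        return (seg_base + off) & 0xffffffffull;
}

int main(void)
{
        /* 32-bit address size + non-zero FS base => 64-bit linear address. */
        printf("%#llx\n",
               (unsigned long long)linear_address(0x100000000ull, 0x1000, 32, 1));
        return 0;
}

Running the sketch prints 0x100001000, matching the FS/GS example above.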
Fixes: 064aea774768 ("KVM: nVMX: Decoding memory operands of VMX instructions")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'arch/x86')
-rw-r--r--   arch/x86/kvm/vmx/nested.c   25
1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 30c3da69d14f..e915c970e657 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4044,20 +4044,41 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 	if (index_is_valid)
 		off += kvm_register_read(vcpu, index_reg)<<scaling;
 	vmx_get_segment(vcpu, &s, seg_reg);
-	*ret = s.base + off;
 
+	/*
+	 * The effective address, i.e. @off, of a memory operand is truncated
+	 * based on the address size of the instruction.  Note that this is
+	 * the *effective address*, i.e. the address prior to accounting for
+	 * the segment's base.
+	 */
 	if (addr_size == 1) /* 32 bit */
-		*ret &= 0xffffffff;
+		off &= 0xffffffff;
+	else if (addr_size == 0) /* 16 bit */
+		off &= 0xffff;
 
 	/* Checks for #GP/#SS exceptions. */
 	exn = false;
 	if (is_long_mode(vcpu)) {
+		/*
+		 * The virtual/linear address is never truncated in 64-bit
+		 * mode, e.g. a 32-bit address size can yield a 64-bit virtual
+		 * address when using FS/GS with a non-zero base.
+		 */
+		*ret = s.base + off;
+
 		/* Long mode: #GP(0)/#SS(0) if the memory address is in a
 		 * non-canonical form. This is the only check on the memory
 		 * destination for long mode!
 		 */
 		exn = is_noncanonical_address(*ret, vcpu);
 	} else if (is_protmode(vcpu)) {
+		/*
+		 * When not in long mode, the virtual/linear address is
+		 * unconditionally truncated to 32 bits regardless of the
+		 * address size.
+		 */
+		*ret = (s.base + off) & 0xffffffff;
+
 		/* Protected mode: apply checks for segment validity in the
 		 * following order:
 		 *  - segment type check (#GP(0) may be thrown)
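To illustrate the change concretely (with made-up numbers): assume long mode, fs.base = 0x100000000, a 32-bit address size (addr_size == 1), and an effective address off = 0x1000. The old code computed *ret = s.base + off = 0x100001000 and then masked the result down to 0x1000, corrupting the linear address; with the patch, only off is masked (it already fits in 32 bits) and the linear address stays 0x100001000, subject only to the canonical-address check.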