author:    Ben Gardon <bgardon@google.com>                   2019-03-12 11:45:58 -0700
committer: Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2019-03-23 20:11:34 +0100
commit:    98ab3b877400c2cbd025c112fb3d2b759f067193
tree:      69c0add8903cb68e9e25132f9ccf1e9d736ec992 /arch/x86/kvm
parent:    bf5615991a915bfce37a6abdd8419325a4ac2f9a
Revert "KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()"
commit 92da008fa21034c369cdb8ca2b629fe5c196826b upstream.
This reverts commit 71883a62fcd6c70639fa12cda733378b4d997409.
The reverted commit optimized kvm_zap_gfn_range to use gfn-limited TLB
flushes when they are available. With these limited flushes in use,
kvm_zap_gfn_range passes lock_flush_tlb=false to slot_handle_level_range,
which opens a race whenever that function drops mmu_lock to call
cond_resched: PTEs can be zapped without the corresponding TLB entries
having been flushed before the lock is released. An example of this race:
CPU 0                       CPU 1                       CPU 3
// zap_direct_gfn_range
mmu_lock()
// *ptep == pte_1
*ptep = 0
if (lock_flush_tlb)
        flush_tlbs()
mmu_unlock()
                            // In invalidate range
                            // MMU notifier
                            mmu_lock()
                            if (pte != 0)
                                    *ptep = 0
                                    flush = true
                            if (flush)
                                    flush_remote_tlbs()
                            mmu_unlock()
                            return
                            // Host MM reallocates
                            // page previously
                            // backing guest memory.
                                                        // Guest accesses
                                                        // invalid page
                                                        // through pte_1
                                                        // in its TLB!!
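In code terms, the window opens inside slot_handle_level_range(). The
sketch below is a condensed paraphrase of that function's rescheduling
logic as it looked in this era of arch/x86/kvm/mmu.c, not a verbatim
copy; identifiers follow the kernel source, but details are simplified:

/*
 * Simplified paraphrase of the rescheduling pattern in
 * slot_handle_level_range() (arch/x86/kvm/mmu.c of this era).
 * Condensed for illustration; not the verbatim kernel source.
 */
static bool
slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
			slot_level_handler fn, int start_level, int end_level,
			gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb)
{
	struct slot_rmap_walk_iterator iterator;
	bool flush = false;

	for_each_slot_rmap_range(memslot, start_level, end_level,
				 start_gfn, end_gfn, &iterator) {
		if (iterator.rmap)
			flush |= fn(kvm, iterator.rmap);	/* zaps PTEs */

		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
			if (flush && lock_flush_tlb) {
				kvm_flush_remote_tlbs(kvm);
				flush = false;
			}
			/*
			 * With lock_flush_tlb == false, mmu_lock is dropped
			 * here with PTEs already zapped but the TLB not yet
			 * flushed: the window the diagram above exploits.
			 */
			cond_resched_lock(&kvm->mmu_lock);
		}
	}

	if (flush && lock_flush_tlb)
		kvm_flush_remote_tlbs(kvm);

	return flush;
}

Reverting to lock_flush_tlb=true means every such unlock is preceded by
a flush, so no other agent can observe zapped-but-still-mapped
translations.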
Tested: Ran all kvm-unit-tests on an Intel Haswell machine with and
without this patch. The patch introduced no new failures.
Signed-off-by: Ben Gardon <bgardon@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'arch/x86/kvm')
-rw-r--r--  arch/x86/kvm/mmu.c | 16 +++-------------
1 file changed, 3 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f2d1d230d5b8..631d74e864d6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5635,13 +5635,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
-	bool flush_tlb = true;
-	bool flush = false;
 	int i;
 
-	if (kvm_available_flush_tlb_with_range())
-		flush_tlb = false;
-
 	spin_lock(&kvm->mmu_lock);
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		slots = __kvm_memslots(kvm, i);
@@ -5653,17 +5648,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 			if (start >= end)
 				continue;
 
-			flush |= slot_handle_level_range(kvm, memslot,
-					kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
-					PT_MAX_HUGEPAGE_LEVEL, start,
-					end - 1, flush_tlb);
+			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
+					PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
+					start, end - 1, true);
 		}
 	}
 
-	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
-				gfn_end - gfn_start + 1);
-
 	spin_unlock(&kvm->mmu_lock);
 }
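For reference, kvm_zap_gfn_range() after this revert looks roughly as
follows. The hunks above only show the changed regions; the middle of
the memslot loop (the start/end clamping) is not visible in the diff
context and is filled in here from the surrounding mmu.c source of this
era, so treat it as illustrative:

/*
 * kvm_zap_gfn_range() after this revert, reconstructed from the hunks
 * above. The start/end clamping inside the memslot loop is not shown
 * in the diff context and is filled in from the surrounding source.
 */
void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
{
	struct kvm_memslots *slots;
	struct kvm_memory_slot *memslot;
	int i;

	spin_lock(&kvm->mmu_lock);
	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
		slots = __kvm_memslots(kvm, i);
		kvm_for_each_memslot(memslot, slots) {
			gfn_t start, end;

			start = max(gfn_start, memslot->base_gfn);
			end = min(gfn_end, memslot->base_gfn + memslot->npages);
			if (start >= end)
				continue;

			/*
			 * lock_flush_tlb is true again, so any reschedule
			 * inside slot_handle_level_range() flushes the TLB
			 * before dropping mmu_lock.
			 */
			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
					PT_PAGE_TABLE_LEVEL,
					PT_MAX_HUGEPAGE_LEVEL,
					start, end - 1, true);
		}
	}

	spin_unlock(&kvm->mmu_lock);
}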