author    | Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> | 2010-12-01 15:13:34 -0800
committer | Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> | 2011-05-20 14:14:31 -0700
commit    | a99ac5e8619c27dbb8e7fb5a4e0ca8c8aa214909 (patch)
tree      | 3ebc55308915871b7589e9a430b7eddc5d1df26e /arch
parent    | 331468b11b94428a9eb2ed8b3240c17612533a99 (diff)
download  | blackbird-op-linux-a99ac5e8619c27dbb8e7fb5a4e0ca8c8aa214909.tar.gz
          | blackbird-op-linux-a99ac5e8619c27dbb8e7fb5a4e0ca8c8aa214909.zip
xen: use mmu_update for xen_set_pte_at()
In principle update_va_mapping is a good match for set_pte_at, since
it gets the address being mapped, which allows Xen to use its linear
pagetable mapping.
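For context, the update_va_mapping-based path that this patch removes looks roughly like the condensed, commented sketch below (see the deleted lines in the diff further down; the stats counters and the goto are omitted here):

void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
		    pte_t *ptep, pte_t pteval)
{
	/* update_va_mapping is keyed on the virtual address, so it can only
	 * succeed if 'addr' is reachable from the currently loaded pagetable. */
	if (mm == current->mm || mm == &init_mm) {
		if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
			struct multicall_space mcs = xen_mc_entry(0);

			/* queue the hypercall for batched submission */
			MULTI_update_va_mapping(mcs.mc, addr, pteval, 0);
			xen_mc_issue(PARAVIRT_LAZY_MMU);
			return;
		}
		/* not in lazy mode: issue the hypercall immediately */
		if (HYPERVISOR_update_va_mapping(addr, pteval, 0) == 0)
			return;
	}
	/* fall back to a plain pte write */
	xen_set_pte(ptep, pteval);
}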
However, that assumes the pmd covering the address is present in the
current pagetable, which may not be true for a user address space
because the kernel pmd is not shared (at least on 32-bit guests).
Normally the kernel transparently syncs a missing part of the
pagetable from the init_mm pagetable via a fault, but that mechanism
does not help when the unmapped address is handed straight to Xen.
And while the linear pagetable mapping is very useful for 32-bit Xen
(as it avoids an explicit domain mapping), 32-bit Xen is deprecated.
64-bit Xen has all memory mapped all the time, so the linear pagetable
mapping makes no real difference there.
The upshot is that we should use mmu_update, since it can operate on
non-current pagetables or detached pagetables.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
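To illustrate why mmu_update does not care which pagetable is loaded, the sketch below shows an unbatched equivalent of what the patch queues, using the plain HYPERVISOR_mmu_update hypercall instead of the multicall batching in the actual diff; ptep and pteval are the pte pointer and value passed to xen_set_pte_at:

	/* Illustrative sketch only -- the patch batches this request via
	 * xen_mc_batch()/xen_extend_mmu_update()/xen_mc_issue(). */
	struct mmu_update u;

	/* machine address of the pte slot itself, plus the request type */
	u.ptr = virt_to_machine(ptep).maddr | MMU_NORMAL_PT_UPDATE;
	/* new pte value in machine-frame form */
	u.val = pte_val_ma(pteval);

	/* one request, no success count needed, applied to this domain */
	if (HYPERVISOR_mmu_update(&u, 1, NULL, DOMID_SELF) < 0)
		BUG();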
Diffstat (limited to 'arch')
-rw-r--r-- | arch/x86/xen/mmu.c | 26
1 file changed, 11 insertions, 15 deletions
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 4f5e0dc5f6e5..fb3e92e077e2 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -342,22 +342,18 @@ void xen_set_pte_at(struct mm_struct *mm, unsigned long addr,
 	ADD_STATS(set_pte_at_current, mm == current->mm);
 	ADD_STATS(set_pte_at_kernel, mm == &init_mm);
 
-	if (mm == current->mm || mm == &init_mm) {
-		if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
-			struct multicall_space mcs;
-			mcs = xen_mc_entry(0);
-
-			MULTI_update_va_mapping(mcs.mc, addr, pteval, 0);
-			ADD_STATS(set_pte_at_batched, 1);
-			xen_mc_issue(PARAVIRT_LAZY_MMU);
-			goto out;
-		} else
-			if (HYPERVISOR_update_va_mapping(addr, pteval, 0) == 0)
-				goto out;
-	}
-	xen_set_pte(ptep, pteval);
+	if(paravirt_get_lazy_mode() == PARAVIRT_LAZY_MMU) {
+		struct mmu_update u;
+
+		xen_mc_batch();
+
+		u.ptr = virt_to_machine(ptep).maddr | MMU_NORMAL_PT_UPDATE;
+		u.val = pte_val_ma(pteval);
+		xen_extend_mmu_update(&u);
 
-out:	return;
+		xen_mc_issue(PARAVIRT_LAZY_MMU);
+	} else
+		native_set_pte(ptep, pteval);
 }
 
 pte_t xen_ptep_modify_prot_start(struct mm_struct *mm,