path: root/include/asm-x86
Commit message · Author · Date · Files · Lines
* x86: use k modifier for 4-byte access. · Glauber Costa · 2008-07-09 · 1 · -1/+1
  Do it in a separate patch for bisectability. Goal is to have put_user_size integrated. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
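  For readers unfamiliar with GCC's x86 operand modifiers: "k" forces the 32-bit spelling of a register operand. A tiny, hypothetical illustration of the mechanism (the helper name is made up; this is not the kernel's put_user code):
      #include <stdint.h>

      /* "%k1" prints the 32-bit name of operand 1 (e.g. %eax rather than
       * %rax), so the same asm template emits a 4-byte movl on both i386
       * and x86_64. */
      static inline void store_u32(uint32_t val, uint32_t *dst)
      {
              asm volatile("movl %k1, %0" : "=m" (*dst) : "r" (val));
      }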
* x86: move __addr_ok to uaccess.h. · Glauber Costa · 2008-07-09 · 3 · -7/+4
  Take it out of uaccess_32.h. Since there seem to be no users of the x86_64 version, we simply pick the i386 one. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
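  A sketch of what the i386 flavour of this check looks like (from memory, so treat the exact spelling as approximate):
      /* Approximate shape of the i386 __addr_ok(): the address is simply
       * compared against the current thread's address limit. */
      #define __addr_ok(addr) \
              ((unsigned long __force)(addr) < \
               (current_thread_info()->addr_limit.seg))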
* x86: merge getuser. · Glauber Costa · 2008-07-09 · 3 · -72/+55
  Merge the versions of getuser from uaccess_32.h and uaccess_64.h into uaccess.h. There is a part which is 64-bit only (for now), and for that we use a __get_user_8 macro. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
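  The shape of a merged get_user() is roughly a sizeof() dispatch, with the 8-byte case routed through the new __get_user_8 helper; a condensed sketch, not the verbatim header:
      /* Condensed sketch of the merged get_user() dispatch. */
      #define get_user(x, ptr) \
      ({ \
              int __ret_gu; \
              unsigned long __val_gu; \
              switch (sizeof(*(ptr))) { \
              case 1: __get_user_x(1, __ret_gu, __val_gu, ptr); break; \
              case 2: __get_user_x(2, __ret_gu, __val_gu, ptr); break; \
              case 4: __get_user_x(4, __ret_gu, __val_gu, ptr); break; \
              case 8: __get_user_8(__ret_gu, __val_gu, ptr);    break; \
              default: /* deliberate link error for odd sizes */ \
                      __get_user_x(X, __ret_gu, __val_gu, ptr); break; \
              } \
              (x) = (__typeof__(*(ptr)))__val_gu; \
              __ret_gu; \
      })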
* x86: merge common parts of uaccess. · Glauber Costa · 2008-07-09 · 3 · -193/+124
  Common parts of uaccess_32.h and uaccess_64.h are put in uaccess.h. The bits of uaccess_32.h and uaccess_64.h that move into this file are equal except for comment and whitespace differences. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: use something common for both architectures. · Glauber Costa · 2008-07-09 · 2 · -2/+2
  Using an explicit hex constant (0xFFFFFFUL) introduces an unnecessary difference between i386 and x86_64 because of the different size of long. Use -1UL instead. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
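  The point is that -1UL is all-ones at whatever width long has, so one definition serves both builds. A hedged example of the kind of definition affected, assuming the usual MAKE_MM_SEG helper:
      #define MAKE_MM_SEG(s)  ((mm_segment_t) { (s) })

      /* -1UL is 0xffffffff on i386 and 0xffffffffffffffff on x86_64,
       * so one line serves both, unlike a hard-coded 32-bit constant. */
      #define KERNEL_DS       MAKE_MM_SEG(-1UL)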
* x86: use long instead of int. · Glauber Costa · 2008-07-09 · 1 · -1/+1
  Do not refer to the processor word size with int, as that won't work on x86_64. Use long instead. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: introduce likely in macro. · Glauber Costa · 2008-07-09 · 1 · -1/+1
  Put the likely hint in access_ok. Kept as a separate patch just for bisectability. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
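  A minimal sketch of the hint being added, assuming the __range_not_ok naming used elsewhere in this series:
      /* The fast path is "the range is fine", so tell the compiler so. */
      #define access_ok(type, addr, size) \
              (likely(__range_not_ok(addr, size) == 0))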
* x86: change asm constraint. · Glauber Costa · 2008-07-09 · 1 · -1/+1
  Our integration efforts broke a build in which this function is used on i386. The reason is that "g" can put the operand in an imm32, which according to The Book (tm) is invalid as the second operand. This is actually a bug in x86_64 too, since the x86_64 instruction set reference does not list it as valid either. We probably did not trigger it before due to the number of registers available on 64-bit platforms, but that's just my guess. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: commonize __range_not_ok. · Glauber Costa · 2008-07-09 · 2 · -5/+4
  For i386, __range_not_ok is a better name than __range_ok, since it returns 0 when the range is in fact okay. Other than that, neither version needs the word-size specifiers, so we remove them. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
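  One plausible shape for such a carry-chain range check; note the bare add/sbb/cmp mnemonics with no l/q size suffixes, which is exactly what lets one definition serve both widths (illustrative, not guaranteed verbatim):
      #define __range_not_ok(addr, size) \
      ({ \
              unsigned long flag, roksum; \
              asm("add %3,%1 ; sbb %0,%0 ; cmp %1,%4 ; sbb $0,%0" \
                  : "=&r" (flag), "=r" (roksum) \
                  : "1" (addr), "g" ((long)(size)), \
                    "rm" (current_thread_info()->addr_limit.seg)); \
              flag; \
      })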
* x86: use macros from asm.h. · Glauber Costa · 2008-07-09 · 1 · -0/+2
  In putuser_32.S and putuser_64.S, replace things like .quad, .long, and explicit references to [r|e]ax with the appropriate macros from asm/asm.h. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
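  The asm.h helpers in question select a 32- or 64-bit spelling at build time; a simplified sketch (the real header goes through an extra __ASM_FORM layer so the same macros work in both C and .S files):
      /* Simplified: pick the i386 or x86_64 spelling of a directive. */
      #ifdef CONFIG_X86_32
      # define __ASM_SEL(a, b)        a
      #else
      # define __ASM_SEL(a, b)        b
      #endif

      #define _ASM_ALIGN      __ASM_SEL(.balign 4, .balign 8)
      #define _ASM_PTR        __ASM_SEL(.long, .quad)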
* x86: pass argument to putuser_64 functions in ax register. · Glauber Costa · 2008-07-09 · 1 · -1/+1
  This is consistent with i386 usage. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: clobber rbx in putuser_64.S. · Glauber Costa · 2008-07-09 · 1 · -1/+1
  Instead of clobbering r8, clobber rbx, which is the i386 way. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: use put_user_x instead of all variants. · Glauber Costa · 2008-07-09 · 1 · -18/+7
  Follow the pattern and define a single put_user_x, instead of defining macros for all available sizes. The exception is put_user_8, since the "A" constraint does not give us enough power to specify which register (a or d) to use in the common 32-bit case. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
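  A sketch of the single parameterized macro (close to, but not necessarily identical to, the header): the size is pasted into the helper's name, the value travels in ax and the pointer in cx.
      #define __put_user_x(size, x, ptr, __ret_pu) \
              asm volatile("call __put_user_" #size \
                           : "=a" (__ret_pu) \
                           : "0" ((typeof(*(ptr)))(x)), "c" (ptr) \
                           : "ebx")
  The 8-byte case on 32-bit still needs its own variant because the value occupies the edx:eax pair, which is what the "A" constraint expresses.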
* x86: don't save ebx in putuser_32.S. · Glauber Costa · 2008-07-09 · 1 · -5/+5
  Clobber it in the inline asm macros, and let the compiler do this for us. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: merge getuser asm functions. · Glauber Costa · 2008-07-09 · 1 · -1/+3
  getuser_32.S and getuser_64.S are merged into getuser.S. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: introduce __ASM_REG macro. · Glauber Costa · 2008-07-09 · 1 · -0/+3
  There are situations in which the architecture wants to use the register that represents its word size, whatever that is. For those, introduce __ASM_REG in asm.h, along with the first users _ASM_AX and _ASM_DX. They have users waiting for them, namely the getuser functions. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
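  A sketch of the new helpers, building on the __ASM_SEL idea sketched above:
      /* __ASM_REG(ax) expands to eax on 32-bit and rax on 64-bit. */
      #define __ASM_REG(reg)  __ASM_SEL(e##reg, r##reg)

      #define _ASM_AX         __ASM_REG(ax)
      #define _ASM_DX         __ASM_REG(dx)
  The unified getuser.S can then refer to %_ASM_AX and %_ASM_DX instead of carrying an #ifdef per word size.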
* x86: don't clobber r8 nor use rcx. · Glauber Costa · 2008-07-09 · 1 · -2/+1
  There's really no reason to clobber r8 or pass the address in rcx. We can safely use only two registers (which we already have to touch anyway) to do the job. Signed-off-by: Glauber Costa <gcosta@redhat.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: delay lib unification build fix · Ingo Molnar · 2008-07-09 · 1 · -4/+0
  fix: arch/x86/lib/delay.c:93:24: error: macro "use_tsc_delay" passed 1 arguments, but takes just 0 arch/x86/lib/delay.c:94: error: expected ‘=', ‘,', ‘;', ‘asm' or ‘__attribute__' before ‘{' token Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, uv: build fix #2 for "x86, uv: update x86 mmr list for SGI uv" · Ingo Molnar · 2008-07-09 · 1 · -26/+0
  fix: In file included from arch/x86/kernel/tlb_uv.c:14: include/asm/uv/uv_mmrs.h:986: error: redefinition of ‘union uvh_rh_gam_cfg_overlay_config_mmr_u' include/asm/uv/uv_mmrs.h:988: error: redefinition of ‘struct uvh_rh_gam_cfg_overlay_config_mmr_s' include/asm/uv/uv_mmrs.h:1064: error: redefinition of ‘union uvh_rh_gam_mmioh_overlay_config_mmr_u' include/asm/uv/uv_mmrs.h:1066: error: redefinition of ‘struct uvh_rh_gam_mmioh_overlay_config_mmr_s' caused by another duplicate section (cut & paste error) in commit 5d061e397db1 "x86, uv: update x86 mmr list for SGI uv". Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, uv: build fix for "x86, uv: update x86 mmr list for SGI uv" · Ingo Molnar · 2008-07-09 · 1 · -20/+0
  fix: In file included from arch/x86/kernel/genx2apic_uv_x.c:25: include/asm/uv/uv_mmrs.h:986: error: redefinition of ‘union uvh_rh_gam_cfg_overlay_config_mmr_u' include/asm/uv/uv_mmrs.h:988: error: redefinition of ‘struct uvh_rh_gam_cfg_overlay_config_mmr_s' include/asm/uv/uv_mmrs.h:1064: error: redefinition of ‘union uvh_rh_gam_mmioh_overlay_config_mmr_u' include/asm/uv/uv_mmrs.h:1066: error: redefinition of ‘struct uvh_rh_gam_mmioh_overlay_config_mmr_s' caused by a duplicate section (cut & paste error) in commit 5d061e397db1 "x86, uv: update x86 mmr list for SGI uv". Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: rename paravirtualized TSC functions · Alok Kataria · 2008-07-09 · 3 · -5/+5
  Rename the paravirtualized calculate_cpu_khz to calibrate_tsc. In all cases we actually calibrate the TSC and use that as the cpu_khz value. Signed-off-by: Alok N Kataria <akataria@vmware.com> Signed-off-by: Dan Hecht <dhecht@vmware.com> Cc: Dan Hecht <dhecht@vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: merge tsc_init and clocksource code · Alok Kataria · 2008-07-09 · 4 · -2/+12
  Unify the clocksource code. Unify the tsc_init code. Signed-off-by: Alok N Kataria <akataria@vmware.com> Signed-off-by: Dan Hecht <dhecht@vmware.com> Cc: Dan Hecht <dhecht@vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: merge tsc calibration · Alok Kataria · 2008-07-09 · 2 · -2/+1
  Merge the tsc calibration code for the 32-bit and 64-bit kernels. The paravirtualized calculate_cpu_khz for 64-bit now points to the correct tsc_calibrate code, as on 32-bit. The original native_calculate_cpu_khz for 64-bit is now called calibrate_cpu. Also move the recalibrate_cpu_khz function into the common file. Note that this function is called only from the powernow K7 cpufreq driver. Signed-off-by: Alok N Kataria <akataria@vmware.com> Signed-off-by: Dan Hecht <dhecht@vmware.com> Cc: Dan Hecht <dhecht@vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, uv: update x86 mmr list for SGI uv · Dimitri Sivanich · 2008-07-09 · 1 · -57/+478
  This patch updates the x86 MMR list for SGI UV. Signed-off-by: Dimitri Sivanich <sivanich@sgi.com> Cc: Jack Steiner <steiner@sgi.com> Cc: Russ Anderson <rja@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: map UV chipset space - UV support · Jack Steiner · 2008-07-09 · 2 · -0/+48
  Create page table entries to map the SGI UV chipset GRU, local MMR and global MMR ranges. Signed-off-by: Jack Steiner <steiner@sgi.com> Cc: linux-mm@kvack.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: map UV chipset space - pagetable · Jack Steiner · 2008-07-09 · 2 · -0/+6
  Add a boot-time function for creating additional 2MB page table entries to map chipset-specific cached/uncached ranges. Signed-off-by: Jack Steiner <steiner@sgi.com> Cc: linux-mm@kvack.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: use FIRMWARE_MEMMAP on x86/E820 · Bernhard Walle · 2008-07-08 · 1 · -0/+2
  This patch uses the /sys/firmware/memmap interface, provided by the previous patch, on the x86 architecture when E820 is used. The patch copies the E820 memory map very early and then registers the E820 map via firmware_map_add_early(). Signed-off-by: Bernhard Walle <bwalle@suse.de> Acked-by: Greg KH <gregkh@suse.de> Acked-by: Vivek Goyal <vgoyal@redhat.com> Cc: kexec@lists.infradead.org Cc: yhlu.kernel@gmail.com Signed-off-by: Ingo Molnar <mingo@elte.hu>
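  A hedged sketch of the registration step; the helper name, loop and type strings are illustrative, and the assumed API shape is firmware_map_add_early(start, inclusive end, type string):
      /* Illustrative only: walk the copied E820 map and expose each
       * entry via /sys/firmware/memmap. */
      static void __init e820_register_firmware_map(void)
      {
              int i;

              for (i = 0; i < e820.nr_map; i++) {
                      struct e820entry *ei = &e820.map[i];

                      firmware_map_add_early(ei->addr,
                                             ei->addr + ei->size - 1,
                                             ei->type == E820_RAM ?
                                                     "System RAM" : "reserved");
              }
      }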
* x86: remove acpi_srat config v2 · Yinghai Lu · 2008-07-08 · 1 · -1/+1
  Use ACPI_NUMA directly and move srat_32.c to mm/. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86_32: remove __PAGE_KERNEL(_EXEC) · Jeremy Fitzhardinge · 2008-07-08 · 1 · -10/+0
  From: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Older x86-32 processors do not support global mappings (PGE), so we must only use them if the processor supports the feature. The _PAGE_KERNEL* flags always have _PAGE_GLOBAL set, since logically we always want it set. This is OK even on processors which do not support PGE, since all _PAGE flags are masked with __supported_pte_mask before being turned into a real in-pagetable pte. On 32-bit systems, __supported_pte_mask is initialized without _PAGE_GLOBAL, and the bit is added once the CPU is found to support it. The x86-32 code used to use __PAGE_KERNEL/__PAGE_KERNEL_EXEC for this purpose, but they're now redundant and can be removed. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
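  The masking step the message relies on looks roughly like this; the helper name is made up for illustration:
      /* Sketch: whatever _PAGE_* bits a caller asks for, anything the
       * CPU cannot do (e.g. _PAGE_GLOBAL without PGE) is stripped here
       * before the pte ever reaches a page table. */
      static inline pte_t make_kernel_pte(unsigned long pfn, pgprot_t prot)
      {
              pgprot_val(prot) &= __supported_pte_mask;
              return pfn_pte(pfn, prot);
      }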
* x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags · Jeremy Fitzhardinge · 2008-07-08 · 1 · -19/+13
  Consistently set _PAGE_GLOBAL in the _PAGE_KERNEL flags. This makes 32- and 64-bit code consistent, and removes some special cases where __PAGE_KERNEL* did not have _PAGE_GLOBAL set, causing confusion as a result of the inconsistencies. This patch only affects x86-64, which generally always supports PGE. The x86-32 patch is next. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move prefill_possible_map calling early, fix · Ingo Molnar · 2008-07-08 · 1 · -3/+5
  fix: arch/x86/kernel/built-in.o: In function `setup_arch': : undefined reference to `prefill_possible_map' Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move prefill_possible_map calling early · Yinghai Lu · 2008-07-08 · 1 · -0/+4
  Call it right after we are done with MADT/mptable handling, instead of doing that in setup_per_cpu_areas() later on... this way for_each_possible_cpu() can be used early. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: merge zones_sizes_init for numa and non numa on 32-bit · Yinghai Lu · 2008-07-08 · 1 · -1/+0
  Move e820_register_active_regions out of the non-NUMA zones_sizes_init() and remove the NUMA version of zones_sizes_init(), and let 32-bit call remove_all_active_ranges() in setup_arch() directly, like 64-bit does. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: change copy_e820_map to append_e820_map · Yinghai Lu · 2008-07-08 · 1 · -1/+0
  So it has a more meaningful name. Also change it to static. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: cleanup e820_setup_gap(), v2 · Alok Kataria · 2008-07-08 · 1 · -1/+1
  e820_search_gap now also takes an end_addr parameter to limit the search from start_addr to end_addr. Signed-off-by: Alok N Kataria <akataria@vmware.com> Acked-by: Yinghai Lu <yhlu.kernel@gmail.com> Cc: "lenb@kernel.org" <lenb@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: add check for node passed to node_to_cpumask, v3 · Mike Travis · 2008-07-08 · 1 · -1/+6
  * When CONFIG_DEBUG_PER_CPU_MAPS is set, the node passed to node_to_cpumask and node_to_cpumask_ptr should be validated. If it is invalid, a dump_stack is performed and a zero cpumask is returned. v2: Slightly different version to remove a compiler warning. v3: Redone to reflect moving setup.c -> setup_percpu.c Signed-off-by: Mike Travis <travis@sgi.com> Cc: Vegard Nossum <vegard.nossum@gmail.com> Cc: "akpm@linux-foundation.org" <akpm@linux-foundation.org> Cc: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
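  A sketch of the kind of check added under CONFIG_DEBUG_PER_CPU_MAPS; the function name and the empty-mask variable are illustrative, not necessarily the names used in the patch:
      /* Illustrative: validate the node id, warn loudly, and hand back
       * an empty mask rather than indexing off the end of the map. */
      static const cpumask_t cpu_mask_none;

      const cpumask_t *_node_to_cpumask_ptr(int node)
      {
              if (node < 0 || node >= nr_node_ids) {
                      printk(KERN_WARNING
                             "node_to_cpumask_ptr(%d): invalid node\n", node);
                      dump_stack();
                      return &cpu_mask_none;
              }
              return &node_to_cpumask_map[node];
      }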
* x86: move reserve_setup_data to setup.c · Yinghai Lu · 2008-07-08 · 2 · -2/+3
  Ying Huang would like setup_data to be reserved, but not included in the no-save range. Here we try to modify the e820 table to reserve that range early. Also add it to early_res, in case the bootloader messes with the ramdisk. Other solutions would be: 1. add early_res_to_highmem... 2. early_res_to_e820... but those could wrongly reserve memory of another type, if early_res holds a resource that was reserved early and is not needed later, but is not removed from early_res in time, like the RAMDISK (already handled). Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Cc: andi@firstfloor.org Tested-by: Huang, Ying <ying.huang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move early_ioremap prototypes to io.h · Ingo Molnar · 2008-07-08 · 1 · -0/+14
  Now that the early-ioremap code is unified, move the prototypes from io_32.h to io.h as well. This fixes: arch/x86/kernel/setup.c:531: error: implicit declaration of function 'early_ioremap_init' on 64-bit. Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: use disable_apic in 32bit · Yinghai Lu · 2008-07-08 · 1 · -4/+0
  Change enable_local_apic to a static force_enable_local_apic for 32-bit. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86_64: fix non-paravirt compilation · Jeremy Fitzhardinge · 2008-07-08 · 2 · -12/+13
  Make sure SWAPGS and PARAVIRT_ADJUST_EXCEPTION_FRAME are properly defined when CONFIG_PARAVIRT is off. Fixes Ingo's build failure: arch/x86/kernel/entry_64.S: Assembler messages: arch/x86/kernel/entry_64.S:1201: Error: invalid character '_' in mnemonic arch/x86/kernel/entry_64.S:1205: Error: invalid character '_' in mnemonic arch/x86/kernel/entry_64.S:1209: Error: invalid character '_' in mnemonic arch/x86/kernel/entry_64.S:1213: Error: invalid character '_' in mnemonic Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Mark McLoughlin <markmc@redhat.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Vegard Nossum <vegard.nossum@gmail.com> Cc: Stephen Tweedie <sct@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: let setup_arch call init_apic_mappings for 32bit · Yinghai Lu · 2008-07-08 · 2 · -0/+2
  Call it from setup_arch() instead of from trap_init(); also move the io-apic mapping init out of apic_32.c, so 32-bit does the same as 64-bit. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: clean up ARCH_SETUP · Yinghai Lu · 2008-07-08 · 2 · -6/+0
  asm-x86/paravirt.h already has CONFIG_PARAVIRT protection inside. Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86/paravirt, 64-bit: make load_gs_index() a paravirt operation · Jeremy Fitzhardinge · 2008-07-08 · 3 · -2/+13
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
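  A hedged sketch of what making load_gs_index() a pvop involves; the exact pv_cpu_ops member name is an assumption here:
      /* The native case stays an asm stub; the paravirt wrapper
       * dispatches through pv_cpu_ops so a hypervisor backend can
       * provide its own GS reload. */
      extern void native_load_gs_index(unsigned int selector);

      static inline void load_gs_index(unsigned int selector)
      {
              PVOP_VCALL1(pv_cpu_ops.load_gs_index, selector);
      }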
* x86/paravirt, 64-bit: add adjust_exception_frame · Jeremy Fitzhardinge · 2008-07-08 · 2 · -0/+11
  64-bit Xen pushes a couple of extra words onto an exception frame. Add a hook to deal with them. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, 64-bit: swapgs pvop with a user-stack can never be called · Jeremy Fitzhardinge · 2008-07-08 · 2 · -1/+11
  It's never safe to call a swapgs pvop when the user stack is current - it must be replaced inline. Rather than making a call, the SWAPGS_UNSAFE_STACK pvop always just puts "swapgs" as a placeholder, which must either be replaced inline or trap'n'emulated (somehow). Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
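  In other words, this pvop intentionally degenerates to the bare instruction rather than a call; a sketch of the idea (both branches emit the same text on purpose):
      #ifdef CONFIG_PARAVIRT
      /* Cannot be a CALL: the user stack may still be live here, so the
       * literal instruction is emitted as a placeholder and must be
       * patched inline (or trapped and emulated) by the backend. */
      #define SWAPGS_UNSAFE_STACK     swapgs
      #else
      /* Native kernels just execute the real instruction. */
      #define SWAPGS_UNSAFE_STACK     swapgs
      #endif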
* x86/paravirt: add sysret/sysexit pvops for returning to 32-bit compatibility userspace · Jeremy Fitzhardinge · 2008-07-08 · 2 · -14/+56
  In a 64-bit system, we need separate sysret/sysexit operations to return to a 32-bit userspace. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citirx.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86/paravirt, 64-bit: don't restore user rsp within sysret · Jeremy Fitzhardinge · 2008-07-08 · 2 · -6/+5
  There's no need to combine restoring the user rsp with the sysret pvop, so split it out. This makes the pvop's semantics closer to the machine instruction. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citirx.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86/paravirt: split sysret and sysexit · Jeremy Fitzhardinge · 2008-07-08 · 2 · -7/+12
  Don't conflate sysret and sysexit; they're different instructions with different semantics, and may be in use at the same time (at least within the same kernel, depending on whether it's an Intel or AMD system). sysexit - just returns to userspace, does no register restoration of any kind, and interrupts must be explicitly (and atomically) enabled. sysret - reloads flags from r11, so there is no need to explicitly enable interrupts on 64-bit, and is responsible for restoring usermode %gs. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citirx.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, 64-bit: split set_pte_vaddr() · Eduardo Habkost · 2008-07-08 · 1 · -0/+3
  We will need to set a pte on l3_user_pgt. Extract set_pte_vaddr_pud() from set_pte_vaddr(); it will accept the l3 page table as a parameter. This change should be a no-op for existing code. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
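  A sketch of the split described above, with signatures as the message implies and error checking omitted:
      /* New helper: same work as before, but the caller chooses which
       * PUD page (e.g. the kernel's or a Xen user one) gets the entry. */
      void set_pte_vaddr_pud(pud_t *pud_page, unsigned long vaddr, pte_t new_pte);

      void set_pte_vaddr(unsigned long vaddr, pte_t pteval)
      {
              pgd_t *pgd = pgd_offset_k(vaddr);
              pud_t *pud_page = (pud_t *)pgd_page_vaddr(*pgd);

              set_pte_vaddr_pud(pud_page, vaddr, pteval);
      }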
* x86, 64-bit: PSE no longer a hard requirement · Jeremy Fitzhardinge · 2008-07-08 · 1 · -1/+1
  Because Xen doesn't support PSE mappings in guests, all code which assumed the presence of PSE has been changed to fall back to smaller mappings if necessary. As a result, PSE is optional rather than required (though still used wherever possible). Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: xen-devel <xen-devel@lists.xensource.com> Cc: Stephen Tweedie <sct@redhat.com> Cc: Eduardo Habkost <ehabkost@redhat.com> Cc: Mark McLoughlin <markmc@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>