Commit message | Author | Age | Files | Lines
* x86: convert TSC disabling to generic cpuid disable bitmapAndi Kleen2008-01-308-26/+11
| | | | | | | | Fix from: Ian Campbell <ijc@hellion.org.uk> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: don't disable RDTSC in userland for 32bit notscAndi Kleen2008-01-301-1/+0
| | | | | | | | | | Modern 32bit userland doesn't even boot when the TSC is disabled because ld.so tends to contain RDTSCs. So make notsc only effective for the kernel, similar to 64bit. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: convert some existing cpuid disable options to new generic bitmapAndi Kleen2008-01-302-34/+5
| | | | | | | | | This converts nofxsr, mem=nopentium and nosep to use the new generic cpuid disable bitmap instead of using their own variables. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: add framework to disable CPUID bits on the command lineAndi Kleen2008-01-304-0/+17
| | | | | | | | | | There are already various options to disable specific cpuid bits on the command line. They all use their own variable. Add a generic mask to make this easier in the future. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
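The idea behind the generic mask can be sketched in a few lines of plain C. This is an illustration only; the array and helper names below are hypothetical, not the kernel's actual identifiers:

    /* Minimal sketch of a "disable bitmap" for CPU feature bits.
     * All names below are illustrative, not kernel identifiers. */
    #include <stdint.h>
    #include <stdio.h>

    #define NCAPWORDS 8                       /* 32-bit capability words */

    static uint32_t cpu_caps[NCAPWORDS];      /* bits reported by CPUID     */
    static uint32_t cleared_caps[NCAPWORDS];  /* bits forced off on cmdline */

    static void clear_cpu_cap(unsigned int bit)
    {
            cleared_caps[bit / 32] |= 1u << (bit % 32);
    }

    static void apply_cleared_caps(void)
    {
            for (unsigned int i = 0; i < NCAPWORDS; i++)
                    cpu_caps[i] &= ~cleared_caps[i];  /* mask features out */
    }

    int main(void)
    {
            cpu_caps[0] = 0xffffffffu;        /* pretend every bit is set   */
            clear_cpu_cap(4);                 /* e.g. "notsc" clears bit 4,
                                                 the TSC bit in CPUID word 0 */
            apply_cleared_caps();
            printf("capability word 0: %#x\n", cpu_caps[0]);
            return 0;
    }

Each existing "disable this feature" option then only needs to set one bit in the shared mask instead of carrying its own variable.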
* x86: fill in missing pv_mmu_ops entries for PAGETABLE_LEVELS >= 3Eduardo Habkost2008-01-301-2/+9
| | | | | | | | This finally makes paravirt-ops able to compile and boot under x86_64. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: don't set pagetable_setup_{start,done} hooks on 64-bitEduardo Habkost2008-01-301-0/+2
| | | | | | | | | | paravirt_pagetable_setup_{start,done}() are not used (yet) under x86_64, and native_pagetable_setup_{start,done}() don't exist on x86_64. So they don't need to be set. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: include/asm-x86/paravirt.h: x86_64 mmu operationsEduardo Habkost2008-01-301-0/+55
| | | | | | | | | | | | | Add .set_pgd field to pv_mmu_ops. Implement pud_val(), __pud(), set_pgd(), pud_clear(), pgd_clear(). pud_clear() and pgd_clear() are implemented simply using set_pud() and set_pgd(). They don't have a field in pv_mmu_ops. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
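The pattern described here - clear helpers built on top of the set helpers, so they need no pv_mmu_ops field of their own - looks roughly like this. A simplified, self-contained sketch with trimmed-down types, not the actual kernel header:

    /* Simplified sketch: clearing an entry is just writing the zero entry,
     * so pgd_clear()/pud_clear() can reuse the set operations. */
    #include <stdint.h>

    typedef struct { uint64_t pgd; } pgd_t;

    #define __pgd(x) ((pgd_t) { (x) })

    static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
    {
            *pgdp = pgd;            /* the kernel routes this through pv_mmu_ops */
    }

    static inline void pgd_clear(pgd_t *pgdp)
    {
            set_pgd(pgdp, __pgd(0));        /* clear == set to the zero entry */
    }

    int main(void)
    {
            pgd_t g = __pgd(0x1000);
            pgd_clear(&g);
            return (int)g.pgd;      /* exits 0: the entry is cleared */
    }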
* x86: change function orders in paravirt.hGlauber de Oliveira Costa2008-01-301-42/+42
| | | | | | | | | __pmd, pmd_val and set_pud are used before they are defined (as static). We move them up a little in the file so this doesn't happen. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: provide __parainstructions sectionGlauber de Oliveira Costa2008-01-301-0/+8
| | | | | | | | | This patch adds the __parainstructions section to vmlinux.lds.S. It's needed for the patching system. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: add asm_offset PARAVIRT constantsGlauber de Oliveira Costa2008-01-301-0/+14
| | | | | | | | This patch adds the constant PARAVIRT needs in asm_offsets_64.c Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fill pv_cpu_ops structure with cr8 fieldsGlauber de Oliveira Costa2008-01-301-0/+4
| | | | | | | | | This patch fills in the read and write cr8 fields with their native version. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: provide read and write cr8 paravirt hooksGlauber de Oliveira Costa2008-01-302-14/+18
| | | | | | | | | | | | Since the cr8 manipulation functions ended up staying in the tree, they can't be defined just when PARAVIRT is off: In this patch, those functions are defined for the PARAVIRT case too. [ mingo@elte.hu: fixes ] Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
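For context, the native cr8 accessors on x86-64 are thin wrappers around mov to/from %cr8. A hedged sketch of what such helpers typically look like - privileged instructions, so this only runs in ring 0, and it is not claimed to be the exact hunk of this patch:

    /* Sketch of native CR8 accessors (x86-64; CR8 is the task-priority
     * register). With PARAVIRT these would sit behind pv_cpu_ops hooks. */
    static inline unsigned long native_read_cr8(void)
    {
            unsigned long cr8;
            asm volatile("movq %%cr8, %0" : "=r" (cr8));
            return cr8;
    }

    static inline void native_write_cr8(unsigned long val)
    {
            asm volatile("movq %0, %%cr8" : : "r" (val) : "memory");
    }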
* x86: puts read and write cr8 into pv_cpu_opsGlauber de Oliveira Costa2008-01-301-0/+15
| | | | | | | | | This patch adds room for read and write_cr8 functions back in pv_cpu_ops struct Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: put generic mm_hooks include into PARAVIRTGlauber de Oliveira Costa2008-01-301-0/+2
| | | | | | | | | With PARAVIRT, we actually have arch_{dup,exit}_mmap functions, so we can't include the generic header Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: provide a native_init_IRQ function on 64-bitGlauber de Oliveira Costa2008-01-302-1/+4
| | | | | | | | | x86_64 lacks a native_init_IRQ() function, so we turn the arch's init_IRQ() function into a native construct Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: add stringify headerGlauber de Oliveira Costa2008-01-301-0/+1
| | | | | | | | | We use a __stringify construction in paravirt_patch_64.c. It's better practice to include the stringify header directly. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: checking aperture report for node insteadYinghai Lu2008-01-301-2/+5
| | | | | | | | | | | | | | | | | | | | | | Currently, when the GART IOMMU is enabled by the BIOS or a previous kernel, we get: " Checking aperture... CPU 0: aperture @4000000 size 64MB CPU 1: aperture @4000000 size 64MB ". We should use Node instead, so we will get: " Checking aperture... Node 0: aperture @4000000 size 64MB Node 1: aperture @4000000 size 64MB ". Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: move select_idle_routine() call after detect_ht()Hiroshi Shimamoto2008-01-301-1/+2
| | | | | | | | | | | | Move the select_idle_routine() call to after the detect_ht() call at identify_cpu() on 64-bit. This change is for printing the polling idle and HT enabled warning message properly. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: move warning message of polling idle and HT enabledHiroshi Shimamoto2008-01-302-11/+24
| | | | | | | | | | | | The warning message at idle_setup() is never shown because smp_num_siblings hasn't been updated at this point yet. Move this polling idle and HT enabled warning to select_idle_routine(). I also implement this warning on the 64-bit kernel. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: msr for AMD Fam 10h mmioYinghai Lu2008-01-301-0/+8
| | | | | | Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fix unconditional arch/x86/kernel/pcspeaker.o compilingMichael Opdenacker2008-01-301-0/+3
| | | | | | | | do not add the pcspkr platform device if pcspkr support is disabled. Signed-off-by: Michael Opdenacker <michael@free-electrons.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: only call early_init_amd one timeYinghai Lu2008-01-301-8/+3
| | | | | | | | | | | | | | | | | | | | Andi's patch " x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection Need this in the next patch in time_init and that happens early. This includes a minor fix on i386 where early_intel_workarounds() [which is now called early_init_intel] really executes early as the comments say. " ends up calling early_init_amd from both early_identify_cpu and identify_cpu, i.e. twice. This patch removes the call in identify_cpu. Signed-off-by: Yinghai Lu <yinghai.lu@sun.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86, 32-bit: trim memory not covered by wb mtrrsJesse Barnes2008-01-308-39/+140
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | On some machines, buggy BIOSes don't properly set up WB MTRRs to cover all available RAM, meaning the last few megs (or even gigs) of memory will be marked uncached. Since Linux tends to allocate from high memory addresses first, this causes the machine to be unusably slow as soon as the kernel starts really using memory (i.e. right around init time). This patch works around the problem by scanning the MTRRs at boot and figuring out whether the current end_pfn value (set up by early e820 code) goes beyond the highest WB MTRR range, and if so, trimming it to match. A fairly obnoxious KERN_WARNING is printed too, letting the user know that not all of their memory is available due to a likely BIOS bug. Something similar could be done on i386 if needed, but the boot ordering would be slightly different, since the MTRR code on i386 depends on the boot_cpu_data structure being set up. This patch fixes a bug in the last patch that caused the code to run on non-Intel machines (AMD machines apparently don't need it and it's untested on other non-Intel machines, so best keep it off). Further enhancements and fixes from: Yinghai Lu <Yinghai.Lu@Sun.COM> Andi Kleen <ak@suse.de> Signed-off-by: Jesse Barnes <jesse.barnes@intel.com> Tested-by: Justin Piszcz <jpiszcz@lucidpixels.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Yinghai Lu <yhlu.kernel@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
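The core of the trim is a single comparison: if the e820-derived end_pfn reaches past the highest pfn covered by a write-back MTRR, clamp it and warn. A stand-alone sketch of that comparison - the real code walks the variable-range MTRR MSRs and handles overlap; the types and values below are made up for illustration:

    /* Illustrative model of the WB-MTRR trim check. */
    #include <stdio.h>

    struct mtrr_range {
            unsigned long start_pfn;
            unsigned long end_pfn;          /* exclusive */
            int wb;                         /* 1 if the range type is write-back */
    };

    static unsigned long highest_wb_pfn(const struct mtrr_range *r, int n)
    {
            unsigned long high = 0;
            for (int i = 0; i < n; i++)
                    if (r[i].wb && r[i].end_pfn > high)
                            high = r[i].end_pfn;
            return high;
    }

    int main(void)
    {
            struct mtrr_range ranges[] = {
                    { 0x0,      0x100000, 1 },      /* first 4 GB write-back */
                    { 0x100000, 0x102000, 0 },      /* last 32 MB uncached   */
            };
            unsigned long end_pfn = 0x102000;       /* from the early e820 map */
            unsigned long wb_end  = highest_wb_pfn(ranges, 2);

            if (end_pfn > wb_end) {
                    printf("WARNING: BIOS MTRRs don't cover all of memory, "
                           "trimming %lu pages\n", end_pfn - wb_end);
                    end_pfn = wb_end;               /* only use WB-covered RAM */
            }
            return 0;
    }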
* x86: print which shared library/executable faulted in segfault etc. messages v3Andi Kleen2008-01-308-12/+63
| | | | | | | | | | | | | | | | | | | | | | | They now look like: hal-resmgr[13791]: segfault at 3c rip 2b9c8caec182 rsp 7fff1e825d30 error 4 in libacl.so.1.1.0[2b9c8caea000+6000] This makes it easier to pinpoint bugs to specific libraries. And printing the offset into a mapping also always allows finding the correct fault point in a library even with randomized mappings. Previously there was no way to actually find the correct code address inside the randomized mapping. Relies on an earlier patch to shorten the printk formats. They are often now longer than 80 characters, but I think that's worth it. [includes fix from Eric Dumazet to check d_path error value] Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: don't disable the APIC if it hasn't been mapped yetAndi Kleen2008-01-302-5/+15
| | | | | | | | | | | When the kernel panics early for some unrelated reason, there would eventually be an early exception inside panic because clear_local_APIC tried to disable the not yet mapped APIC. Check for that explicitly. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: optimize lock prefix switching to run less frequentlyAndi Kleen2008-01-303-10/+24
| | | | | | | | | | | | | | | | | | | | | | On VMs implemented using JITs that cache translated code, changing the lock prefixes is a quite costly operation that forces the JIT to throw away and retranslate a lot of code. Previously an SMP kernel would rewrite the locks once for each CPU, which is quite unnecessary. This patch changes the code to never switch at boot in the normal case (SMP kernel booting with >1 CPU) or to switch only once for an SMP kernel on UP. This makes a significant difference in boot up performance on AMD SimNow! Also I expect it to be a little faster on native systems too because an SMP switch does a lot of text_poke()s which each synchronize the pipeline. v1->v2: Rename max_cpus v1->v2: Fix off by one in UP check (Thomas Gleixner) Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
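The boot-time decision described above amounts to: leave the LOCK prefixes alone unless the CPU count actually requires a different variant, and switch at most once. A small sketch of that decision, with all names illustrative rather than the kernel's own:

    /* Illustrative decision logic: an SMP kernel boots with SMP locks in
     * place and only rewrites them if it ends up on a single CPU, rather
     * than re-patching once per CPU brought online. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool locks_are_smp = true;       /* SMP kernel: LOCKs present at boot */

    static void set_lock_prefixes(bool smp)
    {
            if (locks_are_smp == smp)
                    return;                 /* no change: skip the costly patching */
            printf("patching alternatives to %s locks\n", smp ? "SMP" : "UP");
            locks_are_smp = smp;
    }

    int main(void)
    {
            int cpus_online = 1;            /* e.g. an SMP kernel on a UP box */

            /* Decide once, after CPU bring-up, instead of once per CPU. */
            set_lock_prefixes(cpus_online > 1);
            return 0;
    }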
* x86: replace hard coded reservations in 64-bit early boot code with dynamic tableAndi Kleen2008-01-307-112/+110
| | | | | | | | | | | | | | On x86-64 there are several memory allocations before bootmem. To avoid them stomping on each other they all used to be hard coded in bad_area(). Replace this with an array that is filled as needed. This cleans up the code considerably and allows its use to be expanded. Cc: peterz@infradead.org Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: unify printk strings in fault_32|64.cHarvey Harrison2008-01-302-2/+2
| | | | | | | | | | | | Adding the address of the faulting library missed removing a line ending from X86_32. Also update the shorter printk format for X86_32 in fault_64.c to make it easier to see the remaining differences. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: use shorter addresses in i386 segfault printksAndi Kleen2008-01-301-1/+1
| | | | | | Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: use the correct cpuid method to detect MWAIT support for C statesAndi Kleen2008-01-304-9/+19
| | | | | | | | | | | | | | | | | | | | Previously there was an AMD-specific quirk to handle the case of AMD Fam10h MWAIT not supporting any C states. But it turns out that CPUID already has ways to directly detect that without using special quirks. The new code simply checks if MWAIT supports at least C1 and doesn't use it if it doesn't. No more vendor specific code. Note this does not simply clear MWAIT, because MWAIT can still be useful even without C states. Credit goes to Ben Serebrin for pointing out the (nearly) obvious. Cc: "Andreas Herrmann" <andreas.herrmann3@amd.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
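CPUID leaf 5 enumerates MWAIT: ECX bit 0 says the sub-state enumeration in EDX is valid, and EDX reports four bits of sub-state count per C-state (C0 in bits 3:0, C1 in bits 7:4). A user-space sketch of the "at least one C1 sub-state" check:

    /* Check whether MWAIT advertises at least one C1 sub-state.
     * Uses GCC/Clang's <cpuid.h>; build and run on x86 only. */
    #include <cpuid.h>
    #include <stdio.h>

    #define MWAIT_SUBSTATE_BITS  4
    #define MWAIT_SUBSTATE_MASK  0xf

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(5, &eax, &ebx, &ecx, &edx)) {
                    printf("CPUID leaf 5 not available\n");
                    return 1;
            }
            if (!(ecx & 1)) {               /* sub-state enumeration not valid */
                    printf("MWAIT extensions not enumerated\n");
                    return 1;
            }

            /* EDX bits 7:4 = number of C1 sub-states supported by MWAIT. */
            unsigned int c1 = (edx >> MWAIT_SUBSTATE_BITS) & MWAIT_SUBSTATE_MASK;
            printf("C1 sub-states: %u -> MWAIT %s for C states\n",
                   c1, c1 ? "usable" : "not usable");
            return 0;
    }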
* x86: move MWAIT idle check to generic CPU initialization on 32-bitAndi Kleen2008-01-302-1/+2
| | | | | | | | | | Previously it was only run for Intel CPUs, but AMD Fam10h implements MWAIT too. This matches 64bit behaviour. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: rename stack_pointer to kernel_trap_spHarvey Harrison2008-01-302-2/+8
| | | | | | | | | | | Choose a less generic name for such a special case. Add a comment explaining the odd use in X86_32. Change the one user of stack_pointer. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: clean up ptrace.hHarvey Harrison2008-01-301-27/+16
| | | | | | | | | | | Leave the definition of pt_regs in its own section, move all kernel code to the section after it, and unify the prototype definitions, keeping some prototypes conditional to make it clear which were only defined for 32-bit or 64-bit. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: unify pt_regs accessors ptrace.hHarvey Harrison2008-01-301-30/+54
| | | | | | | | | | | | | | | | | Unify the definition of: v8086_mode user_mode user_mode_vm stack_pointer instruction_pointer frame_pointer in ptrace.h to make it clear where the differences are between 32 and 64 bit. Changes macros to static inlines as well. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
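As an example of the macro-to-static-inline conversion, user_mode() on 64-bit boils down to testing the privilege bits of the saved CS. A trimmed-down sketch, with the struct reduced to a single field for illustration:

    /* Before: #define user_mode(regs) (!!((regs)->cs & 3))
     * After:  a static inline that type-checks its argument but
     *         compiles to the same code. */
    #include <stdio.h>

    struct pt_regs {
            unsigned long cs;       /* only the field needed for this example */
    };

    static inline int user_mode(struct pt_regs *regs)
    {
            return !!(regs->cs & 3);        /* RPL != 0 => came from user space */
    }

    int main(void)
    {
            struct pt_regs r = { .cs = 0x33 };      /* a typical 64-bit user CS */
            printf("user_mode: %d\n", user_mode(&r));
            return 0;
    }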
* x86: kdump failureHiroshi Shimamoto2008-01-301-0/+62
| | | | | | | | | | | | | | kdump needs ELF_CORE_COPY_REGS in crash_save_cpu(). The lack of this macro causes the following BUG: SysRq : Trigger a crashdump ------------[ cut here ]------------ kernel BUG at include/linux/elfcore.h:105! invalid opcode: 0000 [1] PREEMPT SMP Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86_32: remove the useless NR_syscalls macroDmitri Vorobiev2008-01-301-2/+0
| | | | | | | | | | | | | | | | | | | | | | This is against current x86.git. The size of the system call table for 32-bit x86 kernels is obtained by compile-time calculation of the sys_call_table array, not from the value that the NR_syscalls macro expands to. This trivial patch removes the fossil macro. Manually tested by grepping the x86 files for the "NR_syscalls" string. No relevant use cases found. Build-tested using allyesconfig, allnoconfig and a couple of randconfig instances. All builds successfully finished. Runtime test performed using a stripped-down Debian-ish config. The system booted successfully. Signed-off-by: Dmitri Vorobiev <dmitri.vorobiev@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
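The compile-time calculation mentioned above is the usual array-size idiom. A user-space illustration with an obviously fake table:

    /* The number of entries follows from the table itself, so a separate
     * hand-maintained NR_syscalls constant cannot drift out of sync. */
    #include <stdio.h>

    typedef long (*sys_call_ptr_t)(void);

    static long sys_ni_syscall(void) { return -38; }   /* -ENOSYS */

    static const sys_call_ptr_t sys_call_table[] = {
            sys_ni_syscall,
            sys_ni_syscall,
            sys_ni_syscall,
    };

    #define NR_syscalls (sizeof(sys_call_table) / sizeof(sys_call_table[0]))

    int main(void)
    {
            printf("NR_syscalls = %zu\n", NR_syscalls);
            return 0;
    }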
* x86: 64-bit, remove redundant cpu_has_ definitionsKyle McMartin2008-01-301-15/+0
| | | | | | | | | | PSE, PGE, XMM, XMM2, and FXSR are defined as required features, and will be optimized to a constant at compile time. Remove their redundant definitions. Signed-off-by: Kyle McMartin <kyle@mcmartin.ca> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fixup NR-CPUS patch for numatravis@sgi.com2008-01-301-3/+1
| | | | | | | | | | | | | | | This patch removes the EXPORT_SYMBOL for: x86_cpu_to_node_map_init x86_cpu_to_node_map_early_ptr ... thus fixing the section mismatch problem. Also, the mem -> node hash lookup is fixed. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/paravirt: make set_pud operation commonJeremy Fitzhardinge2008-01-301-8/+10
| | | | | | | | Remove duplicate set_pud()s. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/paravirt: make set_pmd operation commonJeremy Fitzhardinge2008-01-301-23/+20
| | | | | | | | Remove duplicate set_pmd()s. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/paravirt: make set_pte operations commonJeremy Fitzhardinge2008-01-301-56/+60
| | | | | | | | | | Remove duplicate set_pte* operations. PAE still needs to have special variants of some of these because it can't atomically update a 64-bit pte, so there's still some duplication. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/paravirt: common implementation for pmd value opsJeremy Fitzhardinge2008-01-301-7/+26
| | | | | | | | Remove duplicate __pmd/pmd_val functions. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/paravirt: common implementation for pgd value opsJeremy Fitzhardinge2008-01-301-22/+28
| | | | | | | | Remove duplicate __pgd/pgd_val functions. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/paravirt: common implementation for pte value opsJeremy Fitzhardinge2008-01-301-21/+27
| | | | | | | | Remove duplicate __pte/pte_val functions. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/paravirt: rearrange common mmu_opsJeremy Fitzhardinge2008-01-301-13/+17
| | | | | | | | Rearrange the various pagetable mmu_ops to remove duplication. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* add native_pud_val and _pmd_val for 2 and 3Jeremy Fitzhardinge2008-01-301-0/+10
| | | | | | Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* arch/x86/mm/numa_64.c: section fixAndrew Morton2008-01-302-2/+2
| | | | | | | | | | | WARNING: vmlinux.o(__ksymtab+0x670): Section mismatch: reference to .init.data:x86_cpu_to_node_map_init (between '__ksymtab_x86_cpu_to_node_map_init' and '__ksymtab_node_data') Cc: Matthew Dobson <colpatch@us.ibm.com> Cc: Mike Travis <travis@sgi.com> Cc: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: reduce memory and intra-node effectsMike Travis2008-01-305-9/+14
| | | | | Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: adjust/fix LDT handling for XenJan Beulich2008-01-302-12/+4
| | | | | | | | | | | | | | | | | | | Based on a patch from Jan Beulich <jbeulich@novell.com>. Don't rely on kmalloc(PAGE_SIZE) returning PAGE_SIZE aligned memory (Xen requires GDT *and* LDT to be page-aligned). Using the page allocator interface also removes the (albeit small) slab allocator overhead. The same change is done for 64-bit for consistency. Further, the Xen hypercall interface expects the LDT address to be virtual, not machine. [ Adjusted to unified ldt.c - Jeremy ] Signed-off-by: Jan Beulich <jbeulich@novell.com> Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
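A hedged sketch of the allocation pattern described above (kernel-style, compiles only in-kernel; the helper name is made up and this is not the exact ldt.c code):

    #include <linux/gfp.h>

    /* Before: ldt = kmalloc(PAGE_SIZE, GFP_KERNEL);
     *   PAGE_SIZE bytes, but page alignment is only a slab implementation
     *   detail - and Xen needs the LDT (and GDT) page-aligned.
     * After: ask the page allocator directly. */
    static unsigned long alloc_ldt_page(void)
    {
            /* One zeroed, naturally page-aligned page, no slab overhead. */
            return __get_free_page(GFP_KERNEL | __GFP_ZERO);
    }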
* x86-64: clean up linker scriptJan Beulich2008-01-301-8/+7
| | | | | | | | | Remove the dead .text.lock. Move _etext and __{start,stop}___ex_table into their sections. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>