path: root/arch/x86/kernel/head_64.S
Commit message  (Author, Date, Files, Lines)
* x86: move stack_canary into irq_stack  (Brian Gerst, 2009-01-20, 1 file, -8/+5)

    Impact: x86_64 percpu area layout change, irq_stack now at the beginning

    Now that the PDA is empty except for the stack canary, it can be removed.
    The irqstack is moved to the start of the per-cpu section. If the stack
    protector is enabled, the canary overlaps the bottom 48 bytes of the
    irqstack.

    tj: * updated subject
        * dropped asm relocation of irq_stack_ptr
        * updated comments a bit
        * rebased on top of stack canary changes

    Signed-off-by: Brian Gerst <brgerst@gmail.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
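
    A minimal C sketch of the layout this describes (the union and field names
    and the stack size below are illustrative, not necessarily the kernel's
    exact definitions):

        /* The canary occupies the bottom 48 bytes of the per-cpu IRQ stack,  */
        /* so gcc's fixed %gs:40 canary access still lands on it.             */
        union irq_stack_union_sketch {
                char irq_stack[16384];                  /* the per-cpu IRQ stack */
                struct {
                        char gs_base[40];               /* pad up to %gs:40      */
                        unsigned long stack_canary;     /* bytes 40..47          */
                };
        };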
* x86: rework __per_cpu_load adjustments  (Brian Gerst, 2009-01-20, 1 file, -16/+8)

    Impact: cleanup

    Use cpu_number to determine if the adjustment is necessary.

    Signed-off-by: Brian Gerst <brgerst@gmail.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>
* x86: make pda a percpu variable  (Tejun Heo, 2009-01-16, 1 file, -2/+3)

    [ Based on original patch from Christoph Lameter and Mike Travis. ]

    As the pda is now allocated in the percpu area, it can easily be made a
    proper percpu variable. Make it so by defining the per-cpu symbol from the
    linker script and declaring it in C code for SMP, and by simply defining
    it for UP.

    This change cleans up the code and brings SMP and UP a bit closer together.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fold pda into percpu area on SMP  (Tejun Heo, 2009-01-16, 1 file, -4/+11)

    [ Based on original patch from Christoph Lameter and Mike Travis. ]

    Currently, pdas and percpu areas are allocated separately: %gs points to
    the local pda, and the percpu area can be reached using pda->data_offset.
    This patch folds the pda into the percpu area.

    Due to a gcc requirement, the pda needs to be at the beginning of the
    percpu area so that pda->stack_canary ends up at %gs:40. To achieve this,
    a new percpu output section macro, PERCPU_VADDR_PREALLOC(), is added and
    used to reserve a pda-sized chunk at the start of the percpu area.

    After this change, for the boot cpu, %gs first points to the pda in the
    data.init area and later, during setup_per_cpu_areas(), gets updated to
    point to the actual pda. This means that setup_per_cpu_areas() needs to
    reload %gs for CPU0 while clearing the pda area for the other cpus, as
    cpu0 has already modified it by the time control reaches
    setup_per_cpu_areas().

    This patch also removes the now-unnecessary get_local_pda() and its call
    sites.

    A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into per
    cpu area" patch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
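
    A small C sketch of the constraint being worked around (field names and
    offsets are illustrative, taken from memory rather than the exact kernel
    header): gcc's x86-64 stack protector reads the canary from the fixed
    address %gs:40, so the pda must sit at offset 0 of the percpu area with
    its stack_canary member at offset 40.

        #include <stddef.h>

        struct x8664_pda_sketch {
                void *pcurrent;                 /*  0: current task             */
                unsigned long data_offset;      /*  8: percpu area offset       */
                unsigned long kernelstack;      /* 16: top of kernel stack      */
                unsigned long oldrsp;           /* 24: user rsp for syscalls    */
                int irqcount;                   /* 32: irq nesting counter      */
                unsigned int cpunumber;         /* 36: logical CPU number       */
                unsigned long stack_canary;     /* 40: gcc reads this at %gs:40 */
        };

        /* compile-time check mirroring the requirement described above */
        _Static_assert(offsetof(struct x8664_pda_sketch, stack_canary) == 40,
                       "stack_canary must live at %gs:40");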
* x86: load pointer to pda into %gs while bringing up a CPU  (Tejun Heo, 2009-01-16, 1 file, -5/+10)

    [ Based on original patch from Christoph Lameter and Mike Travis. ]

    The CPU startup code in head_64.S loaded the address of a zero page into
    %gs for temporary use until the pda is loaded, even though the address of
    the actual pda is already available at that point. Load the real address
    directly instead.

    This will help unify percpu and pda handling later on.

    This patch is mostly taken from Mike Travis' "x86_64: Fold pda into per
    cpu area" patch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
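
    A hedged C sketch of what the startup path does here: program the GS base
    MSR so that %gs-relative accesses hit this CPU's pda. The MSR number and
    the wrmsr calling convention are standard x86; the helper name is
    illustrative.

        #define MSR_GS_BASE 0xc0000101

        static inline void load_gs_base_sketch(void *pda)
        {
                unsigned long addr = (unsigned long)pda;

                /* wrmsr: MSR index in %ecx, 64-bit value split across %edx:%eax */
                asm volatile("wrmsr"
                             :
                             : "c" (MSR_GS_BASE),
                               "a" ((unsigned int)addr),
                               "d" ((unsigned int)(addr >> 32)));
        }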
* x86: make percpu symbols zerobased on SMP  (Tejun Heo, 2009-01-16, 1 file, -1/+23)

    [ Based on original patch from Christoph Lameter and Mike Travis. ]

    This patch makes percpu symbols zerobased on x86_64 SMP by adding
    PERCPU_VADDR() to vmlinux.lds.h, which helps set an explicit vaddr on the
    percpu output section, and using it in vmlinux_64.lds.S. A new PHDR is
    added as existing ones cannot contain sections near address zero.
    PERCPU_VADDR() also adds a new symbol, __per_cpu_load, which always points
    to the vaddr of the loaded percpu data.init region.

    The following adjustments have been made to accommodate the address change:

    * code to locate the percpu gdt_page in head_64.S is updated to add the
      load address to the gdt_page offset.

    * __per_cpu_load is used in places where access to the init data area is
      necessary.

    * pda->data_offset is initialized soon after C code is entered, as a zero
      value doesn't work anymore.

    This patch is mostly taken from Mike Travis' "x86_64: Base percpu
    variables at zero" patch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
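
    A rough C sketch of the addressing scheme this sets up (the accessor below
    is illustrative, not the kernel's): percpu symbols now link at small,
    zero-based offsets, and each CPU adds its own base to reach its copy; the
    boot image's copy is loaded at __per_cpu_load.

        extern char __per_cpu_load[];   /* load address of the percpu init data */

        static inline void *per_cpu_ptr_sketch(unsigned long zero_based_offset,
                                               unsigned long cpu_base)
        {
                /* For the boot CPU, cpu_base starts out as __per_cpu_load and   */
                /* is switched to the real percpu chunk in setup_per_cpu_areas().*/
                return (void *)(cpu_base + zero_based_offset);
        }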
* x86: fix RIP printout in early_idt_handler  (Jiri Slaby, 2009-01-04, 1 file, -1/+1)

    Impact: fix debug/crash printout

    Since the error code has been popped off, RIP is now at the top of the
    stack. Print the real RIP value instead of what is actually CS.

    Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
    Cc: <stable@kernel.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
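
    For reference, a sketch of the hardware exception frame this is about
    (standard x86-64 layout, shown after the handler has popped the error
    code; the struct name is illustrative):

        struct early_exception_frame_sketch {
                unsigned long rip;      /* 0(%rsp): faulting instruction pointer */
                unsigned long cs;       /* 8(%rsp): code segment selector        */
                unsigned long rflags;
                unsigned long rsp;
                unsigned long ss;
        };
        /* The old code printed the word at 8(%rsp) as "RIP", which is really CS. */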
* x86, cpa: rename PTE attribute macros for kernel direct mapping in early boot  (Suresh Siddha, 2008-10-10, 1 file, -2/+2)

    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: arjan@linux.intel.com
    Cc: venkatesh.pallipadi@intel.com
    Cc: jeremy@goop.org
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Revert "x86_64: there's no need to preallocate level1_fixmap_pgt"Ingo Molnar2008-07-161-0/+6
| | | | | | | | | This reverts commit 033786969d1d1b5af12a32a19d3a760314d05329. Suresh Siddha reported that this broke booting on his 2GB testbox. Reported-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* xen64: add xen-head code to head_64.S  (Jeremy Fitzhardinge, 2008-07-16, 1 file, -0/+1)

    Add the Xen entrypoint and ELF notes to head_64.S. Adapt xen-head.S to
    compile as either 32-bit or 64-bit.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Cc: Stephen Tweedie <sct@redhat.com>
    Cc: Eduardo Habkost <ehabkost@redhat.com>
    Cc: Mark McLoughlin <markmc@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86_64: there's no need to preallocate level1_fixmap_pgt  (Jeremy Fitzhardinge, 2008-07-16, 1 file, -6/+0)

    Early fixmap will allocate its own L1 pagetable page for fixmap mappings,
    so there's no need to preallocate one.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Cc: Stephen Tweedie <sct@redhat.com>
    Cc: Eduardo Habkost <ehabkost@redhat.com>
    Cc: Mark McLoughlin <markmc@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags  (Jeremy Fitzhardinge, 2008-07-08, 1 file, -2/+2)

    Consistently set _PAGE_GLOBAL in the _PAGE_KERNEL flags. This makes the
    32- and 64-bit code consistent, and removes some special cases where
    __PAGE_KERNEL* did not have _PAGE_GLOBAL set, causing confusion as a
    result of the inconsistencies.

    This patch only affects x86-64, which in practice always supports global
    pages (PGE). The x86-32 patch is next.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Cc: Stephen Tweedie <sct@redhat.com>
    Cc: Eduardo Habkost <ehabkost@redhat.com>
    Cc: Mark McLoughlin <markmc@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fix CPA self-test for "x86/paravirt: groundwork for 64-bit Xen support"  (Jeremy Fitzhardinge, 2008-07-08, 1 file, -1/+1)

    Ingo Molnar wrote:
    > -tip auto-testing found pagetable corruption (CPA self-test failure):
    >
    > [ 32.956015] CPA self-test:
    > [ 32.958822] 4k 2048 large 508 gb 0 x 2556[ffff880000000000-ffff88003fe00000] miss 0
    > [ 32.964000] CPA ffff88001d54e000: bad pte 1d4000e3
    > [ 32.968000] CPA ffff88001d54e000: unexpected level 2
    > [ 32.972000] CPA ffff880022c5d000: bad pte 22c000e3
    > [ 32.976000] CPA ffff880022c5d000: unexpected level 2
    > [ 32.980000] CPA ffff8800200ce000: bad pte 200000e3
    > [ 32.984000] CPA ffff8800200ce000: unexpected level 2
    > [ 32.988000] CPA ffff8800210f0000: bad pte 210000e3
    >
    > config and full log can be found at:
    >
    >   http://redhat.com/~mingo/misc/config-Mon_Jun_30_11_11_51_CEST_2008.bad
    >   http://redhat.com/~mingo/misc/log-Mon_Jun_30_11_11_51_CEST_2008.bad

    Phew. OK, I've worked this out. The short version is that it's a false
    alarm, and there was no real failure here. Long version:

    * I changed the code that creates the physical mapping pagetables to reuse
      any existing mapping rather than replace it. Specifically, reusing a pud
      pointed to by the pgd caused this symptom to appear.

    * The specific pud being reused is the one created statically in
      head_64.S, which creates an initial 1GB mapping.

    * That mapping doesn't have _PAGE_GLOBAL set on it, due to the
      inconsistency between __PAGE_* and PAGE_*.

    * The CPA test attempts to clear _PAGE_GLOBAL, and then checks that the
      resulting range is 1) shattered into 4k pages, and 2) has no
      _PAGE_GLOBAL.

    * However, since the range didn't have _PAGE_GLOBAL to start with,
      change_page_attr_clear() had nothing to do and didn't bother shattering
      the range,

    * resulting in the reported messages.

    The simple fix is to set _PAGE_GLOBAL in level2_ident_pgt.

    An additional fix would make CPA testing more robust by using some other
    pagetable bit (one of the unused available-to-software ones). This would
    also avoid spurious CPA test warnings under Xen, which uses _PAGE_GLOBAL
    for its own purposes (ie, not under guest control).

    Also, we should revisit the use of _PAGE_GLOBAL in asm-x86/pgtable.h, use
    it consistently, and drop MAKE_GLOBAL. The first time I proposed it, it
    caused breakages in the very early CPA code; with luck that's all fixed
    now.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Cc: Nick Piggin <npiggin@suse.de>
    Cc: Mark McLoughlin <markmc@redhat.com>
    Cc: xen-devel <xen-devel@lists.xensource.com>
    Cc: Eduardo Habkost <ehabkost@redhat.com>
    Cc: Vegard Nossum <vegard.nossum@gmail.com>
    Cc: Stephen Tweedie <sct@redhat.com>
    Cc: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
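
    A small illustrative sketch (not the real change_page_attr code) of why
    clearing an already-clear bit turns into a no-op, which is exactly the
    "nothing to do, no split" behaviour described above:

        static int clear_pte_flags_sketch(unsigned long *pte, unsigned long clear)
        {
                unsigned long new = *pte & ~clear;

                if (new == *pte)
                        return 0;       /* bit was never set: no split, no write */

                /* only on a real change would a large page be shattered */
                *pte = new;
                return 1;
        }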
* paravirt/x86, 64-bit: move __PAGE_OFFSET to leave a space for hypervisor  (Eduardo Habkost, 2008-07-08, 1 file, -5/+12)

    Set __PAGE_OFFSET to the most negative possible address plus
    16*PGDIR_SIZE. The gap leaves space for a hypervisor to fit. Its size is
    more or less arbitrary, but it's what Xen needs.

    When booting native, kernel/head_64.S has a set of compile-time generated
    pagetables used at boot time. This patch removes their absolutely
    hard-coded layout and makes it parameterised on __PAGE_OFFSET (and
    __START_KERNEL_map).

    Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Cc: xen-devel <xen-devel@lists.xensource.com>
    Cc: Stephen Tweedie <sct@redhat.com>
    Cc: Eduardo Habkost <ehabkost@redhat.com>
    Cc: Mark McLoughlin <markmc@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
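
    The arithmetic, as a hedged C sketch (the constant spellings are
    illustrative; the values assume the usual 48-bit, 4-level paging layout):

        #define PGDIR_SHIFT_SKETCH   39                  /* one pgd entry = 512 GB */
        #define PGDIR_SIZE_SKETCH    (1UL << PGDIR_SHIFT_SKETCH)

        /* most negative canonical address + a 16-entry hole for a hypervisor */
        #define PAGE_OFFSET_SKETCH   (0xffff800000000000UL + 16 * PGDIR_SIZE_SKETCH)
        /* == 0xffff880000000000, the direct-mapping base this layout yields */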
* x86: move x86_64 gdt closer to i386  (Glauber Costa, 2008-07-08, 1 file, -43/+5)

    i386 and x86_64 used two different schemes for maintaining the gdt. With
    this patch, the x86_64 initial gdt table is defined in a .c file, the same
    way i386 does it now. We also call it "gdt_page", and the descriptor
    "early_gdt_descr". This way we achieve common naming, which can allow for
    more code integration.

    Signed-off-by: Glauber Costa <gcosta@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: use stack_start in x86_64  (Glauber Costa, 2008-07-08, 1 file, -2/+3)

    Call x86_64's init_rsp "stack_start", just as i386 does. Put in a zeroed
    stack segment for consistency. With this, we can eliminate one ugly ifdef
    in smpboot.c.

    Signed-off-by: Glauber Costa <gcosta@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
*-. Merge branches 'x86/numa-fixes', 'x86/apic', 'x86/apm', 'x86/bitops', 'x86/build', 'x86/cleanups', 'x86/cpa', 'x86/cpu', 'x86/defconfig', 'x86/gart', 'x86/i8259', 'x86/intel', 'x86/irqstats', 'x86/kconfig', 'x86/ldt', 'x86/mce', 'x86/memtest', 'x86/pat', 'x86/ptemask', 'x86/resumetrace', 'x86/threadinfo', 'x86/timers', 'x86/vdso' and 'x86/xen' into x86/devel  (Ingo Molnar, 2008-07-08, 1 file, -17/+12)
| | * x86: head_64.S cleanup - use PMD_SHIFT instead of numeric constant  (Cyrill Gorcunov, 2008-05-25, 1 file, -5/+5)

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * x86: head_64.S cleanup - use straight move to CR4 register  (Cyrill Gorcunov, 2008-05-25, 1 file, -3/+1)

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| | * x86: head_64.S cleanup - use predefined flags from processor-flags.h  (Cyrill Gorcunov, 2008-05-25, 1 file, -8/+5)

    We should better use the already-defined flags from processor-flags.h
    instead of defining our own. The md5sums and section sizes below are
    identical, i.e. the generated object code is unchanged.

    [>>> object code check >>>]

    original md5sum:
    9cfa6dbf045a046bb5dfb85f8bcfe8c4  arch/x86/kernel/head_64.o

       text    data     bss     dec     hex filename
      37361    4432    8192   49985    c341 arch/x86/kernel/head_64.o

    patched md5sum:
    9cfa6dbf045a046bb5dfb85f8bcfe8c4  arch/x86/kernel/head_64.o

       text    data     bss     dec     hex filename
      37361    4432    8192   49985    c341 arch/x86/kernel/head_64.o

    [<<< object code check <<<]

    Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86 ACPI: fix resume from suspend to RAM on uniprocessor x86-64  (Rafael J. Wysocki, 2008-07-05, 1 file, -1/+1)

    Since the trampoline code is now used for ACPI resume from suspend to RAM,
    the trampoline page tables have to be fixed up during boot not only on SMP
    systems, but also on UP systems that use the trampoline.

    Reference: http://bugzilla.kernel.org/show_bug.cgi?id=10923

    Reported-by: Dionisus Torimens <djtm@gmx.net>
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: pm list <linux-pm@lists.linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move suspend wakeup code to C  (Pavel Machek, 2008-04-17, 1 file, -4/+0)

    Move the wakeup code to .c, so that the video mode setting code can be
    shared between boot and wakeup. Remove the nasty assembly code in the
    64-bit case by re-using the trampoline code. Stack setup was fixed to
    clear the high 16 bits of %esp; maybe that fixes some machines.

    The .c code sharing and morse code was done by H. Peter Anvin. Sam
    Ravnborg reviewed the kbuild-related stuff, and it seems okay to him.
    Rafael did some cleanups.

    [rjw:
     * Made the patch stop breaking compilation on x86-32
     * Added arch/x86/kernel/acpi/sleep.h
     * Got rid of compiler warnings in arch/x86/kernel/acpi/sleep.c
     * Fixed 32-bit compilation on x86-64 systems
     * Added include/asm-x86/trampoline.h and fixed the non-SMP compilation
       on 64-bit x86
     * Removed arch/x86/kernel/acpi/sleep_32.c which was not used
     * Fixed some breakage caused by the integration of smpboot.c done under
       us in the meantime]

    Signed-off-by: Pavel Machek <pavel@suse.cz>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: move early exception handlers into init.text  (Andi Kleen, 2008-04-17, 1 file, -0/+2)

    Currently they are in .text.head because the rest of head_64.S is.
    .text.head is not removed as init data, but the early exception handlers
    should be, because they are not needed after early boot of the BP. So move
    them over.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: mingo@elte.hu
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: replace early exception setup macro recursion with loop  (Andi Kleen, 2008-04-17, 1 file, -10/+6)

    The early exception handlers are currently set up using a macro recursion.
    There is only one user left. Replace the macro with a standard loop in
    place.

    Noop patch, just a cleanup.

    [ tglx@linutronix.de: simplified ]

    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: mingo@elte.hu
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: don't set up early exception handlers for external interrupts  (Andi Kleen, 2008-04-17, 1 file, -4/+2)

    All of early setup runs with interrupts disabled, so there is no need to
    set up early exception handlers for vectors >= 32.

    This saves some minor text size.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Cc: mingo@elte.hu
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: increase the kernel text limit to 512 MB  (Ingo Molnar, 2008-04-17, 1 file, -2/+2)

    People sometimes do crazy stuff like building really large static arrays
    into their kernels or building allyesconfig kernels. Give more space to
    the kernel and push modules up a bit: the kernel gets 512 MB and modules
    get 1.5 GB.

    Should be enough for a few years ;-)

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
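
    A hedged sketch of the resulting split (the constant names below are
    illustrative): the kernel image and modules share the top 2 GB of the
    address space.

        #define KERNEL_MAP_START_SKETCH  0xffffffff80000000UL  /* -2 GB            */
        #define KERNEL_IMAGE_SIZE_SKETCH (512UL << 20)         /* kernel: 512 MB   */
        #define MODULES_VADDR_SKETCH     (KERNEL_MAP_START_SKETCH + \
                                          KERNEL_IMAGE_SIZE_SKETCH)
        #define MODULES_LEN_SKETCH       (1536UL << 20)        /* modules: ~1.5 GB */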
* x86: rename KERNEL_TEXT_SIZE => KERNEL_IMAGE_SIZE  (Ingo Molnar, 2008-02-26, 1 file, -1/+1)

    The KERNEL_TEXT_SIZE constant was misnamed, as we map not only the kernel
    text but the data, bss and init sections as well.

    That name led me down the wrong path with the KERNEL_TEXT_SIZE regression:
    I knew how much _text_ my images have, and with 29 MB of text I wrongly
    thought I was on the safe side of the 40 MB "text" limit, while the total
    image size was slightly above 40 MB.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fix spontaneous reboot with allyesconfig bzImage  (Ingo Molnar, 2008-02-26, 1 file, -8/+14)

    Recently the 64-bit allyesconfig bzImage kernel started spontaneously
    rebooting during early bootup.

    After a few fun hours spent with early init debugging, it turns out that
    we've got this rather annoying limit on the size of the kernel image:

        #define KERNEL_TEXT_SIZE  (40*1024*1024)

    which limit my vmlinux just happened to exceed:

        text      data     bss      dec       hex      filename
        29703744  4222751  8646224  42572719  2899baf  vmlinux

    40 MB is 41943040 bytes, so my 42572719-byte vmlinux was just about 1.5%
    above this limit :-/

    So it happily crashed right in head_64.S, which - as we all know - is the
    most debuggable code in the whole architecture ;-)

    So increase the limit to allow an up to 128 MB kernel image to be mapped
    (should anyone be that crazy or lazy). We have a full 4K of pagetable
    (level2_kernel_pgt) allocated for these mappings already, so there's no
    RAM overhead and the limit was rather pointless and arbitrary.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fix section mismatch in head_64.S:initial_code  (Sam Ravnborg, 2008-02-19, 1 file, -1/+1)

    initial_code is initially used to hold a function pointer from __init and
    later from __cpuinit. This confuses modpost, and changing initial_code to
    REFDATA silences the warning. (But now we do not discard the variable
    anymore.)

    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Cc: Sam Ravnborg <sam@ravnborg.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: zap invalid and unused pmds in early boot  (Thomas Gleixner, 2008-02-18, 1 file, -1/+6)

    The early boot code maps KERNEL_TEXT_SIZE (currently 40MB) starting from
    __START_KERNEL_map. The kernel itself only needs _text to _end mapped in
    the high alias. On relocatable kernels the ASM setup code adjusts the
    compile-time-created high mappings to the relocation. This creates invalid
    pmd entries for negative offsets:

        0xffffffff80000000 -> pmd entry: ffffffffff2001e3

    It points outside of the physical address space and is marked present.

    This starts at the virtual address __START_KERNEL_map and goes up to the
    point where the first valid physical address (0x0) is mapped.

    Zap the mappings before _text and after _end right away in early boot.
    This also removes the invalid entries. Furthermore it simplifies the range
    check for high aliases.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
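
    A hedged sketch of the idea (not the kernel's actual code): walk the pmd
    page covering the high kernel mapping and clear every 2 MB entry that
    falls before _text or after _end, which also wipes the bogus
    negative-offset entries.

        extern char _text[], _end[];

        static void zap_unused_kernel_pmds_sketch(unsigned long *pmd_page,
                                                  unsigned long map_start)
        {
                unsigned long va;
                int i;

                for (i = 0; i < 512; i++) {     /* one page of 2 MB pmd entries */
                        va = map_start + (unsigned long)i * (2UL << 20);

                        if (va + (2UL << 20) <= (unsigned long)_text ||
                            va >= (unsigned long)_end)
                                pmd_page[i] = 0;        /* unused: zap it */
                }
        }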
* x86: fix 64-bit sections  (Sam Ravnborg, 2008-02-06, 1 file, -10/+5)

    Fix 64-bit section warnings.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: rename LARGE_PAGE_SIZE to PMD_PAGE_SIZE  (Andi Kleen, 2008-02-04, 1 file, -2/+2)

    Fix up all users.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: make early printk selectable on 64-bit as well  (Ingo Molnar, 2008-01-30, 1 file, -0/+7)

    Enable CONFIG_EMBEDDED to select CONFIG_EARLY_PRINTK on 64-bit as well.

    Saves ~2K:

        text     data     bss      dec       hex      filename
        7290283  3672091  1907848  12870222  c4624e   vmlinux.before
        7288373  3671795  1907848  12868016  c459b0   vmlinux.after

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: early_idt_handler improvements, 64-bit  (Roland McGrath, 2008-01-30, 1 file, -4/+30)

    It's not too pretty, but I found this made the "PANIC: early exception"
    messages become much more reliably useful:

    1. print the vector number,
    2. print the %cs value,
    3. handle error-code-pushing vs non-pushing vectors.

    Signed-off-by: Roland McGrath <roland@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
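
    Point 3 hinges on which vectors make the CPU push an error code; a small
    helper sketch (the vector list is standard x86, the function itself is
    illustrative, not the kernel's):

        static int vector_pushes_error_code(int vector)
        {
                switch (vector) {
                case 8:                         /* #DF double fault           */
                case 10: case 11: case 12:      /* #TS, #NP, #SS              */
                case 13:                        /* #GP general protection     */
                case 14:                        /* #PF page fault             */
                case 17:                        /* #AC alignment check        */
                        return 1;               /* error code is on the stack */
                default:
                        return 0;               /* stack starts at RIP        */
                }
        }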
* x86: turn privileged operation into a macro in head_64.S  (Glauber de Oliveira Costa, 2008-01-30, 1 file, -1/+8)

    Under paravirt, a read of cr2 cannot be issued directly anymore. So wrap
    it in a macro, defined to the operation itself when paravirt is off, but
    to something else if we have paravirt in the game.

    Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    Acked-by: Jeremy Fitzhardinge <jeremy@xensource.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
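
    A C-level sketch of the pattern (the real thing is an assembler macro in
    head_64.S; the paravirt hook named below is hypothetical):

        extern unsigned long paravirt_read_cr2_hook(void);     /* hypothetical */

        static inline unsigned long read_cr2_sketch(void)
        {
        #ifdef CONFIG_PARAVIRT
                /* indirect through the paravirt layer so a hypervisor can trap it */
                return paravirt_read_cr2_hook();
        #else
                unsigned long cr2;

                /* native case: read the page-fault address straight from %cr2 */
                asm volatile("movq %%cr2, %0" : "=r" (cr2));
                return cr2;
        #endif
        }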
* x86_64: move kernel  (Thomas Gleixner, 2007-10-11, 1 file, -0/+416)

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>