path: root/arch/x86/lib
* x86-64: Fix "bytes left to copy" return value for copy_from_user() (Linus Torvalds, 2008-06-17; 2 files, -28/+22)

  Most users by far do not care about the exact return value (they only really care about whether the copy succeeded in its entirety or not), but a few special core routines actually care deeply about exactly how many bytes were copied from user space.

  And the unrolled versions of the x86-64 user copy routines would sometimes report that they had copied more bytes than they actually had.

  Very few uses actually have partial copies to begin with, but to make this bug even harder to trigger, most x86 CPUs use the "rep string" instructions for normal user copies, and that version didn't have this issue. To make it even harder to hit, the one user of this that really cared about the return value (and used the uncached version of the copy that doesn't use the "rep string" instructions) was the generic write routine, which pre-populated its source, once more hiding the problem by avoiding the exception case that triggers the bug.

  In other words, very special thanks to Bron Gondwana who not only triggered this, but created a test program to show it, and bisected the behavior down to commit 08291429cfa6258c4cd95d8833beb40f828b194e ("mm: fix pagecache write deadlocks"), which changed the access pattern just enough that you can now trigger it with 'writev()' with multiple iovecs. That commit itself was not the cause of the bug; it just allowed all the stars to align just right so that you could trigger the problem.

  [ Side note: this is just the minimal fix to make the copy routines (with __copy_from_user_inatomic_nocache as the particular version that was involved in showing this) have the right return values. We really should improve on the exceptional case further, to make the copy do a byte-accurate copy up to the exact page limit that causes it to fail. As it is, the callers have to do extra work to handle the limit case gracefully. ]

  Reported-by: Bron Gondwana <brong@fastmail.fm>
  Cc: Nick Piggin <npiggin@suse.de>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Andi Kleen <andi@firstfloor.org>
  Cc: Al Viro <viro@ZenIV.linux.org.uk>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
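  For readers unfamiliar with the calling convention at stake here: copy_from_user() returns the number of bytes it could NOT copy, and the few callers that handle partial copies rely on that count being exact. A minimal, hypothetical caller (a sketch for illustration, not code from the patch) looks like this:

    #include <linux/types.h>
    #include <linux/errno.h>
    #include <linux/uaccess.h>

    /*
     * Illustrative sketch only: copy 'len' bytes from user space and
     * report how much actually arrived.  copy_from_user() returns the
     * number of bytes left uncopied, so an exact return value is what
     * lets the caller compute 'copied' correctly on a partial fault.
     */
    static ssize_t copy_exact(void *dst, const void __user *src, size_t len)
    {
    	size_t left = copy_from_user(dst, src, len);
    	size_t copied = len - left;	/* wrong if 'left' is overstated */

    	if (left)
    		return copied ? copied : -EFAULT;
    	return copied;
    }

  If the copy routine overstates the uncopied count, 'copied' here is too small, which is exactly the symptom the fix addresses.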
* x86: enable preemption in delay (Steven Rostedt, 2008-06-04; 2 files, -8/+53)

  The RT team has been searching for a nasty latency. This latency shows up out of the blue and has been seen to be as big as 5ms! Using ftrace I found the cause of the latency.

    pcscd-2995  3dNh1 52360300us : irq_exit (smp_apic_timer_interrupt)
    pcscd-2995  3dN.2 52360301us : idle_cpu (irq_exit)
    pcscd-2995  3dN.2 52360301us : rcu_irq_exit (irq_exit)
    pcscd-2995  3dN.1 52360771us : smp_apic_timer_interrupt (apic_timer_interrupt)
    pcscd-2995  3dN.1 52360771us : exit_idle (smp_apic_timer_interrupt)

  Here's an example of a 400 us latency. pcscd took a timer interrupt and returned with "need resched" enabled, but did not reschedule until after the next interrupt came in at 52360771us, 400us later!

  At first I thought we somehow missed a preemption check in entry.S. But I also noticed that this always seemed to happen during a __delay call.

    pcscd-2995  3dN.2 52360836us : rcu_irq_exit (irq_exit)
    pcscd-2995  3.N.. 52361265us : preempt_schedule (__delay)

  Looking at the x86 delay, I found my problem. In git commit 35d5d08a085c56f153458c3f5d8ce24123617faf, Andrew Morton placed preempt_disable around the entire delay due to TSCs not working nicely on SMP. Unfortunately for those that care about latencies this is devastating! Especially when we have callers to mdelay(8).

  Here I enable preemption during the loop and account for anytime the task migrates to a new CPU. The delay asked for may be extended a bit by the migration, but delay only guarantees that it will delay for that minimum time. Delaying longer should not be an issue.

  [ Thanks to Thomas Gleixner for spotting that cpu wasn't updated, and for placing the rep_nop between preempt_enable/disable. ]

  Signed-off-by: Steven Rostedt <srostedt@redhat.com>
  Cc: akpm@osdl.org
  Cc: Clark Williams <clark.williams@gmail.com>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: "Luis Claudio R. Goncalves" <lclaudio@uudg.org>
  Cc: Gregory Haskins <ghaskins@novell.com>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Andi Kleen <andi-suse@firstfloor.org>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
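  The approach described above can be sketched roughly as follows. This is only an illustration of the idea (a preemptible loop that rebases its TSC reading when the task migrates), not the exact code from the patch; read_tsc() stands in for whatever TSC-read primitive the real implementation uses:

    #include <linux/kernel.h>
    #include <linux/preempt.h>
    #include <linux/smp.h>
    #include <asm/processor.h>

    /* Hypothetical sketch of a migration-aware, preemptible TSC delay loop. */
    static void delay_tsc_sketch(unsigned long loops)
    {
    	unsigned long bclock, now;
    	int cpu;

    	preempt_disable();
    	cpu = smp_processor_id();
    	bclock = read_tsc();			/* assumed TSC read helper */
    	for (;;) {
    		now = read_tsc();
    		if (now - bclock >= loops)
    			break;

    		/* Let other tasks run; we may come back on another CPU. */
    		preempt_enable();
    		cpu_relax();
    		preempt_disable();

    		if (unlikely(cpu != smp_processor_id())) {
    			/*
    			 * TSCs may be skewed between CPUs: restart the
    			 * measurement on the new CPU and only subtract the
    			 * time already waited.  This can extend the delay,
    			 * which is fine, since the delay is a minimum.
    			 */
    			cpu = smp_processor_id();
    			loops -= min(loops, now - bclock);
    			bclock = read_tsc();
    		}
    	}
    	preempt_enable();
    }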
* x86: fix csum_partial() export (Ingo Molnar, 2008-05-13; 1 file, -2/+0)

  Fix this symbol export problem:

    Building modules, stage 2.
    MODPOST 193 modules
    ERROR: "csum_partial" [fs/reiserfs/reiserfs.ko] undefined!
    make[1]: *** [__modpost] Error 1
    make: *** [modules] Error 2

  This is due to a known weakness of symbol exports: if a symbol's only in-core user is an EXPORT_SYMBOL from a lib-y section, the symbol is not linked in. The solution is to move the export to x8664_ksyms_64.c - but the real solution would be to fix kbuild.

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
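  In concrete terms, the workaround amounts to exporting the symbol from an obj-y file instead of next to its lib-y definition; a minimal sketch of the placement the changelog describes (exact surroundings assumed, not quoted from the patch):

    /* arch/x86/kernel/x8664_ksyms_64.c (sketch) */
    #include <linux/module.h>
    #include <asm/checksum.h>

    /* Exporting from an obj-y file guarantees the symbol gets linked in. */
    EXPORT_SYMBOL(csum_partial);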
* x86, UML: remove x86-specific implementations of find_first_bit (Alexander van Heukelum, 2008-04-26; 2 files, -110/+0)

  x86 has been switched to the generic versions of find_first_bit and find_first_zero_bit, but the original versions were retained. This patch just removes the now unused x86-specific versions. Also update UML.

  Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: switch 64-bit to generic find_first_bit (Alexander van Heukelum, 2008-04-26; 1 file, -0/+2)

  Switch x86_64 to generic find_first_bit. The x86_64-specific implementation is not removed.

  Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: change x86 to use generic find_next_bit (Alexander van Heukelum, 2008-04-26; 3 files, -139/+1)

  The versions with inline assembly are in fact slower on the machines I tested them on (in userspace) (Athlon XP 2800+, p4-like Xeon 2.8GHz, AMD Opteron 270). The i386-version needed a fix similar to 06024f21 to avoid crashing the benchmark.

  Benchmark using: gcc -fomit-frame-pointer -Os. For each bitmap size 1...512, for each possible bitmap with one bit set, for each possible offset: find the position of the first bit starting at offset. If you follow ;). Times include setup of the bitmap and checking of the results.

                      Athlon      Xeon        Opteron 32/64bit
    x86-specific:     0m3.692s    0m2.820s    0m3.196s / 0m2.480s
    generic:          0m2.622s    0m1.662s    0m2.100s / 0m1.572s

  If the bitmap size is not a multiple of BITS_PER_LONG, and no set (cleared) bit is found, find_next_bit (find_next_zero_bit) returns a value outside of the range [0, size]. The generic version always returns exactly size. The generic version also uses unsigned long everywhere, while the x86 versions use a mishmash of int, unsigned (int), long and unsigned long.

  Using the generic version does give a slightly bigger kernel, though.

    defconfig:          text      data     bss      dec       hex      filename
    x86-specific:    4738555    481232  626688  5846475    5935cb      vmlinux (32 bit)
    generic:         4738621    481232  626688  5846541    59360d      vmlinux (32 bit)
    x86-specific:    5392395    846568  724424  6963387    6a40bb      vmlinux (64 bit)
    generic:         5392458    846568  724424  6963450    6a40fa      vmlinux (64 bit)

  Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
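  The boundary behaviour called out above is why callers conventionally test the returned index against the size rather than for equality; a small, illustrative use of find_next_bit (not taken from this patch) is:

    #include <linux/bitops.h>

    /*
     * Illustrative scan: visit every set bit in 'map'.  Checking
     * 'bit < size' (rather than '!= size') copes with both the old
     * x86-specific behaviour (a not-found result possibly beyond
     * size) and the generic one (exactly size).
     */
    static void visit_set_bits(const unsigned long *map, unsigned long size)
    {
    	unsigned long bit;

    	for (bit = find_next_bit(map, size, 0);
    	     bit < size;
    	     bit = find_next_bit(map, size, bit + 1)) {
    		/* ... act on 'bit' ... */
    	}
    }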
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86 (Linus Torvalds, 2008-04-18; 6 files, -201/+192)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86: (613 commits)
      x86: standalone trampoline code
      x86: move suspend wakeup code to C
      x86: coding style fixes to arch/x86/kernel/acpi/sleep.c
      x86: setup_trampoline() - fix section mismatch warning
      x86: section mismatch fixes, #1
      x86: fix paranoia about using BIOS quickboot mechanism.
      x86: print out buggy mptable
      x86: use cpu_online()
      x86: use cpumask_of_cpu()
      x86: remove unnecessary tmp local variable
      x86: remove unnecessary memset()
      x86: use ioapic_read_entry() and ioapic_write_entry()
      x86: avoid redundant loop in io_apic_level_ack_pending()
      x86: remove superfluous initialisation in boot code.
      x86: merge mpparse_{32,64}.c
      x86: unify mp_register_gsi
      x86: unify mp_config_acpi_legacy_irqs
      x86: unify mp_register_ioapic
      x86: unify uniq_io_apic_id
      x86: unify smp_scan_config
      ...
| * x86: coding style fixes to arch/x86/lib/usercopy_32.c (Paolo Ciarrocchi, 2008-04-17; 1 file, -61/+61)

    Before: total: 63 errors, 2 warnings, 878 lines checked
    After:  total: 0 errors, 2 warnings, 878 lines checked

    Compile tested, no change in the binary output:

      text    data    bss     dec     hex     filename
      3231    0       0       3231    c9f     usercopy_32.o.after
      3231    0       0       3231    c9f     usercopy_32.o.before

    md5sum:
      9f9a3eb43970359ae7cecfd1c9e7cf42  usercopy_32.o.after
      9f9a3eb43970359ae7cecfd1c9e7cf42  usercopy_32.o.before

    Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: coding style fixes to arch/x86/lib/memcpy_32.c (Paolo Ciarrocchi, 2008-04-17; 1 file, -1/+1)

    Before: total: 2 errors, 0 warnings, 43 lines checked
    After:  total: 0 errors, 0 warnings, 43 lines checked

    No code changed:

    arch/x86/lib/memcpy_32.o:
      text    data    bss     dec     hex     filename
      164     0       0       164     a4      memcpy_32.o.before
      164     0       0       164     a4      memcpy_32.o.after

    md5:
      d759f55621af27f51720b59c8ca96a4d  memcpy_32.o.before.asm
      d759f55621af27f51720b59c8ca96a4d  memcpy_32.o.after.asm

    Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: coding style fixes to arch/x86/lib/strstr_32.c (Paolo Ciarrocchi, 2008-04-17; 1 file, -2/+2)

    Before: total: 3 errors, 0 warnings, 31 lines checked
    After:  total: 0 errors, 0 warnings, 31 lines checked

    No code changed:

    arch/x86/lib/strstr_32.o:
      text    data    bss     dec     hex     filename
      49      0       0       49      31      strstr_32.o.before
      49      0       0       49      31      strstr_32.o.after

    md5:
      a224a7c4082e75a4f31f9d91dd34fe8e  strstr_32.o.before.asm
      a224a7c4082e75a4f31f9d91dd34fe8e  strstr_32.o.after.asm

    Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: coding style fixes to arch/x86/lib/string_32.c (Paolo Ciarrocchi, 2008-04-17; 1 file, -30/+30)

    The patch kills 45 errors and a few warnings. The file is now error/warning free:

      total: 0 errors, 0 warnings, 237 lines checked
      arch/x86/lib/string_32.c has no obvious style problems and is ready for submission.

    No code changed:

    arch/x86/lib/string_32.o:
      text    data    bss     dec     hex     filename
      639     0       0       639     27f     string_32.o.before
      639     0       0       639     27f     string_32.o.after

    md5:
      2db1c48187cf5113bb595153ee1fc73d  string_32.o.before.asm
      2db1c48187cf5113bb595153ee1fc73d  string_32.o.after.asm

    Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: coding style fixes to arch/x86/lib/memmove_64.c (Paolo Ciarrocchi, 2008-04-17; 1 file, -4/+4)

    After the patch: total: 0 errors, 0 warnings, 21 lines checked

    No code changed:

    arch/x86/lib/memmove_64.o:
      text    data    bss     dec     hex     filename
      116     0       0       116     74      memmove_64.o.before
      116     0       0       116     74      memmove_64.o.after

    md5:
      2d6b0951cafb86a11a222cdd70f6104f  memmove_64.o.before.asm
      2d6b0951cafb86a11a222cdd70f6104f  memmove_64.o.after.asm

    Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * x86: clean up mmx_32.c (Ingo Molnar, 2008-04-17; 1 file, -103/+94)

    checkpatch.pl --file cleanups:

      before: total: 74 errors, 3 warnings, 386 lines checked
      after:  total: 0 errors, 0 warnings, 377 lines checked

    No code changed:

    arch/x86/lib/mmx_32.o:
      text    data    bss     dec     hex     filename
      1323    0       8       1331    533     mmx_32.o.before
      1323    0       8       1331    533     mmx_32.o.after

    md5:
      4cc39f1017dc40a5ebf02ce0ff7312bc  mmx_32.o.before.asm
      4cc39f1017dc40a5ebf02ce0ff7312bc  mmx_32.o.after.asm

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* | Generic semaphore implementation (Matthew Wilcox, 2008-04-17; 2 files, -88/+0)

  Semaphores are no longer performance-critical, so a generic C implementation is better for maintainability, debuggability and extensibility. Thanks to Peter Zijlstra for fixing the lockdep warning. Thanks to Harvey Harrison for pointing out that the unlikely() was unnecessary.

  Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
* x86: clean up csum-wrappers_64.c some more (Ingo Molnar, 2008-02-19; 1 file, -36/+51)

  No code changed:

  arch/x86/lib/csum-wrappers_64.o:
    text    data    bss     dec     hex     filename
    839     0       0       839     347     csum-wrappers_64.o.before
    839     0       0       839     347     csum-wrappers_64.o.after

  md5:
    b31994226c33e0b52bef5a0e110b84b0  csum-wrappers_64.o.before.asm
    b31994226c33e0b52bef5a0e110b84b0  csum-wrappers_64.o.after.asm

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: coding style fixes in arch/x86/lib/csum-wrappers_64.c (Paolo Ciarrocchi, 2008-02-19; 1 file, -40/+40)

  No code changed:

  arch/x86/lib/csum-wrappers_64.o:
    text    data    bss     dec     hex     filename
    839     0       0       839     347     csum-wrappers_64.o.before
    839     0       0       839     347     csum-wrappers_64.o.after

  md5:
    b31994226c33e0b52bef5a0e110b84b0  csum-wrappers_64.o.before.asm
    b31994226c33e0b52bef5a0e110b84b0  csum-wrappers_64.o.after.asm

  Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: coding style fixes in arch/x86/lib/io_64.c (Paolo Ciarrocchi, 2008-02-19; 1 file, -8/+10)

  This simple patch makes the file error free (according to checkpatch.pl).

  No code changed:

  arch/x86/lib/io_64.o:
    text    data    bss     dec     hex     filename
    308     0       0       308     134     io_64.o.before
    308     0       0       308     134     io_64.o.after

  md5:
    3c64f9ed83d091678e849b36ca27bee3  io_64.o.before.asm
    3c64f9ed83d091678e849b36ca27bee3  io_64.o.after.asm

  Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* read_current_timer() cleanups (Andrew Morton, 2008-02-06; 2 files, -2/+6)

  - All implementations can be __devinit
  - The function prototypes were in asm/timex.h but they all must be the same, so create a single declaration in linux/timex.h.
  - uninline the sparc64 version to match the other architectures
  - Don't bother #defining ARCH_HAS_READ_CURRENT_TIMER to a particular value.

  [ezk@cs.sunysb.edu: fix build]
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* iommu sg: kill __clear_bit_string and find_next_zero_string (FUJITA Tomonori, 2008-02-05; 2 files, -29/+1)

  This kills unused __clear_bit_string and find_next_zero_string (they were used by only gart and calgary IOMMUs).

  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Cc: Jeff Garzik <jeff@garzik.org>
  Cc: James Bottomley <James.Bottomley@steeleye.com>
  Cc: Jens Axboe <jens.axboe@oracle.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: Muli Ben-Yehuda <mulix@mulix.org>
  Cc: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86 (Linus Torvalds, 2008-02-04; 3 files, -42/+13)

  * git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86: (78 commits)
      x86: fix RTC lockdep warning: potential hardirq recursion
      x86: cpa, micro-optimization
      x86: cpa, clean up code flow
      x86: cpa, eliminate CPA_ enum
      x86: cpa, cleanups
      x86: implement gbpages support in change_page_attr()
      x86: support gbpages in pagetable dump
      x86: add gbpages support to lookup_address
      x86: add pgtable accessor functions for gbpages
      x86: add PUD_PAGE_SIZE
      x86: add feature macros for the gbpages cpuid bit
      x86: switch direct mapping setup over to set_pte
      x86: fix page-present check in cpa_flush_range
      x86: remove cpa warning
      x86: remove now unused clear_kernel_mapping
      x86: switch pci-gart over to using set_memory_np() instead of clear_kernel_mapping()
      x86: cpa selftest, skip non present entries
      x86: CPA fix pagetable split
      x86: rename LARGE_PAGE_SIZE to PMD_PAGE_SIZE
      x86: cpa, fix lookup_address
      ...
| * x86: use _ASM_EXTABLE macro in arch/x86/lib/usercopy_64.c (H. Peter Anvin, 2008-02-04; 1 file, -9/+3)

    Use the _ASM_EXTABLE macro from <asm/asm.h>, instead of open-coding __ex_table entries in arch/x86/lib/usercopy_64.c.

    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
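    The conversion in this and the two following commits replaces a hand-written exception-table fragment with the macro. The shape of the change is roughly the following (an illustrative sketch, not lines from the file); only the __ex_table section directives are affected:

      #include <asm/asm.h>

      /* Open-coded __ex_table entry (sketch of the old style): */
      static unsigned long load_open_coded(const unsigned long __user *p)
      {
      	unsigned long val;

      	asm volatile("1:	movq %1,%0\n"
      		     "2:\n"
      		     ".section .fixup,\"ax\"\n"
      		     "3:	xorq %0,%0\n"
      		     "	jmp 2b\n"
      		     ".previous\n"
      		     ".section __ex_table,\"a\"\n"
      		     "	.align 8\n"
      		     "	.quad 1b,3b\n"
      		     ".previous"
      		     : "=r" (val) : "m" (*p));
      	return val;
      }

      /* Same fault handling with the macro (sketch of the new style): */
      static unsigned long load_with_macro(const unsigned long __user *p)
      {
      	unsigned long val;

      	asm volatile("1:	movq %1,%0\n"
      		     "2:\n"
      		     ".section .fixup,\"ax\"\n"
      		     "3:	xorq %0,%0\n"
      		     "	jmp 2b\n"
      		     ".previous\n"
      		     _ASM_EXTABLE(1b, 3b)
      		     : "=r" (val) : "m" (*p));
      	return val;
      }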
| * x86: use _ASM_EXTABLE macro in arch/x86/lib/usercopy_32.c (H. Peter Anvin, 2008-02-04; 1 file, -9/+3)

    Use the _ASM_EXTABLE macro from <asm/asm.h>, instead of open-coding __ex_table entries in arch/x86/lib/usercopy_32.c.

    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86: use _ASM_EXTABLE macro in arch/x86/lib/mmx_32.c (H. Peter Anvin, 2008-02-04; 1 file, -24/+7)

    Use the _ASM_EXTABLE macro from <asm/asm.h>, instead of open-coding __ex_table entries in arch/x86/lib/mmx_32.c.

    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* | Correct explanations of "find_next" bit routines. (Robert P. J. Day, 2008-02-03; 2 files, -2/+2)

  Correct the obvious "copy and paste" errors explaining some of the "find_next" routines.

  Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
  Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
  Signed-off-by: Adrian Bunk <bunk@kernel.org>
* x86: export copy_from_user_ll_nocache[_nozero] (Andrew Morton, 2008-01-30; 1 file, -0/+2)

  Cc: Neil Brown <neilb@cse.unsw.edu.au>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fix usage of .section .sched.text in assembler code (Sam Ravnborg, 2008-01-30; 2 files, -2/+2)

  Without this patch the linker will generate a section named .sched.text.1, which is unexpected. This is because the gcc-generated section has "ax" but the assembler usage of .sched.text lacks the "ax" specifier. It would be better to have a definition we could use from assembler code, but I did not find a suitable header file for it.

  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: remove unneded casts (Jan Engelhardt, 2008-01-30; 2 files, -4/+4)

  x86: remove unneeded casts

  Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: add ENDPROC() markers (John Reiser, 2008-01-30; 1 file, -10/+10)

  The ENDPROC()s were not used everywhere. Some code used just END() instead, while other code used nothing. um/sys-i386/checksum.S didn't #include <linux/linkage.h>. I also got confused because gcc puts the .type near the ENTRY, while ENDPROC puts it on the opposite end.

  Signed-off-by: John Reiser <jreiser@BitWagon.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: unify arch/x86/lib/Makefile(s) (Sam Ravnborg, 2008-01-30; 3 files, -26/+24)

  Trivial unification of Makefiles for the x86-specific library part. Linking order is slightly modified but should be harmless. Tested doing a defconfig build before and after and saw no build changes.

  It adds almost as many lines as it deletes - because I broke a few lines up for readability in the Makefile.

  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
  Cc: "H. Peter Anvin" <hpa@zytor.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: disable preemption in delay_tsc() (Andrew Morton, 2007-11-14; 2 files, -4/+10)

  Marin Mitov points out that delay_tsc() can misbehave if it is preempted and rescheduled on a different CPU which has a skewed TSC. Fix it by disabling preemption.

  (I assume that the worst-case behaviour here is a stall of 2^32 cycles)

  Cc: Andi Kleen <ak@suse.de>
  Cc: Marin Mitov <mitov@issp.bas.bg>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge ssh://master.kernel.org/pub/scm/linux/kernel/git/tglx/linux-2.6-x86 (Linus Torvalds, 2007-10-19; 2 files, -2/+3)

  * ssh://master.kernel.org/pub/scm/linux/kernel/git/tglx/linux-2.6-x86: (33 commits)
      x86: convert cpuinfo_x86 array to a per_cpu array
      x86: introduce frame_pointer() and stack_pointer()
      x86 & generic: change to __builtin_prefetch()
      i386: do not BUG_ON() when MSR is unknown
      x86: acpi use cpu_physical_id
      x86: convert cpu_llc_id to be a per cpu variable
      x86: convert cpu_to_apicid to be a per cpu variable
      i386: introduce "used_vectors" bitmap which can be used to reserve vectors.
      x86: use raw locks during oopses
      x86: honor _PAGE_PSE bit on page walks
      i386: do cpuid_device_create() in CPU_UP_PREPARE instead of CPU_ONLINE.
      x86: implement missing x86_64 function smp_call_function_mask()
      x86: use descriptor's functions instead of inline assembly
      i386: consolidate show_regs and show_registers for i386
      i386: make callgraph use dump_trace() on i386/x86_64
      x86: enable iommu_merge by default
      i386: i386 add AMD64 Barcelona PMU MSR definitions to msr.h
      x86: Unify i386 and x86-64 early quirks
      x86: enable HPET on ICH3 and ICH4
      x86: force enable HPET on VT8235/8237 chipsets
      ...

  Manually fix trivial conflict with task pid container helper changes in arch/x86/kernel/process_32.c
| * x86: convert cpuinfo_x86 array to a per_cpu array (Mike Travis, 2007-10-19; 2 files, -2/+3)

    cpu_data is currently an array defined using NR_CPUS. This means that we overallocate since we will rarely really use maximum configured cpus. When NR_CPU count is raised to 4096 the size of cpu_data becomes 3,145,728 bytes.

    These changes were adopted from the sparc64 (and ia64) code. An additional field was added to cpuinfo_x86 to be a non-ambiguous cpu index. This corresponds to the index into a cpumask_t as well as the per_cpu index. It's used in various places like show_cpuinfo().

    cpu_data is defined to be the boot_cpu_data structure for the NON-SMP case.

    Signed-off-by: Mike Travis <travis@sgi.com>
    Acked-by: Christoph Lameter <clameter@sgi.com>
    Cc: Andi Kleen <ak@suse.de>
    Cc: James Bottomley <James.Bottomley@steeleye.com>
    Cc: Dmitry Torokhov <dtor@mail.ru>
    Cc: "Antonino A. Daplas" <adaplas@pol.net>
    Cc: Mark M. Hoffman <mhoffman@lightlink.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
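    For readers unfamiliar with the per_cpu conversion being described, the general shape of such a change is sketched below. The structure, the variable name cpu_info and the accessor are illustrative assumptions, not necessarily the identifiers the patch itself uses:

      #include <linux/percpu.h>

      struct cpuinfo_demo {		/* stand-in for struct cpuinfo_x86 */
      	unsigned int cpu_index;	/* the unambiguous cpu index mentioned above */
      	/* ... */
      };

      /* Old style: statically sized for the configured maximum. */
      /* struct cpuinfo_demo cpu_data[NR_CPUS]; */

      /* New style: one instance per possible CPU, allocated per-cpu. */
      DEFINE_PER_CPU(struct cpuinfo_demo, cpu_info);

      static struct cpuinfo_demo *get_cpu_info(int cpu)
      {
      	return &per_cpu(cpu_info, cpu);	/* indexed by cpu, as cpu_data[cpu] was */
      }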
* | pid namespaces: define is_global_init() and is_container_init() (Serge E. Hallyn, 2007-10-19; 1 file, -1/+1)

  is_init() is an ambiguous name for the pid==1 check. Split it into is_global_init() and is_container_init().

  A cgroup init has its tsk->pid == 1. A global init also has its tsk->pid == 1 and its active pid namespace is the init_pid_ns. But rather than check the active pid namespace, compare the task structure with 'init_pid_ns.child_reaper', which is initialized during boot to the /sbin/init process and never changes.

  Changelog:

    2.6.22-rc4-mm2-pidns1:
    - Use 'init_pid_ns.child_reaper' to determine if a given task is the global init (/sbin/init) process. This would improve performance and remove dependence on the task_pid().

    2.6.21-mm2-pidns2:
    - [Sukadev Bhattiprolu] Changed is_container_init() calls in {powerpc, ppc, avr32}/traps.c for the _exception() call to is_global_init(). This way, we kill only the cgroup if the cgroup's init has a bug rather than force a kernel panic.

  [akpm@linux-foundation.org: fix comment]
  [sukadev@us.ibm.com: Use is_global_init() in arch/m32r/mm/fault.c]
  [bunk@stusta.de: kernel/pid.c: remove unused exports]
  [sukadev@us.ibm.com: Fix capability.c to work with threaded init]
  Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
  Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
  Acked-by: Pavel Emelianov <xemul@openvz.org>
  Cc: Eric W. Biederman <ebiederm@xmission.com>
  Cc: Cedric Le Goater <clg@fr.ibm.com>
  Cc: Dave Hansen <haveblue@us.ibm.com>
  Cc: Herbert Poetzel <herbert@13thfloor.at>
  Cc: Kirill Korotaev <dev@sw.ru>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* x86: rename .i assembler includes to .h (Adrian Bunk, 2007-10-17; 2 files, -3/+3)

  .i is an ending used for preprocessed stuff. This patch therefore renames assembler include files to .h and guards the contents with an #ifdef __ASSEMBLY__.

  [ tglx: arch/x86 adaptation ]

  Signed-off-by: Adrian Bunk <bunk@kernel.org>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* i386: Remove strrchr assembler implementation (Andi Kleen, 2007-10-17; 1 file, -20/+0)

  The constraints in the inline assembler implementation of i386 strrchr() were incorrect and break the build with recent gcc 4.3. Since there are only very few callers of strrchr() and none of them are performance relevant just remove the assembler implementation and use the C fallback instead.

  [ tglx: arch/x86 adaptation ]

  Cc: rguenther@suse.de
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
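  The C fallback referred to is the generic string routine; a straightforward implementation with equivalent behaviour (shown here as an illustration, not the exact lib/string.c code) is:

    #include <stddef.h>

    /* Return a pointer to the last occurrence of 'c' in 's', or NULL. */
    char *strrchr_fallback(const char *s, int c)
    {
    	const char *last = NULL;

    	do {
    		if (*s == (char)c)
    			last = s;
    	} while (*s++);

    	return (char *)last;
    }

  Plain C like this lets the compiler handle register allocation itself, which is precisely what the broken hand-written constraints failed to do.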
* i386: simplify smp_call_function_single() call sequence in msr-on-cpu (Avi Kivity, 2007-10-17; 1 file, -40/+22)

  smp_call_function_single() now knows how to call the function on the current cpu.

  [ tglx: arch/x86 adaptation ]

  Cc: H. Peter Anvin <hpa@zytor.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fix off-by-one in find_next_zero_string (Andrew Hastings, 2007-10-17; 1 file, -1/+1)

  Fix an off-by-one error in find_next_zero_string which prevents allocating the last bit.

  [ tglx: arch/x86 adaptation ]

  Signed-off-by: Andrew Hastings <abh@cray.com> on behalf of Cray Inc.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* lockdep: x86_64: connect the sysexit hook (Peter Zijlstra, 2007-10-11; 1 file, -0/+4)

  Run the lockdep_sys_exit hook after all other C code on the syscall return path.

  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fence oostores on 64-bit (Nick Piggin, 2007-10-12; 1 file, -0/+1)

  movnt* instructions are not strongly ordered with respect to other stores, so if we are to assume stores are strongly ordered in the rest of the 64 bit code, we must fence these off (see similar examples in 32 bit code).

  [ The AMD memory ordering document seems to say that nontemporal stores can also pass earlier regular stores, so maybe we need sfences _before_ movnt* everywhere too? ]

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
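  The fencing pattern being discussed looks roughly like this (an illustrative sketch in inline assembly, not the patched routine itself): non-temporal stores followed by an sfence, so that later regular stores cannot be observed ahead of them:

    /* Sketch only: write a buffer with non-temporal stores, then fence. */
    static void nt_copy_words(unsigned long *dst, const unsigned long *src,
    			  unsigned long words)
    {
    	unsigned long i;

    	for (i = 0; i < words; i++)
    		/* movnti bypasses the cache and is weakly ordered vs. other stores */
    		asm volatile("movnti %1, %0"
    			     : "=m" (dst[i])
    			     : "r" (src[i]));

    	/* Order the movnti stores before any subsequent regular stores. */
    	asm volatile("sfence" ::: "memory");
    }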
* x86_64: move lib (Thomas Gleixner, 2007-10-11; 22 files, -1/+2381)

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* i386: move lib (Thomas Gleixner, 2007-10-11; 14 files, -0/+2865)

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>