Commit message | Author | Age | Files | Lines
*   Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux  (Anton Vorontsov, 2012-12-11; 3228 files, -116575/+95747)

    The merge is merely to fix conflicts before sending a pull request.

    Conflicts:
        drivers/power/ab8500_btemp.c
        drivers/power/ab8500_charger.c
        drivers/power/ab8500_fg.c
        drivers/power/abx500_chargalg.c
        drivers/power/max8925_power.c

    Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
| *   Merge branch 'x86-timers-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 5 files, -24/+64)

    Pull x86 timer update from Ingo Molnar:
     "This tree includes HPET fixes and also implements a calibration-free,
      TSC match driven APIC timer interrupt mode: 'TSC deadline mode'
      supported in SandyBridge and later CPUs."

    * 'x86-timers-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86: hpet: Fix inverted return value check in arch_setup_hpet_msi()
      x86: hpet: Fix masking of MSI interrupts
      x86: apic: Use tsc deadline for oneshot when available
| | * x86: hpet: Fix inverted return value check in arch_setup_hpet_msi()  (Jan Beulich, 2012-11-02; 1 file, -2/+3)

    setup_hpet_msi_remapped() returns a negative error indicator on error -
    check for this rather than for a boolean false indication, and pass on
    that error code rather than a meaningless "-1".

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Link: http://lkml.kernel.org/r/5093E00D02000078000A60E2@nat28.tlf.novell.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
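A rough sketch of the corrected pattern, for illustration only: the wrapper function and its arguments are hypothetical, and only the negative-errno handling is taken from the commit message.

        /*
         * Illustrative only: treat the return value of
         * setup_hpet_msi_remapped() as a negative errno, not a boolean,
         * and propagate it instead of a hard-coded -1.
         */
        static int hpet_setup_msi_irq_sketch(int irq, unsigned int id)
        {
                int ret = setup_hpet_msi_remapped(irq, id);

                if (ret < 0)
                        return ret;

                return 0;
        }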
| | * x86: hpet: Fix masking of MSI interrupts  (Jan Beulich, 2012-11-02; 1 file, -2/+2)

    HPET_TN_FSB is not a proper mask bit; it merely toggles between MSI and
    legacy interrupt delivery. The proper mask bit is HPET_TN_ENABLE, so
    use both bits when (un)masking the interrupt.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/5093E09002000078000A60E6@nat28.tlf.novell.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
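A hedged sketch of what "use both bits" means in practice. hpet_readl()/hpet_writel() and HPET_Tn_CFG() are assumed to be the usual x86 HPET accessors; the function itself is illustrative rather than the actual patched code.

        /*
         * Sketch: (un)mask an HPET MSI timer by toggling HPET_TN_ENABLE
         * together with HPET_TN_FSB, as the commit message describes.
         */
        static void hpet_msi_set_masked(int timer, bool masked)
        {
                unsigned int cfg = hpet_readl(HPET_Tn_CFG(timer));

                if (masked)
                        cfg &= ~(HPET_TN_ENABLE | HPET_TN_FSB);
                else
                        cfg |= HPET_TN_ENABLE | HPET_TN_FSB;

                hpet_writel(cfg, HPET_Tn_CFG(timer));
        }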
| | * x86: apic: Use tsc deadline for oneshot when available  (Suresh Siddha, 2012-11-02; 3 files, -20/+59)

    If the TSC deadline mode is supported, LAPIC timer one-shot mode can be
    implemented using the IA32_TSC_DEADLINE MSR. An interrupt will be
    generated when the TSC value equals or exceeds the value in the
    IA32_TSC_DEADLINE MSR.

    This enables us to skip the APIC calibration during boot. Also, in
    xapic mode, this enables us to skip the uncached apic access to re-arm
    the APIC timer.

    As this timer ticks at the high-frequency TSC rate, we use the
    TSC_DIVISOR (32) to work within the 32-bit restrictions in the
    clockevent APIs and avoid 64-bit divides etc. (frequency is u32, delta
    is "unsigned long" in set_next_event(), and max_delta limits the next
    event to 32 bits on a 32-bit kernel).

    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Cc: venki@google.com
    Cc: len.brown@intel.com
    Link: http://lkml.kernel.org/r/1350941878.6017.31.camel@sbsiddha-desk.sc.intel.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
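The mechanism boils down to one MSR write per event. Below is a sketch of a set_next_event()-style handler under the TSC_DIVISOR scheme described above; the handler name and surrounding details follow common kernel conventions and are assumptions here, not quotes from the patch.

        #define TSC_DIVISOR     32      /* divisor named in the commit message */

        /* Sketch: arm the local APIC timer via the TSC deadline MSR. */
        static int lapic_next_deadline(unsigned long delta,
                                       struct clock_event_device *evt)
        {
                u64 tsc;

                rdtscll(tsc);                   /* current TSC value */
                wrmsrl(MSR_IA32_TSC_DEADLINE,
                       tsc + ((u64) delta) * TSC_DIVISOR);
                return 0;
        }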
| * |   Merge branch 'x86-nuke386-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 24 files, -425/+56)

    Pull "Nuke 386-DX/SX support" from Ingo Molnar:
     "This tree removes ancient-386-CPUs support and thus zaps quite a bit
      of complexity:

        24 files changed, 56 insertions(+), 425 deletions(-)

      ... which complexity has plagued us with extra work whenever we
      wanted to change SMP primitives, for years.

      Unfortunately there's a nostalgic cost: your old original 386 DX33
      system from early 1991 won't be able to boot modern Linux kernels
      anymore. Sniff."

    I'm not sentimental. Good riddance.

    * 'x86-nuke386-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, 386 removal: Document Nx586 as a 386 and thus unsupported
      x86, cleanups: Simplify sync_core() in the case of no CPUID
      x86, 386 removal: Remove CONFIG_X86_POPAD_OK
      x86, 386 removal: Remove CONFIG_X86_WP_WORKS_OK
      x86, 386 removal: Remove CONFIG_INVLPG
      x86, 386 removal: Remove CONFIG_BSWAP
      x86, 386 removal: Remove CONFIG_XADD
      x86, 386 removal: Remove CONFIG_CMPXCHG
      x86, 386 removal: Remove CONFIG_M386 from Kconfig
| | * | x86, 386 removal: Document Nx586 as a 386 and thus unsupported  (H. Peter Anvin, 2012-11-29; 1 file, -1/+1)

    Per Alan Cox, the Nx586 did not support WP in supervisor mode, making
    it a 386 by Linux kernel standards. As such, it, too, is now
    unsupported.

    Reported-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
    Link: http://lkml.kernel.org/r/20121128205203.05868eab@pyramind.ukuu.org.uk
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | x86, cleanups: Simplify sync_core() in the case of no CPUID  (H. Peter Anvin, 2012-11-29; 1 file, -10/+21)

    Simplify the implementation of sync_core() for the case where we may
    not have the CPUID instruction available.

    [ v2: stylistic cleanup of the #else clause per suggestion by
      Borislav Petkov. ]

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-9-git-send-email-hpa@linux.intel.com
    Cc: Borislav Petkov <bp@alien8.de>
| | * | x86, 386 removal: Remove CONFIG_X86_POPAD_OK  (H. Peter Anvin, 2012-11-29; 2 files, -32/+0)

    The check_popad() routine tested for a 386-specific bug, and never
    actually did anything useful with it anyway other than print a message.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-8-git-send-email-hpa@linux.intel.com
| | * | x86, 386 removal: Remove CONFIG_X86_WP_WORKS_OK  (H. Peter Anvin, 2012-11-29; 4 files, -106/+1)

    All 486+ CPUs support WP in supervisor mode, so remove the fallback
    386 support code.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-7-git-send-email-hpa@linux.intel.com
| | * | x86, 386 removal: Remove CONFIG_INVLPG  (H. Peter Anvin, 2012-11-29; 6 files, -25/+3)

    All 486+ CPUs support INVLPG, so remove the fallback 386 support code.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-6-git-send-email-hpa@linux.intel.com
| | * | x86, 386 removal: Remove CONFIG_BSWAP  (H. Peter Anvin, 2012-11-29; 4 files, -54/+4)

    All 486+ CPUs support BSWAP, so remove the fallback 386 support code.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-5-git-send-email-hpa@linux.intel.com
| | * | x86, 386 removal: Remove CONFIG_XADD  (H. Peter Anvin, 2012-11-29; 5 files, -42/+2)

    All 486+ CPUs support XADD, so remove the fallback 386 support code.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-4-git-send-email-hpa@linux.intel.com
| | * | x86, 386 removal: Remove CONFIG_CMPXCHG  (H. Peter Anvin, 2012-11-29; 6 files, -117/+1)

    All 486+ CPUs support CMPXCHG, so remove the fallback 386 support code.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-3-git-send-email-hpa@linux.intel.com
| | * | x86, 386 removal: Remove CONFIG_M386 from Kconfig  (H. Peter Anvin, 2012-11-29; 5 files, -42/+27)

    Remove the CONFIG_M386 symbol from Kconfig so that it cannot be
    selected.

    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    Link: http://lkml.kernel.org/r/1354132230-21854-2-git-send-email-hpa@linux.intel.com
| * | |   Merge branch 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 5 files, -29/+60)

    Pull x86 topology discovery improvements from Ingo Molnar:
     "These changes improve topology discovery on AMD CPUs.

      Right now this feeds information displayed in
      /sys/devices/system/cpu/cpuX/cache/indexY/* - but in the future we
      could use this to set up a better scheduling topology."

    * 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, cacheinfo: Base cache sharing info on CPUID 0x8000001d on AMD
      x86, cacheinfo: Make use of CPUID 0x8000001d for cache information on AMD
      x86, cacheinfo: Determine number of cache leafs using CPUID 0x8000001d on AMD
      x86: Add cpu_has_topoext
| | * | | x86, cacheinfo: Base cache sharing info on CPUID 0x8000001d on AMD  (Andreas Herrmann, 2012-11-13; 1 file, -14/+27)

    The patch is based on a patch submitted by Hans Rosenfeld. See
    http://marc.info/?l=linux-kernel&m=133908777200931

    Note that CPUID Fn8000_001D_EAX slightly differs from Intel's CPUID
    function 4. Bits 14-25 contain NumSharingCache: the actual number of
    cores sharing this cache, with software adding one to the field value
    to get the result. The corresponding bits on Intel are defined as
    "maximum number of threads sharing this cache" (with a "plus 1"
    encoding). Thus a different method to determine which cores are
    sharing a cache level has to be used.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20121019090209.GG26718@alberich
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
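A short sketch of the AMD-specific decoding described above; the helper name is illustrative, while cpuid_count() is the standard kernel CPUID accessor.

        /*
         * Sketch: number of cores sharing a given cache level on AMD,
         * from CPUID Fn8000_001D_EAX. Bits 25:14 hold NumSharingCache,
         * encoded as "count minus one", so software adds one.
         */
        static unsigned int amd_num_sharing_cache(unsigned int index)
        {
                unsigned int eax, ebx, ecx, edx;

                cpuid_count(0x8000001d, index, &eax, &ebx, &ecx, &edx);
                return ((eax >> 14) & 0xfff) + 1;
        }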
| | * | | x86, cacheinfo: Make use of CPUID 0x8000001d for cache information on AMD  (Andreas Herrmann, 2012-11-13; 1 file, -1/+5)

    Rely on CPUID 0x8000001d for cache information when AMD CPUID topology
    extensions are available.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20121019090049.GF26718@alberich
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | x86, cacheinfo: Determine number of cache leafs using CPUID 0x8000001d on AMD  (Andreas Herrmann, 2012-11-13; 3 files, -12/+25)

    CPUID 0x8000001d works quite similarly to Intel's CPUID function 4.
    Use it to determine the number of cache leafs.

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20121019085933.GE26718@alberich
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | x86: Add cpu_has_topoext  (Andreas Herrmann, 2012-11-13; 3 files, -2/+3)

    Introduce cpu_has_topoext to check for AMD's CPUID topology extensions
    support. It indicates support for
    CPUID Fn8000_001D_EAX_x[N:0] - CPUID Fn8000_001E_EDX.

    See AMD's CPUID Specification, Publication # 25481 (as of Rev. 2.34,
    September 2010).

    Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
    Link: http://lkml.kernel.org/r/20121019085813.GD26718@alberich
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | |   Merge branch 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 3 files, -48/+23)

    Pull x86 cleanups from Ingo Molnar:
     "Small cleanups."

    * 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86: Fix the error of using "const" in gen-insn-attr-x86.awk
      x86, apic: Cleanup cfg->domain setup for legacy interrupts
      x86: Remove dead hlt_use_halt code
| | * | | | x86: Fix the error of using "const" in gen-insn-attr-x86.awk  (Cong Ding, 2012-12-10; 1 file, -3/+3)

    The original code causes the following sparse warnings:

      arch/x86/lib/inat-tables.c:1080:25: warning: duplicate const
      arch/x86/lib/inat-tables.c:1095:25: warning: duplicate const
      arch/x86/lib/inat-tables.c:1118:25: warning: duplicate const

    for the variables inat_escape_tables, inat_group_tables, and
    inat_avx_tables in the code generated by gen-insn-attr-x86.awk.

    The author, Masami Hiramatsu, says the intent here is to make both the
    values pointed to by the pointers and the pointers themselves
    read-only, so we move the "const" to be after the "*".

    Signed-off-by: Cong Ding <dinggnu@gmail.com>
    Link: http://lkml.kernel.org/r/20121209082103.GA9181@gmail.com
    Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
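The distinction is easiest to see on a small declaration. This is a generic illustration, not the generated inat table code itself.

        typedef unsigned int insn_attr_t;    /* stand-in for the real typedef */

        /*
         * Before: both qualifiers bind to the pointed-to type ("duplicate
         * const" in sparse); the pointer itself is still writable.
         */
        const insn_attr_t const *escape_table_before;

        /*
         * After: with const moved behind the '*', both the pointed-to data
         * and the pointer are read-only, which is what the generator wants.
         */
        const insn_attr_t *const escape_table_after;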
| | * | | | x86, apic: Cleanup cfg->domain setup for legacy interrupts  (Suresh Siddha, 2012-11-26; 1 file, -20/+6)

    Issues that need to be handled:

     * Handle PIC interrupts on any CPU irrespective of the apic mode.

     * In the apic lowest priority logical flat delivery mode, be prepared
       to handle the interrupt on any CPU irrespective of what the IO-APIC
       RTE says.

     * Because of the above, when the IO-APIC starts handling the legacy
       PIC interrupt, use the same vector that is being used by the PIC
       while programming the corresponding IO-APIC RTE.

    Start with all the CPUs in the legacy PIC interrupt's cfg->domain. By
    the time the IO-APIC starts taking over the PIC interrupts, the apic
    driver model is finalized, so depend on assign_irq_vector() to update
    cfg->domain and retain the same vector that was used by the PIC before.

    For the logical apic flat mode, cfg->domain is updated (during the
    first call to assign_irq_vector()) to contain all the possible online
    CPUs (0xff). The vector used for the legacy PIC interrupt doesn't
    change when the IO-APIC starts handling the interrupt. Any interrupt
    migration after that doesn't change cfg->domain or the vector used.

    For other apic modes like physical mode, cfg->domain is updated
    (during the first call to assign_irq_vector()) to the boot cpu
    (cpu-0), with the same vector that is being used by the PIC. When that
    interrupt is migrated to a different cpu, cfg->domain and the vector
    assigned will change accordingly.

    Tested-by: Borislav Petkov <bp@alien8.de>
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Link: http://lkml.kernel.org/r/1353970176.21070.51.camel@sbsiddha-desk.sc.intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | x86: Remove dead hlt_use_halt code  (Daniel Lezcano, 2012-10-26; 1 file, -25/+14)

    The hlt_use_halt function always returns true and there is only one
    definition of it. The default_idle function can then get rid of the
    if ... statement and we can remove the else branch.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Cc: linaro-dev@lists.linaro.org
    Cc: patches@linaro.org
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/1351181591-8710-1-git-send-email-daniel.lezcano@linaro.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | |   Merge branch 'x86-bsp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 15 files, -41/+436)

    Pull x86 BSP hotplug changes from Ingo Molnar:
     "This tree enables CPU#0 (the boot processor) to be onlined/offlined
      on x86, just like any other CPU. Enabled on Intel CPUs for now.

      Allowing this required the identification and fixing of latent CPU#0
      assumptions (such as CPU#0 initializations, etc.) in the x86
      architecture code, plus the identification of barriers to
      BSP-offlining, such as active PIC interrupts which can only be
      serviced on the BSP.

      It's behind a default-off option, and there's a debug option that
      allows the automatic testing of this feature.

      The motivation of this feature is to allow and prepare for true
      CPU-hotplug hardware support: recent changes to MCE support enable
      us to detect a deteriorating but not yet hard-failing L1/L2 cache on
      a CPU that could be soft-unplugged - or a failing L3 cache on a
      multi-socket system.

      Note that true hardware hot-plug is not yet fully enabled by this,
      because that requires a special platform wakeup sequence to be sent
      to the freshly powered up CPU#0. Future patches for this are
      planned, once such a platform exists. Chicken and egg"

    * 'x86-bsp-hotplug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, topology: Debug CPU0 hotplug
      x86/i387.c: Initialize thread xstate only on CPU0 only once
      x86, hotplug: Handle retrigger irq by the first available CPU
      x86, hotplug: The first online processor saves the MTRR state
      x86, hotplug: During CPU0 online, enable x2apic, set_numa_node.
      x86, hotplug: Wake up CPU0 via NMI instead of INIT, SIPI, SIPI
      x86-32, hotplug: Add start_cpu0() entry point to head_32.S
      x86-64, hotplug: Add start_cpu0() entry point to head_64.S
      kernel/cpu.c: Add comment for priority in cpu_hotplug_pm_callback
      x86, hotplug, suspend: Online CPU0 for suspend or hibernate
      x86, hotplug: Support functions for CPU0 online/offline
      x86, topology: Don't offline CPU0 if any PIC irq can not be migrated out of it
      x86, Kconfig: Add config switch for CPU0 hotplug
      doc: Add x86 CPU0 online/offline feature
| | * | | | | x86, topology: Debug CPU0 hotplug  (Fenghua Yu, 2012-11-14; 4 files, -0/+107)

    CONFIG_DEBUG_HOTPLUG_CPU0 is for debugging the CPU0 hotplug feature.
    The switch offlines CPU0 as soon as possible and boots userspace up
    with CPU0 offlined. The user can online CPU0 back after boot time. The
    default value of the switch is off.

    To debug CPU0 hotplug, you need to enable the CPU0 offline/online
    feature by either turning on CONFIG_BOOTPARAM_HOTPLUG_CPU0 during
    compilation or giving the cpu0_hotplug kernel parameter at boot.

    This is a safe and early place to take down CPU0, after all hotplug
    notifiers are installed and SMP is booted.

    Please note that some applications or drivers, e.g. some versions of
    udevd, may put CPU0 online again during boot time in this CPU0 hotplug
    debug mode.

    In this debug mode, setup_local_APIC() may report a warning on
    max_loops<=0 when CPU0 is onlined back after boot time. This is
    because a pending interrupt in the IRR can not move to the ISR. The
    warning is not CPU0-specific and it can happen on other CPUs as well.
    It is harmless except that the first CPU0 online takes a bit longer.
    And so this debug mode is useful to expose this issue. I'll send a
    separate patch to fix this generic warning issue.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-15-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86/i387.c: Initialize thread xstate only on CPU0 only once  (Fenghua Yu, 2012-11-14; 1 file, -1/+5)

    init_thread_xstate() is only called once to avoid overriding
    xstate_size during boot time or during CPU hotplug.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-14-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86, hotplug: Handle retrigger irq by the first available CPU  (Fenghua Yu, 2012-11-14; 1 file, -1/+3)

    The first cpu in the irq cfg->domain is likely to be CPU 0 and may not
    be available when CPU 0 is offline. Instead of using CPU 0 to handle
    the retriggered irq, we use the first available CPU which is online
    and in this irq's domain.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-13-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
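A one-liner captures the selection policy described above; cfg->domain is the irq's CPU mask and cpumask_first_and() is the standard helper. The wrapper function is illustrative, not the patched code.

        /*
         * Sketch: pick the CPU to handle a retriggered irq - the first CPU
         * that is both online and in this irq's domain, instead of blindly
         * taking the first CPU in the domain (often CPU 0).
         */
        static unsigned int retrigger_target_cpu(const struct cpumask *domain)
        {
                return cpumask_first_and(domain, cpu_online_mask);
        }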
| | * | | | | x86, hotplug: The first online processor saves the MTRR state  (Fenghua Yu, 2012-11-14; 1 file, -2/+7)

    Ask the first online CPU to save the MTRR state instead of asking the
    BSP. The BSP could be offline when mtrr_save_state() is called.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-12-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86, hotplug: During CPU0 online, enable x2apic, set_numa_node.  (Fenghua Yu, 2012-11-14; 1 file, -3/+2)

    Previously these functions were not run on the BSP (CPU 0, the boot
    processor) since the boot processor init would only be executed before
    this functionality was initialized.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-11-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86, hotplug: Wake up CPU0 via NMI instead of INIT, SIPI, SIPI  (Fenghua Yu, 2012-11-14; 2 files, -7/+105)

    Instead of waiting for STARTUP after INITs, the BSP will execute the
    BIOS boot-strap code, which is not a desired behavior for waking up
    the BSP. To avoid the boot-strap code, wake up CPU0 by NMI instead.

    This only works to wake up a soft-offlined CPU0. If CPU0 is hard
    offlined (i.e. physically hot removed and then hot added), NMI won't
    wake it up. We'll change this code in the future to wake up a hard
    offlined CPU0 if a real platform and request are available.

    APs are still woken up as before by the INIT, SIPI, SIPI sequence.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352896613-25957-1-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
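For the soft-offline case the wakeup reduces to sending CPU0 an NMI; below is a hedged sketch using the usual apic callback (the wrapper is illustrative, not the patch itself).

        /*
         * Sketch: wake a soft-offlined CPU0 with an NMI rather than the
         * INIT, SIPI, SIPI sequence, so the BSP does not re-enter the
         * BIOS boot-strap code.
         */
        static void wakeup_cpu0_via_nmi(void)
        {
                apic->send_IPI_mask(cpumask_of(0), NMI_VECTOR);
        }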
| | * | | | | x86-32, hotplug: Add start_cpu0() entry point to head_32.S  (Fenghua Yu, 2012-11-14; 1 file, -0/+13)

    start_cpu0() is defined in head_32.S for 32-bit. The function sets up
    the stack and jumps to start_secondary() for CPU0 wake up.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-9-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86-64, hotplug: Add start_cpu0() entry point to head_64.S  (Fenghua Yu, 2012-11-14; 1 file, -0/+16)

    start_cpu0() is defined in head_64.S for 64-bit. The function sets up
    the stack and jumps to start_secondary() for CPU0 wake up.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-8-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | kernel/cpu.c: Add comment for priority in cpu_hotplug_pm_callback  (Fenghua Yu, 2012-11-14; 1 file, -0/+5)

    cpu_hotplug_pm_callback should have a higher priority than
    bsp_pm_callback, which depends on cpu_hotplug_pm_callback to disable
    cpu hotplug and avoid a race during bsp online checking.

    This is to highlight the priority relationship between the two
    callbacks in case people overlook the order. Ideally the priorities
    should be defined in a macro/enum instead of fixed values. To do that,
    a separate patchset may be pushed which will touch several other
    generic files and is out of scope for this patchset.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-7-git-send-email-fenghua.yu@intel.com
    Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86, hotplug, suspend: Online CPU0 for suspend or hibernate  (Fenghua Yu, 2012-11-14; 1 file, -0/+44)

    Because the x86 BIOS requires CPU0 to resume from sleep, suspend or
    hibernate can't be executed if CPU0 is detected offline. To make
    suspend or hibernate and further resume succeed, CPU0 must be online.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-6-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86, hotplug: Support functions for CPU0 online/offline  (Fenghua Yu, 2012-11-14; 2 files, -20/+19)

    Add smp_store_boot_cpu_info() to store cpu info for the BSP during
    boot time. Now smp_store_cpu_info() stores cpu info for bringing up
    the BSP or an AP after it's offline.

    Continue to online CPU0 in native_cpu_up().

    Continue to offline CPU0 in native_cpu_disable().

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-5-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86, topology: Don't offline CPU0 if any PIC irq can not be migrated out of it  (Fenghua Yu, 2012-11-14; 1 file, -7/+43)

    If CONFIG_BOOTPARAM_HOTPLUG_CPU0 is turned on, the CPU0 hotplug
    feature is enabled by default. If it is not turned on, the CPU0
    hotplug feature is not enabled by default, but the kernel parameter
    cpu0_hotplug can enable it at boot.

    Currently the feature is supported on Intel platforms only.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-4-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | x86, Kconfig: Add config switch for CPU0 hotplug  (Fenghua Yu, 2012-11-14; 1 file, -0/+29)

    The new config switch CONFIG_BOOTPARAM_HOTPLUG_CPU0 sets the default
    state of whether CPU0 hotplug is on or off.

    If the switch is off, CPU0 is not hotpluggable by default, but the
    CPU0 hotplug feature can still be turned on by the kernel parameter
    cpu0_hotplug at boot.

    If the switch is on, CPU0 is always hotpluggable.

    The default value of the switch is off.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-3-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| | * | | | | doc: Add x86 CPU0 online/offline feature  (Fenghua Yu, 2012-11-14; 2 files, -0/+38)

    If CONFIG_BOOTPARAM_HOTPLUG_CPU0 is turned on, CPU0 is hotpluggable.
    Otherwise, by default CPU0 is not hotpluggable and the kernel
    parameter cpu0_hotplug enables the CPU0 online/offline feature.

    The documentation points out two known CPU0 dependencies. First,
    resume from hibernate or suspend always starts from CPU0, so hibernate
    and suspend are prevented if CPU0 is offline. The other dependency is
    that PIC interrupts always go to CPU0.

    It's said that some machines may depend on CPU0 to poweroff/reboot,
    but I haven't seen such a dependency on a few tested machines. Please
    let me know if you see any CPU0 dependencies on your machine.

    Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
    Link: http://lkml.kernel.org/r/1352835171-3958-2-git-send-email-fenghua.yu@intel.com
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
| * | | | |   Merge branch 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 2 files, -3/+3)

    Pull x86 boot changes from Ingo Molnar:
     "Two small changes: a cleanup and allow CONFIG_X86_MPPARSE to be
      turned off on SFI as well."

    * 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      arch/x86/Kconfig: Allow turning off CONFIG_X86_MPPARSE when either ACPI or SFI is present
      x86/boot/doc: Fix grammar and typo in boot.txt
| | * | | | | arch/x86/Kconfig: Allow turning off CONFIG_X86_MPPARSE when either ACPI or SFI is present  (Bin Gao, 2012-10-26; 1 file, -1/+1)

    MPS tables are not needed for systems that have proper ACPI support.
    This is also true for systems that have SFI in place. So this patch
    allows the configuration (turning off) of CONFIG_X86_MPPARSE when
    either ACPI or SFI is present.

    Signed-off-by: Bin Gao <bin.gao@intel.com>
    Cc: Len Brown <len.brown@intel.com>
    Cc: Bin Gao <bin.gao@linux.intel.com>
    Link: http://lkml.kernel.org/r/20121025163544.GB38477@bingao-desk1.fm.intel.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| | * | | | | x86/boot/doc: Fix grammar and typo in boot.txt  (Kees Cook, 2012-10-26; 1 file, -2/+2)

    Fixes some minor issues in the x86 boot documentation.

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Cc: Rob Landley <rob@landley.net>
    Link: http://lkml.kernel.org/r/20121026031702.GA23828@www.outflux.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | |   Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 3 files, -75/+95)

    Pull x86 asm changes from Ingo Molnar:
     "Two fixlets and a cleanup."

    * 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86_32: Return actual stack when requesting sp from regs
      x86: Don't clobber top of pt_regs in nested NMI
      x86/asm: Clean up copy_page_*() comments and code
| | * | | | | x86_32: Return actual stack when requesting sp from regs  (Steven Rostedt, 2012-11-19; 1 file, -0/+9)

    As x86_32 traps do not save sp when taken in kernel mode, we need to
    accommodate the sp when requesting to get the register. This affects
    kprobes.

    Before:

      # echo 'p:ftrace sys_read+4 s=%sp' > /debug/tracing/kprobe_events
      # echo 1 > /debug/tracing/events/kprobes/enable
      # cat trace
        sshd-1345  [000] d...  489.117168: ftrace: (sys_read+0x4/0x70) s=b7e96768
        sshd-1345  [000] d...  489.117191: ftrace: (sys_read+0x4/0x70) s=b7e96768
        cat-1447   [000] d...  489.117392: ftrace: (sys_read+0x4/0x70) s=5a7
        cat-1447   [001] d...  489.118023: ftrace: (sys_read+0x4/0x70) s=b77ad05f
        less-1448  [000] d...  489.118079: ftrace: (sys_read+0x4/0x70) s=b7762e06
        less-1448  [000] d...  489.118117: ftrace: (sys_read+0x4/0x70) s=b7764970

    After:

        sshd-1352  [000] d...  362.348016: ftrace: (sys_read+0x4/0x70) s=f3febfa8
        sshd-1352  [000] d...  362.348048: ftrace: (sys_read+0x4/0x70) s=f3febfa8
        bash-1355  [001] d...  362.348081: ftrace: (sys_read+0x4/0x70) s=f5075fa8
        sshd-1352  [000] d...  362.348082: ftrace: (sys_read+0x4/0x70) s=f3febfa8
        sshd-1352  [000] d...  362.690950: ftrace: (sys_read+0x4/0x70) s=f3febfa8
        bash-1355  [001] d...  362.691033: ftrace: (sys_read+0x4/0x70) s=f5075fa8

    Link: http://lkml.kernel.org/r/1342208654.30075.22.camel@gandalf.stny.rr.com
    Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
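The gist of the fix, sketched: for a kernel-mode trap on 32-bit there is no real saved sp, so a request for sp has to fall back to the actual trap-time stack. kernel_stack_pointer() is the existing helper that computes it from the pt_regs frame; the wrapper function and the exact mode check below are assumptions, not the patched code.

        /* Sketch: return a usable stack pointer for register queries. */
        static unsigned long sp_from_regs(struct pt_regs *regs)
        {
        #ifdef CONFIG_X86_32
                /* kernel-mode traps do not push sp/ss on 32-bit */
                if (!user_mode(regs))
                        return kernel_stack_pointer(regs);
        #endif
                return regs->sp;
        }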
| | * | | | | x86: Don't clobber top of pt_regs in nested NMI  (Salman Qazi, 2012-11-02; 1 file, -14/+27)

    A nested NMI modifies the place (instruction, flags and stack) that
    the first NMI will iret to. However, the copy of registers modified is
    exactly the one that is part of the pt_regs in the first NMI. This can
    change the behaviour of the first NMI.

    In particular, Google's arch_trigger_all_cpu_backtrace handler also
    prints regions of memory surrounding addresses appearing in registers.
    This results in handled exceptions, after which nested NMIs start
    coming in. These nested NMIs change the value of registers in pt_regs.
    This can cause the original NMI handler to produce incorrect output.

    We solve this problem by interchanging the position of the preserved
    copy of the iret registers ("saved") and the copy subject to being
    trampled by nested NMI ("copied").

    Link: http://lkml.kernel.org/r/20121002002919.27236.14388.stgit@dungbeetle.mtv.corp.google.com
    Signed-off-by: Salman Qazi <sqazi@google.com>
    [ Added a needed CFI_ADJUST_CFA_OFFSET ]
    Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
| | * | | | | x86/asm: Clean up copy_page_*() comments and code  (Ma Ling, 2012-10-24; 1 file, -61/+59)

    Modern CPUs use fast-string instructions to accelerate copy
    performance, by combining data into 128-bit chunks.

    Modify comments and coding style to match it.

    Signed-off-by: Ma Ling <ling.ma@intel.com>
    Cc: iant@google.com
    Link: http://lkml.kernel.org/r/1350503565-19167-1-git-send-email-ling.ma@intel.com
    [ Cleaned up the clean up. ]
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | | | |   Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2012-12-11; 9 files, -106/+84)

    Pull core timer changes from Ingo Molnar:
     "It contains continued generic-NOHZ work by Frederic and smaller
      cleanups."

    * 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      time: Kill xtime_lock, replacing it with jiffies_lock
      clocksource: arm_generic: use this_cpu_ptr per-cpu helper
      clocksource: arm_generic: use integer math helpers
      time/jiffies: Make clocksource_jiffies static
      clocksource: clean up parse_pmtmr()
      tick: Correct the comments for tick_sched_timer()
      tick: Conditionally build nohz specific code in tick handler
      tick: Consolidate tick handling for high and low res handlers
      tick: Consolidate timekeeping handling code
| | *   Merge branch 'fortglx/3.8/time' of git://git.linaro.org/people/jstultz/linux into timers/core  (Thomas Gleixner, 2012-11-21; 739 files, -5944/+8888)

    Fix trivial conflicts in:
        kernel/time/tick-sched.c

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| | | * | | | | | time: Kill xtime_lock, replacing it with jiffies_lock  (John Stultz, 2012-11-13; 7 files, -31/+25)

    Now that timekeeping is protected by its own locks, rename xtime_lock
    to jiffies_lock to better describe what it protects.

    CC: Thomas Gleixner <tglx@linutronix.de>
    CC: Eric Dumazet <eric.dumazet@gmail.com>
    CC: Richard Cochran <richardcochran@gmail.com>
    Signed-off-by: John Stultz <john.stultz@linaro.org>
| | | * | | | | | clocksource: arm_generic: use this_cpu_ptr per-cpu helper  (Shan Wei, 2012-11-13; 1 file, -1/+1)

    Utilize this_cpu_ptr() instead of per_cpu_ptr(..., smp_processor_id()).

    Signed-off-by: Shan Wei <davidshan@tencent.com>
    Reviewed-by: Christoph Lameter <cl@linux.com>
    Signed-off-by: John Stultz <john.stultz@linaro.org>
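The change pattern in miniature; the per-cpu variable name below is illustrative, not the driver's actual symbol.

        DEFINE_PER_CPU(struct clock_event_device, example_evt);

        static void example(void)
        {
                /* before: explicit lookup of the running CPU */
                struct clock_event_device *evt =
                        per_cpu_ptr(&example_evt, smp_processor_id());

                /* after: this_cpu_ptr() does the same thing directly */
                evt = this_cpu_ptr(&example_evt);
                (void)evt;
        }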