path: root/arch/x86/kernel/cpu/intel.c
Commit message | Author | Date | Files | Lines (-/+)
* Merge branch 'x86/ptrace' into x86/tsc | Ingo Molnar | 2008-12-23 | 1 | -2/+1
    Conflicts:
        arch/x86/kernel/cpu/intel.c
| * x86, bts: DS and BTS initialization | Markus Metzger | 2008-11-10 | 1 | -2/+1
    Impact: widen BTS/PEBS ptrace enablement to more CPU models

    Move BTS initialisation out of an #ifdef CONFIG_X86_64 guard. Assume
    core2 BTS and DS layout for future models of family 6 processors.

    Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | x86: fix intel x86_64 llc_shared_map/cpu_llc_id anomalies | Suresh Siddha | 2008-12-19 | 1 | -1/+7
    Impact: fix wrong cache-sharing detection on platforms supporting > 8-bit apicids

    In the presence of the extended topology enumeration leaf 0xb provided
    by cpuid, the 32-bit extended initial_apicid in the cpuinfo_x86 struct
    is updated by detect_extended_topology(). At this point we should also
    reinit the apicid (which could likewise be extended to 32 bits).
    Without this, duplicate apicids can end up in the per-CPU cpuinfo_x86
    structs, resulting in wrong cache-sharing topology etc. being detected
    by init_intel_cacheinfo().

    Reported-by: Dimitri Sivanich <sivanich@sgi.com>
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Acked-by: Dimitri Sivanich <sivanich@sgi.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Cc: <stable@kernel.org>
* | x86: support always running TSC on Intel CPUs | Venki Pallipadi | 2008-12-16 | 1 | -0/+10
    Impact: reward non-stop TSCs with good TSC-based clocksources, etc.

    Add support for CPUID_0x80000007_Bit8 on Intel CPUs as well. This bit
    means that the TSC is invariant with C/P/T states and always runs at
    constant frequency.

    With Intel CPUs, we have 3 classes:
    * CPUs where TSC runs at constant rate and does not stop in C-states
    * CPUs where TSC runs at constant rate, but will stop in deep C-states
    * CPUs where TSC rate will vary based on P/T-states and TSC will stop
      in deep C-states.

    To cover these 3, one feature bit (CONSTANT_TSC) is not enough. So add
    a second bit (NONSTOP_TSC). CONSTANT_TSC indicates that the TSC runs
    at constant frequency irrespective of P/T-states, and NONSTOP_TSC
    indicates that the TSC does not stop in deep C-states.
    CPUID_0x80000007_Bit8 indicates that both these feature bits can be
    set.

    We still have CONSTANT_TSC _set_ and NONSTOP_TSC _not_set_ on some
    older Intel CPUs, based on model checks. We can use the TSC on such
    CPUs for time, as long as those CPUs do not support/enter deep
    C-states.

    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
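A minimal userspace sketch of the CPUID check described above (GCC's <cpuid.h> helpers; illustrative only, not the kernel's detection code):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
                    puts("CPUID leaf 0x80000007 not available");
                    return 1;
            }
            /* EDX bit 8: TSC is invariant across C/P/T states, i.e. the
             * CONSTANT_TSC + NONSTOP_TSC combination described above. */
            printf("Invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
            return 0;
    }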
* x86: print out apic id in hex format | Yinghai Lu | 2008-10-16 | 1 | -1/+1
    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: extended "flags" to show virtualization HW feature in /proc/cpuinfoSheng Yang2008-09-101-0/+41
| | | | | | | | | | | | | | | The hardware virtualization technology evolves very fast. But currently it's hard to tell if your CPU support a certain kind of HW technology without digging into the source code. The patch add a new catagory in "flags" under /proc/cpuinfo. Now "flags" can indicate the (important) HW virtulization features the CPU supported as well. Current implementation just cover Intel VMX side. Signed-off-by: Sheng Yang <sheng.yang@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
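The basic presence bit behind the new "vmx" flag can be probed with CPUID; a hedged userspace sketch (the finer-grained VMX capability flags added by the patch come from the IA32_VMX_* MSRs, which are not readable from user space):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                    return 1;
            /* CPUID leaf 1, ECX bit 5: VMX (Intel VT-x) supported. */
            printf("vmx: %s\n", (ecx & (1u << 5)) ? "yes" : "no");
            return 0;
    }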
* x86: intel.c put workaround for old cpus together | Yinghai Lu | 2008-09-10 | 1 | -94/+98
    Consolidate the code some more. No change in functionality intended.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: make intel.c have 64-bit support code | Yinghai Lu | 2008-09-10 | 1 | -36/+83
    Prepare for unification.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Merge branch 'x86/pebs' into x86/unify-cpu-detect | Ingo Molnar | 2008-09-10 | 1 | -1/+2
    Conflicts:
        arch/x86/Kconfig.cpu
        include/asm-x86/ds.h

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * Merge branch 'linus' into x86/pebs | Ingo Molnar | 2008-07-25 | 1 | -0/+14
    Conflicts:
        arch/x86/Kconfig.cpu
        arch/x86/kernel/cpu/intel.c
        arch/x86/kernel/setup_64.c

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | x86, ptrace: PEBS support | Markus Metzger | 2008-05-12 | 1 | -1/+2
    Polish the ds.h interface and add support for PEBS.

    ds.c is meant to be the resource allocator for per-thread and per-cpu
    BTS and PEBS recording. It is used by ptrace/utrace to provide
    execution tracing of debugged tasks. It will be used by profilers
    (e.g. perfmon2). It may be used by kernel debuggers to provide a
    kernel execution trace.

    Changes in detail:
    - guard DS and ptrace by CONFIG macros
    - separate DS and BTS more clearly
    - simplify field accesses
    - add functions to manage PEBS buffers
    - add simple protection/allocation mechanism
    - add support for Atom

    Opens:
    - buffer overflow handling
      Currently, only circular buffers are supported. This is all we need
      for debugging. Profilers would want an overflow notification. This
      is planned to be added when perfmon2 is made to use the ds.h
      interface.
    - utrace intermediate layer

    Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* | | x86: little clean up of intel.c/intel_64.c | Yinghai Lu | 2008-09-08 | 1 | -4/+3
    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | Merge branch 'x86/x2apic' into x86/core | Ingo Molnar | 2008-09-05 | 1 | -3/+10
    Conflicts:
        arch/x86/kernel/cpu/common_64.c

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | | x86: use cpuid vector 0xb when available for detecting cpu topology | Suresh Siddha | 2008-08-23 | 1 | -3/+10
    cpuid leaf 0xb provides extended topology enumeration. This interface
    provides the 32-bit x2APIC id of the logical processor and it also
    provides a new mechanism to detect SMT and core siblings (which
    provides increased addressability).

    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
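A sketch of how leaf 0xb is walked (userspace illustration with GCC's <cpuid.h>; not the kernel's detect_extended_topology() code):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx, level;

            if (__get_cpuid_max(0, 0) < 0x0b)
                    return 1;       /* leaf 0xb not available */

            for (level = 0; ; level++) {
                    __cpuid_count(0x0b, level, eax, ebx, ecx, edx);
                    unsigned int type = (ecx >> 8) & 0xff; /* 1 = SMT, 2 = core */

                    if (!type)      /* level type 0 ends the enumeration */
                            break;
                    printf("level %u: type %u, APIC-ID shift %u, x2APIC id 0x%x\n",
                           level, type, eax & 0x1f, edx);
            }
            return 0;
    }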
* | | x86: remove cpu_vendor_dev | Yinghai Lu | 2008-09-04 | 1 | -1/+2
    1. add c_x86_vendor into cpu_dev
    2. change cpu_devs to static
    3. check c_x86_vendor before putting that cpu_dev into the array
    4. remove alignment for 64-bit
    5. order the sequence in cpu_devs according to link sequence... so
       Intel can be put first, then AMD...

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | x86: move cmpxchg fallbacks to a generic place | Thomas Petazzoni | 2008-08-18 | 1 | -64/+0
    arch/x86/kernel/cpu/intel.c defines a few fallback functions
    (cmpxchg_*()) that are used when the CPU doesn't support cmpxchg
    and/or cmpxchg64 natively. However, while defined in an Intel-specific
    file, these functions are also used for CPUs from other vendors when
    they don't support cmpxchg and/or cmpxchg64. This breaks the
    compilation when support for Intel CPUs is disabled.

    This patch moves these functions to a new
    arch/x86/kernel/cpu/cmpxchg.c file, unconditionally compiled when
    X86_32 is enabled.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Cc: michael@free-electrons.com
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | x86: make movsl_mask definition non-CPU specific | Thomas Petazzoni | 2008-08-18 | 1 | -7/+0
    movsl_mask is currently defined in arch/x86/kernel/cpu/intel.c, which
    contains code specific to Intel CPUs. However, movsl_mask is used in
    the non-CPU specific code in arch/x86/lib/usercopy_32.c, which breaks
    the compilation when support for Intel CPUs is compiled out.

    This patch solves this problem by moving movsl_mask's definition close
    to its users in arch/x86/lib/usercopy_32.c.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Cc: michael@free-electrons.com
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | x86: APIC: remove apic_write_around(); use alternatives | Maciej W. Rozycki | 2008-07-18 | 1 | -0/+10
    Use alternatives to select the workaround for the 11AP Pentium erratum
    for the affected steppings on the fly rather than at build time.
    Remove the X86_GOOD_APIC configuration option and replace all the
    calls to apic_write_around() with plain apic_write(), protecting
    accesses to the ESR as appropriate due to the 3AP Pentium erratum.
    Remove apic_read_around() and all its invocations altogether as not
    needed. Remove apic_write_atomic() and all its implementing backends.
    The use of ASM_OUTPUT2() is not strictly needed for input constraints,
    but I have used it for readability's sake.

    I had the feeling no one else was brave enough to do it, so I went
    ahead and here it is. Verified by checking the generated assembly and
    tested with both a 32-bit and a 64-bit configuration, also with the
    11AP "feature" forced on and verified with gdb on /proc/kcore to work
    as expected (as 11AP machines are quite hard to get hold of these
    days).

    Some script complained about the use of "volatile", but apic_write()
    needs it for the same reason and is effectively a replacement for
    writel(), so I have disregarded it.

    I am not sure what the policy wrt defconfig files is; they are
    generated and there is a risk of a conflict resulting from an
    unrelated change, so I have left changes to them out. The option will
    get removed from them at the next run.

    Some testing with machines other than mine will be needed to avoid
    some stupid mistake, but despite its volume, the change is not really
    that intrusive, so I am fairly confident that because it works for me,
    it will work everywhere.

    Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | x86: fix numaq_tsc_disable calling | Yinghai Lu | 2008-07-13 | 1 | -0/+4
    Got this on a test system:

        calling numaq_tsc_disable+0x0/0x39
        NUMAQ: disabling TSC
        initcall numaq_tsc_disable+0x0/0x39 returned 0 after 0 msecs

    That's because we should not be using arch_initcall to call
    numaq_tsc_disable. We need to call it in setup_arch before
    time_init()/tsc_init(), and call it in init_intel() to make the cpu
    feature bits right.

    Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: clean up cpu capabilities in arch/x86/kernel/cpu/intel.c | Ingo Molnar | 2008-04-17 | 1 | -7/+7
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: coding style fixes to arch/x86/kernel/cpu/intel.c | Paolo Ciarrocchi | 2008-04-17 | 1 | -40/+43
    Before: total: 37 errors, 16 warnings, 366 lines checked
    After:  total: 0 errors, 15 warnings, 369 lines checked

    No code changed:

    arch/x86/kernel/cpu/intel.o:
        text    data    bss    dec    hex    filename
        1534    452     0      1986   7c2    intel.o.before
        1534    452     0      1986   7c2    intel.o.after

    md5:
        1ca348a06de6eb354c4b6ea715a57db5  intel.o.before.asm
        1ca348a06de6eb354c4b6ea715a57db5  intel.o.after.asm

    Signed-off-by: Paolo Ciarrocchi <paolo.ciarrocchi@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: use ELF section to list CPU vendor specific code | Thomas Petazzoni | 2008-04-17 | 1 | -6/+3
    Replace the hardcoded list of initialization functions for each CPU
    vendor by a list in an ELF section, which is read at initialization in
    arch/x86/kernel/cpu/cpu.c to fill the cpu_devs[] array. The ELF
    section, named .x86cpuvendor.init, is reclaimed after boot, and
    contains entries of type "struct cpu_vendor_dev" which associate a
    vendor number with a pointer to a "struct cpu_dev" structure.

    This first modification allows removing all the VENDOR_init_cpu()
    functions.

    This patch also removes the hardcoded calls to early_init_amd() and
    early_init_intel(). Instead, we add a "c_early_init" member to the
    cpu_dev structure, which is then called, if not NULL, by the generic
    CPU initialization code. Unfortunately, in early_cpu_detect(),
    this_cpu is not yet set, so we have to use the cpu_devs[] array
    directly.

    This patch is part of the Linux Tiny project, and is needed for a
    further patch that will allow disabling compilation of unused CPU
    support code.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
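The underlying pattern is the usual "registration via a named ELF section" trick. A generic, self-contained sketch under assumed names (the struct, macro and section names below are illustrative, not the kernel's .x86cpuvendor.init definitions):

    #include <stdio.h>

    struct vendor_entry {
            int vendor;
            const char *name;
    };

    /* Drop one entry per vendor into a dedicated section at compile time. */
    #define REGISTER_VENDOR(id, vname)                                      \
            static const struct vendor_entry __entry_##vname                \
            __attribute__((used, section("vendor_init"))) = { id, #vname }

    REGISTER_VENDOR(0, intel);
    REGISTER_VENDOR(1, amd);

    /* GNU ld provides __start_/__stop_ symbols for sections whose name is
     * a valid C identifier, so the table can be walked at startup. */
    extern const struct vendor_entry __start_vendor_init[];
    extern const struct vendor_entry __stop_vendor_init[];

    int main(void)
    {
            const struct vendor_entry *e;

            for (e = __start_vendor_init; e < __stop_vendor_init; e++)
                    printf("vendor %d: %s\n", e->vendor, e->name);
            return 0;
    }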
* x86: add include to cpu/intel.c | Harvey Harrison | 2008-02-04 | 1 | -0/+1
    Fixes sparse warning:

        arch/x86/kernel/cpu/intel.c:48:15: warning: symbol
        'ppro_with_ram_bug' was not declared. Should it be static?

    Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: move MWAIT idle check to generic CPU initialization on 32-bit | Andi Kleen | 2008-01-30 | 1 | -1/+0
    Previously it was only run for Intel CPUs, but AMD Fam10h implements
    MWAIT too. This matches 64bit behaviour.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection | Andi Kleen | 2008-01-30 | 1 | -7/+6
    This is needed by the next patch in time_init(), which happens early.

    This includes a minor fix on i386 where early_intel_workarounds()
    [which is now called early_init_intel()] really executes early, as the
    comments say.

    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: lfence fix | Ingo Molnar | 2008-01-30 | 1 | -1/+1
    LFENCE is available on XMM2 or higher Intel CPUs - not XMM or
    higher... this caused boot failures on XMM1 & !XMM1 capable CPUs.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: Implement support to synchronize RDTSC with LFENCE on Intel CPUs | Andi Kleen | 2008-01-30 | 1 | -1/+2
    According to Intel, RDTSC can always be synchronized with LFENCE on
    all current CPUs. Implement the necessary CPUID bit for that. It is
    unclear yet if that is true for all future CPUs too, but if there's
    another way, the kernel can always be updated.

    Cc: asit.k.mallick@intel.com
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
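The ordering idiom this feature bit licenses, as an illustrative GCC inline-asm sketch (not the kernel's code):

    #include <stdint.h>
    #include <stdio.h>

    /* LFENCE keeps RDTSC from being executed ahead of earlier loads. */
    static inline uint64_t rdtsc_ordered(void)
    {
            uint32_t lo, hi;

            __asm__ __volatile__("lfence; rdtsc"
                                 : "=a" (lo), "=d" (hi)
                                 :
                                 : "memory");
            return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
            printf("tsc = %llu\n", (unsigned long long)rdtsc_ordered());
            return 0;
    }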
* x86, ptrace: support for branch trace store (BTS) | Markus Metzger | 2008-01-30 | 1 | -0/+5
    Resend using a different mail client.

    Changes to the last version:
    - split implementation into two layers: ds/bts and ptrace
    - renamed TIFs
    - save/restore ds save area msr in __switch_to_xtra()
    - make block-stepping only look at the BTF bit

    Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>
    Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86: fall back on interrupt disable in cmpxchg8b on 80386 and 80486 | Mathieu Desnoyers | 2008-01-30 | 1 | -0/+17
    Actually, on 386, cmpxchg and cmpxchg_local fall back on
    cmpxchg_386_u8/16/32: they disable interrupts around non-atomic
    updates to mimic the cmpxchg behavior. The comment:

        /* Poor man's cmpxchg for 386. Unsuitable for SMP */

    already present in cmpxchg_386_u32 tells much about how this cmpxchg
    implementation should not be used in an SMP context. However,
    cmpxchg_local can perfectly well use this fallback, since it only
    needs to be atomic wrt the local cpu.

    This patch adds a cmpxchg_486_u64 and uses it as a fallback for
    cmpxchg64 and cmpxchg64_local on 80386 and 80486.

    Q: But why is it called cmpxchg_486 when the other functions are
       called cmpxchg_386?
    A: Because the standard cmpxchg is missing only on 386, but cmpxchg8b
       is missing both on 386 and 486.

    Citing Intel's Instruction Set Reference:

        cmpxchg: This instruction is not supported on Intel processors
        earlier than the Intel486 processors.

        cmpxchg8b: This instruction encoding is not supported on Intel
        processors earlier than the Pentium processors.

    Q: What's the reason to have cmpxchg64_local on 32-bit architectures?
       Without that need all this would just be a few simple defines.
    A: cmpxchg64_local on 32-bit architectures takes unsigned long long
       parameters, but cmpxchg_local only takes longs. Since we have
       cmpxchg8b to execute an 8-byte cmpxchg atomically on Pentium and
       later, it makes sense to provide a flavor of cmpxchg and
       cmpxchg_local using this instruction.

       Also, for 32-bit architectures lacking the 64-bit atomic cmpxchg,
       it makes sense _not_ to define cmpxchg64 while cmpxchg could still
       be available. Moreover, the fallback for cmpxchg8b on i386 for 386
       and 486 is a

       However, cmpxchg64_local will be emulated by disabling interrupts
       on all architectures where it is not supported atomically.
       Therefore, we *could* turn cmpxchg64_local into a cmpxchg_local,
       but it would make the 386/486 fallbacks ugly, make its design
       different from cmpxchg/cmpxchg64 (which really depends on atomic
       operations and cannot be emulated) and require the __cmpxchg_local
       to be expressed as a macro rather than an inline function so the
       parameters would not be fixed to unsigned long long in every case.

       So I think cmpxchg64_local makes sense there, but I am open to
       suggestions.

    Q: Are there any callers?
    A: I am actually using it in LTTng in my timestamping code. I use it
       to work around CPUs with asynchronous TSCs. I need to update 64-bit
       values atomically on this 32-bit architecture.

    Changelog:
    - Ran through checkpatch.

    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
    Cc: Andi Kleen <ak@suse.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
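The shape of the fallback described above, as a kernel-style sketch modeled on this description (not necessarily the exact upstream cmpxchg_486_u64):

    #include <linux/irqflags.h>
    #include <linux/types.h>

    /* Poor man's cmpxchg8b for 386/486: UP-only, mimics atomicity by
     * keeping interrupts off across the read-compare-write sequence. */
    unsigned long long cmpxchg_486_u64(volatile u64 *ptr, u64 old, u64 new)
    {
            unsigned long flags;
            u64 prev;

            local_irq_save(flags);
            prev = *ptr;
            if (prev == old)
                    *ptr = new;
            local_irq_restore(flags);
            return prev;
    }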
* i386: fix section mismatch warning in intel.c | Sam Ravnborg | 2007-10-17 | 1 | -2/+15
    Fix the following section mismatch warning:

        WARNING: vmlinux.o(.text+0xc88c): Section mismatch: reference to
        .init.text:trap_init_f00f_bug (between 'init_intel' and
        'cpuid4_cache_lookup')

    init_intel is __cpuinit whereas trap_init_f00f_bug is __init. Fixed by
    declaring trap_init_f00f_bug __cpuinit. Moved the definition of
    trap_init_f00f_bug to the sole user in init.c so the ugly prototype in
    intel.c could be killed.

    Frank van Maarseveen <frankvm@frankvm.com> supplied the .config used
    to reproduce the warning.

    [ tglx: arch/x86 adaptation ]

    Cc: Frank van Maarseveen <frankvm@frankvm.com>
    Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
    Signed-off-by: Andi Kleen <ak@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* i386: move kernel/cpu | Thomas Gleixner | 2007-10-11 | 1 | -0/+333
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>