| Commit message | Author | Age | Files | Lines |
|
In some rare cases, NMIs are generated immediately after the NMI
handler of the CPU has been started. This can cause the counters not
to be enabled. Before enabling the NMI handlers we need to set the
variable ctr_running first and make sure its value is written to
memory.
The patch also turns all existing barriers into memory barriers
instead of compiler barriers only.
Reported-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: <stable@kernel.org> # .35+
Signed-off-by: Robert Richter <robert.richter@amd.com>
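For illustration only, a minimal sketch of the ordering described above; the
helper enable_nmi_handlers() is a hypothetical stand-in, not oprofile's actual
API, and the real patch may use different barrier primitives.

  #include <linux/smp.h>

  static int ctr_running;

  extern void enable_nmi_handlers(void);  /* hypothetical stand-in */

  static void nmi_start_counters(void)
  {
          ctr_running = 1;
          smp_wmb();      /* publish ctr_running before handlers can fire */
          enable_nmi_handlers();
  }

  /* read side, called from the NMI handler */
  static int counters_are_running(void)
  {
          smp_rmb();      /* pairs with smp_wmb() on the write side */
          return ctr_running;
  }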
|
git://git.kernel.org/pub/scm/linux/kernel/git/rric/oprofile into perf/urgent
|
For some performance events it is useful to set the EDGE and INV
bits and the CMASK field in the counter control register. The list
of predefined events Intel releases for each CPU contains events that
require these settings to provide more "natural", higher-level events.
oprofile currently doesn't allow this.
This patch adds new extra configuration fields for these settings, so
that they can be specified in oprofilefs.
An updated oprofile daemon can then make use of this to set them.
v2: Write back masked extra value to variable.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
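As a hedged illustration of where these bits live in the Intel event-select
register (layout per the SDM, Vol. 3B), the macro and helper names below are
invented for this example and are not oprofile's interface.

  #include <linux/types.h>

  #define EXTRA_EDGE      (1ULL << 18)                    /* edge detect          */
  #define EXTRA_INV       (1ULL << 23)                    /* invert counter mask  */
  #define EXTRA_CMASK(c)  (((u64)(c) & 0xffULL) << 24)    /* counter mask field   */

  /* Fold the user-supplied extra bits into an event-select value. */
  static u64 apply_extra_bits(u64 evtsel, u64 extra)
  {
          extra &= EXTRA_EDGE | EXTRA_INV | EXTRA_CMASK(0xff);
          return evtsel | extra;
  }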
|
Some subsystems in the x86 tree need to carry out suspend/resume and
shutdown operations with one CPU on-line and interrupts disabled and
they define sysdev classes and sysdevs or sysdev drivers for this
purpose. This leads to unnecessarily complicated code and excessive
memory usage, so switch them to using struct syscore_ops objects for
this purpose instead.
Generally, there are three categories of subsystems that use
sysdevs for implementing PM operations: (1) subsystems whose
suspend/resume callbacks ignore their arguments entirely (the
majority), (2) subsystems whose suspend/resume callbacks use their
struct sys_device argument, but don't really need to do that,
because they can be implemented differently in an arguably simpler
way (io_apic.c), and (3) subsystems whose suspend/resume callbacks
use their struct sys_device argument, but the value of that argument
is always the same and could be ignored (microcode_core.c). In all
of these cases the subsystems in question may be readily converted to
using struct syscore_ops objects for power management and shutdown.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
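As an illustration of category (1), a minimal syscore_ops skeleton; the
subsystem name and callback bodies are placeholders, not taken from the patch.

  #include <linux/init.h>
  #include <linux/syscore_ops.h>

  static int foo_suspend(void)
  {
          /* runs late, with one CPU online and interrupts disabled */
          return 0;
  }

  static void foo_resume(void)
  {
          /* restore the state saved in foo_suspend() */
  }

  static struct syscore_ops foo_syscore_ops = {
          .suspend = foo_suspend,
          .resume  = foo_resume,
  };

  static int __init foo_init(void)
  {
          register_syscore_ops(&foo_syscore_ops);
          return 0;
  }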
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (28 commits)
perf session: Fix infinite loop in __perf_session__process_events
perf evsel: Support perf_evsel__open(cpus > 1 && threads > 1)
perf sched: Use PTHREAD_STACK_MIN to avoid pthread_attr_setstacksize() fail
perf tools: Emit clearer message for sys_perf_event_open ENOENT return
perf stat: better error message for unsupported events
perf sched: Fix allocation result check
perf, x86: P4 PMU - Fix unflagged overflows handling
dynamic debug: Fix build issue with older gcc
tracing: Fix TRACE_EVENT power tracepoint creation
tracing: Fix preempt count leak
tracepoint: Add __rcu annotation
tracing: remove duplicate null-pointer check in skb tracepoint
tracing/trivial: Add missing comma in TRACE_EVENT comment
tracing: Include module.h in define_trace.h
x86: Save rbp in pt_regs on irq entry
x86, dumpstack: Fix unused variable warning
x86, NMI: Clean-up default_do_nmi()
x86, NMI: Allow NMI reason io port (0x61) to be processed on any CPU
x86, NMI: Remove DIE_NMI_IPI
x86, NMI: Add priorities to handlers
...
|
With priorities in place and no one really understanding the difference between
DIE_NMI and DIE_NMI_IPI, just remove DIE_NMI_IPI and convert everyone to DIE_NMI.
This also simplifies default_do_nmi() a little bit. Instead of calling the
die_notifier in both the if and else part, just pull it out and call it before
the if-statement. This has the side benefit of avoiding a call to the ioport
to see if there is an external NMI sitting around until after the (more frequent)
internal NMIs are dealt with.
Patch-Inspired-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1294348732-15030-5-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
In order to consolidate the NMI die_chain events, we need to set up the
priorities for the die notifiers.
I started by defining a bunch of common priorities that can be used by the
notifier blocks. Then I modified the notifier blocks to use the newly created
priorities.
Now that the priorities are straightened out, it should be easier to remove the
event DIE_NMI_IPI.
Signed-off-by: Don Zickus <dzickus@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1294348732-15030-4-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
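A sketch of what a prioritized die notifier looks like; the priority constant
and handler are illustrative examples, not the actual values this patch
defines.

  #include <linux/kdebug.h>
  #include <linux/notifier.h>

  #define EXAMPLE_NMI_PRIORITY    2       /* hypothetical priority value */

  static int example_nmi_notify(struct notifier_block *self,
                                unsigned long val, void *data)
  {
          if (val != DIE_NMI)
                  return NOTIFY_DONE;
          /* ... handle the NMI ... */
          return NOTIFY_STOP;
  }

  static struct notifier_block example_nmi_nb = {
          .notifier_call = example_nmi_notify,
          .priority      = EXAMPLE_NMI_PRIORITY, /* higher runs earlier on the chain */
  };

  /* registered with register_die_notifier(&example_nmi_nb) */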
|
git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
* 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (30 commits)
gameport: use this_cpu_read instead of lookup
x86: udelay: Use this_cpu_read to avoid address calculation
x86: Use this_cpu_inc_return for nmi counter
x86: Replace uses of current_cpu_data with this_cpu ops
x86: Use this_cpu_ops to optimize code
vmstat: User per cpu atomics to avoid interrupt disable / enable
irq_work: Use per cpu atomics instead of regular atomics
cpuops: Use cmpxchg for xchg to avoid lock semantics
x86: this_cpu_cmpxchg and this_cpu_xchg operations
percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support
percpu,x86: relocate this_cpu_add_return() and friends
connector: Use this_cpu operations
xen: Use this_cpu_inc_return
taskstats: Use this_cpu_ops
random: Use this_cpu_inc_return
fs: Use this_cpu_inc_return in buffer.c
highmem: Use this_cpu_xx_return() operations
vmstat: Use this_cpu_inc_return for vm statistics
x86: Support for this_cpu_add, sub, dec, inc_return
percpu: Generic support for this_cpu_add, sub, dec, inc_return
...
Fixed up conflicts: in arch/x86/kernel/{apic/nmi.c, apic/x2apic_uv_x.c, process.c}
as per Tejun.
|
Go through x86 code and replace __get_cpu_var and get_cpu_var
instances that refer to a scalar and are not used for address
determinations.
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
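A sketch of the conversion pattern; the per-cpu variable here is a made-up
example, not one of the variables touched by the patch.

  #include <linux/percpu.h>

  static DEFINE_PER_CPU(unsigned int, example_count);

  /* before: __get_cpu_var() forms the per-cpu address, then dereferences it */
  static void count_event_old(void)
  {
          __get_cpu_var(example_count)++;
  }

  /* after: this_cpu_inc() lets x86 emit a single %gs-relative increment */
  static void count_event_new(void)
  {
          this_cpu_inc(example_count);
  }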
|
This patch adds support for AMD family 15h (Interlagos/Valencia/
Zambezi) cpus.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch adds support for AMD family 14h (Ontario/Zacate) cpus.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch adds support for AMD family 12h (Llano) cpus.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Now that we only call the exit function if init succeeds, with commit:
979048e oprofile: don't call arch exit code from init code on failure
we can simplify the x86 init/exit functions too. The variable using_nmi
becomes obsolete.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch adds CPU type detection for the Dunnington processor (Family
6 / Model 29), identifying it as a Core 2 family CPU type (Wikipedia
source).
I tested oprofile on an Intel(R) Xeon(R) CPU E7440 reporting itself as
model 29, and it runs without issue.
Spec:
http://www.intel.com/Assets/en_US/PDF/specupdate/320336.pdf
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch adds CPU type detection for the Intel Celeron 540, which is
part of the Core 2 family according to Wikipedia; the family and ID pair
is absent from the Volume 3B table referenced in the source code
comments. I have tested this patch on an Intel Celeron 540 machine
reporting itself as Family 6 Model 22, and OProfile runs on the machine
without issue.
Spec:
http://download.intel.com/design/mobile/SPECUPDT/317667.pdf
Signed-off-by: Patrick Simmons <linuxrocks123@netscape.net>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
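A hedged sketch of the detection pattern the two patches above extend; the
function is illustrative, the real table lives in the ppro_init() switch in
arch/x86/oprofile/nmi_int.c and may differ in detail.

  /* Family 6 model numbers in hex: 22 == 0x16 (Celeron 540), 29 == 0x1d (Dunnington). */
  static const char *family6_cpu_type(int model)
  {
          switch (model) {
          case 0x16:              /* Celeron 540 */
          case 0x1d:              /* Dunnington  */
                  return "i386/core_2";
          default:
                  return NULL;    /* fall back to architectural perfmon */
          }
  }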
|
The use of the return value of init_sysfs(), introduced with commit:
10f0412 oprofile, x86: fix init_sysfs error handling
uncovered the following build error for !CONFIG_PM:
.../linux/arch/x86/oprofile/nmi_int.c: In function ‘op_nmi_init’:
.../linux/arch/x86/oprofile/nmi_int.c:784: error: expected expression before ‘do’
make[2]: *** [arch/x86/oprofile/nmi_int.o] Error 1
make[1]: *** [arch/x86/oprofile] Error 2
This patch fixes this.
Reported-by: Ingo Molnar <mingo@elte.hu>
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
On failure, init_sysfs() might not properly free its resources, and the
error code of the function is not checked. Also, when reinitializing,
the exit function might be called twice. This patch fixes all of this.
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Newer Intel processors identifying themselves as model 30 are not recognized by
oprofile.
<cpuinfo snippet>
model : 30
model name : Intel(R) Xeon(R) CPU X3470 @ 2.93GHz
</cpuinfo snippet>
Running oprofile on these machines gives the following:
+ opcontrol --init
+ opcontrol --list-events
oprofile: available events for CPU type "Intel Architectural Perfmon"
See Intel 64 and IA-32 Architectures Software Developer's Manual
Volume 3B (Document 253669) Chapter 18 for architectural perfmon events
This is a limited set of fallback events because oprofile doesn't know your CPU
CPU_CLK_UNHALTED: (counter: all)
Clock cycles when not halted (min count: 6000)
INST_RETIRED: (counter: all)
number of instructions retired (min count: 6000)
LLC_MISSES: (counter: all)
Last level cache demand requests from this core that missed the LLC
(min count: 6000)
Unit masks (default 0x41)
----------
0x41: No unit mask
LLC_REFS: (counter: all)
Last level cache demand requests from this core (min count: 6000)
Unit masks (default 0x4f)
----------
0x4f: No unit mask
BR_MISS_PRED_RETIRED: (counter: all)
number of mispredicted branches retired (precise) (min count: 500)
+ opcontrol --shutdown
Tested using oprofile 0.9.6.
Signed-off-by: Josh Hunt <johunt@akamai.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Back when the patch was submitted for "Add Xeon 7500 series support to
oprofile", Robert Richter had asked for a followon patch that
converted all the CPU ID values to hex.
I have done that here for the "i386/core_i7" and "i386/atom" class
processors in the ppro_init() function and also added some comments on
where to find documentation on the Intel processors.
Signed-off-by: John L. Villalovos <john.l.villalovos@intel.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
The current IBS code is not hotplug capable: an offline cpu might not be
initialized or deinitialized properly. This patch fixes this by
removing the on_each_cpu() calls. The IBS init/deinit code is now
executed in the per-cpu functions model->setup_ctrs() and
model->cpu_down(), which are also called by the hotplug notifiers.
model->cpu_down() replaces model->exit(), which became obsolete.
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch moves the cpu notifier registration from nmi_init() to
nmi_setup(). The corresponding unregistration is now done in
nmi_shutdown(). Thus, the hotplug code is only active if the oprofile
daemon is running.
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Reordering some functions. Necessary for the next patch. No functional
changes.
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch adds checks to the NMI handler. Samples are now only
generated and counters re-enabled if the counters are running.
Otherwise, the counters are stopped if oprofile is using the NMI; in
all other cases the NMI notification is ignored.
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
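A rough sketch of the decision logic described above; the flags mirror the
nmi_enabled/ctr_running state mentioned in this series, but the function
itself is an illustration, not the actual handler code.

  /* Returns nonzero if the NMI was consumed by the profiler. */
  static int handle_profiling_nmi(int nmi_enabled, int ctr_running)
  {
          if (ctr_running) {
                  /* generate samples and re-enable the stopped counters */
                  return 1;
          }
          if (nmi_enabled) {
                  /* oprofile owns the NMI but is not sampling: keep counters stopped */
                  return 1;
          }
          return 0;       /* not ours: ignore the notification */
  }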
|
This patch reworks the oprofile cpu hotplug code as follows:
Introduce the ctr_running variable to track whether the counters are
running or not. The state must be known when taking a cpu on- or
offline and when switching counters during counter multiplexing.
Protect on_each_cpu() sections with get_online_cpus()/put_online_cpus().
This is necessary if notifiers or states are modified; within these
sections the cpu mask may not change.
Switch between counters in nmi_cpu_switch() only if the counters are
running. Otherwise the switch may restart a counter even though it is
disabled.
Add nmi_cpu_setup() and nmi_cpu_shutdown() to the cpu hotplug code.
These functions must also be called to avoid using uninitialized
counters.
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
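A minimal sketch of the get_online_cpus() protection pattern; the callback and
its contents are placeholders for the real per-cpu start routine.

  #include <linux/cpu.h>
  #include <linux/smp.h>

  static int ctr_running;

  static void nmi_cpu_start_stub(void *dummy)
  {
          /* program and enable this cpu's counters */
  }

  static void start_all_counters(void)
  {
          get_online_cpus();              /* cpu mask must not change here */
          ctr_running = 1;
          on_each_cpu(nmi_cpu_start_stub, NULL, 1);
          put_online_cpus();
  }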
|
The CPU notifier registration functions also exist if CONFIG_SMP is
disabled. This change is part of the hotplug code rework and is also
necessary for later patches.
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This fixes a NULL pointer dereference that is triggered when taking a
cpu offline after oprofile has been initialized, e.g.:
$ opcontrol --init
$ opcontrol --start-daemon
$ opcontrol --shutdown
$ opcontrol --deinit
$ echo 0 > /sys/devices/system/cpu/cpu1/online
See the crash dump below. Though the counter has been disabled, the cpu
notifier is still active and tries to use already freed counter data.
This fix is for linux-stable. To properly fix this, the hotplug code
must be rewritten. Thus I will leave a WARN_ON_ONCE() message with
this patch.
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff8132ad57>] op_amd_stop+0x2d/0x8e
PGD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/devices/system/cpu/cpu1/online
CPU 1
Modules linked in:
Pid: 0, comm: swapper Not tainted 2.6.34-rc5-oprofile-x86_64-standard-00210-g8c00f06 #16 Anaheim/Anaheim
RIP: 0010:[<ffffffff8132ad57>] [<ffffffff8132ad57>] op_amd_stop+0x2d/0x8e
RSP: 0018:ffff880001843f28 EFLAGS: 00010006
RAX: 0000000000000000 RBX: 0000000000000000 RCX: dead000000200200
RDX: ffff880001843f68 RSI: dead000000100100 RDI: 0000000000000000
RBP: ffff880001843f48 R08: 0000000000000000 R09: ffff880001843f08
R10: ffffffff8102c9a5 R11: ffff88000184ea80 R12: 0000000000000000
R13: ffff88000184f6c0 R14: 0000000000000000 R15: 0000000000000000
FS: 00007fec6a92e6f0(0000) GS:ffff880001840000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000000 CR3: 000000000163b000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 0, threadinfo ffff88042fcd8000, task ffff88042fcd51d0)
Stack:
ffff880001843f48 0000000000000001 ffff88042e9f7d38 ffff880001843f68
<0> ffff880001843f58 ffffffff8132a602 ffff880001843f98 ffffffff810521b3
<0> ffff880001843f68 ffff880001843f68 ffff880001843f88 ffff88042fcd9fd8
Call Trace:
<IRQ>
[<ffffffff8132a602>] nmi_cpu_stop+0x21/0x23
[<ffffffff810521b3>] generic_smp_call_function_single_interrupt+0xdf/0x11b
[<ffffffff8101804f>] smp_call_function_single_interrupt+0x22/0x31
[<ffffffff810029f3>] call_function_single_interrupt+0x13/0x20
<EOI>
[<ffffffff8102c9a5>] ? wake_up_process+0x10/0x12
[<ffffffff81008701>] ? default_idle+0x22/0x37
[<ffffffff8100896d>] c1e_idle+0xdf/0xe6
[<ffffffff813f1170>] ? atomic_notifier_call_chain+0x13/0x15
[<ffffffff810012fb>] cpu_idle+0x4b/0x7e
[<ffffffff813e8a4e>] start_secondary+0x1ae/0x1b2
Code: 89 e5 41 55 49 89 fd 41 54 45 31 e4 53 31 db 48 83 ec 08 89 df e8 be f8 ff ff 48 98 48 83 3c c5 10 67 7a 81 00 74 1f 49 8b 45 08 <42> 8b 0c 20 0f 32 48 c1 e2 20 25 ff ff bf ff 48 09 d0 48 89 c2
RIP [<ffffffff8132ad57>] op_amd_stop+0x2d/0x8e
RSP <ffff880001843f28>
CR2: 0000000000000000
---[ end trace 679ac372d674b757 ]---
Kernel panic - not syncing: Fatal exception in interrupt
Pid: 0, comm: swapper Tainted: G D 2.6.34-rc5-oprofile-x86_64-standard-00210-g8c00f06 #16
Call Trace:
<IRQ> [<ffffffff813ebd6a>] panic+0x9e/0x10c
[<ffffffff810474b0>] ? up+0x34/0x39
[<ffffffff81031ccc>] ? kmsg_dump+0x112/0x12c
[<ffffffff813eeff1>] oops_end+0x81/0x8e
[<ffffffff8101efee>] no_context+0x1f3/0x202
[<ffffffff8101f1b7>] __bad_area_nosemaphore+0x1ba/0x1e0
[<ffffffff81028d24>] ? enqueue_task_fair+0x16d/0x17a
[<ffffffff810264dc>] ? activate_task+0x42/0x53
[<ffffffff8102c967>] ? try_to_wake_up+0x272/0x284
[<ffffffff8101f1eb>] bad_area_nosemaphore+0xe/0x10
[<ffffffff813f0f3f>] do_page_fault+0x1c8/0x37c
[<ffffffff81028d24>] ? enqueue_task_fair+0x16d/0x17a
[<ffffffff813ee55f>] page_fault+0x1f/0x30
[<ffffffff8102c9a5>] ? wake_up_process+0x10/0x12
[<ffffffff8132ad57>] ? op_amd_stop+0x2d/0x8e
[<ffffffff8132ad46>] ? op_amd_stop+0x1c/0x8e
[<ffffffff8132a602>] nmi_cpu_stop+0x21/0x23
[<ffffffff810521b3>] generic_smp_call_function_single_interrupt+0xdf/0x11b
[<ffffffff8101804f>] smp_call_function_single_interrupt+0x22/0x31
[<ffffffff810029f3>] call_function_single_interrupt+0x13/0x20
<EOI> [<ffffffff8102c9a5>] ? wake_up_process+0x10/0x12
[<ffffffff81008701>] ? default_idle+0x22/0x37
[<ffffffff8100896d>] c1e_idle+0xdf/0xe6
[<ffffffff813f1170>] ? atomic_notifier_call_chain+0x13/0x15
[<ffffffff810012fb>] cpu_idle+0x4b/0x7e
[<ffffffff813e8a4e>] start_secondary+0x1ae/0x1b2
------------[ cut here ]------------
WARNING: at /local/rrichter/.source/linux/arch/x86/kernel/smp.c:118 native_smp_send_reschedule+0x27/0x53()
Hardware name: Anaheim
Modules linked in:
Pid: 0, comm: swapper Tainted: G D 2.6.34-rc5-oprofile-x86_64-standard-00210-g8c00f06 #16
Call Trace:
<IRQ> [<ffffffff81017f32>] ? native_smp_send_reschedule+0x27/0x53
[<ffffffff81030ee2>] warn_slowpath_common+0x77/0xa4
[<ffffffff81030f1e>] warn_slowpath_null+0xf/0x11
[<ffffffff81017f32>] native_smp_send_reschedule+0x27/0x53
[<ffffffff8102634b>] resched_task+0x60/0x62
[<ffffffff8102653a>] check_preempt_curr_idle+0x10/0x12
[<ffffffff8102c8ea>] try_to_wake_up+0x1f5/0x284
[<ffffffff8102c986>] default_wake_function+0xd/0xf
[<ffffffff810a110d>] pollwake+0x57/0x5a
[<ffffffff8102c979>] ? default_wake_function+0x0/0xf
[<ffffffff81026be5>] __wake_up_common+0x46/0x75
[<ffffffff81026ed0>] __wake_up+0x38/0x50
[<ffffffff81031694>] printk_tick+0x39/0x3b
[<ffffffff8103ac37>] update_process_times+0x3f/0x5c
[<ffffffff8104dc63>] tick_periodic+0x5d/0x69
[<ffffffff8104dc90>] tick_handle_periodic+0x21/0x71
[<ffffffff81018fd0>] smp_apic_timer_interrupt+0x82/0x95
[<ffffffff81002853>] apic_timer_interrupt+0x13/0x20
[<ffffffff81030cb5>] ? panic_blink_one_second+0x0/0x7b
[<ffffffff813ebdd6>] ? panic+0x10a/0x10c
[<ffffffff810474b0>] ? up+0x34/0x39
[<ffffffff81031ccc>] ? kmsg_dump+0x112/0x12c
[<ffffffff813eeff1>] ? oops_end+0x81/0x8e
[<ffffffff8101efee>] ? no_context+0x1f3/0x202
[<ffffffff8101f1b7>] ? __bad_area_nosemaphore+0x1ba/0x1e0
[<ffffffff81028d24>] ? enqueue_task_fair+0x16d/0x17a
[<ffffffff810264dc>] ? activate_task+0x42/0x53
[<ffffffff8102c967>] ? try_to_wake_up+0x272/0x284
[<ffffffff8101f1eb>] ? bad_area_nosemaphore+0xe/0x10
[<ffffffff813f0f3f>] ? do_page_fault+0x1c8/0x37c
[<ffffffff81028d24>] ? enqueue_task_fair+0x16d/0x17a
[<ffffffff813ee55f>] ? page_fault+0x1f/0x30
[<ffffffff8102c9a5>] ? wake_up_process+0x10/0x12
[<ffffffff8132ad57>] ? op_amd_stop+0x2d/0x8e
[<ffffffff8132ad46>] ? op_amd_stop+0x1c/0x8e
[<ffffffff8132a602>] ? nmi_cpu_stop+0x21/0x23
[<ffffffff810521b3>] ? generic_smp_call_function_single_interrupt+0xdf/0x11b
[<ffffffff8101804f>] ? smp_call_function_single_interrupt+0x22/0x31
[<ffffffff810029f3>] ? call_function_single_interrupt+0x13/0x20
<EOI> [<ffffffff8102c9a5>] ? wake_up_process+0x10/0x12
[<ffffffff81008701>] ? default_idle+0x22/0x37
[<ffffffff8100896d>] ? c1e_idle+0xdf/0xe6
[<ffffffff813f1170>] ? atomic_notifier_call_chain+0x13/0x15
[<ffffffff810012fb>] ? cpu_idle+0x4b/0x7e
[<ffffffff813e8a4e>] ? start_secondary+0x1ae/0x1b2
---[ end trace 679ac372d674b758 ]---
Cc: Andi Kleen <andi@firstfloor.org>
Cc: stable <stable@kernel.org>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
In case a counter is already reserved by the watchdog or perf_event
subsystem, oprofile silently ignored this counter. This case is now
handled and oprofile_setup() reports an error.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch improves the error handling in nmi_setup(). Most of the
code is moved to allocate_msrs(). In case of an error, allocate_msrs()
also frees the memory it has already allocated. nmi_setup() becomes
simpler and easier to extend.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Multiple virtual counters share one physical counter. The reservation
of virtual counters fails due to duplicate allocation of the same
counter, which is already reserved. Thus, the virtual counter
reservation can be removed entirely. This also simplifies the code.
Cc: stable@kernel.org
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Add Xeon 7500 series support to oprofile.
Straightforward: it's the same as Core i7, so just detect
the model number. No user space changes needed.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
With multiplexing enabled, oprofile crashes when profiling more than 28
events. This patch fixes that.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch adds a check for the availability of a counter. A virtual
counter is used only if its physical counter is not reserved.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch implements a common x86 function to convert virtual counter
numbers to physical.
Signed-off-by: Robert Richter <robert.richter@amd.com>
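A hedged sketch of the virtual-to-physical mapping idea; the real helper is
op_x86_virt_to_phys() in nmi_int.c, and the exact body below is an assumption
for illustration.

  /* With multiplexing, virtual counters wrap around the physical ones. */
  static int virt_to_phys_counter(int virt, int num_phys_counters)
  {
          return virt % num_phys_counters;
  }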
|
This patch moves the multiplexing switch counter from x86 code to the
common oprofile statistics variables. The value is now available and
usable for all architectures. The initialization and incrementing have
also moved to common code.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
To set up a counter for all cpus, its structure is cloned from cpu 0.
This patch implements mux_clone() to do this part for the multiplexing
data.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch checks whether the model supports multiplexing; only then is
multiplexing enabled. The code is added to the common x86
initialization.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
The check is used to prevent running the multiplexing code on models
that do not support multiplexing. Before, the code was running but had
no effect.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Models that do not yet support counter multiplexing have to set up
num_virt_counters. This patch implements the setup from num_counters
if num_virt_counters is not set. Thus, num_virt_counters must be set
up only for multiplexing support.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch removes the const qualifier from struct
op_x86_model_spec to make the model parameters changeable.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch moves some code in nmi_int.c to get a single separate
multiplexing code section.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch moves some code in nmi_int.c to get a single separate
multiplexing code section.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch moves some code in nmi_int.c to get a single separate
multiplexing code section.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch implements the nmi_setup_mux() and nmi_shutdown_mux()
functions to set up and shut down multiplexing. The multiplexing code
in nmi_int.c is now much better separated.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This new function translates physical to virtual counter numbers.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
Variable switch_index must be initialized for each cpu. This patch
fixes the initialization by moving it to the per-cpu init function
nmi_cpu_setup().
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
__get_cpu_var() calls smp_processor_id(). When the cpu id is already
known, use per_cpu() instead to avoid generating the id again.
Signed-off-by: Robert Richter <robert.richter@amd.com>
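An illustrative one-liner of the substitution; the per-cpu variable is a
made-up example, not one touched by the patch.

  #include <linux/percpu.h>

  static DEFINE_PER_CPU(int, example_state);

  static void mark_cpu(int cpu)
  {
          /* instead of __get_cpu_var(example_state) = 1; when 'cpu' is known */
          per_cpu(example_state, cpu) = 1;
  }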
|
The number of hardware counters is limited. The multiplexing feature
enables OProfile to gather more events than the hardware provides
counters for. This is realized by switching between events at a
user-specified time interval.
A new file (/dev/oprofile/time_slice) is added for the user to specify
the timer interval in ms. If the number of events to profile is higher
than the number of hardware counters available, a work queue is
scheduled that switches the event counters and re-writes the different
sets of values into them. The switching mechanism needs to be
implemented for each architecture to support multiplexing. This patch
only implements AMD CPU support, but multiplexing can be easily
extended for other models and architectures.
There are follow-on patches that rework parts of this patch.
Signed-off-by: Jason Yeh <jason.yeh@amd.com>
Signed-off-by: Robert Richter <robert.richter@amd.com>
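A hedged sketch of the work-queue based switching described above; the names,
the delay handling, and the per-cpu details are assumptions for illustration,
not the patch's actual code.

  #include <linux/workqueue.h>
  #include <linux/jiffies.h>

  static unsigned long switch_delay;      /* derived from time_slice (ms) */

  static void switch_worker(struct work_struct *work);
  static DECLARE_DELAYED_WORK(switch_work, switch_worker);

  static void switch_worker(struct work_struct *work)
  {
          /* save the current counter values, program the next event set */
          schedule_delayed_work(&switch_work, switch_delay);
  }

  static void start_switching(unsigned int time_slice_ms)
  {
          switch_delay = msecs_to_jiffies(time_slice_ms);
          schedule_delayed_work(&switch_work, switch_delay);
  }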
|
This patch fixes the whitespace of code that will be touched in
follow-on patches.
Signed-off-by: Robert Richter <robert.richter@amd.com>
|
This patch removes the function nmi_save_registers(). Per-cpu code is
now executed only in the function nmi_cpu_setup(). Also, it renames
the per-cpu function nmi_restore_registers() to
nmi_cpu_restore_registers().
Signed-off-by: Robert Richter <robert.richter@amd.com>
|