author     Ingo Molnar <mingo@elte.hu>  2009-05-17 10:04:45 +0200
committer  Ingo Molnar <mingo@elte.hu>  2009-05-17 12:27:37 +0200
commit     d2517a49d55536b38c7a87e5289550cfedaa4dcc (patch)
tree       ee36f662094f0f09575cd2fbc4e6a67a716081d0 /arch
parent     0203026b58b4299ba7281c0b4b417207c1f05d0e (diff)
perf_counter, x86: fix zero irq_period counters
The quirk to irq_period exposed a robustness gap in the hw_counter
initialization sequence: we left irq_period at 0, which the quirk then
bumped up to 2 ... which then generated a _lot_ of interrupts during
'perf stat' runs, slowing them down and skewing the counter results
in general.

Initialize irq_period to the maximum instead.

[ Impact: fix perf stat results ]

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
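As a rough illustration (not part of the patch), here is a minimal,
self-contained C sketch of the failure mode the message describes.
The struct hw_counter type, the quirk_period() helper, and the
max_period value are hypothetical stand-ins for the kernel's hwc
fields and x86_pmu state:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical stand-in for the hardware counter configuration. */
    struct hw_counter {
        uint64_t irq_period;    /* events counted between PMU interrupts */
    };

    /*
     * Sketch of the irq_period quirk the message refers to: a period
     * that is too small gets bumped up to a floor of 2.
     */
    static uint64_t quirk_period(uint64_t period)
    {
        return period < 2 ? 2 : period;
    }

    int main(void)
    {
        /* Placeholder maximum; not the real x86_pmu.max_period value. */
        uint64_t max_period = (1ULL << 31) - 1;
        struct hw_counter hwc = { .irq_period = 0 };  /* the bug: left at 0 */

        /* Before the fix: 0 is quirked up to 2, so the PMU interrupts
         * roughly every 2 events -- the storm seen during 'perf stat'. */
        printf("buggy effective period: %llu\n",
               (unsigned long long)quirk_period(hwc.irq_period));

        /* After the fix: a zero period is initialized to the maximum,
         * mirroring the two lines added by the patch below. */
        if (!hwc.irq_period)
            hwc.irq_period = max_period;

        printf("fixed effective period: %llu\n",
               (unsigned long long)quirk_period(hwc.irq_period));
        return 0;
    }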
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/kernel/cpu/perf_counter.c | 3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/cpu/perf_counter.c b/arch/x86/kernel/cpu/perf_counter.c
index 886dcf334bc3..5bfd30ab3920 100644
--- a/arch/x86/kernel/cpu/perf_counter.c
+++ b/arch/x86/kernel/cpu/perf_counter.c
@@ -286,6 +286,9 @@ static int __hw_perf_counter_init(struct perf_counter *counter)
 		hwc->nmi	= 1;
 	}
 
+	if (!hwc->irq_period)
+		hwc->irq_period = x86_pmu.max_period;
+
 	atomic64_set(&hwc->period_left,
 			min(x86_pmu.max_period, hwc->irq_period));
 
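With irq_period now defaulting to x86_pmu.max_period, the min() in the
atomic64_set() call above resolves to the hardware maximum, so a counter
whose creator did not request a specific period runs as long as the PMU
allows before its first interrupt, rather than interrupting every couple
of events.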