author     Frederic Weisbecker <fweisbec@gmail.com>    2012-01-24 18:59:44 +0100
committer  Thomas Gleixner <tglx@linutronix.de>        2012-02-15 15:23:09 +0100
commit     0a8a2e78b7eece7c65884fcff9f98dc0fce89ee4
tree       db16568b94f11b2cdfe4aefab332b7ec4def5ee6
parent     15f827be93928890bba965bc175caee50c4406d2
timer: Fix bad idle check on irq entry
idle_cpu() is called on irq entry to check whether we need to call
tick_check_idle(). This way we can catch up with jiffies if the tick
was stopped, stop accounting idle time during the interrupt, and
maintain the sched clock if it is unstable.
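
As a rough illustration only (the helpers below are made-up placeholder
names for the work described above, not real kernel symbols), the
catch-up done on irq entry from idle amounts to:

/*
 * Illustrative sketch only: placeholder helpers standing in for the
 * work tick_check_idle() triggers, not actual kernel functions.
 */
static void irq_entry_idle_catchup(int cpu)
{
        catch_up_jiffies_if_tick_stopped(cpu);  /* the tick may have been stopped in idle */
        stop_idle_time_accounting(cpu);         /* interrupt time must not count as idle */
        resync_unstable_sched_clock(cpu);       /* keep sched_clock() usable */
}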
But if we are going to exit the idle loop to schedule a new task (i.e.
if we have a task in the runqueue or a remotely enqueued ttwu to
perform), the idle_cpu() check returns 0, so we miss the call to
tick_check_idle() for all interrupts that happen before we schedule
the new task.
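
For context, idle_cpu() around this kernel version looked roughly like
the following (simplified sketch, not verbatim): it inspects the
runqueue, so an enqueued task or a pending remote wakeup already makes
it report non-idle even though the CPU is still executing the idle
task at irq entry.

/* Rough, simplified sketch of idle_cpu() around this kernel version. */
int idle_cpu(int cpu)
{
        struct rq *rq = cpu_rq(cpu);

        if (rq->curr != rq->idle)
                return 0;
        if (rq->nr_running)                     /* a task is already enqueued */
                return 0;
#ifdef CONFIG_SMP
        if (!llist_empty(&rq->wake_list))       /* remotely enqueued ttwu pending */
                return 0;
#endif
        return 1;
}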
As a result, these interrupts and the softirqs that follow them may
deal with stale jiffies values and bad sched clock values, and won't
subtract their time from the idle time accounting.
Fix this by using is_idle_task() instead, which strictly checks that
we are running the idle task, regardless of the fact that we are about
to schedule another task.
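
is_idle_task(), by contrast, only looks at the task itself (roughly
the following at the time), so it keeps returning true for every
interrupt taken while the idle task is still current, no matter what
is already sitting on the runqueue:

/* Roughly how is_idle_task() was defined at the time (simplified). */
static inline bool is_idle_task(const struct task_struct *p)
{
        return p->pid == 0;     /* the per-cpu idle tasks all have PID 0 */
}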
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/1327427984-23282-3-git-send-email-fweisbec@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 kernel/softirq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 4eb3a0fa351e..5ace266bc0e6 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -297,7 +297,7 @@ void irq_enter(void)
         int cpu = smp_processor_id();
 
         rcu_irq_enter();
-        if (idle_cpu(cpu) && !in_interrupt()) {
+        if (is_idle_task(current) && !in_interrupt()) {
                 /*
                  * Prevent raise_softirq from needlessly waking up ksoftirqd
                  * here, as softirq will be serviced on return from interrupt.