author      Venkatesh Pallipadi <venki@google.com>      2010-05-17 18:14:43 -0700
committer   Ingo Molnar <mingo@elte.hu>                 2010-06-09 10:34:51 +0200
commit      fdf3e95d3916f18bf8703fb065499fdbc4dfe34c (patch)
tree        b9bfc0f78135502adf7c83313948a705fb19384b /kernel/sched_fair.c
parent      246d86b51845063e4b06b27579990492dc5fa317 (diff)
sched: Avoid side-effect of tickless idle on update_cpu_load
Tickless idle has a negative side effect on update_cpu_load(), which
in turn can affect load-balancing behavior.
update_cpu_load() is supposed to be called on every tick, to keep track
of the various load indices. With tickless idle, there are no scheduler
ticks on the idle CPUs, yet those CPUs may still take part in load
balancing (via the idle_load_balance CPU) using stale cpu_load values.
It also causes problems when all CPUs go idle for a while and then
become active again: in that case the loads do not degrade as expected.
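For context, a rough sketch (simplified, not the exact kernel code) of the per-tick bookkeeping that update_cpu_load() performs: each cpu_load[] index tracks a progressively longer-term average of the runqueue load, and none of them move while the tick is stopped.

/*
 * Rough sketch of the per-tick cpu_load bookkeeping in
 * update_cpu_load(); names and details simplified.  Each index i is a
 * progressively longer-term average, so a CPU whose tick is stopped
 * keeps whatever values it had when it went idle.
 */
#define CPU_LOAD_IDX_MAX 5

struct rq_sketch {
	unsigned long cpu_load[CPU_LOAD_IDX_MAX];
	unsigned long nr_load_updates;
};

static void update_cpu_load_sketch(struct rq_sketch *rq, unsigned long this_load)
{
	int i;

	rq->nr_load_updates++;

	for (i = 0; i < CPU_LOAD_IDX_MAX; i++) {
		unsigned long old_load = rq->cpu_load[i];
		unsigned long scale = 1UL << i;

		/* new average = (old * (2^i - 1) + current load) / 2^i */
		rq->cpu_load[i] = (old_load * (scale - 1) + this_load) >> i;
	}
}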
This is how the change in rq->nr_load_updates looks under different
conditions:
<cpu_num> <nr_load_updates change>
All CPUs idle for 10 seconds (HZ=1000)
0 1621
10 496
11 139
12 875
13 1672
14 12
15 21
1 1472
2 2426
3 1161
4 2108
5 1525
6 701
7 249
8 766
9 1967
One CPU busy, rest idle for 10 seconds
0 10003
10 601
11 95
12 966
13 1597
14 114
15 98
1 3457
2 93
3 6679
4 1425
5 1479
6 595
7 193
8 633
9 1687
All CPUs busy for 10 seconds
0 10026
10 10026
11 10026
12 10026
13 10025
14 10025
15 10025
1 10026
2 10026
3 10026
4 10026
5 10026
6 10026
7 10026
8 10026
9 10026
That is, update_cpu_load() works properly only when all CPUs are busy.
When all of them are idle, every CPU gets far fewer updates. And when a
few CPUs are busy and the rest are idle, only the busy CPUs and the ilb
CPU do proper updates; the remaining idle CPUs get fewer updates.
The patch keeps track of when the last update was done and fixes up
the load average based on the current time.
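As a naive illustration of that fix-up (assuming the per-tick averaging sketched above; the actual patch avoids looping like this, e.g. by precomputing the decay factors): a CPU that missed N updates decays each cpu_load[i] as if update_cpu_load() had run on every missed tick with zero load.

/*
 * Naive sketch of the missed-tick fix-up, only to show the intended
 * math; the real patch does not loop like this.  Decaying with zero
 * load once per missed tick brings a long-idle CPU's cpu_load back
 * in line with the elapsed time.
 */
static unsigned long decay_load_missed_sketch(unsigned long load,
					      unsigned long missed_updates,
					      int idx)
{
	while (missed_updates--)
		load = (load * ((1UL << idx) - 1)) >> idx;

	return load;
}

With that in place, the nohz/ilb path (the hunk below) can refresh a sleeping CPU's cpu_load before its runqueue is used for balancing decisions.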
On one of my test systems, SPECjbb with warehouses 1..numcpus, the
patch improves throughput by ~1% (average of 6 runs). On another test
system (with a different domain hierarchy) there is no noticeable
change in performance.
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <AANLkTilLtDWQsAUrIxJ6s04WTgmw9GuOODc5AOrYsaR5@mail.gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_fair.c')
-rw-r--r--   kernel/sched_fair.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index eed35eded602..22b8b4f2b616 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3420,9 +3420,12 @@ static void run_rebalance_domains(struct softirq_action *h)
 			if (need_resched())
 				break;
 
+			rq = cpu_rq(balance_cpu);
+			raw_spin_lock_irq(&rq->lock);
+			update_cpu_load(rq);
+			raw_spin_unlock_irq(&rq->lock);
 			rebalance_domains(balance_cpu, CPU_IDLE);
 
-			rq = cpu_rq(balance_cpu);
 			if (time_after(this_rq->next_balance, rq->next_balance))
 				this_rq->next_balance = rq->next_balance;
 		}