| author | Mike Galbraith <efault@gmx.de> | 2010-03-11 17:15:38 +0100 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2010-03-11 18:32:50 +0100 |
| commit | b42e0c41a422a212ddea0666d5a3a0e3c35206db (patch) | |
| tree | 443cf5918548cab86c3f9f3f34a1b700d809070b /kernel/sched_debug.c | |
| parent | 39c0cbe2150cbd848a25ba6cdb271d1ad46818ad (diff) | |
| download | blackbird-obmc-linux-b42e0c41a422a212ddea0666d5a3a0e3c35206db.tar.gz blackbird-obmc-linux-b42e0c41a422a212ddea0666d5a3a0e3c35206db.zip | |
sched: Remove avg_wakeup
Testing the load which led to this heuristic (nfs4 kbuild) shows that it has
outlived its usefulness. With intervening load balancing changes, I cannot
see any difference with/without, so recover those fastpath cycles.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301062.6785.29.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_debug.c')
| -rw-r--r-- | kernel/sched_debug.c | 1 |
1 file changed, 0 insertions, 1 deletion
diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
index ad9df4422763..20b95a420fec 100644
--- a/kernel/sched_debug.c
+++ b/kernel/sched_debug.c
@@ -408,7 +408,6 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 	PN(se.vruntime);
 	PN(se.sum_exec_runtime);
 	PN(se.avg_overlap);
-	PN(se.avg_wakeup);
 
 	nr_switches = p->nvcsw + p->nivcsw;
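For context, the PN() lines in the hunk above come from proc_sched_show_task(), which emits one scheduler statistic per line into /proc/<pid>/sched by stringifying the field name and printing its value. Below is a minimal user-space sketch of that stringify-and-print pattern; the se_stats structure, the PN() macro, and the sample values are hypothetical stand-ins, not the kernel's actual definitions.

```c
#include <stdio.h>

/*
 * Hypothetical user-space stand-in for the scheduler statistics that
 * proc_sched_show_task() dumps into /proc/<pid>/sched; the real kernel
 * structure and PN() macro differ.
 */
struct se_stats {
	long long vruntime;
	long long sum_exec_runtime;
	long long avg_overlap;
};

/* Stringify the field name and print it next to its value, one per line. */
#define PN(stats, field) \
	printf("%-35s: %21lld\n", "se." #field, (long long)(stats).field)

int main(void)
{
	struct se_stats se = {
		.vruntime         = 24620806328LL,  /* made-up sample values */
		.sum_exec_runtime = 33245432LL,
		.avg_overlap      = 897060LL,
	};

	PN(se, vruntime);
	PN(se, sum_exec_runtime);
	PN(se, avg_overlap);
	/*
	 * After this commit there is no se.avg_wakeup entry: the field and
	 * its PN() call are removed, so the /proc output simply omits it.
	 */
	return 0;
}
```

The visible effect of the one-line deletion is just that: the se.avg_wakeup row disappears from /proc/<pid>/sched, matching the removal of the avg_wakeup heuristic elsewhere in the patch series.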