| author | Miklos Szeredi <mszeredi@suse.cz> | 2009-03-23 16:07:24 +0100 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-03-23 09:22:31 -0700 |
| commit | 53da1d9456fe7f87a920a78fdbdcf1225d197cb7 (patch) | |
| tree | eccd5357ceff25a9a07be802ac0161c8c1842e64 /kernel/signal.c | |
| parent | b0dcb4a91ddb79f2e213205cf8d86b467f8559c7 (diff) | |
fix ptrace slowness
This patch fixes bug #12208:
Bug-Entry : http://bugzilla.kernel.org/show_bug.cgi?id=12208
Subject : uml is very slow on 2.6.28 host
This turned out not to be a scheduler regression, but an already
existing problem in ptrace that is triggered by subtle scheduler
changes.
The problem is this:
- task A is ptracing task B
- task B stops on a trace event
- task A is woken up and preempts task B
- task A calls ptrace on task B, which does ptrace_check_attach()
- this calls wait_task_inactive(), which sees that task B is still on the runq
- task A goes to sleep for a jiffy
- ...
Since UML does lots of the above sequences, those jiffies quickly add
up to make it slow as hell.
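For context, the jiffy-long sleep happens on the tracer's side. The sketch below is a simplified paraphrase of the wait_task_inactive() back-off logic of that era (kernel/sched.c), not the literal source; tracee_still_on_runqueue() is a hypothetical stand-in for the real runqueue check.

```c
/*
 * Simplified sketch of the tracer-side back-off, paraphrased from the
 * wait_task_inactive() logic of that era; not the literal kernel source.
 * tracee_still_on_runqueue() is a hypothetical helper standing in for
 * the real runqueue check.
 */
for (;;) {
	if (tracee_still_on_runqueue(child)) {
		/*
		 * The tracee woke us from ptrace_stop() but was preempted
		 * before it could call schedule(), so it is still queued.
		 * Back off for a whole jiffy and try again -- this is the
		 * sleep the changelog complains about.
		 */
		schedule_timeout_uninterruptible(1);
		continue;
	}
	break;	/* tracee is truly off the runqueue; ptrace can proceed */
}
```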
This patch solves the problem by not rescheduling in read_unlock()
after ptrace_stop() has woken up the tracer.
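To make the mechanism explicit, here is a sketch of the preempt-count bookkeeping behind the fix, assuming a CONFIG_PREEMPT kernel and the generic rwlock wrappers (exact lock internals vary by configuration). read_lock(&tasklist_lock) earlier in ptrace_stop() already holds one preempt-count reference, and the preempt_enable() inside read_unlock() is where the unwanted preemption used to happen.

```c
/*
 * Sketch of the preempt-count bookkeeping, assuming a CONFIG_PREEMPT
 * kernel and the generic rwlock wrappers; counts are relative to plain
 * process context, where read_lock() raised the count from 0 to 1.
 */
preempt_disable();		/* count 1 -> 2 */
read_unlock(&tasklist_lock);	/* inner preempt_enable(): 2 -> 1, still
				 * non-zero, so no preempt_schedule() even
				 * though the tracer was just woken up */
preempt_enable_no_resched();	/* 1 -> 0, deliberately skipping the
				 * need_resched() check */
schedule();			/* the tracee deactivates itself here, so the
				 * tracer's wait_task_inactive() succeeds
				 * without the jiffy back-off */
```

Had a plain preempt_enable() been used instead, the need_resched() check would fire right there (the freshly woken tracer is runnable), landing the tracee back in the sequence described above; deferring the context switch to the explicit schedule() call is the whole point of the fix.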
Thanks to Oleg Nesterov and Ingo Molnar for the feedback.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
CC: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'kernel/signal.c')
-rw-r--r-- | kernel/signal.c | 8 |
1 file changed, 8 insertions, 0 deletions
diff --git a/kernel/signal.c b/kernel/signal.c
index 2a74fe87c0dd..1c8814481a11 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -1575,7 +1575,15 @@ static void ptrace_stop(int exit_code, int clear_code, siginfo_t *info)
 	read_lock(&tasklist_lock);
 	if (may_ptrace_stop()) {
 		do_notify_parent_cldstop(current, CLD_TRAPPED);
+		/*
+		 * Don't want to allow preemption here, because
+		 * sys_ptrace() needs this task to be inactive.
+		 *
+		 * XXX: implement read_unlock_no_resched().
+		 */
+		preempt_disable();
 		read_unlock(&tasklist_lock);
+		preempt_enable_no_resched();
 		schedule();
 	} else {
 		/*
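The XXX in the added comment hints at a dedicated helper. A hypothetical sketch of what such a read_unlock_no_resched() could look like, assuming the generic rwlock wrappers of that era, is shown below; it is not part of this patch, and no such helper existed in the kernel at the time.

```c
/*
 * Hypothetical helper suggested by the XXX comment in the patch -- not
 * part of this change; sketched against the generic rwlock wrappers of
 * that era (lockdep annotation omitted for brevity).  It would drop the
 * read lock without creating a preemption point, so callers like
 * ptrace_stop() would not need the open-coded
 * preempt_disable()/preempt_enable_no_resched() pair.
 */
static inline void read_unlock_no_resched(rwlock_t *lock)
{
	_raw_read_unlock(lock);		/* release the lock itself */
	preempt_enable_no_resched();	/* drop the count taken by read_lock(),
					 * but do not reschedule here */
}
```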