author	Rik van Riel <riel@redhat.com>	2014-04-11 13:00:29 -0400
committer	Ingo Molnar <mingo@kernel.org>	2014-05-07 13:33:47 +0200
commit	68d1b02a58f5d9f584c1fb2923ed60ec68cbbd9b (patch)
tree	3a2c4afeca2dd9403a3e7e9d646d9067f4bf7d1d /kernel
parent	5085e2a328849bdee6650b32d52c87c3788ab01c (diff)
sched/numa: Do not set preferred_node on migration to a second choice node
Setting the numa_preferred_node for a task in task_numa_migrate
does nothing on a 2-node system. Either we migrate to the node
that already was our preferred node, or we stay where we were.
On a 4-node system, it can slightly decrease overhead by not
calling the NUMA code as often. Since every node tends to be
directly connected to every other node, running on the wrong
node for a while does not do much damage.
However, on an 8-node system, there are far more bad nodes
than good ones, and pretending that a second choice
is actually the preferred node can greatly delay, or even
prevent, a workload from converging.
The only time we can safely pretend that a second choice
node is the preferred node is when the task is part of a
workload that spans multiple NUMA nodes.
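To illustrate the rule, here is a minimal, self-contained userspace sketch, not kernel code: `fake_numa_group`, `fake_task`, and `adopt_preferred_node()` are hypothetical stand-ins for the kernel's `numa_group`, `task_struct`, and the check this patch adds to task_numa_migrate(), with the workload's active-node set modeled as a plain bitmask instead of a `nodemask_t`.

```c
/*
 * Hypothetical model of the rule described above -- not kernel code.
 * Only adopt the destination node as the preferred node when the task
 * belongs to a multi-node workload and the destination is one of that
 * workload's active nodes.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_numa_group {                /* stand-in for the kernel's numa_group */
	unsigned long active_nodes;     /* bitmask of nodes the workload runs on */
};

struct fake_task {                      /* stand-in for task_struct */
	int preferred_nid;
	struct fake_numa_group *numa_group; /* NULL if not part of a group */
};

/* Should the migration target become the task's preferred node? */
static bool adopt_preferred_node(const struct fake_task *p, int dst_nid)
{
	return p->numa_group &&
	       (p->numa_group->active_nodes & (1UL << dst_nid));
}

int main(void)
{
	struct fake_numa_group grp = { .active_nodes = (1UL << 2) | (1UL << 5) };
	struct fake_task t = { .preferred_nid = 2, .numa_group = &grp };

	/* Migrating into an active node of the group: settle down there. */
	printf("dst=5 -> adopt=%d\n", adopt_preferred_node(&t, 5));
	/* Second-choice node outside the group: keep looking for a better one. */
	printf("dst=7 -> adopt=%d\n", adopt_preferred_node(&t, 7));
	return 0;
}
```

In the sketch, as in the patch, a task migrating to a node outside its group's active set keeps its old preferred node, so later balancing passes can still move it toward a better placement.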
Signed-off-by: Rik van Riel <riel@redhat.com>
Tested-by: Vinod Chegu <chegu_vinod@hp.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1397235629-16328-4-git-send-email-riel@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched/fair.c	11
1 file changed, 10 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ecea8d9f957c..051903f33eec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1301,7 +1301,16 @@ static int task_numa_migrate(struct task_struct *p)
 	if (env.best_cpu == -1)
 		return -EAGAIN;
 
-	sched_setnuma(p, env.dst_nid);
+	/*
+	 * If the task is part of a workload that spans multiple NUMA nodes,
+	 * and is migrating into one of the workload's active nodes, remember
+	 * this node as the task's preferred numa node, so the workload can
+	 * settle down.
+	 * A task that migrated to a second choice node will be better off
+	 * trying for a better one later. Do not set the preferred node here.
+	 */
+	if (p->numa_group && node_isset(env.dst_nid, p->numa_group->active_nodes))
+		sched_setnuma(p, env.dst_nid);
 
 	/*
 	 * Reset the scan period if the task is being rescheduled on an
```