author     Michal Hocko <mhocko@suse.cz>  2013-02-22 16:32:30 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2013-02-23 17:50:10 -0800
commit     a394cb8ee632ec5edce20309901ec66767497a43
tree       f1b02c0329a8614810efe5a1f45f51ae64d46d33  /mm/vmscan.c
parent     4ca3a69bcb6875c3f20802522c1b4fc56bb14608
memcg,vmscan: do not break out targeted reclaim without reclaimed pages
Targeted reclaim (for the hard and soft limits, respectively) has
traditionally tried to scan one group with decreasing priority until
nr_to_reclaim (SWAP_CLUSTER_MAX pages) is reclaimed or all priorities
are exhausted. The reclaim is then retried until the limit is met.
This approach, however, doesn't work well with deeper hierarchies,
where groups higher in the hierarchy have no pages, or only very few
(this usually happens when those groups have no tasks and hold only
pages re-parented after some of their children were removed). Those
groups are then pointlessly scanned at each decreasing priority even
though there is nothing to reclaim from them.
The easiest fix is to break out of the memcg iteration loop in
shrink_zone only if the whole hierarchy has been visited or sufficient
pages have been reclaimed. This is also more natural because the
reclaimer expects that the hierarchy under the given root is reclaimed.
As a result we can simplify the soft limit reclaim which does its own
iteration.
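The change can be illustrated with a small standalone model. The names below (shrink_zone_model, scan_control_model) are hypothetical simplifications invented for this sketch, not kernel APIs; the real loop lives in shrink_zone() and iterates memcgs with mem_cgroup_iter(). The sketch only shows the new break-out condition: limit reclaim keeps walking the hierarchy until nr_to_reclaim pages are reclaimed or the hierarchy is exhausted, while global reclaim (direct reclaim and kswapd) always visits every group.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified stand-in for struct scan_control. */
struct scan_control_model {
	unsigned long nr_to_reclaim;
	unsigned long nr_reclaimed;
	bool global_reclaim;
};

/*
 * Model of the memcg iteration in shrink_zone(): each entry of
 * pages_per_memcg[] is what shrink_lruvec() would reclaim from that
 * group.  Returns how many groups were visited.
 */
static int shrink_zone_model(const unsigned long *pages_per_memcg,
			     int nr_memcgs,
			     struct scan_control_model *sc)
{
	int visited = 0;

	for (int i = 0; i < nr_memcgs; i++) {
		sc->nr_reclaimed += pages_per_memcg[i];
		visited++;

		/*
		 * The fixed condition: limit reclaim only breaks out
		 * once enough pages have been reclaimed, so empty
		 * groups high in the hierarchy no longer end the
		 * whole priority cycle.  Global reclaim must scan
		 * every group to meet the zone-wide scan target.
		 */
		if (!sc->global_reclaim &&
		    sc->nr_reclaimed >= sc->nr_to_reclaim)
			break;	/* mem_cgroup_iter_break() in the kernel */
	}
	return visited;
}
```

With a deep hierarchy whose upper groups are empty (e.g. pages {0, 0, 0, 40, 50} and nr_to_reclaim = 32), the old code would have ended the cycle after the first, empty group; the model above keeps going until the fourth group satisfies the target, and visits all five under global reclaim.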
[yinghan@google.com: break out of the hierarchy loop only if nr_reclaimed exceeded nr_to_reclaim]
[akpm@linux-foundation.org: use conventional comparison order]
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Reported-by: Ying Han <yinghan@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Glauber Costa <glommer@parallels.com>
Cc: Li Zefan <lizefan@huawei.com>
Signed-off-by: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/vmscan.c')
 mm/vmscan.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 292f50a2a685..463990941a78 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1973,18 +1973,17 @@ static void shrink_zone(struct zone *zone, struct scan_control *sc)
 			shrink_lruvec(lruvec, sc);
 
 			/*
-			 * Limit reclaim has historically picked one
-			 * memcg and scanned it with decreasing
-			 * priority levels until nr_to_reclaim had
-			 * been reclaimed. This priority cycle is
-			 * thus over after a single memcg.
-			 *
-			 * Direct reclaim and kswapd, on the other
-			 * hand, have to scan all memory cgroups to
-			 * fulfill the overall scan target for the
+			 * Direct reclaim and kswapd have to scan all memory
+			 * cgroups to fulfill the overall scan target for the
 			 * zone.
+			 *
+			 * Limit reclaim, on the other hand, only cares about
+			 * nr_to_reclaim pages to be reclaimed and it will
+			 * retry with decreasing priority if one round over the
+			 * whole hierarchy is not sufficient.
 			 */
-			if (!global_reclaim(sc)) {
+			if (!global_reclaim(sc) &&
+			    sc->nr_reclaimed >= sc->nr_to_reclaim) {
 				mem_cgroup_iter_break(root, memcg);
 				break;
 			}