author     Johannes Weiner <hannes@cmpxchg.org>            2014-04-08 16:04:10 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2014-04-08 16:48:51 -0700
commit     0bf1457f0cfca7bc026a82323ad34bcf58ad035d (patch)
tree       d8e3aafd37360edd8787ad0a6cafcbfe1750b25f /mm
parent     e9f37d3a8d126e73f5737ef548cdf6f618e295e4 (diff)
mm: vmscan: do not swap anon pages just because free+file is low
Page reclaim force-scans / swaps anonymous pages when file cache drops
below the high watermark of a zone in order to prevent what little cache
remains from thrashing.
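
For reference, this is the check in question, excerpted from the removed block in the diff at the bottom of this page:

	if (global_reclaim(sc)) {
		free = zone_page_state(zone, NR_FREE_PAGES);
		if (unlikely(file + free <= high_wmark_pages(zone))) {
			/* file cache looks doomed: force-scan anon */
			scan_balance = SCAN_ANON;
			goto out;
		}
	}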
However, on bigger machines the high watermark value can be quite large
and when the workload is dominated by a static anonymous/shmem set, the
file set might just be a small window of used-once cache. In such
situations, the VM starts swapping heavily when instead it should be
recycling the no longer used cache.
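
How large can that watermark get? A back-of-the-envelope sketch, in ordinary userspace C rather than kernel code, assuming the default min_free_kbytes heuristic of kernels from this period (min_free_kbytes roughly 4 * sqrt(lowmem in KB), capped at 64MB, with the high watermark roughly 1.5x the min watermark); actual per-zone values depend on zone sizes and tuning:

	/* build: cc wmark.c -lm */
	#include <math.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned long mem_gb[] = { 1, 16, 64, 256 };

		for (unsigned i = 0; i < sizeof(mem_gb) / sizeof(mem_gb[0]); i++) {
			unsigned long lowmem_kb = mem_gb[i] << 20; /* GB -> KB */
			/* min_free_kbytes ~= 4 * sqrt(lowmem_kbytes) ... */
			unsigned long min_kb = 4 * (unsigned long)sqrt((double)lowmem_kb);

			/* ... clamped by the kernel at 64MB */
			if (min_kb > 65536)
				min_kb = 65536;

			/* high watermark ~= min + min/2, summed over zones */
			printf("%4luGB RAM: min ~%5luKB, high ~%5luKB\n",
			       mem_gb[i], min_kb, min_kb + min_kb / 2);
		}
		return 0;
	}

On a 64GB machine this works out to a high watermark in the ballpark of 48MB; once the fair zone policy splits the file cache across zones, a small used-once file window can easily sit below that per-zone threshold.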
This is a longer-standing problem, but it's more likely to trigger after
commit 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
because file pages can no longer accumulate in a single zone and are
dispersed into smaller fractions among the available zones.
To resolve this, do not force scan anon when file pages are low but
instead rely on the scan/rotation ratios to make the right prediction.
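
Those ratios weigh pages recently scanned against pages recently rotated (found referenced and put back) on each LRU list: a heavily re-referenced list gets scanned less. A minimal standalone sketch of that weighting, simplified from the get_scan_count() logic of this era (names mirror the kernel's, but the numbers are made up for illustration):

	#include <stdio.h>

	struct reclaim_stat {
		/* [0] = anon, [1] = file */
		unsigned long recent_scanned[2];
		unsigned long recent_rotated[2]; /* rotated = found referenced */
	};

	int main(void)
	{
		/* Static anon set: nearly every scanned anon page was in use,
		 * while the used-once file cache was mostly unreferenced. */
		struct reclaim_stat rs = {
			.recent_scanned = { 1000, 1000 },
			.recent_rotated = {  950,   50 },
		};
		unsigned long anon_prio = 60;              /* default swappiness */
		unsigned long file_prio = 200 - anon_prio;

		/* High rotation rate => list is in active use => scan it less. */
		unsigned long ap = anon_prio * (rs.recent_scanned[0] + 1)
					     / (rs.recent_rotated[0] + 1);
		unsigned long fp = file_prio * (rs.recent_scanned[1] + 1)
					     / (rs.recent_rotated[1] + 1);

		printf("anon weight %lu vs file weight %lu -> reclaim mostly file\n",
		       ap, fp);
		return 0;
	}

With these inputs the file weight dwarfs the anon weight, so reclaim recycles the used-once cache instead of swapping, which is exactly the prediction the patch relies on.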
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Rafael Aquini <aquini@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: <stable@kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/vmscan.c	16
1 file changed, 1 insertion(+), 15 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 06879ead7380..9b6497eda806 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1862,7 +1862,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	struct zone *zone = lruvec_zone(lruvec);
 	unsigned long anon_prio, file_prio;
 	enum scan_balance scan_balance;
-	unsigned long anon, file, free;
+	unsigned long anon, file;
 	bool force_scan = false;
 	unsigned long ap, fp;
 	enum lru_list lru;
@@ -1916,20 +1916,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 		get_lru_size(lruvec, LRU_INACTIVE_FILE);
 
 	/*
-	 * If it's foreseeable that reclaiming the file cache won't be
-	 * enough to get the zone back into a desirable shape, we have
-	 * to swap. Better start now and leave the - probably heavily
-	 * thrashing - remaining file pages alone.
-	 */
-	if (global_reclaim(sc)) {
-		free = zone_page_state(zone, NR_FREE_PAGES);
-		if (unlikely(file + free <= high_wmark_pages(zone))) {
-			scan_balance = SCAN_ANON;
-			goto out;
-		}
-	}
-
-	/*
 	 * There is enough inactive page cache, do not reclaim
 	 * anything from the anonymous working set right now.
 	 */