author | Vlastimil Babka <vbabka@suse.cz> | 2017-09-06 16:20:51 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2017-09-06 17:27:26 -0700
commit | 10903027948d768d9639b31e9a555802e2dabafc (patch)
tree | 33a15c25e6bc0f687dcce1b9e7bf8675c7eaa524 /mm/page_owner.c
parent | 0fc542b7dd90a7080fc1a8c846d13a4ddad509ba (diff)
download | talos-obmc-linux-10903027948d768d9639b31e9a555802e2dabafc.tar.gz, talos-obmc-linux-10903027948d768d9639b31e9a555802e2dabafc.zip
mm, page_owner: don't grab zone->lock for init_pages_in_zone()
init_pages_in_zone() is run under zone->lock, which means the lock is held, with interrupts disabled, for a long time on large machines. This is currently not an issue since it runs early in boot, but a later patch will change that.
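For context, this is the pre-patch shape of the call site, condensed from the diff below: the entire pfn walk of each zone sits inside a single spin_lock_irqsave() critical section, so interrupts stay disabled for as long as the scan takes.

```
/* Pre-patch shape of init_zones_in_node(), condensed from the diff below. */
static void init_zones_in_node(pg_data_t *pgdat)
{
	struct zone *zone;
	struct zone *node_zones = pgdat->node_zones;
	unsigned long flags;

	for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
		if (!populated_zone(zone))
			continue;

		/* Interrupts off for the whole pfn walk of the zone. */
		spin_lock_irqsave(&zone->lock, flags);
		init_pages_in_zone(pgdat, zone);
		spin_unlock_irqrestore(&zone->lock, flags);
	}
}
```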
However, like other pfn scanners, we don't actually need zone->lock even when other cpus are running. The only potentially dangerous operation here is reading a bogus buddy page order due to a race, and we already know how to handle that. The worst that can happen is that we skip some early allocated pages, which should not noticeably affect the debugging power of page_owner.
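"We already know how to handle that" refers to the kernel's existing lockless pfn scanners: isolate_migratepages_block() in mm/compaction.c tolerates a racy buddy order read in roughly this way (condensed from kernels of this era; a sketch, not the verbatim source):

```
/* Condensed from isolate_migratepages_block() in mm/compaction.c (~v4.13). */
if (PageBuddy(page)) {
	unsigned long freepage_order = page_order_unsafe(page);

	/*
	 * Without zone->lock the order read can be garbage, so trust it
	 * only within the sane range; otherwise just advance one page at
	 * a time. The worst case is scanning a few extra pages.
	 */
	if (freepage_order > 0 && freepage_order < MAX_ORDER)
		low_pfn += (1UL << freepage_order) - 1;
	continue;
}
```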
Link: http://lkml.kernel.org/r/20170720134029.25268-4-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Cc: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page_owner.c')
-rw-r--r-- | mm/page_owner.c | 16
1 file changed, 10 insertions(+), 6 deletions(-)
```
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 33634f74d0b2..8e2d7137510c 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -562,11 +562,17 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 			continue;
 
 		/*
-		 * We are safe to check buddy flag and order, because
-		 * this is init stage and only single thread runs.
+		 * To avoid having to grab zone->lock, be a little
+		 * careful when reading buddy page order. The only
+		 * danger is that we skip too much and potentially miss
+		 * some early allocated pages, which is better than
+		 * heavy lock contention.
 		 */
 		if (PageBuddy(page)) {
-			pfn += (1UL << page_order(page)) - 1;
+			unsigned long order = page_order_unsafe(page);
+
+			if (order > 0 && order < MAX_ORDER)
+				pfn += (1UL << order) - 1;
 			continue;
 		}
 
@@ -585,6 +591,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 			__set_page_owner_handle(page_ext, early_handle, 0, 0);
 			count++;
 		}
+		cond_resched();
 	}
 
 	pr_info("Node %d, zone %8s: page owner found early allocated %lu pages\n",
@@ -595,15 +602,12 @@ static void init_zones_in_node(pg_data_t *pgdat)
 {
 	struct zone *zone;
 	struct zone *node_zones = pgdat->node_zones;
-	unsigned long flags;
 
 	for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
 		if (!populated_zone(zone))
 			continue;
 
-		spin_lock_irqsave(&zone->lock, flags);
 		init_pages_in_zone(pgdat, zone);
-		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 }
 
```
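For reference, page_order_unsafe() is not part of this diff; in mm/internal.h of this era it is, roughly, a READ_ONCE() wrapper around page_private() (shown as a sketch, not verbatim):

```
/*
 * Roughly as in mm/internal.h (~v4.13): READ_ONCE() forces the racy
 * load to happen exactly once, so the range check and the later use
 * of the value cannot observe two different results.
 */
#define page_order_unsafe(page)	READ_ONCE(page_private(page))
```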