author     Mel Gorman <mgorman@techsingularity.net>          2015-08-06 15:46:51 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>    2015-08-07 04:39:41 +0300
commit     e298ff75f133f2524bb6a9a305b17c5f6ff1a6b2
tree       fa98b09c1249fc9818cafa326abdf3b52efeeeb5 /mm/memory_hotplug.c
parent     32e5a2a2be6b085febaac36efff495ad65a55e6c
mm: initialize hotplugged pages as reserved
Commit 92923ca3aace ("mm: meminit: only set page reserved in the memblock
region") broke memory hotplug, which expects the memmap for newly added
sections to be reserved until onlined by online_pages_range(). This patch
marks hotplugged pages as reserved when adding new zones.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: David Vrabel <david.vrabel@citrix.com>
Tested-by: David Vrabel <david.vrabel@citrix.com>
Cc: Nathan Zimmer <nzimmer@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
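
For context, a simplified sketch (not the exact kernel source) of the
onlining-side expectation the message refers to: online_pages_range() only
hands pages to the per-page onlining hook when the first page of the range is
still PageReserved, so a memmap left unreserved by the hotplug path is
silently skipped. The name online_one_page() below is a stand-in introduced
only for this sketch; details of the real function in mm/memory_hotplug.c
differ.

#include <linux/mm.h>

/* Stand-in for the per-page onlining hook used by the real file
 * (online_page_callback); declared here only to keep the sketch
 * self-contained. */
static void online_one_page(struct page *page);

/*
 * Simplified approximation of online_pages_range(): the whole range is
 * skipped unless its first page is still PageReserved, which is the
 * expectation the hot-add path must satisfy.
 */
static int online_pages_range_sketch(unsigned long start_pfn,
				     unsigned long nr_pages, void *arg)
{
	unsigned long i;
	unsigned long onlined_pages = *(unsigned long *)arg;

	if (PageReserved(pfn_to_page(start_pfn)))
		for (i = 0; i < nr_pages; i++) {
			online_one_page(pfn_to_page(start_pfn + i));
			onlined_pages++;
		}

	*(unsigned long *)arg = onlined_pages;
	return 0;
}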
Diffstat (limited to 'mm/memory_hotplug.c')
-rw-r--r--  mm/memory_hotplug.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 26fbba7d888f..003dbe4b060d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -446,7 +446,7 @@ static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 	int nr_pages = PAGES_PER_SECTION;
 	int nid = pgdat->node_id;
 	int zone_type;
-	unsigned long flags;
+	unsigned long flags, pfn;
 	int ret;
 
 	zone_type = zone - pgdat->node_zones;
@@ -461,6 +461,14 @@ static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 	memmap_init_zone(nr_pages, nid, zone_type,
 			 phys_start_pfn, MEMMAP_HOTPLUG);
+
+	/* online_page_range is called later and expects pages reserved */
+	for (pfn = phys_start_pfn; pfn < phys_start_pfn + nr_pages; pfn++) {
+		if (!pfn_valid(pfn))
+			continue;
+
+		SetPageReserved(pfn_to_page(pfn));
+	}
 	return 0;
 }
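
The added hunk, restated as a self-contained helper for readability. The
helper name mark_hotplug_pages_reserved() is illustrative only (the real
change is inlined in __add_zone()); pfn_valid() skips holes within the
section so only pfns backed by a struct page are touched.

#include <linux/mm.h>

/*
 * Illustrative helper mirroring the loop added to __add_zone(): mark
 * every valid page in the hot-added range as reserved so that
 * online_pages_range() will later accept and online the range.
 */
static void mark_hotplug_pages_reserved(unsigned long start_pfn,
					unsigned long nr_pages)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
		if (!pfn_valid(pfn))	/* skip holes in the section */
			continue;

		SetPageReserved(pfn_to_page(pfn));
	}
}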