author    Jan Beulich <JBeulich@novell.com>              2009-09-21 17:03:03 -0700
committer Linus Torvalds <torvalds@linux-foundation.org> 2009-09-22 07:17:38 -0700
commit    4738e1b9cf8f9e28d7de080a5e6ce5d0095ea18f (patch)
tree      96804aacfd79495dbb787055473f92ccb6dab65d /mm
parent    78986a678f6ec3759a01976749f4437d8bf2d6c3 (diff)
memory hotplug: fix updating of num_physpages for hot plugged memory
Sizing of memory allocations shouldn't depend on the number of physical pages found in a system, as that generally includes (perhaps a huge amount of) non-RAM pages. The amount of what actually is usable as storage should instead be used as a basis here.

In line with that, the memory hotplug code should update num_physpages in a way that it retains its original (post-boot) meaning; in particular, decreasing the value should at best be done with great care - this patch doesn't try to ever decrease this value at all as it doesn't really seem meaningful to do so.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
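As a hedged illustration of the update rule this patch introduces, here is a minimal userspace sketch in C (not kernel code: the starting counter values and the online_page_sim() helper are made up for this example). totalram_pages keeps counting every page that becomes usable, while num_physpages is only ever grown so that it still spans up to the highest hot-plugged page frame number, and is never shrunk when pages go offline.

#include <stdio.h>

/* Stand-ins for the kernel counters; the initial values are assumed. */
static unsigned long num_physpages  = 0x40000; /* spans pfns 0..0x3ffff after boot */
static unsigned long totalram_pages = 0x3f000; /* pages actually usable as RAM */

/* Hypothetical helper mirroring the patched online_page() update rule. */
static void online_page_sim(unsigned long pfn)
{
	totalram_pages++;                /* one more page of usable RAM */
	if (pfn >= num_physpages)
		num_physpages = pfn + 1; /* grow to cover the new pfn, never shrink */
}

int main(void)
{
	online_page_sim(0x80000);        /* hot-add above the current span: num_physpages grows */
	online_page_sim(0x20000);        /* hot-add within the span: num_physpages is unchanged */
	printf("num_physpages=%#lx totalram_pages=%#lx\n",
	       num_physpages, totalram_pages);
	return 0;
}

Running this prints num_physpages=0x80001 and totalram_pages=0x3f002: usable RAM grew by two pages, while num_physpages only moved up to cover the new highest pfn, matching the post-boot meaning the commit message describes.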
Diffstat (limited to 'mm')
-rw-r--r--  mm/memory_hotplug.c  6
1 file changed, 4 insertions, 2 deletions
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 616236e..efe3e0e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -339,8 +339,11 @@ EXPORT_SYMBOL_GPL(__remove_pages);
 void online_page(struct page *page)
 {
+	unsigned long pfn = page_to_pfn(page);
+
 	totalram_pages++;
-	num_physpages++;
+	if (pfn >= num_physpages)
+		num_physpages = pfn + 1;
 #ifdef CONFIG_HIGHMEM
 	if (PageHighMem(page))
@@ -832,7 +835,6 @@ repeat:
 	zone->present_pages -= offlined_pages;
 	zone->zone_pgdat->node_present_pages -= offlined_pages;
 	totalram_pages -= offlined_pages;
-	num_physpages -= offlined_pages;
 	setup_per_zone_wmarks();
 	calculate_zone_inactive_ratio(zone);