author | Gerald Schaefer <gerald.schaefer@de.ibm.com> | 2009-07-22 00:36:56 +0200 |
---|---|---|
committer | Rafael J. Wysocki <rjw@sisk.pl> | 2009-09-14 20:26:59 +0200 |
commit | 98e73dc5d2dadfcb95305ad71ac9239f4e361870 (patch) | |
tree | f6a0b8098b02d2ef8e673ceef34150df55425ce7 /kernel/power | |
parent | ef4aede3f10d82adef1fb044b565ba5f08f851e0 (diff) | |
PM / Hibernate / Memory hotplug: Always use for_each_populated_zone()
Use for_each_populated_zone() instead of for_each_zone() in the hibernation code. This fixes a bug on s390, where we allow both config options HIBERNATION and MEMORY_HOTPLUG, so that we also have a ZONE_MOVABLE there. We only allow hibernation if no memory hotplug operation has been performed, so in practice the two features are mutually exclusive, but this way we don't need two differently configured (distribution) kernels.
If we have an unpopulated ZONE_MOVABLE, we allow hibernation but run into a BUG_ON() in memory_bm_test/set/clear_bit(), because the hibernation code iterates through all zones, not only the populated ones, in several places. For example, swsusp_free() uses for_each_zone() and then checks pfn_valid(), which is true even for a pfn in an unpopulated zone, so the pfn cannot be found in the memory bitmap and the BUG_ON() triggers.
Replacing all occurrences of for_each_zone() in the hibernation code with for_each_populated_zone() fixes this issue.
[rjw: Rebased on top of linux-next hibernation patches.]
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
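The failure mode described in the commit message can be illustrated outside the kernel. The following is a minimal userspace sketch, not kernel code: struct zone is reduced to a few fields, and bitmap_covers_pfn()/bm_clear_bit() are hypothetical stand-ins for the hibernation memory bitmap and memory_bm_clear_bit(). It assumes one zone whose pfn range is spanned (so a pfn_valid()-style check would pass) but whose pages were never added to the bitmap, i.e. the unpopulated ZONE_MOVABLE case.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for struct zone and the hibernation memory bitmap. */
struct zone {
	const char *name;
	unsigned long start_pfn, spanned_pages;
	unsigned long present_pages;	/* 0 => unpopulated */
};

static struct zone zones[] = {
	{ "Normal",  0,   100, 100 },	/* populated, pfns covered by the bitmap */
	{ "Movable", 100, 50,  0   },	/* unpopulated ZONE_MOVABLE, as on s390  */
};

/* Model: the bitmap only covers pfns that belong to populated zones. */
static bool bitmap_covers_pfn(unsigned long pfn)
{
	return pfn < 100;
}

static void bm_clear_bit(unsigned long pfn)
{
	assert(bitmap_covers_pfn(pfn));	/* kernel: BUG_ON() in memory_bm_clear_bit() */
}

int main(void)
{
	for (size_t i = 0; i < sizeof(zones) / sizeof(zones[0]); i++) {
		struct zone *z = &zones[i];

		/* The fix: skip unpopulated zones, as for_each_populated_zone() does. */
		if (!z->present_pages)
			continue;

		/* Pre-patch behaviour: every spanned pfn looks valid and is passed to
		 * the bitmap; without the check above, the Movable zone would trip
		 * the assertion. */
		for (unsigned long pfn = z->start_pfn;
		     pfn < z->start_pfn + z->spanned_pages; pfn++)
			bm_clear_bit(pfn);
	}
	puts("no assertion hit");
	return 0;
}
```

Removing the present_pages check reproduces the assertion failure, which corresponds to the BUG_ON() hit by swsusp_free() before this patch.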
Diffstat (limited to 'kernel/power')
-rw-r--r-- | kernel/power/snapshot.c | 12 |
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 0a06b11..bf06658 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -853,7 +853,7 @@ static unsigned int count_highmem_pages(void)
 	struct zone *zone;
 	unsigned int n = 0;
 
-	for_each_zone(zone) {
+	for_each_populated_zone(zone) {
 		unsigned long pfn, max_zone_pfn;
 
 		if (!is_highmem(zone))
@@ -916,7 +916,7 @@ static unsigned int count_data_pages(void)
 	unsigned long pfn, max_zone_pfn;
 	unsigned int n = 0;
 
-	for_each_zone(zone) {
+	for_each_populated_zone(zone) {
 		if (is_highmem(zone))
 			continue;
 
@@ -1010,7 +1010,7 @@ copy_data_pages(struct memory_bitmap *copy_bm, struct memory_bitmap *orig_bm)
 	struct zone *zone;
 	unsigned long pfn;
 
-	for_each_zone(zone) {
+	for_each_populated_zone(zone) {
 		unsigned long max_zone_pfn;
 
 		mark_free_pages(zone);
@@ -1065,7 +1065,7 @@ void swsusp_free(void)
 	struct zone *zone;
 	unsigned long pfn, max_zone_pfn;
 
-	for_each_zone(zone) {
+	for_each_populated_zone(zone) {
 		max_zone_pfn = zone->zone_start_pfn + zone->spanned_pages;
 		for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
 			if (pfn_valid(pfn)) {
@@ -1397,7 +1397,7 @@ static int enough_free_mem(unsigned int nr_pages, unsigned int nr_highmem)
 	struct zone *zone;
 	unsigned int free = alloc_normal;
 
-	for_each_zone(zone)
+	for_each_populated_zone(zone)
 		if (!is_highmem(zone))
 			free += zone_page_state(zone, NR_FREE_PAGES);
 
@@ -1688,7 +1688,7 @@ static int mark_unsafe_pages(struct memory_bitmap *bm)
 	unsigned long pfn, max_zone_pfn;
 
 	/* Clear page flags */
-	for_each_zone(zone) {
+	for_each_populated_zone(zone) {
 		max_zone_pfn = zone->zone_start_pfn + zone->spanned_pages;
 		for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
 			if (pfn_valid(pfn))
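For context, the difference between the two iterators is that for_each_populated_zone() walks the same zone list as for_each_zone() but skips zones whose present page count is zero. The sketch below only outlines that idea in terms of the existing helpers first_online_pgdat(), next_zone() and populated_zone(); it is not the verbatim macro, whose authoritative definition lives in include/linux/mmzone.h.

```c
/* Sketch of the iterator's semantics, not the exact mmzone.h macro:
 * walk every zone, but execute the loop body only for populated zones. */
#define for_each_populated_zone(zone)				\
	for ((zone) = (first_online_pgdat())->node_zones;	\
	     (zone);						\
	     (zone) = next_zone(zone))				\
		if (!populated_zone(zone))			\
			; /* skip zones with no present pages */\
		else
```

The dangling-else trick lets the caller's loop body attach to the final else, so an unpopulated zone is silently skipped rather than handed to the hibernation bitmap code.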