path: root/include
author    Luden <luden@ghostmail.com>    2016-02-13 23:44:14 +0100
committer Ziyan <jaraidaniel@gmail.com>  2016-05-01 23:35:55 +0200
commit    ab26843c057773f42f5b46e4e4a519b39707253e (patch)
tree      d90a6f0111048a797cf217becfc2d3b70bb6e6b8 /include
parent    f819ad93dea3adee5f4a7ea87e1f6631aea83d44 (diff)
download  kernel_samsung_tuna-ab26843c057773f42f5b46e4e4a519b39707253e.zip
          kernel_samsung_tuna-ab26843c057773f42f5b46e4e4a519b39707253e.tar.gz
          kernel_samsung_tuna-ab26843c057773f42f5b46e4e4a519b39707253e.tar.bz2
Retry CMA allocations.
It looks like the Linux page migration code was never designed to be deterministic or synchronous; there are multiple race conditions between different parts of the code that make a single-pass CMA allocation very likely to fail, especially for the large memory ranges we need for Ducati. Therefore, change the allocation code to perform multiple allocation attempts. To further increase the chances of the allocation succeeding, and to make things faster, the results of previous attempts are kept: pages that are already isolated stay isolated, so retries only cover the pages that failed isolation or migration in earlier passes. Additionally, there is a small delay between attempts, which improves the chances that other code frees the pages we need.
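The retry loop described above lives in the CMA/page-migration code, which is not part of the include/ portion of this diff. The following is only a minimal sketch of that strategy; the helper names cma_try_isolate_and_migrate() and cma_release_partial(), as well as the retry count and delay, are hypothetical placeholders, not the actual functions changed by this commit.

#include <linux/delay.h>
#include <linux/errno.h>

#define CMA_ALLOC_MAX_RETRIES		5
#define CMA_ALLOC_RETRY_DELAY_MS	10

static int cma_alloc_with_retries(unsigned long start_pfn,
				  unsigned long nr_pages)
{
	int ret = -EAGAIN;
	int attempt;

	for (attempt = 0; attempt < CMA_ALLOC_MAX_RETRIES; attempt++) {
		/*
		 * Hypothetical helper: isolates and migrates whatever pages
		 * it can in [start_pfn, start_pfn + nr_pages).  Pages that
		 * were isolated on a previous attempt stay isolated, so each
		 * retry only has to deal with the pages that failed before.
		 */
		ret = cma_try_isolate_and_migrate(start_pfn, nr_pages);
		if (ret == 0)
			return 0;	/* whole range isolated and migrated */

		/*
		 * Give the rest of the kernel a short window to release the
		 * pages that are still pinned before the next attempt.
		 */
		msleep(CMA_ALLOC_RETRY_DELAY_MS);
	}

	/* Hypothetical helper: undo partial isolation on final failure. */
	cma_release_partial(start_pfn, nr_pages);
	return ret;
}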
Diffstat (limited to 'include')
-rw-r--r--  include/linux/ksm.h  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 3af13d3..d20721e 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -91,7 +91,7 @@ int rmap_walk_ksm(struct page *page, int (*rmap_one)(struct page *,
void ksm_migrate_page(struct page *newpage, struct page *oldpage);
void ksm_start_migration(void);
void ksm_finalize_migration(unsigned long start_pfn, unsigned long nr_pages);
-void ksm_abort_migration(void);
+void ksm_stop_migration(void);
#else /* !CONFIG_KSM */