author | Hugh Dickins <hugh@veritas.com> | 2005-10-29 18:16:30 -0700
---|---|---
committer | Linus Torvalds <torvalds@g5.osdl.org> | 2005-10-29 21:40:41 -0700
commit | 508034a32b819a2d40aa7ac0dbc8cd2e044c2de6 | (patch)
tree | 906a8f0095af24f403b30d649d3ec1ffb4ff2f50 | /mm/mmap.c
parent | 8f4f8c164cb4af1432cc25eda82928ea4519ba72 | (diff)
[PATCH] mm: unmap_vmas with inner ptlock
Remove the page_table_lock from around the calls to unmap_vmas, and replace
the pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are
now safe to descend without page_table_lock.
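In sketch form, the new inner-lock pattern in zap_pte_range looks like the following (a simplified illustration under a hypothetical name, zap_pte_range_sketch; the real zap_pte_range also handles rss accounting, zap_details filtering and mmu_gather batching):

```c
/*
 * Sketch of the inner-ptlock pattern: the pte walker takes and drops
 * the page table lock itself, instead of relying on the caller holding
 * mm->page_table_lock across the whole walk.  Simplified illustration;
 * the real zap_pte_range() does much more per pte.
 */
static void zap_pte_range_sketch(struct mm_struct *mm, pmd_t *pmd,
				 unsigned long addr, unsigned long end)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* was: pte = pte_offset_map(pmd, addr), caller held the lock */
	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	do {
		pte_t ptent = *pte;

		if (pte_none(ptent))
			continue;
		/* ... clear the entry, flush, free the page ... */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	pte_unmap_unlock(pte - 1, ptl);
}
```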
Don't attempt fancy locking for hugepages; just take page_table_lock in
unmap_hugepage_range. That makes zap_hugepage_range, and the hugetlb test in
zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway. Nor
does unmap_vmas have much use for its mm arg now.
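A condensed sketch of that straightforward hugepage locking, modelled on the hugetlb code of this era (sanity checks and rss accounting omitted, and shown under a hypothetical name):

```c
/*
 * Sketch: no fancy per-pte-page locking for hugepages, just
 * mm->page_table_lock around the whole walk.  Condensed from the
 * 2.6.14-era unmap_hugepage_range(); checks and accounting omitted.
 */
void unmap_hugepage_range_sketch(struct vm_area_struct *vma,
				 unsigned long start, unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;
	unsigned long address;
	pte_t *ptep;
	pte_t pte;

	spin_lock(&mm->page_table_lock);
	for (address = start; address < end; address += HPAGE_SIZE) {
		ptep = huge_pte_offset(mm, address);
		if (!ptep)
			continue;
		pte = ptep_get_and_clear(mm, address, ptep);
		if (pte_none(pte))
			continue;
		put_page(pte_page(pte));	/* release the huge page */
	}
	spin_unlock(&mm->page_table_lock);
	flush_tlb_range(vma, start, end);
}
```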
The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
page_table_lock: if they're implemented at all, they typically come down to
flush_cache_range (usually done outside page_table_lock) and flush_tlb_range
(which we already audited for the mprotect case).
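For illustration, an implementation along the lines of the ARM definitions of the time (the generic asm-generic/tlb.h versions are empty):

```c
/*
 * Roughly the ARM-style definitions: cache flush when starting a vma,
 * TLB flush when done with it, both skipped for a full-mm teardown
 * where one big flush at tlb_finish_mmu() time suffices.
 */
#define tlb_start_vma(tlb, vma)						\
	do {								\
		if (!(tlb)->fullmm)					\
			flush_cache_range(vma, (vma)->vm_start, (vma)->vm_end); \
	} while (0)

#define tlb_end_vma(tlb, vma)						\
	do {								\
		if (!(tlb)->fullmm)					\
			flush_tlb_range(vma, (vma)->vm_start, (vma)->vm_end); \
	} while (0)
```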
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/mmap.c')
-rw-r--r-- | mm/mmap.c | 8
1 file changed, 2 insertions, 6 deletions
```diff
@@ -1673,9 +1673,7 @@ static void unmap_region(struct mm_struct *mm,
 	lru_add_drain();
 	tlb = tlb_gather_mmu(mm, 0);
 	update_hiwater_rss(mm);
-	spin_lock(&mm->page_table_lock);
-	unmap_vmas(&tlb, mm, vma, start, end, &nr_accounted, NULL);
-	spin_unlock(&mm->page_table_lock);
+	unmap_vmas(&tlb, vma, start, end, &nr_accounted, NULL);
 	vm_unacct_memory(nr_accounted);
 	free_pgtables(&tlb, vma, prev? prev->vm_end: FIRST_USER_ADDRESS,
 				 next? next->vm_start: 0);
@@ -1958,9 +1956,7 @@ void exit_mmap(struct mm_struct *mm)
 	tlb = tlb_gather_mmu(mm, 1);
 	/* Don't update_hiwater_rss(mm) here, do_exit already did */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
-	spin_lock(&mm->page_table_lock);
-	end = unmap_vmas(&tlb, mm, vma, 0, -1, &nr_accounted, NULL);
-	spin_unlock(&mm->page_table_lock);
+	end = unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
 	vm_unacct_memory(nr_accounted);
 	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
 	tlb_finish_mmu(tlb, 0, end);
```
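The matching prototype change implied by these call sites (it would live in include/linux/mm.h, outside this mm/mmap.c-limited diffstat) looks roughly like:

```c
/* Before: callers passed mm and wrapped the call in page_table_lock. */
unsigned long unmap_vmas(struct mmu_gather **tlb, struct mm_struct *mm,
		struct vm_area_struct *start_vma, unsigned long start_addr,
		unsigned long end_addr, unsigned long *nr_accounted,
		struct zap_details *details);

/* After: no mm arg; locking happens inside the pte walkers. */
unsigned long unmap_vmas(struct mmu_gather **tlb,
		struct vm_area_struct *start_vma, unsigned long start_addr,
		unsigned long end_addr, unsigned long *nr_accounted,
		struct zap_details *details);
```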