author     Benjamin Herrenschmidt <benh@kernel.crashing.org>  2007-05-27 15:18:22 +1000
committer  Paul Mackerras <paulus@samba.org>  2007-06-02 21:01:55 +1000
commit     6ad8d010b2f364b739020e514e61b6a73444464b (patch)
tree       bb6b10d3c1b2db68a8bca66a587fa2db0a8f2fd9 /include/asm-powerpc
parent     988519acb3dbe7168276a36cbb8fd91fddbffaee (diff)
[POWERPC] Fix possible access to free pages
I think we have a subtle race on ppc64 with the TLB batching. The
common code expects tlb_flush() to actually flush any pending TLB
batch. It relies on that because it delays all page freeing until
after tlb_flush() is called, in order to ensure that no stale
references to those pages remain in any TLB; otherwise a CPU could
still access the freed pages through a stale translation.

However, our tlb_flush() only triggers the RCU freeing of page
table pages; it does not currently flush a pending TLB/hash batch,
which is, I think, an error. This fixes it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'include/asm-powerpc')
-rw-r--r--  include/asm-powerpc/tlb.h | 9 +++++++++
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/include/asm-powerpc/tlb.h b/include/asm-powerpc/tlb.h
index 0a17682..6671404 100644
--- a/include/asm-powerpc/tlb.h
+++ b/include/asm-powerpc/tlb.h
@@ -38,6 +38,15 @@ extern void pte_free_finish(void);
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
+	struct ppc64_tlb_batch *tlbbatch = &__get_cpu_var(ppc64_tlb_batch);
+
+	/* If there's a TLB batch pending, then we must flush it because the
+	 * pages are going to be freed and we really don't want to have a CPU
+	 * access a freed page because it has a stale TLB
+	 */
+	if (tlbbatch->index)
+		__flush_tlb_pending(tlbbatch);
+
 	pte_free_finish();
 }