| | | |
|---|---|---|
| author | David Daney <ddaney@caviumnetworks.com> | 2010-04-19 11:43:10 -0700 |
| committer | Ralf Baechle <ralf@linux-mips.org> | 2010-04-30 20:52:41 +0100 |
| commit | c8f3cc0b65af00be5f84c6d4ee45007643322713 (patch) | |
| tree | 9a1159172287b7fd921c5dd9a34d4c6b970187a0 /arch/mips | |
| parent | b0b4ce38a535ed3de5ec6fdd4f3c34435a1c1d1e (diff) | |
MIPS: Don't vmap things at address zero.
In the 64-bit kernel we use swapper_pg_dir for three different things.
1) xuseg mappings for kernel threads.
2) vmap mappings for all kernel-space accesses in xkseg.
3) vmap mappings for kernel modules in ksseg (kseg2).
Due to how the TLB refill handlers work, any mapping established in
xkseg or ksseg will also establish an xuseg mapping that should never
be used by the kernel.
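Concretely, the refill handler picks a slot in swapper_pg_dir using only the low bits of the faulting address, so an address at the start of xkseg selects the same slot as user address zero. The standalone C sketch below illustrates this; MAP_BASE, PGDIR_SHIFT, and PTRS_PER_PGD are illustrative assumptions, not the kernel's actual configuration-dependent values.

```c
/*
 * Standalone sketch, not kernel code: shows that indexing the page
 * table with only the low address bits makes an xkseg address and the
 * corresponding xuseg address share a slot.  All constants here are
 * assumptions for illustration.
 */
#include <stdio.h>

#define PGDIR_SHIFT  30UL                    /* assumed top-level shift   */
#define PTRS_PER_PGD 512UL                   /* assumed top-level entries */
#define MAP_BASE     0xc000000000000000UL    /* assumed start of xkseg    */

/* Derive a PGD index the way a refill handler would: low bits only. */
static unsigned long pgd_slot(unsigned long addr)
{
	return (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
}

int main(void)
{
	/* A vmap at the very start of xkseg ... */
	printf("slot for xkseg start: %lu\n", pgd_slot(MAP_BASE));
	/* ... uses the same slot as user address 0 in xuseg. */
	printf("slot for address 0:   %lu\n", pgd_slot(0UL));
	return 0;
}
```

Both lines print slot 0, which is why a vmap placed at the very start of xkseg ends up reachable at user address zero on kernel threads.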
In order to be able to use exceptions to trap NULL pointer
dereferences, we need to ensure that nothing is mapped at address
zero. Since vmap mappings in xkseg are reflected in xuseg, this means
we need to ensure that there are no vmap mappings established at the
start of xkseg. So we move VMALLOC_START up by two pages to avoid
establishing vmap mappings at the start of xkseg.
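The two-page gap is what keeps address zero (and the page right after it) unmapped: under the low-bits indexing sketched above, the xuseg shadow of a vmap address is roughly that address with the xkseg segment bits stripped. A minimal sketch of the effect, again using assumed values for MAP_BASE and PAGE_SIZE rather than the kernel's real macros:

```c
/*
 * Sketch only: with the vmalloc area starting two pages into xkseg,
 * the lowest xuseg shadow address becomes 2 * PAGE_SIZE, so user pages
 * 0 and 1 stay unmapped and NULL (or near-NULL) dereferences still
 * fault.  MAP_BASE and PAGE_SIZE are assumed illustrative values.
 */
#include <stdio.h>

#define PAGE_SIZE     4096UL                  /* assumed 4K pages       */
#define MAP_BASE      0xc000000000000000UL    /* assumed start of xkseg */
#define VMALLOC_START (MAP_BASE + (2 * PAGE_SIZE))

/* xuseg address an xkseg mapping is shadowed at: segment bits dropped. */
static unsigned long xuseg_shadow(unsigned long addr)
{
	return addr - MAP_BASE;
}

int main(void)
{
	printf("lowest shadow before: %#lx\n", xuseg_shadow(MAP_BASE));      /* prints 0      */
	printf("lowest shadow after:  %#lx\n", xuseg_shadow(VMALLOC_START)); /* prints 0x2000 */
	return 0;
}
```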
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/1129/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Diffstat (limited to 'arch/mips')
| -rw-r--r-- | arch/mips/include/asm/pgtable-64.h | 9 |
1 file changed, 7 insertions, 2 deletions
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index 26dc69d..1be4b0f 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -120,9 +120,14 @@
 #endif
 #define FIRST_USER_ADDRESS  0UL
 
-#define VMALLOC_START       MAP_BASE
+/*
+ * TLB refill handlers also map the vmalloc area into xuseg. Avoid
+ * the first couple of pages so NULL pointer dereferences will still
+ * reliably trap.
+ */
+#define VMALLOC_START       (MAP_BASE + (2 * PAGE_SIZE))
 #define VMALLOC_END \
-	(VMALLOC_START + \
+	(MAP_BASE + \
 	 min(PTRS_PER_PGD * PTRS_PER_PMD * PTRS_PER_PTE * PAGE_SIZE, \
 	     (1UL << cpu_vmbits)) - (1UL << 32))
 