author		Martin Schwidefsky <schwidefsky@de.ibm.com>	2009-09-11 10:28:57 +0200
committer	Martin Schwidefsky <schwidefsky@de.ibm.com>	2009-09-11 10:29:53 +0200
commit		50aa98bad056a17655864a4d71ebc32d95c629a7
tree		bf8d22851d99583e2ea388766697bf64672d7926 /arch/s390/mm/vmem.c
parent		c4de0c1a18237c2727dde8ad392e333539b0af3c
[S390] fix recursive locking on page_table_lock
Suzuki Poulose reported the following recursive locking bug on s390:
Here is the stack trace (see Appendix I for more info):
[<0000000000406ed6>] _spin_lock+0x52/0x94
[<0000000000103bde>] crst_table_free+0x14e/0x1a4
[<00000000001ba684>] __pmd_alloc+0x114/0x1ec
[<00000000001be8d0>] handle_mm_fault+0x2cc/0xb80
[<0000000000407d62>] do_dat_exception+0x2b6/0x3a0
[<0000000000114f8c>] sysc_return+0x0/0x8
[<00000200001642b2>] 0x200001642b2
The page_table_lock is already held in __pmd_alloc() (mm/memory.c) when it
tries to populate the pud/pgd with a newly allocated pmd. If another thread
has populated the entry before we get the chance, the new pmd is freed again
with pmd_free().
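For context, the __pmd_alloc() path in mm/memory.c at that time followed
roughly the pattern in the simplified sketch below (paraphrased, not the
verbatim upstream code); the pmd_free() call made while page_table_lock is
held is where the recursion starts on s390:

int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
{
	pmd_t *new = pmd_alloc_one(mm, address);
	if (!new)
		return -ENOMEM;

	spin_lock(&mm->page_table_lock);	/* lock taken here */
	if (pud_present(*pud))
		/* Another thread populated the pud first, throw away our pmd.
		 * On s390 pmd_free() expands to crst_table_free(). */
		pmd_free(mm, new);
	else
		pud_populate(mm, pud, new);
	spin_unlock(&mm->page_table_lock);
	return 0;
}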
On s390x, pmd_free() (and pud_free() as well) is #defined to crst_table_free(),
which itself acquires the page_table_lock to protect the crst table index
updates. This ends up taking the page_table_lock recursively.
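Before this patch the s390 crst_table_free() in arch/s390/mm/pgtable.c guarded
the crst list update with that same lock, roughly along these lines (a
simplified sketch of the pre-patch shape, not the exact source):

void crst_table_free(struct mm_struct *mm, unsigned long *table)
{
	struct page *page = virt_to_page(table);

	/* Deadlocks when called from __pmd_alloc(), which already
	 * holds mm->page_table_lock. */
	spin_lock(&mm->page_table_lock);
	list_del(&page->lru);		/* unlink from mm->context.crst_list */
	spin_unlock(&mm->page_table_lock);
	free_pages((unsigned long) table, ALLOC_ORDER);
}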
The solution suggested by Dave Hansen is to use a new spinlock in the mmu
context to protect access to the crst_list and the pgtable_list.
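Based on that description, the part of the fix outside the vmem.c hunk shown
below adds the lock to the s390 mm_context_t and switches the list
manipulation in arch/s390/mm/pgtable.c over to it; a sketch of the intent
(not the literal patch hunks):

/* arch/s390/include/asm/mmu.h */
typedef struct {
	spinlock_t list_lock;		/* new: protects crst_list/pgtable_list */
	struct list_head crst_list;
	struct list_head pgtable_list;
	/* ... remaining context fields unchanged ... */
} mm_context_t;

/* arch/s390/mm/pgtable.c: crst_table_free() and the pgtable list code
 * now take the per-context lock instead of mm->page_table_lock. */
spin_lock(&mm->context.list_lock);
list_del(&page->lru);
spin_unlock(&mm->context.list_lock);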
Reported-by: Suzuki Poulose <suzuki@in.ibm.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Diffstat (limited to 'arch/s390/mm/vmem.c')
-rw-r--r-- | arch/s390/mm/vmem.c | 1 |
1 files changed, 1 insertions, 0 deletions
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index e4868bf..5f91a38 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -331,6 +331,7 @@ void __init vmem_map_init(void)
 	unsigned long start, end;
 	int i;
 
+	spin_lock_init(&init_mm.context.list_lock);
 	INIT_LIST_HEAD(&init_mm.context.crst_list);
 	INIT_LIST_HEAD(&init_mm.context.pgtable_list);
 	init_mm.context.noexec = 0;