path: root/arch/powerpc/mm
...
* [POWERPC] Fix invalid semicolon after if statement (Ilpo Järvinen, 2007-08-17, 1 file changed, -1/+1)

    A similar fix to netfilter from Eric Dumazet inspired me to look around a bit by using some grep/sed stuff, as looking for this kind of bug seemed easy to automate. This is one of the places I found where it looks like this semicolon is not valid.

    Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
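    For illustration only (a hypothetical example of the bug pattern this sweep looks for, not the actual hunk): a stray semicolon terminates the if statement, so the guarded statement always runs.

        /* Buggy: the stray ';' ends the if right here... */
        if (error);
                return -EINVAL;  /* ...so this return happens unconditionally */

        /* Fixed: the return is only taken when error is set. */
        if (error)
                return -EINVAL;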
* [POWERPC] Fix size check for hugetlbfs (Benjamin Herrenschmidt, 2007-08-10, 1 file changed, -0/+2)

    My "slices" address space management code that was added in the 2.6.22 implementation of get_unmapped_area() doesn't properly check that the size is a multiple of the requested page size. This allows userland to create VMAs that aren't a multiple of the huge page size with hugetlbfs (since hugetlbfs entirely relies on get_unmapped_area() to do that checking), which leads to a kernel BUG() when such areas are torn down.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
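    A minimal sketch of the kind of check that was missing, with hypothetical names (not the actual slice code): a hugepage get_unmapped_area() path must reject lengths that are not a multiple of the requested page size before a VMA is created.

        /* Sketch only: reject a request whose length is not a multiple of
         * the page size implied by pshift (e.g. 24 for 16MB huge pages). */
        static int check_mapping_len(unsigned long len, unsigned int pshift)
        {
                if (len & ((1UL << pshift) - 1))
                        return -EINVAL; /* not page-size aligned */
                return 0;
        }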
* [POWERPC] Fix potential duplicate entry in SLB shadow buffer (Paul Mackerras, 2007-08-10, 1 file changed, -29/+16)

    We were getting a duplicate entry in the SLB shadow buffer in slb_flush_and_rebolt() if the kernel stack was in the same segment as PAGE_OFFSET, which on POWER6 causes the hypervisor to terminate the partition with an error. This fixes it.

    Also we were not creating an SLB entry (or an SLB shadow buffer entry) for the kernel stack on secondary CPUs when starting the CPU. This isn't a major problem, since an appropriate entry will be created on demand, but this fixes that also for consistency.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fixes for the SLB shadow buffer code (Michael Neuling, 2007-08-03, 2 files changed, -11/+19)

    On a machine with hardware 64kB pages and a kernel configured for a 64kB base page size, we need to change the vmalloc segment from 64kB pages to 4kB pages if some driver creates a non-cacheable mapping in the vmalloc area. However, we never updated the SLB shadow buffer. This fixes it. Thanks to paulus for finding this.

    Also added some write barriers to ensure the shadow buffer contents are always consistent.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix parse_drconf_memory() for 64-bit start addresses (Michael Ellerman, 2007-08-03, 1 file changed, -2/+2)

    Some new machines use the "ibm,dynamic-reconfiguration-memory" property to provide memory layout information, rather than via memory nodes. There is a bug in the code to parse this property for start addresses over 4GB; we store the start address in an unsigned int, which means we throw away the high bits and add apparently duplicate regions. This results in a BUG() in free_bootmem_core(). This fixes it by using an unsigned long instead.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
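    A standalone illustration of the truncation (not the parse_drconf_memory() code itself): assigning a 64-bit start address to an unsigned int silently drops the high bits, so a region starting at 4GB looks like a region starting at 0.

        #include <stdio.h>

        int main(void)
        {
                unsigned long long start = 0x100000000ULL; /* region base at 4GB */
                unsigned int truncated = start;            /* high bits lost: becomes 0 */
                unsigned long kept = start;                /* 64-bit on ppc64: value preserved */

                printf("%#x %#lx\n", truncated, kept);     /* prints: 0 0x100000000 */
                return 0;
        }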
* [POWERPC] Fix special PTE code for secondary hash bucket (Paul Mackerras, 2007-08-03, 1 file changed, -2/+4)

    The code for mapping special 4k pages on kernels using a 64kB base page size was missing the code for doing the RPN (real page number) manipulation when inserting the hardware PTE in the secondary hash bucket. It needs the same code as has already been added to the code that inserts the HPTE in the primary hash bucket. This adds it.

    Spotted by Ben Herrenschmidt.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix loop with unsigned long counter variable (Manish Ahuja, 2007-07-26, 1 file changed, -2/+2)

    This fixes a possible infinite loop when the unsigned long counter "i" is used in lmb_add_region() in the following for loop:

        for (i = rgn->cnt-1; i >= 0; i--)

    by making the loop counter "i" be signed.

    Signed-off-by: Manish Ahuja <ahuja@austin.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
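    A hypothetical reconstruction of the failure mode (names are illustrative, not the real lmb code): with an unsigned counter, "i >= 0" is always true, so once i decrements past zero it wraps to ULONG_MAX and the loop never terminates.

        /* Buggy: an unsigned counter can never be negative, so the exit
         * condition is never false and the loop spins forever. */
        static void walk_regions_buggy(unsigned long cnt)
        {
                unsigned long i;

                for (i = cnt - 1; i >= 0; i--)  /* always true: infinite loop */
                        ;
        }

        /* Fixed: a signed counter drops below zero and the loop exits. */
        static void walk_regions_fixed(unsigned long cnt)
        {
                long i;

                for (i = cnt - 1; i >= 0; i--)
                        ;
        }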
* Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2007-07-22, 2 files changed, -3/+9)
|\
    * 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
      [POWERPC] Clean up duplicate includes in drivers/macintosh/
      [POWERPC] Quiet section mismatch warning on pcibios_setup
      [POWERPC] init and exit markings for hvc_iseries
      [POWERPC] Quiet section mismatch in hvc_rtas.c
      [POWERPC] Constify of_platform_driver match_table
      [POWERPC] hvcs: Make some things static and const
      [POWERPC] Constify of_platform_driver name
      [POWERPC] MPIC protected sources
      [POWERPC] of_detach_node()'s device node argument cannot be const
      [POWERPC] Fix ARCH=ppc builds
      [POWERPC] mv64x60: Use mutex instead of semaphore
      [POWERPC] Allow smp_call_function_single() to current cpu
      [POWERPC] Allow exec faults on readable areas on classic 32-bit PowerPC
      [POWERPC] Fix future firmware feature fixups function failure
      [POWERPC] fix showing xmon help
      [POWERPC] Make xmon_write accept a const buffer
      [POWERPC] Fix misspelled "CONFIG_CHECK_CACHE_COHERENCY" Kconfig option.
      [POWERPC] cell: CONFIG_SPE_BASE is a typo
| * [POWERPC] Allow exec faults on readable areas on classic 32-bit PowerPC (Paul Mackerras, 2007-07-22, 1 file changed, -1/+7)

    Classic 32-bit PowerPC CPUs, and the early 64-bit PowerPC CPUs, don't provide a way to prevent execution from readable pages, that is, the MMU doesn't distinguish between data reads and instruction reads, although a different exception is taken for faults in data accesses and instruction accesses.

    Commit 9ba4ace39fdfe22268daca9f28c5df384ae462cf, in the course of fixing another bug, added a check that meant that a page fault due to an instruction access would fail if the vma did not have the VM_EXEC flag set. This gives an inconsistent enforcement on these CPUs of the no-execute status of the vma (since reading from the page is sufficient to allow subsequent execution from it), and causes old versions of ppc32 glibc (2.2 and earlier) to fail, since they rely on executing the word before the GOT but don't have it marked executable.

    This fixes the problem by allowing execution from readable (or writable) areas on CPUs which do not provide separate control over data and instruction reads.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Acked-by: Jon Loeliger <jdl@freescale.com>
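    A hedged, self-contained sketch of the policy described above (not the actual do_page_fault() hunk; the VM_* values mirror the kernel's but are redefined here for illustration): on CPUs whose MMU cannot tell instruction fetches from data reads, an instruction-access fault should be allowed whenever the vma is readable or writable, not only when VM_EXEC is set.

        #include <stdbool.h>

        #define VM_READ  0x1
        #define VM_WRITE 0x2
        #define VM_EXEC  0x4

        /* Sketch: decide whether an instruction-access fault may proceed on a
         * classic 32-bit CPU with no separate hardware execute permission. */
        static bool exec_fault_allowed(unsigned long vm_flags)
        {
                /* Readable (or writable) implies "executable" here, since
                 * reading the page is already enough to execute from it. */
                return vm_flags & (VM_EXEC | VM_READ | VM_WRITE);
        }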
| * [POWERPC] cell: CONFIG_SPE_BASE is a typo (Geert Uytterhoeven, 2007-07-22, 1 file changed, -2/+2)

    The config symbol for SPE support is called CONFIG_SPU_BASE, not CONFIG_SPE_BASE.

    Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* | powerpc: tlb_32.c build fix (Mariusz Kozlowski, 2007-07-21, 1 file changed, -0/+2)
|/
    allnoconfig results in this:

        CC      arch/powerpc/mm/tlb_32.o
        In file included from include/asm/tlb.h:60,
                         from arch/powerpc/mm/tlb_32.c:30:
        include/asm-generic/tlb.h: In function 'tlb_flush_mmu':
        include/asm-generic/tlb.h:76: error: implicit declaration of function 'release_pages'
        include/asm-generic/tlb.h: In function 'tlb_remove_page':
        include/asm-generic/tlb.h:105: error: implicit declaration of function 'page_cache_release'

    Signed-off-by: Mariusz Kozlowski <m.kozlowski@tuxland.pl>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm: Remove slab destructors from kmem_cache_create(). (Paul Mundt, 2007-07-20, 2 files changed, -3/+2)

    Slab destructors were no longer supported after Christoph's c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They've been BUGs for both slab and slub, and slob never supported them either.

    This rips out support for the dtor pointer from kmem_cache_create() completely and fixes up every single callsite in the kernel (there were about 224, not including the slab allocator definitions themselves, or the documentation references).

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
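    An illustrative call-site change, assuming the 2.6.22-era prototype in which kmem_cache_create() took a constructor and a destructor as its last two arguments; the cache name, struct and ctor below are hypothetical, not a specific caller touched by this patch.

        /* Before: callers passed an explicit (almost always NULL) destructor. */
        cachep = kmem_cache_create("my_cache", sizeof(struct my_obj),
                                   0, SLAB_HWCACHE_ALIGN, my_ctor, NULL);

        /* After: the dtor parameter is removed from the prototype entirely,
         * so every call site simply drops the trailing argument. */
        cachep = kmem_cache_create("my_cache", sizeof(struct my_obj),
                                   0, SLAB_HWCACHE_ALIGN, my_ctor);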
* mm: fault feedback #2 (Nick Piggin, 2007-07-19, 1 file changed, -15/+11)

    This patch completes Linus's wish that the fault return codes be made into bit flags, which I agree makes everything nicer. This requires all handle_mm_fault callers to be modified (possibly the modifications should go further and do things like fault accounting in handle_mm_fault -- however that would be for another patch).

    [akpm@linux-foundation.org: fix alpha build]
    [akpm@linux-foundation.org: fix s390 build]
    [akpm@linux-foundation.org: fix sparc build]
    [akpm@linux-foundation.org: fix sparc64 build]
    [akpm@linux-foundation.org: fix ia64 build]
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Russell King <rmk@arm.linux.org.uk>
    Cc: Ian Molton <spyro@f2s.com>
    Cc: Bryan Wu <bryan.wu@analog.com>
    Cc: Mikael Starvik <starvik@axis.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: Hirokazu Takata <takata@linux-m32r.org>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Roman Zippel <zippel@linux-m68k.org>
    Cc: Greg Ungerer <gerg@uclinux.org>
    Cc: Matthew Wilcox <willy@debian.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Paul Mundt <lethal@linux-sh.org>
    Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
    Cc: Richard Curnow <rc@rc0.org.uk>
    Cc: William Lee Irwin III <wli@holomorphy.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Jeff Dike <jdike@addtoit.com>
    Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
    Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
    Cc: Chris Zankel <chris@zankel.net>
    Acked-by: Kyle McMartin <kyle@mcmartin.ca>
    Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
    Acked-by: Ralf Baechle <ralf@linux-mips.org>
    Acked-by: Andi Kleen <ak@muc.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    [ Still apparently needs some ARM and PPC loving - Linus ]
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
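    A hedged sketch of what "return codes as bit flags" means for a handle_mm_fault() caller; this is a generic arch fault-handler fragment in the style of that era, not the powerpc hunk itself, and assumes the VM_FAULT_ERROR/OOM/SIGBUS/MAJOR flags from this series.

        /* Callers now test individual VM_FAULT_* bits instead of switching
         * on an enumerated return value. */
        fault = handle_mm_fault(mm, vma, address, is_write);
        if (unlikely(fault & VM_FAULT_ERROR)) {
                if (fault & VM_FAULT_OOM)
                        goto out_of_memory;
                else if (fault & VM_FAULT_SIGBUS)
                        goto do_sigbus;
                BUG();
        }
        if (fault & VM_FAULT_MAJOR)
                current->maj_flt++;     /* major fault: required I/O */
        else
                current->min_flt++;     /* minor fault: page was resident */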
* Merge branch 'for-2.6.23' into merge (Paul Mackerras, 2007-07-11, 19 files changed, -643/+128)
|\
| * [POWERPC] Remove extra return statement (Manish Ahuja, 2007-07-10, 1 file changed, -2/+0)

    Found two return statements, one right after the other, in arch_add_memory(). This removes the superfluous one.

    Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Move inline asm eieio to using eieio inline function (Kumar Gala, 2007-07-10, 2 files changed, -3/+3)

    Use the eieio function so we can redefine what eieio does, rather than direct inline asm. This is partly code cleanup and partly because not all PPCs have eieio (Book E has mbar, which maps to eieio).

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
| * [POWERPC] Kill typedef-ed structs for hash PTEs and BATs (David Gibson, 2007-06-14, 4 files changed, -17/+17)

    Using typedefs to rename structure types is frowned on by CodingStyle. However, we do so for the hash PTE structure on both ppc32 (where it's called "PTE") and ppc64 (where it's called "hpte_t"). On ppc32 we also have such a typedef for the BATs ("BAT").

    This removes this unhelpful use of typedefs, in the process bringing ppc32 and ppc64 closer together, by using the name "struct hash_pte" in both cases.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
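    An illustrative before/after of the naming change (field layout shown only as a sketch):

        /* Before: a typedef hides the struct tag, and ppc32 and ppc64 use
         * different names ("PTE" vs "hpte_t") for the same concept. */
        typedef struct {
                unsigned long v;
                unsigned long r;
        } hpte_t;

        /* After: one plain struct name shared by both, per CodingStyle. */
        struct hash_pte {
                unsigned long v;
                unsigned long r;
        };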
| * [POWERPC] Remove a couple of unused definitions from pgtable_32.c (David Gibson, 2007-06-14, 1 file changed, -4/+0)

    In arch/powerpc/mm/pgtable_32.c, the variable io_bat_index and the macro is_power_of_4() no longer have any users. This removes them.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Remove the dregs of APUS support from arch/powerpc (David Gibson, 2007-06-14, 13 files changed, -13/+0)

    APUS (the Amiga Power-Up System) is not supported under arch/powerpc and it's unlikely it ever will be. Therefore, this patch removes the fragments of APUS support code from arch/powerpc which have been copied from arch/ppc.

    A few APUS references are left in asm-powerpc in .h files which are still used from arch/ppc.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Abolish iopa(), mm_ptov(), io_block_mapping() from arch/powerpc (David Gibson, 2007-06-14, 1 file changed, -118/+0)

    These old-fashioned IO mapping functions no longer have any callers in code which remains relevant on arch/powerpc. Therefore, this removes them from arch/powerpc.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] During VM oom condition, kill all threads in process group (will schmidt, 2007-06-14, 1 file changed, -1/+1)

    We have had complaints where a threaded application is left in a bad state after one of its threads is killed when we hit a VM out_of_memory condition.

    Killing just one of the process threads can leave the application in a bad state, whereas killing the entire process group would allow for the application to restart, or be otherwise handled, and makes it very obvious that something has gone wrong.

    This change allows the entire process group to be taken down, rather than just the one thread.

    Lightly tested on powerpc.

    Signed-off-by: Will <will_schmidt@vnet.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Rewrite IO allocation & mapping on powerpc64 (Benjamin Herrenschmidt, 2007-06-14, 5 files changed, -483/+106)

    This rewrites pretty much from scratch the handling of MMIO and PIO space allocations on powerpc64. The main goals are:

    - Get rid of imalloc and use more common code where possible
    - Simplify the current mess so that PIO space is allocated and mapped in a single place for PCI bridges
    - Handle allocation constraints of PIO for all bridges including hot plugged ones within the 2GB space reserved for IO ports, so that devices on hotplugged busses will now work with drivers that assume IO ports fit in an int.
    - Cleanup and separate tracking of the ISA space in the reserved low 64K of IO space. No ISA -> Nothing mapped there.

    I booted a cell blade with IDE on PIO and MMIO and a dual G5 so far, that's it :-)

    With this patch, all allocations are done using the code in mm/vmalloc.c, though we use the low level __get_vm_area with explicit start/stop constraints in order to manage separate areas for vmalloc/vmap, ioremap, and PCI IOs. This greatly simplifies a lot of things, as you can see in the diffstat of that patch :-)

    A new pair of functions pcibios_map/unmap_io_space() now replace all of the previous code that used to manipulate PCI IO space. The allocation is done at mapping time, which is now called from scan_phb's, just before the devices are probed (instead of after, which is by itself a bug fix). The only other caller is the PCI hotplug code for hot adding PCI-PCI bridges (slots).

    imalloc is gone, as is the "sub-allocation" thing, but I do believe that hotplug should still work in the sense that the space allocation is always done by the PHB, but if you unmap a child bus of this PHB (which seems to be possible), then the code should properly tear down all the HPTE mappings for that area of the PHB allocated IO space.

    I now always reserve the first 64K of IO space for the bridge with the ISA bus on it. I have moved the code for tracking ISA into a separate file which should also make it smarter if we are ever capable of hot unplugging or re-plugging an ISA bridge.

    This should have a side effect on platforms like powermac where VGA IOs will no longer work. This is done on purpose though as they would have worked semi-randomly before. The idea at this point is to isolate drivers that might need to access those and fix them by providing a proper function to obtain an offset to the legacy IOs of a given bus.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] unmap_vm_area becomes unmap_kernel_range for the public (Benjamin Herrenschmidt, 2007-06-14, 2 files changed, -2/+2)

    This makes unmap_vm_area static and a wrapper around a new exported unmap_kernel_range that takes an explicit range instead of a vm_area struct.

    This makes it more versatile for code that wants to play with kernel page tables outside of the standard vmalloc area. (One example is some rework of the PowerPC PCI IO space mapping code that depends on that patch and removes some code duplication and horrible abuse of forged struct vm_struct).

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
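    A hedged usage sketch of the new interface (a fragment, not code from this patch): callers that previously forged a throwaway struct vm_struct just to describe a range can now pass the range directly.

        /* Before (sketch of the "forged struct" pattern being removed):
         * fake up a vm_struct only to name a start address and size. */
        struct vm_struct area = {
                .addr = (void *)start,
                .size = size,
        };
        unmap_vm_area(&area);

        /* After: hand the explicit range to the new exported helper. */
        unmap_kernel_range(start, size);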
| * [POWERPC] Move common code out of if/else (Jon Tollefson, 2007-06-14, 1 file changed, -2/+1)

    Move common code out of if/else.

    Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
    ----
     hash_native_64.c |    3 +--
     1 files changed, 1 insertion(+), 2 deletions(-)
    Signed-off-by: Paul Mackerras <paulus@samba.org>
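    A generic illustration of the refactoring pattern (hypothetical names, not the hash_native_64.c hunk itself):

        /* Before: an identical call duplicated in both branches. */
        if (cond) {
                do_variant_a();
                finish();       /* duplicated */
        } else {
                do_variant_b();
                finish();       /* duplicated */
        }

        /* After: only the part that actually differs stays in the if/else. */
        if (cond)
                do_variant_a();
        else
                do_variant_b();
        finish();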
* | [POWERPC] PowerPC: Prevent data exception in kernel space (32-bit) (Segher Boessenkool, 2007-06-20, 1 file changed, -3/+2)
|/
    The "is_exec" branch of the protection check in do_page_fault() didn't do anything on 32-bit PowerPC. So if a userland program jumps to a page with Linux protection flags "---p", all the tests happily fall through, and handle_mm_fault() is called, which in turn calls handle_pte_fault(), which calls update_mmu_cache(), which goes and flushes the dcache of a page with no access rights. Boom.

    This fixes it.

    Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
    Cc: Johannes Berg <johannes@sipsolutions.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix modpost warning (Kumar Gala, 2007-05-23, 1 file changed, -1/+1)

    Mark pte_alloc_one_kernel as __init_refok to fix the following warning:

        WARNING: arch/powerpc/mm/built-in.o(.text+0x1068): Section mismatch: reference to .init.text:early_get_page (between 'pte_alloc_one_kernel' and 'steal_context')

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* [POWERPC] Fix warning in 32-bit builds with CONFIG_HIGHMEM (Benjamin Herrenschmidt, 2007-05-22, 1 file changed, -4/+5)

    Some missing fixups for the removal of the 4level-fixup header.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* Detach sched.h from mm.h (Alexey Dobriyan, 2007-05-21, 1 file changed, -0/+1)

    The first thing mm.h does is include sched.h, solely for the can_do_mlock() inline function, which dereferences "current". By dealing with can_do_mlock(), mm.h can be detached from sched.h, which is good. See below why.

    This patch
    a) removes the unconditional inclusion of sched.h from mm.h
    b) makes can_do_mlock() a normal function in mm/mlock.c
    c) exports can_do_mlock() to not break compilation
    d) adds sched.h inclusions back to files that were getting it indirectly.
    e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were getting them indirectly

    Net result is:
    a) mm.h users would get less code to open, read, preprocess, parse, ... if they don't need sched.h
    b) sched.h stops being a dependency for a significant number of files: on x86_64 allmodconfig touching sched.h results in a recompile of 4083 files, after the patch it's only 3744 (-8.3%).

    Cross-compile tested on all arm defconfigs, all mips defconfigs, all powerpc defconfigs, alpha alpha-up arm i386 i386-up i386-defconfig i386-allnoconfig ia64 ia64-up m68k mips parisc parisc-up powerpc powerpc-up s390 s390-up sparc sparc-up sparc64 sparc64-up um-x86_64 x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig, as well as my two usual configs.

    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [POWERPC] Correct #endif comment (Jon Tollefson, 2007-05-17, 1 file changed, -2/+2)

    Fix up comment on two #endifs to match their #ifs.

    Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
    ----
     hash_utils_64.c |    4 ++--
     1 files changed, 2 insertions(+), 2 deletions(-)
    Signed-off-by: Paul Mackerras <paulus@samba.org>
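    A generic example of this kind of fix (hypothetical config symbols, not the hash_utils_64.c lines):

        #ifdef CONFIG_HUGETLB_PAGE
        /* ... */
        #endif /* CONFIG_PPC_64K_PAGES */   /* wrong: comment names a different #if */

        #ifdef CONFIG_HUGETLB_PAGE
        /* ... */
        #endif /* CONFIG_HUGETLB_PAGE */    /* fixed: comment matches its #ifdef */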
* [POWERPC] Add spinlock to request_phb_iospace() (Benjamin Herrenschmidt, 2007-05-17, 1 file changed, -0/+4)

    request_phb_iospace() can be called from different CPUs at init time (at least with my next patch) and thus needs a spinlock. As for the next patch, this is a temporary workaround for 2.6.22 issues until my rewrite of IO mappings is ready (for 2.6.23).

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix COMMON symbol warnings (Kumar Gala, 2007-05-17, 1 file changed, -4/+14)

    We get the following warnings in various ARCH=powerpc builds:

        WARNING: "ee_restarts" [arch/powerpc/kernel/built-in] is COMMON symbol
        WARNING: "fee_restarts" [arch/powerpc/kernel/built-in] is COMMON symbol
        WARNING: "htab_hash_searches" [arch/powerpc/mm/built-in] is COMMON symbol
        WARNING: "next_slot" [arch/powerpc/mm/built-in] is COMMON symbol
        WARNING: "mmu_hash_lock" [arch/powerpc/mm/built-in] is COMMON symbol
        WARNING: "primary_pteg_full" [arch/powerpc/mm/built-in] is COMMON symbol
        WARNING: "global_dbcr0" [arch/powerpc/kernel/built-in] is COMMON symbol

    Switch these to local symbols (except mmu_hash_lock, which stays global) and use the .space directive instead.

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* [POWERPC] Remove unused variable in hpte_decode() (Stephen Rothwell, 2007-05-12, 1 file changed, -1/+1)

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Assign correct variable in hpte_decode() (Stephen Rothwell, 2007-05-12, 1 file changed, -1/+1)

    This case will never be hit, but it should be corrected anyway.

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix warning in hpte_decode(), and generalize it (Paul Mackerras, 2007-05-10, 1 file changed, -20/+17)

    This adds the necessary support to hpte_decode() to handle 1TB segments and 16GB pages, and removes an uninitialized value warning on avpn.

    We don't have any code to generate HPTEs for 1TB segments or 16GB pages yet, so this is mostly for completeness, and to fix the warning.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2007-05-09, 13 files changed, -645/+817)
|\
    * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
      [POWERPC] Further fixes for the removal of 4level-fixup hack from ppc32
      [POWERPC] EEH: log all PCI-X and PCI-E AER registers
      [POWERPC] EEH: capture and log pci state on error
      [POWERPC] EEH: Split up long error msg
      [POWERPC] EEH: log error only after driver notification.
      [POWERPC] fsl_soc: Make mac_addr const in fs_enet_of_init().
      [POWERPC] Don't use SLAB/SLUB for PTE pages
      [POWERPC] Spufs support for 64K LS mappings on 4K kernels
      [POWERPC] Add ability to 4K kernel to hash in 64K pages
      [POWERPC] Introduce address space "slices"
      [POWERPC] Small fixes & cleanups in segment page size demotion
      [POWERPC] iSeries: Make HVC_ISERIES the default
      [POWERPC] iSeries: suppress build warning in lparmap.c
      [POWERPC] Mark pages that don't exist as nosave
      [POWERPC] swsusp: Introduce register_nosave_region_late
| * [POWERPC] Further fixes for the removal of 4level-fixup hack from ppc32 (David Gibson, 2007-05-09, 2 files changed, -3/+3)

    Commit d1953c8888ef034b912ee33bc2ea2cce6a414402 removed the use of 4level-fixup.h for 32-bit systems under arch/powerpc. However, I missed a few things activated on some configurations, resulting in some warnings (at least with STRICT_MM_TYPECHECKS enabled) and build errors in some circumstances. This fixes it.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Don't use SLAB/SLUB for PTE pages (Hugh Dickins, 2007-05-09, 1 file changed, -11/+6)

    The SLUB allocator relies on struct page fields first_page and slab, overwritten by ptl when SPLIT_PTLOCK: so the SLUB allocator cannot then be used for the lowest level of pagetable pages. This was obstructing SLUB on PowerPC, which uses kmem_caches for its pagetables. So convert its pte level to use normal gfp pages (whereas pmd, pud and 64k-page pgd want partpages, so continue to use kmem_caches for pmd, pud and pgd).

    Signed-off-by: Hugh Dickins <hugh@veritas.com>
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Add ability to 4K kernel to hash in 64K pages (Benjamin Herrenschmidt, 2007-05-09, 3 files changed, -16/+40)

    This adds the ability for a kernel compiled with 4K page size to have special slices containing 64K pages and hash the right type of hash PTEs.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Introduce address space "slices" (Benjamin Herrenschmidt, 2007-05-09, 7 files changed, -576/+699)

    The basic issue is to be able to do what hugetlbfs does but with different page sizes for some other special filesystems; more specifically, my need is:

    - Huge pages
    - SPE local store mappings using 64K pages on a 4K base page size kernel on Cell
    - Some special 4K segments in 64K-page kernels for mapping a dodgy type of powerpc-specific infiniband hardware that requires 4K MMU mappings for various reasons I won't explain here.

    The main issues are:

    - To maintain/keep track of the page size per "segment" (as we can only have one page size per segment on powerpc, which are 256MB divisions of the address space).
    - To make sure special mappings stay within their allotted "segments" (including MAP_FIXED crap)
    - To make sure everybody else doesn't mmap/brk/grow_stack into a "segment" that is used for a special mapping

    Some of the necessary mechanisms to handle that were present in the hugetlbfs code, but mostly in ways not suitable for anything else. The patch relies on some changes to the generic get_unmapped_area() that just got merged. It still hijacks hugetlb callbacks here or there as the generic code hasn't been entirely cleaned up yet but that shouldn't be a problem.

    So what is a slice? Well, I re-used the mechanism used formerly by our hugetlbfs implementation which divides the address space into "meta-segments" which I called "slices". The division is done using 256MB slices below 4G, and 1T slices above. Thus the address space is divided currently into 16 "low" slices and 16 "high" slices. (Special case: high slice 0 is the area between 4G and 1T). Doing so simplifies significantly the tracking of segments and avoids having to keep track of all the 256MB segments in the address space. See the sketch after this entry for the resulting geometry.

    While I used the "concepts" of hugetlbfs, I mostly re-implemented everything in a more generic way and "ported" hugetlbfs to it. Slices can have an associated page size, which is encoded in the mmu context and used by the SLB miss handler to set the segment sizes. The hash code currently doesn't care, it has a specific check for hugepages, though I might add a mechanism to provide per-slice hash mapping functions in the future.

    The slice code provides a pair of "generic" get_unmapped_area() (bottomup and topdown) functions that should work with any slice size. There is some trickiness here so I would appreciate people having a look at the implementation of these and letting me know if I got something wrong.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
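    A hedged sketch of the slice geometry described above; the shift constants simply mirror the prose (256MB = 2^28 low slices, 1TB = 2^40 high slices, 16 of each) and are not necessarily the exact macro names in slice.c.

        #define SLICE_LOW_SHIFT   28    /* 256MB low slices, below 4GB */
        #define SLICE_HIGH_SHIFT  40    /* 1TB high slices, 4GB and above */
        #define SLICE_NUM_LOW     16    /* 16 x 256MB covers 0 .. 4GB */
        #define SLICE_NUM_HIGH    16    /* 16 x 1TB covers 0 .. 16TB */

        /* Which low slice (0..15) does an address below 4GB fall into? */
        static unsigned int addr_to_low_slice(unsigned long addr)
        {
                return addr >> SLICE_LOW_SHIFT;
        }

        /* Which high slice (0..15) does an address fall into?  High slice 0
         * is the special region between 4GB and 1TB. */
        static unsigned int addr_to_high_slice(unsigned long addr)
        {
                return addr >> SLICE_HIGH_SHIFT;
        }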
| * [POWERPC] Small fixes & cleanups in segment page size demotion (Benjamin Herrenschmidt, 2007-05-09, 1 file changed, -41/+46)

    The code for demoting segments to 4K had some issues. For example, when using the _PAGE_4K_PFN flag, the first CPU to hit it would do the demotion, but other CPUs hitting the same page wouldn't properly flush their SLBs if mmu_ci_restriction isn't set. There are also potential issues with hash_preload not handling _PAGE_4K_PFN. All of these are non-issues on current hardware but might bite us in the future.

    This patch thus fixes it by:

    - Taking the test comparing the mm and current CPU context page sizes to decide to flush SLBs out of the mmu_ci_restrictions test, since that can also be triggered by _PAGE_4K_PFN pages
    - Due to the above being done all the time, demote_segment_4k doesn't need to update the context and flush the SLB
    - demote_segment_4k can be static and doesn't need an EXPORT_SYMBOL
    - Making hash_preload ignore anything that has either _PAGE_4K_PFN or _PAGE_NO_CACHE set, thus avoiding duplication of the complicated logic in hash_page() (and possibly making hash_preload a little bit faster for the normal case).

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * [POWERPC] Mark pages that don't exist as nosave (Johannes Berg, 2007-05-09, 1 file changed, -0/+25)

    On some powerpc architectures (notably 64-bit powermac) there is a memory hole, for example on powermacs between 2G and 4G. Since we use the flat memory model regardless, these pages must be marked as nosave (for suspend to disk).

    Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
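    A hedged sketch of how such a hole can be excluded from hibernation images using the generic nosave API; the function name and the hard-coded 2GB..4GB range are illustrative only, and real code would derive the range from the platform's memory map.

        #include <linux/suspend.h>
        #include <asm/page.h>

        /* Illustrative only: mark the 2GB..4GB hole as nosave so swsusp never
         * tries to save or restore page frames that don't actually exist. */
        static void __init mark_memory_hole_nosave(void)
        {
                unsigned long start_pfn = 0x80000000UL >> PAGE_SHIFT;   /* 2GB */
                unsigned long end_pfn   = 0x100000000ULL >> PAGE_SHIFT; /* 4GB */

                register_nosave_region(start_pfn, end_pfn);
        }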
* | Add suspend-related notifications for CPU hotplug (Rafael J. Wysocki, 2007-05-09, 1 file changed, -0/+3)
|/
    Since nonboot CPUs are now disabled after tasks and devices have been frozen and the CPU hotplug infrastructure is used for this purpose, we need special CPU hotplug notifications that will help the CPU-hotplug-aware subsystems distinguish normal CPU hotplug events from CPU hotplug events related to a system-wide suspend or resume operation in progress. This patch introduces such notifications and causes them to be used during suspend and resume transitions. It also changes all of the CPU-hotplug-aware subsystems to take these notifications into consideration (for now they are handled in the same way as the corresponding "normal" ones).

    [oleg@tv-sign.ru: cleanups]
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Cc: Gautham R Shenoy <ego@in.ibm.com>
    Cc: Pavel Machek <pavel@ucw.cz>
    Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
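    A hedged sketch of what the suspend-related variants look like from a subsystem's point of view (a hypothetical notifier callback, assuming the *_FROZEN action values this series introduces):

        #include <linux/cpu.h>
        #include <linux/notifier.h>

        /* Hypothetical subsystem callback: the suspend/resume path sends the
         * *_FROZEN variants of the usual actions, so a subsystem can either
         * treat them exactly like the normal events or special-case them. */
        static int my_cpu_callback(struct notifier_block *nb,
                                   unsigned long action, void *hcpu)
        {
                switch (action) {
                case CPU_UP_PREPARE:
                case CPU_UP_PREPARE_FROZEN:     /* same handling during suspend/resume */
                        /* allocate per-cpu resources */
                        break;
                case CPU_DEAD:
                case CPU_DEAD_FROZEN:
                        /* free per-cpu resources */
                        break;
                }
                return NOTIFY_OK;
        }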
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2007-05-08, 8 files changed, -128/+116)
|\
    * 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (77 commits)
      [POWERPC] Abolish powerpc_flash_init()
      [POWERPC] Early serial debug support for PPC44x
      [POWERPC] Support for the Ebony 440GP reference board in arch/powerpc
      [POWERPC] Add device tree for Ebony
      [POWERPC] Add powerpc/platforms/44x, disable platforms/4xx for now
      [POWERPC] MPIC U3/U4 MSI backend
      [POWERPC] MPIC MSI allocator
      [POWERPC] Enable MSI mappings for MPIC
      [POWERPC] Tell Phyp we support MSI
      [POWERPC] RTAS MSI implementation
      [POWERPC] PowerPC MSI infrastructure
      [POWERPC] Rip out the existing powerpc msi stubs
      [POWERPC] Remove use of 4level-fixup.h for ppc32
      [POWERPC] Add powerpc PCI-E reset API implementation
      [POWERPC] Holly bootwrapper
      [POWERPC] Holly DTS
      [POWERPC] Holly defconfig
      [POWERPC] Add support for 750CL Holly board
      [POWERPC] Generalize tsi108 PCI setup
      [POWERPC] Generalize tsi108 PHY types
      ...

    Fixed conflict in include/asm-powerpc/kdebug.h manually

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
| * [POWERPC] Remove use of 4level-fixup.h for ppc32 (David Gibson, 2007-05-08, 1 file changed, -12/+16)

    For 32-bit systems, powerpc still relies on the 4level-fixup.h hack, to pretend that the generic pagetable handling stuff is 3-levels rather than 4. This patch removes this, instead using the newer pgtable-nopmd.h to handle the elision of both the pud and pmd pagetable levels (ppc32 pagetables are actually 2 levels). This removes a little extraneous code, and makes it more easily compared to the 64-bit pagetable code.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * Merge branch 'linux-2.6' (Paul Mackerras, 2007-05-08, 2 files changed, -4/+23)
| |\
| * | [POWERPC] Add __init annotations to reserve_mem() and stabs_alloc() (Michael Ellerman, 2007-05-08, 1 file changed, -1/+1)

    reserve_mem() and stabs_alloc() are both called only from other __init routines, so can be marked __init.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * | [POWERPC] 64K page support for kexec (Luke Browning, 2007-05-07, 1 file changed, -22/+62)

    This fixes a couple of kexec problems related to 64K page support in the kernel. kexec issues a tlbie for each pte. The parameters for the tlbie are the page size and the virtual address. Support was missing for the computation of these two parameters for 64K pages. This adds that support.

    Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Olof Johansson <olof@lixom.net>
    Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * | [POWERPC] Minor fault path optimization (Christoph Hellwig, 2007-05-02, 1 file changed, -27/+15)

    Call the kprobes pagefault handler directly instead of going through the complex notifier chain.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * | [POWERPC] Revise PPC44x MMU code for arch/powerpc (David Gibson, 2007-05-02, 2 files changed, -64/+21)

    This patch takes the definitions for the PPC44x MMU (a software loaded TLB) from asm-ppc/mmu.h, cleans them up of things no longer necessary in arch/powerpc and puts them in a new asm-powerpc/mmu_44x.h file. It also substantially simplifies arch/powerpc/mm/44x_mmu.c and makes a couple of small fixes necessary for the 44x MMU code to build and work properly in arch/powerpc.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
| * | [POWERPC] Initialise spinlock in the DEBUG_PAGEALLOC code (Michael Ellerman, 2007-05-02, 1 file changed, -1/+1)

    Fixes:

        BUG: spinlock bad magic on CPU#0, swapper/0
         lock: c00000000064ec30, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
        Call Trace:
        [c00000000062b980] [c00000000000f920] .show_stack+0x6c/0x1a0 (unreliable)
        [c00000000062ba20] [c0000000001c2b40] .spin_bug+0xb0/0xd4
        [c00000000062bab0] [c0000000001c2ed0] ._raw_spin_lock+0x44/0x184
        [c00000000062bb50] [c0000000003a42b4] ._spin_lock+0x10/0x24
        [c00000000062bbd0] [c00000000002b4dc] .kernel_map_pages+0x198/0x278
        [c00000000062bc90] [c000000000079720] .free_hot_cold_page+0x124/0x418
        [c00000000062bd70] [c000000000530278] .free_all_bootmem_core+0x14c/0x224
        [c00000000062be50] [c00000000052a178] .mem_init+0x68/0x170
        [c00000000062bee0] [c00000000051d874] .start_kernel+0x2a0/0x37c
        [c00000000062bf90] [c0000000000084c8] .start_here_common+0x54/0x8c

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
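    A generic illustration of this class of bug (hypothetical lock name, not the actual hunk): a spinlock declared without initialization has a zero debug magic, so the first spin_lock() trips the "bad magic" check under CONFIG_DEBUG_SPINLOCK.

        /* Buggy: the lock is never initialized, so its debug magic stays
         * zero and the first spin_lock() reports "spinlock bad magic". */
        static spinlock_t my_lock;

        /* Fixed: initialize it statically at definition time. */
        static DEFINE_SPINLOCK(my_lock);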