path: root/arch/powerpc/mm
Commit message (Author, Age, Files, Lines)
* add mm argument to pte/pmd/pud/pgd_free (Benjamin Herrenschmidt, 2008-02-05, 1 file, -3/+3)

    (with Martin Schwidefsky <schwidefsky@de.ibm.com>)

    The pgd/pud/pmd/pte page table allocation functions get a mm_struct pointer as their first argument, but the free functions do not. This is (1) asymmetrical, and (2) the mm argument is needed in the free functions as well in order to do mm-related page table allocations.

    [kamalesh@linux.vnet.ibm.com: i386 fix]
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: <linux-arch@vger.kernel.org>
    Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

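    A minimal sketch of the resulting symmetry (illustrative prototypes; exact per-architecture signatures vary):

        /* Allocation already took the mm; after this change, free does too. */
        pgd_t *pgd_alloc(struct mm_struct *mm);
        void pgd_free(struct mm_struct *mm, pgd_t *pgd);

        pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr);
        void pte_free_kernel(struct mm_struct *mm, pte_t *pte);
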
* [POWERPC] Allocate the hash table under 1G on cell (Michael Ellerman, 2008-01-31, 1 file, -3/+9)

    In order to support the fixed IOMMU mapping (in a subsequent patch), we need the hash table to be inside the IOMMU's DMA window. This is usually 2G, but let's make sure the hash table is under 1G, as that will satisfy the IOMMU requirements and also means the hash table will be on node 0.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

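    A hedged sketch of the kind of allocation cap involved (lmb_alloc_base is the LMB allocator of this era; the exact condition in htab_initialize is an assumption here):

        /* Sketch: on cell, cap the hash table below 1G so it sits inside
         * the IOMMU DMA window and therefore also lands on node 0. */
        unsigned long limit = machine_is(cell) ? 0x40000000ul
                                               : LMB_ALLOC_ANYWHERE;
        unsigned long table = lmb_alloc_base(htab_size_bytes,
                                             htab_size_bytes, limit);
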
* Revert "[POWERPC] Fake NUMA emulation for PowerPC" (Paul Mackerras, 2008-01-26, 1 file, -54/+5)

    This reverts commit 5c3f5892a2db6757a72ce8b27cba90db06683e1d, basically because it changes behaviour even when no fake NUMA information is specified on the kernel command line.

    Firstly, it changes the nid, thus destroying the real NUMA information. Secondly, it also changes behaviour in that if a node ends up with no memory in it because of the memory limit, we used to set it online and now we don't.

    Also, in the non-NUMA case with no fake NUMA information, we now do node_set_online once for each LMB, whereas previously we only did it once. I don't know if that is actually a problem, but it does seem unnecessary.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Make setjmp/longjmp code usable outside of xmon (Michael Neuling, 2008-01-25, 1 file, -4/+2)

    This makes the setjmp/longjmp code used by xmon generically available to other code. It also removes the requirement for debugger hooks to be called only on a 0x300 (data storage) exception.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* Merge branch 'for-2.6.25' of master.kernel.org:/pub/scm/linux/kernel/git/galak/powerpc into for-2.6.25 (Paul Mackerras, 2008-01-24, 3 files, -5/+35)
|\
| * [POWERPC] 85xx: Respect KERNELBASE, PAGE_OFFSET, and PHYSICAL_START on e500 (Dale Farnsworth, 2008-01-23, 1 file, -3/+3)

    The e500 MMU init code previously assumed KERNELBASE always equaled PAGE_OFFSET and PHYSICAL_START was 0. This is useful for kdump support as well as asymmetric multicore. For the initial kdump support, the secondary kernel will run at 32M but needs access to all of memory, so we bump the initial TLB up to 64M. This also matches the forthcoming ePAPR spec.

    Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

| * [POWERPC] Fix handling of memreserve if the range lands in highmem (Kumar Gala, 2008-01-23, 2 files, -2/+32)

    There were several issues if a memreserve range existed and happened to be in highmem:

    * The bootmem allocator is only aware of lowmem, so calling reserve_bootmem with a highmem address would cause a BUG_ON
    * All highmem pages were provided to the buddy allocator

    Added an lmb_is_reserved() API that we now use to determine whether a highmem page should remain PageReserved or be provided to the buddy allocator.

    Also, we previously misreported the number of reserved pages, since all highmem pages are initially marked reserved and we clear the PageReserved flag as we "free" up the highmem pages.

    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

* | Merge branch 'linux-2.6' (Paul Mackerras, 2008-01-24, 1 file, -0/+2)
|\ \
| * | [POWERPC] Fix boot failure on POWER6 (Paul Mackerras, 2008-01-15, 1 file, -8/+2)

    Commit 473980a99316c0e788bca50996375a2815124ce1 added a call to clear the SLB shadow buffer before registering it. Unfortunately this means that we clear out the entries that slb_initialize has previously set in there. On POWER6, the hypervisor uses the SLB shadow buffer when doing partition switches, and that means that after the next partition switch, each non-boot CPU has no SLB entries to map the kernel text and data, which causes it to crash.

    This fixes it by reverting most of 473980a9 and instead clearing the 3rd entry explicitly in slb_initialize. This fixes the problem that 473980a9 was trying to solve, but without breaking POWER6.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

| * | [POWERPC] Fix CPU hotplug when using the SLB shadow buffer (Michael Neuling, 2008-01-11, 1 file, -0/+8)

    Before we register the SLB shadow buffer, we need to invalidate the entries in the buffer, otherwise we can end up with stale entries from when we previously offlined the CPU.

    This patch does this invalidation, as well as unregistering the buffer with PHYP, before we offline the CPU. Tested and fixes crashes seen on 970MP (thanks to tonyb) and POWER5.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* | | [POWERPC] Provide a way to protect 4k subpages when using 64k pages (Paul Mackerras, 2008-01-24, 4 files, -18/+295)

    Using 64k pages on 64-bit PowerPC systems makes life difficult for emulators that are trying to emulate an ISA, such as x86, which uses a smaller page size, since the emulator can no longer use the MMU and the normal system calls for controlling page protections. Of course, the emulator can emulate the MMU by checking and possibly remapping the address for each memory access in software, but that is pretty slow.

    This provides a facility for such programs to control the access permissions on individual 4k sub-pages of 64k pages. The idea is that the emulator supplies an array of protection masks to apply to a specified range of virtual addresses. These masks are applied at the level where hardware PTEs are inserted into the hardware page table based on the Linux PTEs, so the Linux PTEs are not affected. Note that this new mechanism does not allow any access that would otherwise be prohibited; it can only prohibit accesses that would otherwise be allowed.

    This new facility is only available on 64-bit PowerPC and only when the kernel is configured for 64k pages. The masks are supplied using a new subpage_prot system call, which takes a starting virtual address and length, and a pointer to an array of protection masks in memory. The array has a 32-bit word per 64k page to be protected; each 32-bit word consists of 16 2-bit fields, for which 0 allows any access (that is otherwise allowed), 1 prevents write accesses, and 2 or 3 prevent any access.

    Implicit in this is that the regions of the address space that are protected are switched to use 4k hardware pages rather than 64k hardware pages (on machines with hardware 64k page support). In fact the whole process is switched to use 4k hardware pages when the subpage_prot system call is used, but this could be improved in future to switch only the affected segments.

    The subpage protection bits are stored in a 3-level tree akin to the page table tree. The top level of this tree is stored in a structure that is appended to the top level of the page table tree, i.e., the pgd array. Since it will often only be 32-bit addresses (below 4GB) that are protected, the pointers to the first four bottom-level pages are also stored in this structure (each bottom-level page contains the protection bits for 1GB of address space), so the protection bits for addresses below 4GB can be accessed with one fewer load than those for higher addresses.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

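    A hedged userspace sketch of how an emulator might invoke the new call (the syscall number and the bit ordering within each 32-bit word are assumptions; the semantics, where 0 allows any access, 1 prevents writes, and 2 or 3 prevent all access, follow the description above):

        #include <stdint.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        #ifndef __NR_subpage_prot
        #define __NR_subpage_prot 310   /* assumed ppc64 syscall number */
        #endif

        /* Make 4k subpage 2 of a single 64k page read-only: one 32-bit
         * word covers the page, 2 bits per subpage (ordering assumed). */
        static long make_subpage_readonly(void *page_64k)
        {
                uint32_t map[1] = { 1u << (2 * 2) };    /* field 2 = 0b01 */

                return syscall(__NR_subpage_prot, (unsigned long)page_64k,
                               0x10000ul /* one 64k page */, map);
        }
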
* | [POWERPC] Add hugepagesz boot-time parameter (Jon Tollefson, 2008-01-17, 2 files, -34/+96)

    This adds the hugepagesz boot-time parameter for ppc64. It lets one pick the size for huge pages. The choices available are 64K and 16M when the base page size is 4k. It defaults to 16M (previously the only choice) if nothing or an invalid choice is specified.

    Tested 64K huge pages successfully with libhugetlbfs 1.2.

    Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

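    An illustrative kernel command line (hugepagesz syntax per the description above; combining it with the generic hugepages= count parameter is an assumption):

        # 4k base pages, 64K huge pages, reserve 128 of them at boot
        hugepagesz=64K hugepages=128
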
* | [POWERPC] Fake NUMA emulation for PowerPC (Balbir Singh, 2007-12-20, 1 file, -5/+54)

    Here's a dumb, simple implementation of fake NUMA nodes for PowerPC. Fake NUMA nodes can be specified using the following command line option:

        numa=fake=<node range>

    The node range is of the format <range1>,<range2>,...<rangeN>, where each rangeX parameter is parsed using memparse(). I find this useful for fake NUMA emulation on my simple PowerPC machine. I've tested it on a non-NUMA box with the following arguments:

        numa=fake=1G
        numa=fake=1G,2G
        numa=fake=1G,512M,2G
        numa=fake=1500M,2800M mem=3500M
        numa=fake=1G mem=512M
        numa=fake=1G mem=1G

    Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
    Acked-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* | [POWERPC] Use SLB size from the device tree (Michael Neuling, 2007-12-11, 3 files, -2/+7)

    Currently we hardwire the number of SLBs to 64, but PAPR says we should use the ibm,slb-size property to obtain the number of SLB entries. This uses that property instead of assuming 64. If no property is found, we assume 64 entries as before.

    This soft-patches the SLB handler, so it shouldn't change performance at all.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

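    A hedged sketch of the lookup (of_get_flat_dt_prop is the flattened-device-tree accessor of this era; the variable name and exact call site are assumptions):

        /* Sketch: read ibm,slb-size during early device-tree scanning;
         * fall back to the traditional 64 entries if it is absent. */
        const u32 *prop = of_get_flat_dt_prop(node, "ibm,slb-size", NULL);

        mmu_slb_size = prop ? *prop : 64;
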
* | [POWERPC] Add missing spaces in printk formats (joe@perches.com, 2007-12-03, 1 file, -1/+1)

    Signed-off-by: Joe Perches <joe@perches.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Fix 8xx build breakage due to _tlbie changes (Benjamin Herrenschmidt, 2007-11-20, 2 files, -2/+2)

    My changes to _tlbie to fix 4xx unfortunately broke the 8xx build in a couple of places. This fixes it. Spotted by Olof Johansson.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Vitaly Bordug <vitb@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Fix build failure on legacy iSeries (Kamalesh Babulal, 2007-11-20, 1 file, -0/+1)

    Include <asm/iseries/hv_call.h> in arch/powerpc/mm/stab.c to fix the following compile error (found with randconfig):

          CC      arch/powerpc/mm/stab.o
        arch/powerpc/mm/stab.c: In function "stab_initialize":
        arch/powerpc/mm/stab.c:282: error: implicit declaration of function "HvCall1"
        arch/powerpc/mm/stab.c:282: error: "HvCallBaseSetASR" undeclared (first use in this function)
        arch/powerpc/mm/stab.c:282: error: (Each undeclared identifier is reported only once
        arch/powerpc/mm/stab.c:282: error: for each function it appears in.)
        make[1]: *** [arch/powerpc/mm/stab.o] Error 1
        make: *** [arch/powerpc/mm] Error 2

    Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
    Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Silence an annoying boot message (Stephen Rothwell, 2007-11-13, 1 file, -12/+4)

    vmemmap_populate will printk (with KERN_WARNING) for a lot of pages if CONFIG_SPARSEMEM_VMEMMAP is enabled (at least it does on iSeries). Use pr_debug for it instead. Replace the only other use of DBG in this file with pr_debug as well.

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Acked-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Fix CONFIG_SMP=n build error on ppc64 (Olof Johansson, 2007-11-13, 1 file, -2/+0)

    The patch "KVM: fix !SMP build error" changed the way smp_call_function() actually uses the passed-in function names on non-SMP builds, so it was previously never caught that the function passed in was never actually defined. This causes a build error on ppc64_defconfig + CONFIG_SMP=n:

        arch/powerpc/mm/tlb_64.c: In function 'pgtable_free_now':
        arch/powerpc/mm/tlb_64.c:71: error: 'pte_free_smp_sync' undeclared (first use in this function)
        arch/powerpc/mm/tlb_64.c:71: error: (Each undeclared identifier is reported only once
        arch/powerpc/mm/tlb_64.c:71: error: for each function it appears in.)

    So we need to define it even if CONFIG_SMP is off. Either that or ifdef out the smp_call_function() call, but that's ugly.

    Signed-off-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* Merge branch 'for-2.6.24' of master.kernel.org:/pub/scm/linux/kernel/git/jwboyer/powerpc-4xx into merge (Paul Mackerras, 2007-11-08, 4 files, -12/+12)
|\
| * [POWERPC] ppc405: Fix arithmetic rollover bug when memory size under 16M (Grant Likely, 2007-11-01, 1 file, -9/+8)

    mmu_mapin_ram() loops over total_lowmem to set up page tables. However, if total_lowmem is less than 16M, the subtraction rolls over and results in a number just under 4G (because total_lowmem is an unsigned value). This patch rejigs the loop from count-up to count-down to eliminate the bug.

    Special thanks to Magnus Hjorth, who wrote the original patch to fix this bug. This patch improves on his by making the loop code simpler (which also eliminates the possibility of another rollover at the high end) and also applies the change to arch/powerpc.

    Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
    Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>

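    A minimal sketch of the failure mode (names simplified; the real loop lives in mmu_mapin_ram()):

        /* total_lowmem is unsigned, so subtracting a larger constant wraps
         * around to a huge value instead of going negative. */
        unsigned long total_lowmem = 8 << 20;            /* 8M board */
        unsigned long left = total_lowmem - (16 << 20);  /* 0xff800000 on
                                                            32-bit: bug */
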
| * [POWERPC] 4xx: Deal with 44x virtually tagged icache (Benjamin Herrenschmidt, 2007-11-01, 1 file, -0/+1)

    The 44x family has an interesting "feature", which is a virtually tagged instruction cache (yuck!). So far we haven't dealt with it properly, which means we've been mostly lucky or people didn't report the problems, unless people have been running custom patches in their distro...

    This is an attempt at fixing it properly. I chose to do it by setting a global flag whenever we change a PTE that was previously marked executable, and flushing the entire instruction cache upon return to user space when that happens.

    This is a bit heavy-handed, but it's hard to do more fine-grained flushes, as the icbi instruction on those processors, for some very strange reason (since the cache is virtually mapped), still requires a valid TLB entry for reading in the target address space, which isn't something I want to deal with.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>

| * [POWERPC] 4xx: Fix 4xx flush_tlb_page() (Benjamin Herrenschmidt, 2007-11-01, 2 files, -3/+3)

    On 4xx CPUs, the current implementation of flush_tlb_page() uses a low-level _tlbie() assembly function that only works for the current PID. Thus, invalidations caused by, for example, a COW fault triggered by get_user_pages() from a different context will not work properly, causing, among other things, gdb breakpoints to fail.

    This patch adds a "pid" argument to _tlbie() on 4xx processors, and uses it to flush entries in the right context. FSL BookE also gets the argument, but it seems they don't need it (their tlbivax form ignores the PID when invalidating, according to the document I have).

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Acked-by: Kumar Gala <galak@kernel.crashing.org>
    Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>

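    A hedged sketch of the resulting interface (the assembly body is elided, and the expression for looking up the PID of the vma's address space is an assumption):

        /* _tlbie now names the context to invalidate instead of implicitly
         * acting on the current PID. */
        static inline void _tlbie(unsigned long address, unsigned int pid);

        void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
        {
                /* PID of the owning address space (field name assumed) */
                unsigned int pid = vma ? vma->vm_mm->context.id : 0;

                _tlbie(vmaddr, pid);
        }
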
* | [POWERPC] Fix switch_slb handling of 1T ESID values (Will Schmidt, 2007-11-08, 1 file, -3/+31)

    Now that we have 1TB segment size support, we need to use the GET_ESID_1T macro when comparing ESID values for pc, stack, and unmapped_base within switch_slb(). A new helper function called esids_match() contains the logic for deciding when to call GET_ESID and GET_ESID_1T.

    This fixes a duplicate-SLB-entry-inspired machine check exception I was seeing when trying to run Java on a POWER6 partition. Tested on POWER6 and POWER5.

    Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

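    A hedged sketch of the helper's logic (SID_SHIFT_1T and the feature test come from the 1TB segment support; the exact body is an assumption):

        /* Two addresses share a segment only if their ESIDs match at the
         * segment size in use for those addresses. */
        static int esids_match(unsigned long addr1, unsigned long addr2)
        {
                int esid_1t_count;

                /* No 1T segments: compare 256MB ESIDs. */
                if (!cpu_has_feature(CPU_FTR_1T_SEGMENT))
                        return GET_ESID(addr1) == GET_ESID(addr2);

                /* Count how many of the two addresses sit in 1T segments. */
                esid_1t_count = (addr1 >= (1UL << SID_SHIFT_1T)) +
                                (addr2 >= (1UL << SID_SHIFT_1T));

                if (esid_1t_count == 1)   /* mixed sizes can never match */
                        return 0;

                return esid_1t_count ?
                        GET_ESID_1T(addr1) == GET_ESID_1T(addr2) :
                        GET_ESID(addr1) == GET_ESID(addr2);
        }
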
* | [POWERPC] Include udbg.h when using udbg_printf (Will Schmidt, 2007-11-08, 2 files, -0/+2)

    This fixes the error:

        error: implicit declaration of function "udbg_printf"

    We have a few spots where we reference udbg_printf() without #including udbg.h. These are within #ifdef DEBUG blocks, so they go unnoticed until we do a #define DEBUG or #define DEBUG_LOW nearby.

    Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* | [POWERPC] Fix demotion of segments to 4K pages (Benjamin Herrenschmidt, 2007-10-29, 2 files, -4/+7)

    When demoting a process to use 4K HW pages (instead of 64K), which happens under various circumstances, such as doing cache-inhibited mappings on machines that do not support 64K CI pages, the assembly hash code calls back into the C function flush_hash_page(). This function's prototype was recently changed to accommodate 1T segments, but the assembly call site was not updated, causing applications that do demotion to hang.

    In addition, when updating the per-CPU PACA for the new sizes, we didn't properly update the slice "map", thus causing the SLB miss code to re-insert segments for the wrong size.

    This fixes both and adds a warning comment next to the C implementation to try to avoid problems next time someone changes it.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* pid namespaces: define is_global_init() and is_container_init() (Serge E. Hallyn, 2007-10-19, 1 file, -1/+1)

    is_init() is an ambiguous name for the pid==1 check. Split it into is_global_init() and is_container_init().

    A cgroup init has its tsk->pid == 1. A global init also has its tsk->pid == 1, and its active pid namespace is the init_pid_ns. But rather than check the active pid namespace, compare the task structure with 'init_pid_ns.child_reaper', which is initialized during boot to the /sbin/init process and never changes.

    Changelog:

        2.6.22-rc4-mm2-pidns1:
        - Use 'init_pid_ns.child_reaper' to determine if a given task is the global init (/sbin/init) process. This would improve performance and remove dependence on the task_pid().

        2.6.21-mm2-pidns2:
        - [Sukadev Bhattiprolu] Changed is_container_init() calls in {powerpc,ppc,avr32}/traps.c for the _exception() call to is_global_init(). This way, we kill only the cgroup if the cgroup's init has a bug rather than force a kernel panic.

    [akpm@linux-foundation.org: fix comment]
    [sukadev@us.ibm.com: Use is_global_init() in arch/m32r/mm/fault.c]
    [bunk@stusta.de: kernel/pid.c: remove unused exports]
    [sukadev@us.ibm.com: Fix capability.c to work with threaded init]
    Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
    Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
    Acked-by: Pavel Emelianov <xemul@openvz.org>
    Cc: Eric W. Biederman <ebiederm@xmission.com>
    Cc: Cedric Le Goater <clg@fr.ibm.com>
    Cc: Dave Hansen <haveblue@us.ibm.com>
    Cc: Herbert Poetzel <herbert@13thfloor.at>
    Cc: Kirill Korotaev <dev@sw.ru>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

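    A hedged sketch of the shape of the two checks (the real definitions live in the scheduler and pid-namespace headers and may differ in detail):

        /* A container init is pid 1 within its own pid namespace. */
        static inline int is_container_init(struct task_struct *tsk)
        {
                return tsk->pid == 1;
        }

        /* The global init is specifically the boot-time /sbin/init,
         * recorded once in init_pid_ns.child_reaper. */
        static inline int is_global_init(struct task_struct *tsk)
        {
                return tsk == init_pid_ns.child_reaper;
        }
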
* Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc (Linus Torvalds, 2007-10-17, 3 files, -3/+5)
|\

    * 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (24 commits)
      [POWERPC] Fix vmemmap warning in init_64.c
      [POWERPC] Fix 64 bits vDSO DWARF info for CR register
      [POWERPC] Add 1TB workaround for PA6T
      [POWERPC] Enable NO_HZ and high res timers for pseries and ppc64 configs
      [POWERPC] Quieten cache information at boot
      [POWERPC] Quieten clockevent printk
      [POWERPC] Enable SLUB in *_defconfig
      [POWERPC] Fix 1TB segment detection
      [POWERPC] Fix iSeries_hpte_insert prototype
      [POWERPC] Fix copyright symbol
      [POWERPC] ibmebus: Move to of_device and of_platform_driver, match eHCA and eHEA drivers
      [POWERPC] ibmebus: Add device creation and bus probing based on of_device
      [POWERPC] ibmebus: Remove bus match/probe/remove functions
      [POWERPC] Move of_device allocation into of_device.[ch]
      [POWERPC] mpc52xx: device tree changes for FEC and MDIO
      [POWERPC] bestcomm: GenBD task support
      [POWERPC] bestcomm: FEC task support
      [POWERPC] bestcomm: ATA task support
      [POWERPC] bestcomm: core bestcomm support for Freescale MPC5200
      [POWERPC] mpc52xx: Update mpc52xx_psc structure with B revision changes
      ...

| * [POWERPC] Fix vmemmap warning in init_64.c (Tony Breeds, 2007-10-17, 1 file, -1/+1)

    Use the right printk format to silence the following warning:

          CC      arch/powerpc/mm/init_64.o
        arch/powerpc/mm/init_64.c: In function 'vmemmap_populate':
        arch/powerpc/mm/init_64.c:243: warning: format '%p' expects type 'void *', but argument 4 has type 'long unsigned int'

    Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

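    A hedged sketch of the class of fix (the exact format string in vmemmap_populate is an assumption): print the unsigned long with a long hex conversion rather than the pointer conversion.

        /* __pa(p) is unsigned long, so %p for argument 4 was wrong: */
        printk(KERN_WARNING "vmemmap %08lx allocated at %p, physical %08lx.\n",
               start, p, __pa(p));
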
| * [POWERPC] Add 1TB workaround for PA6T (Olof Johansson, 2007-10-17, 2 files, -1/+3)

    PA6T has a bug where the slbie instruction does not honor the large segment bit. As a result, we have to always use slbia when switching context.

    We don't have to worry about changing the slbies during fault processing, since they should never be replacing one VSID with another using the same ESID; i.e., there's no risk of inserting duplicate entries due to a failed slbie of the old entry. So as long as we clear it out on context switch we should be fine.

    Signed-off-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

| * [POWERPC] Fix 1TB segment detection (Olof Johansson, 2007-10-17, 1 file, -1/+1)

    A buglet in the 1TB detection made it return after checking the first property word, even if that word was not a match.

    Signed-off-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* | spin_lock_unlocked cleanups (Roel Kluin, 2007-10-17, 1 file, -1/+1)

    Replace some SPIN_LOCK_UNLOCKED with DEFINE_SPINLOCK.

    Signed-off-by: Roel Kluin <12o3l@tiscali.nl>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

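    The pattern being replaced (illustrative lock name; DEFINE_SPINLOCK gives each lock a unique static definition, which lockdep needs):

        /* before */
        static spinlock_t some_lock = SPIN_LOCK_UNLOCKED;

        /* after */
        static DEFINE_SPINLOCK(some_lock);
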
* | Slab API: remove useless ctor parameter and reorder parameters (Christoph Lameter, 2007-10-17, 2 files, -2/+2)

    Slab constructors currently have a flags parameter that is never used, and the order of the arguments is opposite to other slab functions: the object pointer is placed before the kmem_cache pointer. Convert

        ctor(void *object, struct kmem_cache *s, unsigned long flags)

    to

        ctor(struct kmem_cache *s, void *object)

    throughout the kernel.

    [akpm@linux-foundation.org: coupla fixes]
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

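    What the conversion looks like at a typical call site (hypothetical ctor, shown as a sketch):

        /* before: object first, cache second, unused flags last */
        static void zero_ctor(void *obj, struct kmem_cache *cache,
                              unsigned long flags)
        {
                memset(obj, 0, kmem_cache_size(cache));
        }

        /* after: cache first, object second, no flags */
        static void zero_ctor(struct kmem_cache *cache, void *obj)
        {
                memset(obj, 0, kmem_cache_size(cache));
        }
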
* Update PowerPC vmemmap code for 1TB segments (Anton Blanchard, 2007-10-16, 1 file, -1/+2)

    htab_bolt_mapping takes another argument now that the 1TB code has been merged. Update vmemmap_populate to match.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Fix memory hot remove not configured case (KAMEZAWA Hiroyuki, 2007-10-16, 1 file, -45/+0)

    Arch-dependent code around CONFIG_MEMORY_HOTREMOVE is currently a mess; this patch cleans it up. This is against 2.6.23-rc6-mm1.

    - fix compile failure on ia64 in the CONFIG_MEMORY_HOTPLUG && !CONFIG_MEMORY_HOTREMOVE case
    - for !CONFIG_MEMORY_HOTREMOVE, add a generic no-op remove_memory(), which returns -EINVAL
    - removed remove_pages(), only used in powerpc
    - removed no-op remove_memory() in i386, sh, sparc64, x86_64
    - only powerpc returned -ENOSYS at memory hot remove (no-op); change it to return -EINVAL

    Note: Currently, only ia64 supports CONFIG_MEMORY_HOTREMOVE. I welcome other archs if there are requirements and testers.

    Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

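    A sketch of the generic no-op described above (header placement assumed):

        #ifndef CONFIG_MEMORY_HOTREMOVE
        /* Hot remove is not configured, so reject any removal request. */
        static inline int remove_memory(u64 start, u64 size)
        {
                return -EINVAL;
        }
        #endif /* CONFIG_MEMORY_HOTREMOVE */
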
* ppc64: SPARSEMEM_VMEMMAP support (Andy Whitcroft, 2007-10-16, 1 file, -0/+67)

    Enable virtual memmap support for SPARSEMEM on PPC64 systems. Slice a 16th off the end of the linear mapping space and use that to hold the vmemmap. Uses the same size mapping as used in the linear 1:1 kernel mapping.

    [pbadari@gmail.com: fix warning]
    Signed-off-by: Andy Whitcroft <apw@shadowen.org>
    Acked-by: Mel Gorman <mel@csn.ul.ie>
    Cc: Christoph Lameter <clameter@sgi.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* [POWERPC] Use 1TB segments (Paul Mackerras, 2007-10-12, 9 files, -133/+293)

    This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree.

    We don't currently use 1TB segments for user addresses < 1T, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here.

    Parts of this patch were originally written by Ben Herrenschmidt.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] 85xx: Failure with odd memory sizes and CONFIG_HIGHMEM (Dale Farnsworth, 2007-10-08, 1 file, -0/+2)

    The CONFIG_FSL_BOOKE mmu setup code fails when CONFIG_HIGHMEM=y and the 3 fixed TLB entries cannot exactly map the lowmem size. Each TLB entry can map 4MB, 16MB, 64MB or 256MB, so the failure is observed when the kernel lowmem size is not equal to the sum of up to 3 of those values. (For example, 192M = 64M + 64M + 64M maps exactly, but 40M cannot be formed from up to three of those values.)

    Normally, memory is sized in nice numbers, but I observed this problem while testing a crash dump kernel. The failure can also be observed by artificially reducing the kernel's main memory via the mem= kernel command line parameter.

    This commit fixes the problem by setting __initial_memory_limit in adjust_total_lowmem().

    Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

* [POWERPC] 8xx: Set initial memory limit (John Traill, 2007-10-03, 1 file, -0/+3)

    The 8xx can only support a maximum of 8M during early boot (it seems a lot of 8xx boards only have 8M, so the bug was never triggered), but the early allocator isn't aware of this. The following change makes it able to run with larger memory.

    Signed-off-by: John Traill <john.traill@freescale.com>
    Signed-off-by: Vitaly Bordug <vitb@kernel.crashing.org>
    Signed-off-by: Scott Wood <scottwood@freescale.com>
    Signed-off-by: Kumar Gala <galak@kernel.crashing.org>

* [POWERPC] Add memory regions to the kcore list for 32-bit machines (Ed Swarthout, 2007-10-03, 2 files, -0/+39)

    The entries are only 32-bit, so restrict the virtual address to stay below 0xffff_ffff. With KERNELBASE set to 0xc000_0000, this in effect restricts access to the first 1GB of real memory.

    Make setup_kcore conditional on CONFIG_PROC_KCORE for both 32- and 64-bit.

    Signed-off-by: Ed Swarthout <Ed.Swarthout@freescale.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Create and use CONFIG_WORD_SIZE (Stephen Rothwell, 2007-10-03, 1 file, -5/+8)

    Linus made this suggestion for the x86 merge, and this starts the process for powerpc. We assume that CONFIG_PPC64 implies CONFIG_PPC_MERGE and CONFIG_PPC_STD_MMU_32 implies CONFIG_PPC_STD_MMU.

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Remove barriers from the SLB shadow buffer update (Michael Neuling, 2007-09-19, 1 file, -4/+2)

    After talking to an IBM POWER hypervisor (PHYP) design and development guy, there seems to be no need for memory barriers when updating the SLB shadow buffer, provided we only update it from the current CPU, which we do. Also, these guys see no need for these barriers in the future.

    Signed-off-by: Michael Neuling <mikey@neuling.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* [POWERPC] Export new __io{re,un}map_at() symbols (Olof Johansson, 2007-09-14, 1 file, -0/+2)

    Export new __io{re,un}map_at() symbols so modules can use them.

    Signed-off-by: Olof Johansson <olof@lixom.net>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* Merge branch 'linux-2.6' into for-2.6.24 (Paul Mackerras, 2007-08-28, 1 file, -8/+28)
|\
| * [POWERPC] Fix SLB initialization at boot time (Paul Mackerras, 2007-08-25, 1 file, -8/+28)

    This partially reverts edd0622bd2e8f755c960827e15aa6908c3c5aa94.

    It turns out that the part of that commit that aimed to ensure that we created an SLB entry for the kernel stack on secondary CPUs when starting the CPU didn't achieve its aim, and in fact caused a regression, because get_paca()->kstack is not initialized at the point where slb_initialize is called.

    This therefore just reverts that part of that commit, while keeping the change to slb_flush_and_rebolt, which is correct and necessary.

    Signed-off-by: Paul Mackerras <paulus@samba.org>

* | [POWERPC] 40x MMU (Josh Boyer, 2007-08-20, 1 file, -2/+2)

    Add MMU definitions for 40x platforms. This also fixes two warnings in 40x_mmu.c.

    Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
    Acked-by: David Gibson <david@gibson.dropbear.id.au>

* | [POWERPC] Rename 4xx paths to 40x (Josh Boyer, 2007-08-20, 2 files, -1/+1)

    "4xx" is a bit of a misnomer for certain things, as they really apply to PowerPC 40x only. Rename some of the files to clean this up.

    Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
    Acked-by: David Gibson <david@gibson.dropbear.id.au>

* | [POWERPC] Tidy up CONFIG_PPC_MM_SLICES code (Stephen Rothwell, 2007-08-17, 2 files, -13/+1)

    This removes some of the #ifdefs from .c files.

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* | [POWERPC] Fix non HUGETLB_PAGE build warning (Stephen Rothwell, 2007-08-17, 2 files, -3/+3)

    This fixes:

        arch/powerpc/mm/mmu_context_64.c: In function 'init_new_context':
        arch/powerpc/mm/mmu_context_64.c:31: warning: unused variable 'new_context'

    Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>

* | [POWERPC] Clean out a bunch of duplicate includes (Jesper Juhl, 2007-08-17, 4 files, -6/+0)

    This removes several duplicate includes from arch/powerpc/.

    Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>