path: root/arch/powerpc/mm
...
* powerpc: Do not assert pte_locked for hugepage PTE entries (Mel Gorman, 2009-05-18, 1 file, -1/+2)

  With CONFIG_DEBUG_VM, an assertion is made when changing the protection
  flags of a PTE that the PTE is locked. Huge pages use a different
  pagetable format and the assertion is bogus and will always trigger,
  with a bug looking something like:

    Unable to handle kernel paging request for data at address 0xf1a00235800006f8
    Faulting instruction address: 0xc000000000034a80
    Oops: Kernel access of bad area, sig: 11 [#1]
    SMP NR_CPUS=32 NUMA Maple
    Modules linked in: dm_snapshot dm_mirror dm_region_hash dm_log dm_mod loop
     evdev ext3 jbd mbcache sg sd_mod ide_pci_generic pata_amd ata_generic ipr
     libata tg3 libphy scsi_mod windfarm_pid windfarm_smu_sat
     windfarm_max6690_sensor windfarm_lm75_sensor windfarm_cpufreq_clamp
     windfarm_core i2c_powermac
    NIP: c000000000034a80 LR: c000000000034b18 CTR: 0000000000000003
    REGS: c000000003037600 TRAP: 0300  Not tainted (2.6.30-rc3-autokern1)
    MSR: 9000000000009032 <EE,ME,IR,DR>  CR: 28002484  XER: 200fffff
    DAR: f1a00235800006f8, DSISR: 0000000040010000
    TASK = c0000002e54cc740[2960] 'map_high_trunca' THREAD: c000000003034000 CPU: 2
    GPR00: 4000000000000000 c000000003037880 c000000000895d30 c0000002e5a2e500
    GPR04: 00000000a0000000 c0000002edc40880 0000005700000393 0000000000000001
    GPR08: f000000011ac0000 01a00235800006e8 00000000000000f5 f1a00235800006e8
    GPR12: 0000000028000484 c0000000008dd780 0000000000001000 0000000000000000
    GPR16: fffffffffffff000 0000000000000000 00000000a0000000 c000000003037a20
    GPR20: c0000002e5f4ece8 0000000000001000 c0000002edc40880 0000000000000000
    GPR24: c0000002e5f4ece8 0000000000000000 00000000a0000000 c0000002e5f4ece8
    GPR28: 0000005700000393 c0000002e5a2e500 00000000a0000000 c000000003037880
    NIP [c000000000034a80] .assert_pte_locked+0xa4/0xd0
    LR [c000000000034b18] .ptep_set_access_flags+0x6c/0xb4
    Call Trace:
    [c000000003037880] [c000000003037990] 0xc000000003037990 (unreliable)
    [c000000003037910] [c000000000034b18] .ptep_set_access_flags+0x6c/0xb4
    [c0000000030379b0] [c00000000014bef8] .hugetlb_cow+0x124/0x674
    [c000000003037b00] [c00000000014c930] .hugetlb_fault+0x4e8/0x6f8
    [c000000003037c00] [c00000000013443c] .handle_mm_fault+0xac/0x828
    [c000000003037cf0] [c0000000000340a8] .do_page_fault+0x39c/0x584
    [c000000003037e30] [c0000000000057b0] handle_page_fault+0x20/0x5c
    Instruction dump:
    7d29582a 7d200074 7800d182 0b000000 3c004000 3960ffff 780007c6 796b00c4
    7d290214 7929a302 1d290068 7d6b4a14 <800b0010> 7c000074 7800d182 0b000000

  This patch fixes the problem by not asserting that the PTE is locked for
  VMAs backed by huge pages.

  Signed-off-by: Mel Gorman <mel@csn.ul.ie>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Add 2.06 tlbie mnemonics (Milton Miller, 2009-05-21, 1 file, -2/+11)

  This adds the PowerPC 2.06 tlbie mnemonics and keeps backwards
  compatibility for CPUs before 2.06. Only useful for bare metal systems.

  Signed-off-by: Milton Miller <miltonm@bga.com>
  Signed-off-by: Michael Neuling <mikey@neuling.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Allow mem=x cmdline to work with 4G+ (Becky Bruce, 2009-05-15, 1 file, -1/+1)

  We're currently choking on mem=4g (and above) due to memory_limit being
  specified as an unsigned long. Make memory_limit phys_addr_t to fix this.

  Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: fix for long standing bug noticed by gcc 4.4.0 (Stephen Rothwell, 2009-04-23, 1 file, -1/+1)

  Previous gcc versions didn't notice this because one of the preceding
  #ifs always evaluated to true. gcc 4.4.0 produced this error:

    arch/powerpc/mm/tlb_nohash_low.S:206:6: error: #elif with no expression

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* Revert "powerpc: Add support for early tlbilx opcode" (Kumar Gala, 2009-04-23, 1 file, -13/+1)

  This reverts commit e9965577406a2148ade97b5e0ce7c448b4ba4ef6. Our HW
  guys were able to fix this so it never sees the light of day.

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc: Fix crash on CPU hotplug (Michael Ellerman, 2009-04-22, 1 file, -1/+1)

  early_init_mmu_secondary() is called at CPU hotplug time, so it must be
  marked as __cpuinit, not __init. Caused by 757c74d2 ("powerpc/mm:
  Introduce early_init_mmu() on 64-bit").

  Tested-by: Sachin Sant <sachinp@in.ibm.com>
  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc/mm: Fix compile warning (Kumar Gala, 2009-04-07, 1 file, -1/+0)

    arch/powerpc/mm/tlb_nohash.c: In function 'flush_tlb_mm':
    arch/powerpc/mm/tlb_nohash.c:128: warning: unused variable 'cpu_mask'

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc: Add support for early tlbilx opcode (Kumar Gala, 2009-04-07, 1 file, -1/+13)

  During the ISA 2.06 development the opcode for tlbilx changed, and some
  early implementations used the old opcode. Add support for an MMU_FTR
  fixup to deal with this.

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc/mm: Introduce early_init_mmu() on 64-bit (Benjamin Herrenschmidt, 2009-03-24, 1 file, -2/+34)

  This moves some MMU related init code out of setup_64.c into
  hash_utils_64.c and calls it early_init_mmu() and
  early_init_mmu_secondary(). This will make it easier to plug in a new
  MMU type.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Fix printk type warning in mmu_context_nohash (Benjamin Herrenschmidt, 2009-03-24, 1 file, -1/+1)

  We need to use %zu instead of %d when printing a sizeof().

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
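  As background, sizeof() yields a size_t, so %zu is the matching format
  specifier; a minimal generic-C illustration (not the mmu_context_nohash
  code itself, and the stand-in struct is hypothetical):

    #include <stdio.h>

    struct context_map { unsigned long bits[4]; };  /* stand-in struct */

    int main(void)
    {
        /* sizeof() has type size_t: %d may warn on platforms where
         * size_t is not int, while %zu is always correct. */
        printf("allocated %zu bytes\n", sizeof(struct context_map));
        return 0;
    }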
* powerpc/mm: Rename arch/powerpc/kernel/mmap.c to mmap_64.c (Benjamin Herrenschmidt, 2009-03-24, 2 files, -1/+1)

  This file is only useful on 64-bit, so we name it accordingly.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Tweak PTE bit combination definitions (Benjamin Herrenschmidt, 2009-03-24, 3 files, -11/+5)

  This patch tweaks the way some PTE bit combinations are defined, in such
  a way that the 32- and 64-bit variants become almost identical, which
  will make it easier to bring in a new common pte-* file for the new
  variant of the Book3-E support.

  The combination of bits defining access to kernel pages is now clearly
  separated from the combination used by userspace and the core VM. The
  resulting generated code should remain identical unless I made a
  mistake.

  Note: while at it, I removed a nonsensical statement related to
  CONFIG_KGDB in ppc_mmu_32.c which could cause kernel mappings to be
  user accessible when that option is enabled. Probably something that
  bitrotted.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* cpumask: Use mm_cpumask() wrapper instead of cpu_vm_mask (Rusty Russell, 2009-03-24, 5 files, -21/+18)

  Makes code futureproof against the impending change to mm->cpu_vm_mask.
  It's also a chance to use the new cpumask_ ops which take a pointer
  (the older ones are deprecated, but there's no hurry for arch code).

  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
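  The difference, roughly, looks like this (an illustrative before/after,
  not lines taken from the diff; the helper mark_mm_active() is made up
  for the example):

    #include <linux/mm_types.h>
    #include <linux/cpumask.h>
    #include <linux/smp.h>

    static void mark_mm_active(struct mm_struct *mm)
    {
        /* Old style: operate on the cpumask_t member directly.      */
        /*     cpu_set(smp_processor_id(), mm->cpu_vm_mask);         */

        /* New style: the mm_cpumask() wrapper plus the pointer-based
         * cpumask_* ops keep working even if cpu_vm_mask later
         * changes representation. */
        cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
    }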
* powerpc/mm: Properly wire up get_user_pages_fast() on 32-bit (Benjamin Herrenschmidt, 2009-03-11, 2 files, -3/+17)

  While we did add support for _PAGE_SPECIAL on some 32-bit platforms, we
  never actually built get_user_pages_fast() on them. This fixes it,
  which requires a little bit of ifdef'ing around.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Wire up /proc/vmallocinfo to our ioremap() (Benjamin Herrenschmidt, 2009-03-11, 2 files, -11/+28)

  This adds the necessary bits and pieces to the powerpc implementation of
  ioremap to benefit from caller tracking in /proc/vmallocinfo, at least
  for ioremaps done after mem init, as the older ones aren't tracked.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/fsl-booke: Add support for tlbilx instructions (Kumar Gala, 2009-03-09, 1 file, -7/+37)

  The e500mc core supports the new tlbilx instructions that do core-local
  invalidates and also provide us the ability to take down all TLB
  entries matching a given PID.

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc: Increase stack gap on 64bit binaries (Anton Blanchard, 2009-02-23, 1 file, -2/+9)

  On 64bit there is a possibility our stack and mmap randomisation will
  put the two close enough together that we can't expand our stack to
  match the specified ulimit. To avoid this, start the upper mmap address
  at 1GB + 128MB below the top of our address space, so in the worst case
  we end up with the same ~128MB hole as on 32bit. This works because we
  randomise the stack over a 1GB range.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Ensure random space between stack and mmaps (Anton Blanchard, 2009-02-23, 1 file, -3/+11)

  get_random_int() returns the same value within a 1 jiffy interval. This
  means that the mmap and stack regions will almost always end up the
  same distance apart, making a relative offset based attack possible.
  To fix this, shift the randomness we use for the mmap region by 1 bit.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Randomise mmap start address (Anton Blanchard, 2009-02-23, 1 file, -1/+16)

  Randomise mmap start address - 8MB on 32bit and 1GB on 64bit tasks.
  Until ppc32 uses the mmap.c functionality, this is ppc64 specific.

  Before (the same command run repeatedly always lands at the same address):

    # ./test & cat /proc/${!}/maps|tail -2|head -1
    f75fe000-f7fff000 rw-p f75fe000 00:00 0
    f75fe000-f7fff000 rw-p f75fe000 00:00 0
    f75fe000-f7fff000 rw-p f75fe000 00:00 0
    f75fe000-f7fff000 rw-p f75fe000 00:00 0
    f75fe000-f7fff000 rw-p f75fe000 00:00 0

  After:

    # ./test & cat /proc/${!}/maps|tail -2|head -1
    f718b000-f7b8c000 rw-p f718b000 00:00 0
    f7551000-f7f52000 rw-p f7551000 00:00 0
    f6ee7000-f78e8000 rw-p f6ee7000 00:00 0
    f74d4000-f7ed5000 rw-p f74d4000 00:00 0
    f6e9d000-f789e000 rw-p f6e9d000 00:00 0

  Similar for 64bit, but with 1GB of scatter:

    # ./test & cat /proc/${!}/maps|tail -2|head -1
    fffb97b5000-fffb97b6000 rw-p fffb97b5000 00:00 0
    fffce9a3000-fffce9a4000 rw-p fffce9a3000 00:00 0
    fffeaaf2000-fffeaaf3000 rw-p fffeaaf2000 00:00 0
    fffd88ac000-fffd88ad000 rw-p fffd88ac000 00:00 0
    fffbc62e000-fffbc62f000 rw-p fffbc62e000 00:00 0

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
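  A hedged sketch of how such a randomised mmap base can be computed
  (illustrative only; the helper names mmap_rnd()/mmap_base(), the
  is_32bit_task() check, MIN_GAP, and the exact scatter constants are
  assumptions, not the literal arch/powerpc/mm code):

    static unsigned long mmap_rnd(void)
    {
        unsigned long rnd = 0;

        if (current->flags & PF_RANDOMIZE) {
            if (is_32bit_task())
                /* roughly 8MB of scatter, counted in pages */
                rnd = get_random_int() % (1UL << (23 - PAGE_SHIFT));
            else
                /* roughly 1GB of scatter, counted in pages */
                rnd = get_random_int() % (1UL << (30 - PAGE_SHIFT));
        }
        return rnd << PAGE_SHIFT;
    }

    static unsigned long mmap_base(void)
    {
        /* Keep a gap below the top of the address space for the stack. */
        return PAGE_ALIGN(TASK_SIZE - MIN_GAP - mmap_rnd());
    }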
* powerpc: Rearrange mmap.c (Anton Blanchard, 2009-02-23, 1 file, -11/+11)

  Rearrange mmap.c to better match the x86 version.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/numa: Cleanup hot_add_scn_to_nid (Nathan Fontenot, 2009-02-23, 1 file, -66/+73)

  This patch reworks hot_add_scn_to_nid and its supporting functions to
  make them easier to understand. There are no functional changes in this
  patch, and it has been tested on machines with memory represented in
  the device tree both as memory nodes and in the ibm,dynamic-memory
  property.

  My previous patch, which introduced support for hotplug memory add on
  systems whose memory was represented by the ibm,dynamic-memory property
  of the device tree, only left the code harder to follow. This will
  hopefully make things easier to understand.

  Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Reduce hashtable size when using 64kB pages (Anton Blanchard, 2009-02-23, 1 file, -2/+3)

  At the moment we size the hashtable based on 4kB pages / 2, even on a
  64kB kernel. This results in a hashtable that is much larger than it
  needs to be. Grab the real page size and size the hashtable based on
  that instead.

  Note: this only has an effect on non-hypervisor machines.

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge commit 'kumar/next' into next (Benjamin Herrenschmidt, 2009-02-18, 1 file, -3/+3)
* powerpc/fsl-booke: Fix compile warning (Kumar Gala, 2009-02-12, 1 file, -2/+2)

    arch/powerpc/mm/fsl_booke_mmu.c: In function 'adjust_total_lowmem':
    arch/powerpc/mm/fsl_booke_mmu.c:221: warning: format '%ld' expects type
    'long int', but argument 3 has type 'phys_addr_t'

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
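  The usual fix for this class of warning is to cast the phys_addr_t
  (which may be 32 or 64 bits wide depending on the config) to unsigned
  long long and print it with %llu/%llx; a small sketch with an assumed
  helper name, not the literal patch:

    #include <linux/kernel.h>
    #include <linux/types.h>

    static void report_lowmem(phys_addr_t total_lowmem)
    {
        /* phys_addr_t may be u32 or u64, so cast explicitly and use a
         * 64-bit conversion specifier. */
        pr_info("lowmem mapped by CAMs: %llu bytes (%llu MB)\n",
                (unsigned long long)total_lowmem,
                (unsigned long long)total_lowmem >> 20);
    }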
* powerpc/fsl-booke: Add new ISA 2.06 page sizes and MAS defines (Kumar Gala, 2009-02-12, 1 file, -1/+1)

  The Power ISA 2.06 added power-of-two page sizes to the embedded MMU
  architecture. It is done in such a way as to be code compatible with
  the existing HW. Made the minor code changes to support both
  power-of-two and power-of-four page sizes. Also added some new MAS bits
  and macros that are defined as part of the 2.06 ISA. Renamed some
  things to use the 'Book-3e' concept to convey the new MMU that is based
  on the Freescale Book-E MMU programming model.

  Note: it is still invalid to try to use a page size that isn't
  supported by the CPU.

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* Merge commit 'origin/master' into next (Benjamin Herrenschmidt, 2009-02-18, 2 files, -3/+4)

  Manual merge of: arch/powerpc/include/asm/pgtable-ppc32.h
* powerpc/mm: Fix numa reserve bootmem page selection (Dave Hansen, 2009-02-13, 1 file, -2/+3)

  Fix the powerpc NUMA reserve bootmem page selection logic.

  Commit 8f64e1f2d1e09267ac926e15090fd505c1c0cbcb (powerpc: Reserve in
  bootmem lmb reserved regions that cross NUMA nodes) changed the logic
  for how the powerpc LMB reserved regions were converted to bootmem
  reserved regions. As the following discussion reports, the new logic
  was not correct.

  mark_reserved_regions_for_nid() goes through each LMB on the system
  that specifies a reserved area. It searches for active regions that
  intersect with that LMB and are on the specified node. It attempts to
  bootmem-reserve only the area where the active region and the reserved
  LMB intersect. We can not reserve things on other nodes as they may not
  have bootmem structures allocated yet.

  We base the size of the bootmem reservation on two possible things.
  Normally, we just make the reservation start and stop exactly at the
  start and end of the LMB. However, the LMB reservations are not aware
  of NUMA nodes and on occasion a single LMB may cross into several
  adjacent active regions. Those may even be on different NUMA nodes and
  will require separate calls to the bootmem reserve functions. So, the
  bootmem reservation must be trimmed to fit inside the current active
  region.

  That's all fine and dandy, but we trim the reservation in a
  page-aligned fashion. That's bad because we start the reservation at a
  non-page-aligned address: physbase. The reservation may only span 2
  bytes, but those bytes may span two pfns and cause a reserve_size of
  2*PAGE_SIZE.

  Take the case where you reserve 0x2 bytes at 0x0fff and where the
  active region ends at 0x1000. You'll jump into that if() statement, but
  node_ar.end_pfn=0x1 and start_pfn=0x0. You'll end up with a
  reserve_size=0x1000, and then call

    reserve_bootmem_node(node, physbase=0xfff, size=0x1000);

  0x1000 may not be on the same node as 0xfff. Oops.

  In almost all the vm code, end_<anything> is not inclusive. If you have
  an end_pfn of 0x1234, page 0x1234 is not included in the range. Using
  PFN_UP instead of (>> PAGE_SHIFT) will make this consistent with the
  other VM code.

  We also need to do the math for the reserved size with physbase instead
  of start_pfn. node_ar.end_pfn << PAGE_SHIFT is *precisely* the end of
  the node. However, (start_pfn << PAGE_SHIFT) is *NOT* precisely the
  beginning of the reserved area. That is, of course, physbase. If we
  don't use physbase here, the reserve_size can be made too large.

  From: Dave Hansen <dave@linux.vnet.ibm.com>
  Tested-by: Geoff Levand <geoffrey.levand@am.sony.com>  Tested on PS3.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
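  A worked version of the 0x0fff example above, using the standard
  PFN_UP/PFN_DOWN helpers from include/linux/pfn.h and assuming 4kB pages
  (a sketch for illustration, not the patch itself):

    #include <linux/pfn.h>   /* PFN_UP, PFN_DOWN */
    #include <linux/mm.h>    /* PAGE_SHIFT */

    static void reserve_size_example(void)
    {
        unsigned long physbase = 0x0fff, size = 0x2;
        unsigned long node_end_pfn = 1;   /* node ends at 0x1000, exclusive */

        /* Rounding the end up with PFN_UP shows the reservation touches
         * pfns 0 and 1: PFN_UP(0x0fff + 0x2) == 2. */
        unsigned long end_pfn = PFN_UP(physbase + size);

        /* Buggy trim: page-granular on both sides gives 0x1000 bytes
         * starting at 0x0fff, running past this node's last byte. */
        unsigned long buggy = (node_end_pfn << PAGE_SHIFT)
                              - (PFN_DOWN(physbase) << PAGE_SHIFT);

        /* Fixed trim: measure from physbase itself, so only the single
         * byte of the reservation inside this node gets reserved here. */
        unsigned long fixed = (node_end_pfn << PAGE_SHIFT) - physbase;

        (void)end_pfn; (void)buggy; (void)fixed;   /* 2, 0x1000, 0x1 */
    }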
* powerpc/mm: Fix _PAGE_COHERENT support on classic ppc32 HW (Kumar Gala, 2009-02-11, 1 file, -1/+1)

  The following commit:

    commit 64b3d0e8122b422e879b23d42f9e0e8efbbf9744
    Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Date:   Thu Dec 18 19:13:51 2008 +0000

        powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED

  broke setting of the _PAGE_COHERENT bit in the PPC HW PTE. Since we now
  actually set _PAGE_COHERENT in the Linux PTE, we shouldn't be clearing
  it out before we propagate it to the PPC HW PTE.

  Reported-by: Martyn Welch <martyn.welch@gefanuc.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Rework I$/D$ coherency (v3) (Benjamin Herrenschmidt, 2009-02-11, 3 files, -62/+148)

  This patch reworks the way we do I and D cache coherency on PowerPC.

  The "old" way was split into 3 different parts depending on the
  processor type:

   - Hash with per-page exec support (64-bit and >= POWER4 only) does it
     at hashing time, by preventing exec on unclean pages and cleaning
     pages on exec faults.

   - Everything without per-page exec support (32-bit hash, 8xx, and
     64-bit < POWER4) does it for all pages going to user space in
     update_mmu_cache().

   - Embedded with per-page exec support does it from do_page_fault() on
     exec faults, in a way similar to what the hash code does.

  That leads to confusion, and bugs. For example, the method using
  update_mmu_cache() is racy on SMP, where another processor can see the
  new PTE and hash it in before we have cleaned the cache, and then blow
  up trying to execute. This is hard to hit but I think it has bitten us
  in the past. Also, it's inefficient for embedded, where we always end
  up having to do at least one more page fault.

  This reworks the whole thing by moving the cache sync into two main
  call sites, though we keep different behaviours depending on the HW
  capability. The call sites are set_pte_at(), which is now made out of
  line, and ptep_set_access_flags(), which joins the former in pgtable.c.

  The base idea for Embedded with per-page exec support is that we now do
  the flush at set_pte_at() time when coming from an exec fault, which
  allows us to avoid the double fault problem completely (we can even
  improve the situation more by implementing TLB preload in
  update_mmu_cache(), but that's for later).

  If for some reason we didn't do it there and we try to execute, we'll
  hit the page fault, which will do a minor fault, which will hit
  ptep_set_access_flags() to do things like update _PAGE_ACCESSED or
  _PAGE_DIRTY if needed; we just make these guys also perform the I/D
  cache sync for exec faults now. This second path is the catch-all for
  things that weren't cleaned at set_pte_at() time.

  For cpus without per-page exec support, we always do the sync at
  set_pte_at(), thus guaranteeing that when the PTE is visible to other
  processors, the cache is clean.

  For the 64-bit hash with per-page exec support case, we keep the old
  mechanism for now. I'll look into changing it later, once I've reworked
  a bit how we use _PAGE_EXEC.

  This is also a first step for adding _PAGE_EXEC support for embedded
  platforms.

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Move 64-bit unmapped_area to top of address space (Anton Blanchard, 2009-02-11, 1 file, -6/+0)

  We currently place mmaps just below the stack on 32bit, but leave them
  in the middle of the address space on 64bit:

    00100000-00120000 r-xp 00100000 00:00 0 [vdso]
    10000000-10010000 r-xp 00000000 08:06 179534 /tmp/sleep
    10010000-10020000 rw-p 00000000 08:06 179534 /tmp/sleep
    10020000-10130000 rw-p 10020000 00:00 0 [heap]
    40000000000-40000030000 r-xp 00000000 08:06 440743 /lib64/ld-2.9.so
    40000030000-40000040000 rw-p 00020000 08:06 440743 /lib64/ld-2.9.so
    40000050000-400001f0000 r-xp 00000000 08:06 440671 /lib64/libc-2.9.so
    400001f0000-40000200000 r--p 00190000 08:06 440671 /lib64/libc-2.9.so
    40000200000-40000220000 rw-p 001a0000 08:06 440671 /lib64/libc-2.9.so
    40000220000-40008230000 rw-p 40000220000 00:00 0
    fffffbc0000-fffffd10000 rw-p fffffeb0000 00:00 0 [stack]

  Right now it isn't an issue, but at some stage we will run into mmap or
  hugetlb allocation issues. Using the same layout as 32bit gives us some
  breathing room. This matches what x86-64 is doing too.

    00100000-00103000 r-xp 00100000 00:00 0 [vdso]
    10000000-10001000 r-xp 00000000 08:06 554894 /tmp/test
    10010000-10011000 r--p 00000000 08:06 554894 /tmp/test
    10011000-10012000 rw-p 00001000 08:06 554894 /tmp/test
    10012000-10113000 rw-p 10012000 00:00 0 [heap]
    fffefdf7000-ffff7df8000 rw-p fffefdf7000 00:00 0
    ffff7df8000-ffff7f97000 r-xp 00000000 08:06 130591 /lib64/libc-2.9.so
    ffff7f97000-ffff7fa6000 ---p 0019f000 08:06 130591 /lib64/libc-2.9.so
    ffff7fa6000-ffff7faa000 r--p 0019e000 08:06 130591 /lib64/libc-2.9.so
    ffff7faa000-ffff7fc0000 rw-p 001a2000 08:06 130591 /lib64/libc-2.9.so
    ffff7fc0000-ffff7fc4000 rw-p ffff7fc0000 00:00 0
    ffff7fc4000-ffff7fec000 r-xp 00000000 08:06 130663 /lib64/ld-2.9.so
    ffff7fee000-ffff7ff0000 rw-p ffff7fee000 00:00 0
    ffff7ffa000-ffff7ffb000 rw-p ffff7ffa000 00:00 0
    ffff7ffb000-ffff7ffc000 r--p 00027000 08:06 130663 /lib64/ld-2.9.so
    ffff7ffc000-ffff7fff000 rw-p 00028000 08:06 130663 /lib64/ld-2.9.so
    ffff7fff000-ffff8000000 rw-p ffff7fff000 00:00 0
    fffffc59000-fffffc6e000 rw-p ffffffeb000 00:00 0 [stack]

  Signed-off-by: Anton Blanchard <anton@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/numa: Remove redundant find_cpu_node() (Milton Miller, 2009-02-11, 1 file, -31/+2)

  Use of_get_cpu_node, which is a superset of numa.c's find_cpu_node, in
  a less restrictive section (text vs cpuinit).

  Signed-off-by: Milton Miller <miltonm@bga.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/numa: Avoid possible reference beyond prop. length in find_min_common_depth() (Milton Miller, 2009-02-11, 1 file, -1/+1)

  find_min_common_depth() was checking the property length incorrectly.
  The value is in bytes not cells, and it is using the second entry.

  Signed-off-By: Milton Miller <miltonm@bga.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge commit 'kumar/next' into next (Benjamin Herrenschmidt, 2009-02-11, 1 file, -47/+46)
* powerpc/fsl-booke: Make CAM entries used for lowmem configurable (Trent Piepho, 2009-01-28, 1 file, -1/+5)

  On booke processors, the code that maps low memory only uses up to
  three CAM entries, even though there are sixteen and nothing else uses
  them.

  Make this number configurable in the advanced options menu, along with
  max low memory size. If one wants 1 GB of lowmem, then it's typically
  necessary to have four CAM entries.

  Signed-off-by: Trent Piepho <tpiepho@freescale.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc/fsl-booke: Allow larger CAM sizes than 256 MB (Trent Piepho, 2009-01-28, 1 file, -1/+13)

  The code that maps kernel low memory would only use page sizes up to
  256 MB. On E500v2, pages up to 4 GB are supported.

  However, a page must be aligned to a multiple of the page's size, i.e.
  256 MB pages must be aligned to a 256 MB boundary. This was enforced by
  a requirement that the physical and virtual addresses of the start of
  lowmem be aligned to 256 MB. Clearly, requiring 1GB or 4GB alignment to
  allow pages of that size isn't acceptable.

  To solve this, I simply have adjust_total_lowmem() take alignment into
  account when it decides what size pages to use. Give it
  PAGE_OFFSET = 0x7000_0000, PHYSICAL_START = 0x3000_0000, and 2GB of
  RAM, and it will map pages like this:

    PA 0x3000_0000  VA 0x7000_0000  Size 256 MB
    PA 0x4000_0000  VA 0x8000_0000  Size 1 GB
    PA 0x8000_0000  VA 0xC000_0000  Size 256 MB
    PA 0x9000_0000  VA 0xD000_0000  Size 256 MB
    PA 0xA000_0000  VA 0xE000_0000  Size 256 MB

  Because the lowmem mapping code now takes alignment into account,
  PHYSICAL_ALIGN can be lowered from 256 MB to 64 MB. Even lower might be
  possible. The lowmem code will work down to 4 kB, but it's possible
  some of the boot code will fail before then. Poor alignment will force
  small pages to be used, which, combined with the limited number of TLB1
  pages available, will result in very little memory getting mapped. So
  alignments less than 64 MB probably aren't very useful anyway.

  Signed-off-by: Trent Piepho <tpiepho@freescale.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
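  The sizing constraint can be sketched as follows (hedged: the helper
  name and the bit tricks are illustrative, not the actual
  adjust_total_lowmem() code; real e500 hardware further restricts
  entries to the TSIZE values it supports):

    #include <linux/bitops.h>    /* __ffs, __fls */
    #include <linux/kernel.h>    /* min */
    #include <linux/types.h>     /* phys_addr_t */

    /* The largest block one CAM entry can map is limited both by how much
     * memory is left and by the alignment of the current bases, since a
     * CAM entry must be naturally aligned to its own size. */
    static unsigned long max_cam_size(unsigned long virt, phys_addr_t phys,
                                      unsigned long remaining)
    {
        /* Largest power of two dividing both base addresses. */
        unsigned long align = 1UL << __ffs(virt | (unsigned long)phys);

        /* Largest power of two not exceeding the memory left to map. */
        unsigned long biggest = 1UL << __fls(remaining);

        return min(align, biggest);
    }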
* powerpc/fsl-booke: Remove code duplication in lowmem mapping (Trent Piepho, 2009-01-28, 1 file, -48/+31)

  The code to map lowmem uses three CAM aka TLB[1] entries to cover it.
  The size of each is stored in three globals named __cam0, __cam1, and
  __cam2. All the code that uses them is duplicated three times for each
  of the three variables.

  We have these things called arrays and loops....

  Once converted to use an array, it will be easier to make the number of
  CAMs configurable.

  Signed-off-by: Trent Piepho <tpiepho@freescale.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
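  The shape of the change, roughly (names such as cam[], NUM_CAMS,
  map_lowmem(), and the settlbcam() call signature are illustrative, not
  the exact fsl_booke_mmu.c symbols):

    #define NUM_CAMS 3

    static unsigned long cam[NUM_CAMS];  /* replaces __cam0/__cam1/__cam2 */

    static void map_lowmem(unsigned long virt, phys_addr_t phys, int flags)
    {
        int i;

        /* One loop instead of three hand-unrolled copies; bumping
         * NUM_CAMS is now the only change needed for more entries. */
        for (i = 0; i < NUM_CAMS && cam[i]; i++) {
            settlbcam(i, virt, phys, cam[i], flags, 0);
            virt += cam[i];
            phys += cam[i];
        }
    }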
* powerpc/fsl-booke: Fix mapping functions to use phys_addr_t (Kumar Gala, 2009-02-09, 2 files, -4/+4)

  Fixed v_mapped_by_tlbcam() and p_mapped_by_tlbcam() to use phys_addr_t
  instead of unsigned long. In 36-bit physical mode we really need these
  functions to deal with phys_addr_t when trying to match a physical
  address or when returning one.

  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc/mm: Fix handling of _PAGE_COHERENT in BAT setup code (Gerhard Pircher, 2009-01-28, 1 file, -3/+3)

  _PAGE_COHERENT is now always set in _PAGE_RAM and PAGE_KERNEL,
  respectively. Thus it has to be masked out if the BAT mapping should be
  non-cacheable or if CPU_FTR_NEED_COHERENT is not set.

  This will work on normal SMP setups because we force-set
  CPU_FTR_NEED_COHERENT as part of CPU_FTR_COMMON on SMP.

  Signed-off-by: Gerhard Pircher <gerhard_pircher@gmx.net>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: is_hugepage_only_range() must account for both 4kB and 64kB slices (Dave Kleikamp, 2009-01-16, 1 file, -1/+10)

  The subpage_prot syscall fails on second and subsequent calls for a
  given region, because is_hugepage_only_range() is mis-identifying the
  4 kB slices when the process has a 64 kB page size.

  Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Change u64/s64 to a long long integer type (Ingo Molnar, 2009-01-13, 1 file, -2/+2)

  Convert arch/powerpc/ over to long long based u64:

    -#ifdef __powerpc64__
    -# include <asm-generic/int-l64.h>
    -#else
    -# include <asm-generic/int-ll64.h>
    -#endif
    +#include <asm-generic/int-ll64.h>

  This will avoid recurring spurious warnings in core kernel code that
  come up when people test on their own hardware (i.e. x86 in ~98% of the
  cases). This is what x86 uses, and it generally helps keep 64-bit code
  32-bit clean too.

  [Adjusted to not impact user mode (from paulus) - sfr]

  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* Merge commit 'kumar/kumar-next' into next (Benjamin Herrenschmidt, 2009-01-13, 2 files, -10/+10)
* powerpc/fsl-booke: Remove num_tlbcam_entries (Trent Piepho, 2009-01-07, 2 files, -3/+0)

  This is a global variable defined in fsl_booke_mmu.c with a value that
  gets initialized in assembly code in head_fsl_booke.S. It's never used.

  If some code ever does want to know the number of entries in TLB1, then
  "numcams = mfspr(SPRN_TLB1CFG) & 0xfff" is a whole lot simpler than a
  global initialized during kernel boot from assembly.

  Signed-off-by: Trent Piepho <tpiepho@freescale.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc/fsl-booke: Don't hard-code size of struct tlbcam (Trent Piepho, 2009-01-07, 2 files, -7/+10)

  Some assembly code in head_fsl_booke.S hard-coded the size of struct
  tlbcam to 20 when it indexed the TLBCAM table. Anyone changing the size
  of struct tlbcam would not know to expect that.

  The kernel already has a system to get the size of C structures into
  assembly language files, asm-offsets, so let's use it. The definition
  of the struct gets moved to a header, so that asm-offsets.c can include
  it.

  Signed-off-by: Trent Piepho <tpiepho@freescale.com>
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
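  For readers unfamiliar with asm-offsets, the mechanism looks roughly
  like this (the TLBCAM_SIZE symbol name, the header location, and the
  mulli usage are assumptions for illustration):

    /* In arch/powerpc/kernel/asm-offsets.c: DEFINE() emits the structure
     * size as an assembler-visible constant at build time. */
    #include <linux/kbuild.h>   /* DEFINE() */
    #include <asm/mmu.h>        /* struct tlbcam, assumed to live here now */

    int main(void)
    {
        DEFINE(TLBCAM_SIZE, sizeof(struct tlbcam));
        return 0;
    }

    /* head_fsl_booke.S can then scale an index by TLBCAM_SIZE, e.g.
     * "mulli r12, r11, TLBCAM_SIZE", instead of the magic constant 20. */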
* powerpc/mm: Make clear_fixmap() actually work (Anton Vorontsov, 2009-01-08, 1 file, -1/+2)

  The clear_fixmap() routine issues map_page() with flags set to 0.
  Currently this causes a BUG_ON() inside map_page(), as it assumes that
  a PTE should be clear before mapping. This patch makes map_page()
  trigger the BUG_ON() only if the flags were set.

  Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
  Acked-by: Kumar Gala <galak@kernel.crashing.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
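  The idea can be sketched as follows (hedged: the exact PTE bits checked
  and the helper name are assumptions, not the literal pgtable_32.c diff):

    /* pg points at the kernel PTE for the address being mapped, obtained
     * the usual way via pte_alloc_kernel(). */
    static void check_before_mapping(pte_t *pg, int flags)
    {
        /* Only a real mapping request (non-zero flags) must find an
         * empty, unhashed PTE; clear_fixmap() comes through map_page()
         * with flags == 0 and may overwrite whatever is there. */
        BUG_ON((pte_val(*pg) & (_PAGE_PRESENT | _PAGE_HASHPTE)) && flags);
    }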
* powerpc: Fix missing semicolons in mmu_decl.h (Benjamin Herrenschmidt, 2009-01-08, 1 file, -3/+3)

  This is a brown paper bag from one of my earlier patches that breaks
  the build on 40x and 8xx. And yes, I've now added 40x and 8xx to my
  list of test configs :-)

  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Remove the redundant _tlbil_pid at SMP case (Dave Liu, 2009-01-08, 1 file, -1/+2)

  Signed-off-by: Dave Liu <daveliu@freescale.com>
  Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Cleanup careful_allocation(): consolidate memset() (Dave Hansen, 2009-01-08, 1 file, -6/+5)

  Both users of careful_allocation() immediately memset() the result.
  So, just do it in one place.

  Also give careful_allocation() a 'z' prefix to bring it in line with
  kzmalloc() and friends.

  Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Make careful_allocation() return virtual addrs (Dave Hansen, 2009-01-08, 1 file, -17/+20)

  Since we memset() the result in both of the uses here, just make
  careful_alloc() return a virtual address.

  Also, add a separate variable to store the physical address that comes
  back from the lmb_alloc() functions. This makes it less likely that
  someone will screw it up, forgetting to convert before returning, since
  the vaddr is always in a void* and the paddr is always in an unsigned
  long. I admit this is arbitrary since one of its users needs a paddr
  and one a vaddr, but it does remove a good number of casts.

  Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
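  Combined with the previous cleanup, the helper ends up shaped roughly
  like this (a sketch under assumptions: the careful_zallocation name
  follows the 'z' prefix mentioned above, and the per-node bootmem
  fallback details are omitted):

    static void *careful_zallocation(int nid, unsigned long size,
                                     unsigned long align, unsigned long end_pfn)
    {
        unsigned long ret_paddr;
        void *ret;

        /* lmb hands back a physical address; keep it in its own
         * variable so it cannot be confused with the vaddr. */
        ret_paddr = __lmb_alloc_base(size, align, end_pfn << PAGE_SHIFT);
        if (!ret_paddr)
            panic("numa.c: cannot allocate %lu bytes for node %d",
                  size, nid);

        ret = __va(ret_paddr);   /* callers want a virtual address */
        memset(ret, 0, size);    /* the 'z' in the name */
        return ret;
    }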
* powerpc/mm: Cleanup careful_allocation(): bootmem already panics (Dave Hansen, 2009-01-08, 1 file, -5/+1)

  If we fail a bootmem allocation, the bootmem code itself panics. No
  need to redo it here.

  Also change the wording of the other panic. We don't strictly have to
  allocate memory on the specified node. It is just a hint and that node
  may not even *have* any memory on it. In that case we can and do fall
  back to other nodes.

  Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Add better comment on careful_allocation() (Dave Hansen, 2009-01-08, 1 file, -2/+10)

  The behavior in careful_allocation() really confused me at first. Add a
  comment to hopefully make it easier on the next doofus that looks at
  it.

  Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>