path: root/mm
Commit message | Author | Age | Files | Lines
* freezer: fix racy usage of try_to_freeze in kswapd | Rafael J. Wysocki | 2007-05-07 | 1 | -4/+9
* swsusp: use inline functions for changing page flags | Rafael J. Wysocki | 2007-05-07 | 1 | -3/+3
* slob: fix page order calculation on not 4KB page | Akinobu Mita | 2007-05-07 | 1 | -12/+3
* Slab allocators: remove useless __GFP_NO_GROW flag | Christoph Lameter | 2007-05-07 | 2 | -7/+2
* slab allocators: Remove SLAB_CTOR_ATOMIC | Christoph Lameter | 2007-05-07 | 2 | -23/+4
* slab allocators: Remove SLAB_DEBUG_INITIAL flag | Christoph Lameter | 2007-05-07 | 4 | -32/+4
* get_unmapped_area doesn't need hugetlbfs hacks anymore | Benjamin Herrenschmidt | 2007-05-07 | 1 | -16/+0
* get_unmapped_area handles MAP_FIXED in generic code | Benjamin Herrenschmidt | 2007-05-07 | 1 | -11/+16
* oom: fix constraint deadlock | David Rientjes | 2007-05-07 | 1 | -4/+6
* mm: fix handling of panic_on_oom when cpusets are in use | Yasunori Goto | 2007-05-07 | 1 | -0/+3
* fault injection: fix failslab with CONFIG_NUMA | Akinobu Mita | 2007-05-07 | 1 | -4/+7
* slab allocators: Remove obsolete SLAB_MUST_HWCACHE_ALIGN | Christoph Lameter | 2007-05-07 | 3 | -7/+6
* mm: madvise avoid exclusive mmap_sem | Nick Piggin | 2007-05-07 | 1 | -4/+29
* include KERN_* constant in printk() calls in mm/slab.c | matze | 2007-05-07 | 1 | -3/+6
* slob: handle SLAB_PANIC flag | Akinobu Mita | 2007-05-07 | 1 | -1/+2
* Quicklists for page table pages | Christoph Lameter | 2007-05-07 | 3 | -0/+95
* slub: remove object activities out of checking functions | Christoph Lameter | 2007-05-07 | 1 | -61/+47
* SLUB: Free slabs and sort partial slab lists in kmem_cache_shrink | Christoph Lameter | 2007-05-07 | 1 | -13/+112
* slub: add ability to list alloc / free callers per slab | Christoph Lameter | 2007-05-07 | 1 | -3/+181
* SLUB: Add MIN_PARTIAL | Christoph Lameter | 2007-05-07 | 1 | -19/+36
* slub: validation of slabs (metadata and guard zones) | Christoph Lameter | 2007-05-07 | 1 | -3/+110
* slub: enable tracking of full slabs | Christoph Lameter | 2007-05-07 | 1 | -1/+40
* slub: fix object tracking | Christoph Lameter | 2007-05-07 | 1 | -37/+20
* Add virt_to_head_page and consolidate code in slab and slub | Christoph Lameter | 2007-05-07 | 2 | -11/+8
* mm: optimize compound_head() by avoiding a shared page flag | Christoph Lameter | 2007-05-07 | 1 | -6/+4
* Make page->private usable in compound pages | Christoph Lameter | 2007-05-07 | 5 | -30/+28
* SLUB: allocate smallest object size if the user asks for 0 bytes | Christoph Lameter | 2007-05-07 | 1 | -1/+1
* SLUB: change default alignments | Christoph Lameter | 2007-05-07 | 1 | -2/+2
* SLUB core | Christoph Lameter | 2007-05-07 | 2 | -0/+3145
* slab: mark set_up_list3s() __init | Andrew Morton | 2007-05-07 | 1 | -1/+1
* Do not disable interrupts when reading min_free_kbytes | Mel Gorman | 2007-05-07 | 1 | -1/+2
* slab: NUMA kmem_cache diet | Eric Dumazet | 2007-05-07 | 1 | -4/+20
* SLAB: don't allocate empty shared caches | Eric Dumazet | 2007-05-07 | 1 | -11/+15
* SLAB: use num_possible_cpus() in enable_cpucache() | Eric Dumazet | 2007-05-07 | 1 | -3/+1
* readahead: code cleanup | Jan Kara | 2007-05-07 | 2 | -18/+19
* readahead: improve heuristic detecting sequential reads | Jan Kara | 2007-05-07 | 2 | -3/+9
* Add unitialized_var() macro for suppressing gcc warnings | Borislav Petkov | 2007-05-07 | 1 | -1/+1
* mm: simplify filemap_nopage | Nick Piggin | 2007-05-07 | 1 | -24/+0
* add pfn_valid_within helper for sub-MAX_ORDER hole detection | Andy Whitcroft | 2007-05-07 | 1 | -6/+2
* allow oom_adj of saintly processes | Joshua N Pritikin | 2007-05-07 | 1 | -2/+4
* mm: make read_cache_page synchronous | Nick Piggin | 2007-05-07 | 2 | -14/+38
* slab: ensure cache_alloc_refill terminates | Pekka Enberg | 2007-05-07 | 1 | -0/+8
* mm: remove gcc workaround | Nick Piggin | 2007-05-07 | 1 | -12/+0
* Use ZVC counters to establish exact size of dirtyable pages | Christoph Lameter | 2007-05-07 | 1 | -10/+40
* Safer nr_node_ids and nr_node_ids determination and initial values | Christoph Lameter | 2007-05-07 | 1 | -1/+1
* Add apply_to_page_range() which applies a function to a pte range | Jeremy Fitzhardinge | 2007-05-07 | 1 | -0/+94
* slab: introduce krealloc | Pekka Enberg | 2007-05-07 | 2 | -2/+82
* [PATCH] x86-64: skip cache_free_alien() on non NUMA | Siddha, Suresh B | 2007-05-02 | 1 | -2/+5
* [PATCH] i386: PARAVIRT: add kmap_atomic_pte for mapping highpte pages | Jeremy Fitzhardinge | 2007-05-02 | 1 | -0/+9
* [PATCH] x86: PARAVIRT: add hooks to intercept mm creation and destruction | Jeremy Fitzhardinge | 2007-05-02 | 1 | -0/+4