path: root/mm/slub.c
Commit message | Author | Date | Files | Lines
...
* memory hotplug: make kmem_cache_node for SLUB on memory online avoid panic | Yasunori Goto | 2007-10-22 | 1 | -0/+118
* Slab API: remove useless ctor parameter and reorder parameters | Christoph Lameter | 2007-10-17 | 1 | -6/+6
* SLUB: simplify IRQ off handling | Christoph Lameter | 2007-10-17 | 1 | -11/+7
* slub: list_locations() can use GFP_TEMPORARY | Andrew Morton | 2007-10-16 | 1 | -1/+1
* SLUB: Optimize cacheline use for zeroing | Christoph Lameter | 2007-10-16 | 1 | -2/+12
* SLUB: Place kmem_cache_cpu structures in a NUMA aware way | Christoph Lameter | 2007-10-16 | 1 | -14/+154
* SLUB: Avoid touching page struct when freeing to per cpu slab | Christoph Lameter | 2007-10-16 | 1 | -5/+9
* SLUB: Move page->offset to kmem_cache_cpu->offset | Christoph Lameter | 2007-10-16 | 1 | -41/+11
* SLUB: Do not use page->mapping | Christoph Lameter | 2007-10-16 | 1 | -2/+0
* SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | Christoph Lameter | 2007-10-16 | 1 | -74/+116
* Group short-lived and reclaimable kernel allocations | Mel Gorman | 2007-10-16 | 1 | -0/+3
* Categorize GFP flags | Christoph Lameter | 2007-10-16 | 1 | -2/+3
* Memoryless nodes: SLUB support | Christoph Lameter | 2007-10-16 | 1 | -8/+8
* Slab allocators: fail if ksize is called with a NULL parameter | Christoph Lameter | 2007-10-16 | 1 | -1/+2
* {slub, slob}: use unlikely() for kfree(ZERO_OR_NULL_PTR) check | Satyam Sharma | 2007-10-16 | 1 | -4/+4
* SLUB: direct pass through of page size or higher kmalloc requests | Christoph Lameter | 2007-10-16 | 1 | -25/+38
* slub.c:early_kmem_cache_node_alloc() shouldn't be __init | Adrian Bunk | 2007-10-16 | 1 | -2/+2
* SLUB: accurately compare debug flags during slab cache merge | Christoph Lameter | 2007-09-11 | 1 | -15/+23
* slub: do not fail if we cannot register a slab with sysfs | Christoph Lameter | 2007-08-31 | 1 | -2/+6
* SLUB: do not fail on broken memory configurations | Christoph Lameter | 2007-08-22 | 1 | -1/+8
* SLUB: use atomic_long_read for atomic_long variables | Christoph Lameter | 2007-08-22 | 1 | -3/+3
* SLUB: Fix dynamic dma kmalloc cache creation | Christoph Lameter | 2007-08-09 | 1 | -14/+45
* SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink | Christoph Lameter | 2007-08-09 | 1 | -7/+2
* slub: fix bug in slub debug support | Peter Zijlstra | 2007-07-30 | 1 | -1/+1
* slub: add lock debugging check | Peter Zijlstra | 2007-07-30 | 1 | -0/+1
* mm: Remove slab destructors from kmem_cache_create(). | Paul Mundt | 2007-07-20 | 1 | -3/+1
* slub: fix ksize() for zero-sized pointers | Linus Torvalds | 2007-07-19 | 1 | -1/+1
* SLUB: Fix CONFIG_SLUB_DEBUG use for CONFIG_NUMA | Christoph Lameter | 2007-07-17 | 1 | -0/+4
* SLUB: Move sysfs operations outside of slub_lock | Christoph Lameter | 2007-07-17 | 1 | -13/+15
* SLUB: Do not allocate object bit array on stack | Christoph Lameter | 2007-07-17 | 1 | -14/+25
* Slab allocators: Cleanup zeroing allocations | Christoph Lameter | 2007-07-17 | 1 | -11/+0
* SLUB: Do not use length parameter in slab_alloc() | Christoph Lameter | 2007-07-17 | 1 | -11/+9
* SLUB: Style fix up the loop to disable small slabs | Christoph Lameter | 2007-07-17 | 1 | -1/+1
* mm/slub.c: make code static | Adrian Bunk | 2007-07-17 | 1 | -3/+3
* SLUB: Simplify dma index -> size calculation | Christoph Lameter | 2007-07-17 | 1 | -9/+1
* SLUB: faster more efficient slab determination for __kmalloc | Christoph Lameter | 2007-07-17 | 1 | -7/+64
* SLUB: do proper locking during dma slab creation | Christoph Lameter | 2007-07-17 | 1 | -2/+9
* SLUB: extract dma_kmalloc_cache from get_cache. | Christoph Lameter | 2007-07-17 | 1 | -30/+36
* SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG | Christoph Lameter | 2007-07-17 | 1 | -6/+7
* Slab allocators: support __GFP_ZERO in all allocators | Christoph Lameter | 2007-07-17 | 1 | -9/+15
* Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics | Christoph Lameter | 2007-07-17 | 1 | -13/+16
* Slab allocators: consolidate code for krealloc in mm/util.c | Christoph Lameter | 2007-07-17 | 1 | -37/+0
* SLUB Debug: fix initial object debug state of NUMA bootstrap objects | Christoph Lameter | 2007-07-17 | 1 | -1/+2
* SLUB: ensure that the number of objects per slab stays low for high orders | Christoph Lameter | 2007-07-17 | 1 | -2/+19
* SLUB slab validation: Move tracking information alloc outside of lock | Christoph Lameter | 2007-07-17 | 1 | -10/+7
* SLUB: use list_for_each_entry for loops over all slabs | Christoph Lameter | 2007-07-17 | 1 | -38/+13
* SLUB: change error reporting format to follow lockdep loosely | Christoph Lameter | 2007-07-17 | 1 | -123/+154
* SLUB: support slub_debug on by default | Christoph Lameter | 2007-07-16 | 1 | -28/+51
* slub: remove useless EXPORT_SYMBOL | Christoph Lameter | 2007-07-06 | 1 | -1/+0
* SLUB: Make lockdep happy by not calling add_partial with interrupts enabled d... | Christoph Lameter | 2007-07-03 | 1 | -2/+6
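Several of the entries above converge on the zero-size allocation semantics of the slab allocators ("support __GFP_ZERO in all allocators", "consistent ZERO_SIZE_PTR support and NULL result semantics", "use unlikely() for kfree(ZERO_OR_NULL_PTR) check", "fix ksize() for zero-sized pointers"). The sketch below is a minimal, hypothetical test module written for illustration only, not code from any of the listed commits; it shows the behavior those changes settle on: kmalloc(0) returns the special ZERO_SIZE_PTR value rather than NULL, ZERO_OR_NULL_PTR() recognizes both cases, and kfree() silently ignores them.

/*
 * Hypothetical demo module (not from the log above): zero-size
 * allocation semantics after the 2007 slab allocator cleanups.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static int __init zero_size_demo_init(void)
{
	void *p = kmalloc(0, GFP_KERNEL);	/* expected: ZERO_SIZE_PTR, not NULL */

	printk(KERN_INFO "kmalloc(0) = %p, ZERO_OR_NULL_PTR = %d\n",
	       p, ZERO_OR_NULL_PTR(p));

	kfree(p);	/* no-op for ZERO_SIZE_PTR, just as for NULL */
	kfree(NULL);	/* also a no-op */
	return 0;
}

static void __exit zero_size_demo_exit(void)
{
}

module_init(zero_size_demo_init);
module_exit(zero_size_demo_exit);
MODULE_LICENSE("GPL");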