path: mm/slub.c
Commit message  Author  Date  Files  Lines
* Merge branch 'slab/common-for-cgroups' into slab/for-linus  Pekka Enberg  2012-10-03  1  -89/+56
|\
| * slub: Zero initial memory segment for kmem_cache and kmem_cache_node  Christoph Lameter  2012-09-10  1  -1/+1
| * Revert "mm/sl[aou]b: Move sysfs_slab_add to common"  Pekka Enberg  2012-09-05  1  -2/+17
| * mm/sl[aou]b: Move kmem_cache refcounting to common code  Christoph Lameter  2012-09-05  1  -1/+0
| * mm/sl[aou]b: Shrink __kmem_cache_create() parameter lists  Christoph Lameter  2012-09-05  1  -21/+18
| * mm/sl[aou]b: Move kmem_cache allocations into common code  Christoph Lameter  2012-09-05  1  -17/+7
| * mm/sl[aou]b: Move sysfs_slab_add to common  Christoph Lameter  2012-09-05  1  -13/+2
| * mm/sl[aou]b: Do slab aliasing call from common code  Christoph Lameter  2012-09-05  1  -4/+11
| * mm/sl[aou]b: Move duping of slab name to slab_common.c  Christoph Lameter  2012-09-05  1  -19/+2
| * mm/sl[aou]b: Get rid of __kmem_cache_destroy  Christoph Lameter  2012-09-05  1  -5/+5
| * mm/sl[aou]b: Move freeing of kmem_cache structure to common code  Christoph Lameter  2012-09-05  1  -2/+0
| * mm/sl[aou]b: Use "kmem_cache" name for slab cache with kmem_cache struct  Christoph Lameter  2012-09-05  1  -2/+0
| * mm/sl[aou]b: Extract a common function for kmem_cache_destroy  Christoph Lameter  2012-09-05  1  -25/+11
| * mm/sl[aou]b: Move list_add() to slab_common.c  Christoph Lameter  2012-09-05  1  -2/+0
| * mm/slub: Use kmem_cache for the kmem_cache structure  Christoph Lameter  2012-09-05  1  -4/+4
| * mm/slub: Add debugging to verify correct cache use on kmem_cache_free()  Christoph Lameter  2012-09-05  1  -0/+7
* | Merge branch 'slab/next' into slab/for-linus  Pekka Enberg  2012-10-03  1  -24/+39
|\ \
| * | slub: init_kmem_cache_cpus() and put_cpu_partial() can be static  Fengguang Wu  2012-10-03  1  -2/+2
| * | mm, slub: Rename slab_alloc() -> slab_alloc_node() to match SLAB  Ezequiel Garcia  2012-09-25  1  -9/+15
| * | mm, sl[au]b: Taint kernel when we detect a corrupted slab  Dave Jones  2012-09-19  1  -0/+2
| |/
| * slub: reduce failure of this_cpu_cmpxchg in put_cpu_partial() after unfreezing  Joonsoo Kim  2012-08-16  1  -0/+1
| * slub: Take node lock during object free checks  Christoph Lameter  2012-08-16  1  -12/+18
| * slub: use free_page instead of put_page for freeing kmalloc allocation  Glauber Costa  2012-08-16  1  -1/+1
* | slub: consider pfmemalloc_match() in get_partial_node()  Joonsoo Kim  2012-09-17  1  -5/+10
|/
* mm: slub: optimise the SLUB fast path to avoid pfmemalloc checks  Christoph Lameter  2012-07-31  1  -4/+3
* mm: sl[au]b: add knowledge of PFMEMALLOC reserve pages  Mel Gorman  2012-07-31  1  -2/+27
* mm, slub: ensure irqs are enabled for kmemcheck  David Rientjes  2012-07-10  1  -7/+6
* mm, sl[aou]b: Move kmem_cache_create mutex handling to common code  Christoph Lameter  2012-07-09  1  -15/+13
* mm, sl[aou]b: Use a common mutex definition  Christoph Lameter  2012-07-09  1  -29/+25
* mm, sl[aou]b: Common definition for boot state of the slab allocators  Christoph Lameter  2012-07-09  1  -16/+5
* mm, sl[aou]b: Extract common code for kmem_cache_create()  Christoph Lameter  2012-07-09  1  -10/+1
* slub: remove invalid reference to list iterator variable  Julia Lawall  2012-07-09  1  -1/+1
* slub: refactoring unfreeze_partials()  Joonsoo Kim  2012-06-20  1  -34/+14
* slub: use __cmpxchg_double_slab() at interrupt disabled place  Joonsoo Kim  2012-06-20  1  -3/+9
* slab/mempolicy: always use local policy from interrupt context  Andi Kleen  2012-06-20  1  -1/+1
* mm, sl[aou]b: Extract common fields from struct kmem_cache  Christoph Lameter  2012-06-14  1  -40/+40
* Merge branch 'slub/cleanups' into slab/next  Pekka Enberg  2012-06-04  1  -81/+92
|\
| * slub: pass page to node_match() instead of kmem_cache_cpu structure  Christoph Lameter  2012-06-01  1  -4/+6
| * slub: Use page variable instead of c->page.  Christoph Lameter  2012-06-01  1  -7/+10
| * slub: Separate out kmem_cache_cpu processing from deactivate_slab  Christoph Lameter  2012-06-01  1  -12/+12
| * slub: Get rid of the node field  Christoph Lameter  2012-06-01  1  -19/+16
| * slub: new_slab_objects() can also get objects from partial list  Christoph Lameter  2012-06-01  1  -7/+9
| * slub: Simplify control flow in __slab_alloc()  Christoph Lameter  2012-06-01  1  -8/+6
| * slub: Acquire_slab() avoid loop  Christoph Lameter  2012-06-01  1  -13/+15
| * slub: Add frozen check in __slab_alloc  Christoph Lameter  2012-06-01  1  -0/+6
| * slub: Use freelist instead of "object" in __slab_alloc  Christoph Lameter  2012-06-01  1  -18/+20
* | Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi...  Linus Torvalds  2012-06-01  1  -10/+13
|\ \
| |/
|/|
| * slub: use __SetPageSlab function to set PG_slab flag  Joonsoo Kim  2012-05-18  1  -1/+1
| * slub: fix a memory leak in get_partial_node()  Joonsoo Kim  2012-05-18  1  -3/+6
| * slub: remove unused argument of init_kmem_cache_node()  Joonsoo Kim  2012-05-16  1  -4/+4