| author | Joonsoo Kim <iamjoonsoo.kim@lge.com> | 2014-10-09 15:26:24 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-10-09 22:25:51 -0400 |
| commit | 12220dea07f1ac6ac717707104773d771c3f3077 | |
| tree | 5d12f754560c7b06e6d1bda9cf29000765fe921f | |
| parent | 423c929cbbecc60e9c407f9048e58f5422f7995d | |
mm/slab: support slab merge
Slab merge is a good feature for reducing fragmentation. If a newly created slab cache has a similar size and properties to an existing one, this feature reuses the existing cache rather than creating a new one. As a result, objects are packed into fewer slabs and fragmentation is reduced.
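As a rough illustration of the idea (not code from this patch), consider two caches created with the same object size, default flags, and no constructor; with slab merging they become candidates for sharing one underlying cache. The module scaffolding, cache names, and the 192-byte size below are made up for the example.

```c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

/*
 * Illustrative sketch only: the cache names and object size are made up.
 * Both caches ask for 192-byte objects with default flags and no
 * constructor, so with slab merging they may end up backed by a single
 * underlying kmem_cache rather than two separate ones.
 */
static struct kmem_cache *cache_a;
static struct kmem_cache *cache_b;

static int __init merge_demo_init(void)
{
	cache_a = kmem_cache_create("merge_demo_a", 192, 0, 0, NULL);
	cache_b = kmem_cache_create("merge_demo_b", 192, 0, 0, NULL);
	if (!cache_a || !cache_b) {
		if (cache_b)
			kmem_cache_destroy(cache_b);
		if (cache_a)
			kmem_cache_destroy(cache_a);
		return -ENOMEM;
	}
	return 0;
}

static void __exit merge_demo_exit(void)
{
	kmem_cache_destroy(cache_b);
	kmem_cache_destroy(cache_a);
}

module_init(merge_demo_init);
module_exit(merge_demo_exit);
MODULE_LICENSE("GPL");
```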
Below is the result of my testing.
* After boot, sleep 20; cat /proc/meminfo | grep Slab
<Before>
Slab: 25136 kB
<After>
Slab: 24364 kB
We can save about 3% of the memory used by slab.
To support this feature in SLAB, we need SLAB-specific implementations of kmem_cache_flags() and __kmem_cache_alias(), because the SLUB versions of these functions contain SLUB-specific handling of debug flags and of the object size adjustment.
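For context, here is a simplified sketch of the caller side, assuming the merge logic that the parent commit moved into mm/slab_common.c: the common creation path first asks __kmem_cache_alias() for a reusable cache and only builds a new one if that fails. This is an illustration, not the actual mm/slab_common.c code; create_new_cache() is a made-up stand-in for the rest of the creation path.

```c
#include <linux/slab.h>
#include "slab.h"	/* mm/slab.h: declares __kmem_cache_alias() */

/* Made-up stand-in for the rest of the real creation path. */
static struct kmem_cache *create_new_cache(const char *name, size_t size,
					   size_t align, unsigned long flags,
					   void (*ctor)(void *));

/* Simplified illustration of the common caller, not actual kernel code. */
struct kmem_cache *kmem_cache_create(const char *name, size_t size,
				     size_t align, unsigned long flags,
				     void (*ctor)(void *))
{
	struct kmem_cache *s;

	/* Try to reuse a compatible existing cache first. */
	s = __kmem_cache_alias(name, size, align, flags, ctor);
	if (s)
		return s;	/* merged: __kmem_cache_alias() bumped the refcount */

	/* No mergeable cache found: build a brand new one. */
	return create_new_cache(name, size, align, flags, ctor);
}
```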
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--  mm/slab.c  | 26 ++++++++++++++++++++++++++
-rw-r--r--  mm/slab.h  |  2 +-
2 files changed, 27 insertions(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index f989af87b72c..328233a724af 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2104,6 +2104,32 @@ static int __init_refok setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
 	return 0;
 }
 
+unsigned long kmem_cache_flags(unsigned long object_size,
+	unsigned long flags, const char *name,
+	void (*ctor)(void *))
+{
+	return flags;
+}
+
+struct kmem_cache *
+__kmem_cache_alias(const char *name, size_t size, size_t align,
+		   unsigned long flags, void (*ctor)(void *))
+{
+	struct kmem_cache *cachep;
+
+	cachep = find_mergeable(size, align, flags, name, ctor);
+	if (cachep) {
+		cachep->refcount++;
+
+		/*
+		 * Adjust the object sizes so that we clear
+		 * the complete object on kzalloc.
+		 */
+		cachep->object_size = max_t(int, cachep->object_size, size);
+	}
+	return cachep;
+}
+
 /**
  * __kmem_cache_create - Create a cache.
  * @cachep: cache management descriptor
diff --git a/mm/slab.h b/mm/slab.h
index c44d28b60609..50d29d716db4 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -92,7 +92,7 @@ struct mem_cgroup;
 int slab_unmergeable(struct kmem_cache *s);
 struct kmem_cache *find_mergeable(size_t size, size_t align,
 	unsigned long flags, const char *name, void (*ctor)(void *));
-#ifdef CONFIG_SLUB
+#ifndef CONFIG_SLOB
 struct kmem_cache *
 __kmem_cache_alias(const char *name, size_t size, size_t align,
 	unsigned long flags, void (*ctor)(void *));
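A note on the mm/slab.h hunk: the guard changes from "SLUB only" to "everything except SLOB", so both SLAB and SLUB now see the real declarations while SLOB keeps its inline stubs. The #else branch is outside this hunk, so the sketch below is a hedged reconstruction of how the guarded region is expected to look; the exact wording in the real header may differ.

```c
/*
 * Hedged reconstruction of the guarded region in mm/slab.h after this
 * patch (the #else branch is not part of the hunk above, so details may
 * differ): SLAB and SLUB get real declarations, SLOB gets no-op stubs.
 */
#ifndef CONFIG_SLOB
struct kmem_cache *
__kmem_cache_alias(const char *name, size_t size, size_t align,
		   unsigned long flags, void (*ctor)(void *));

unsigned long kmem_cache_flags(unsigned long object_size,
	unsigned long flags, const char *name, void (*ctor)(void *));
#else
static inline struct kmem_cache *
__kmem_cache_alias(const char *name, size_t size, size_t align,
		   unsigned long flags, void (*ctor)(void *))
{
	return NULL;
}

static inline unsigned long kmem_cache_flags(unsigned long object_size,
	unsigned long flags, const char *name, void (*ctor)(void *))
{
	return flags;
}
#endif
```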