path: root/mm/slub.c
Commit history (most recent first; each entry shows author, date, files and lines changed)
* slub: fix bug in slub debug support (Peter Zijlstra, 2007-07-30, 1 file, -1/+1)

    We ClearSlabDebug() before the last SlabDebug() check.  Clear it later.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
* slub: add lock debugging check (Peter Zijlstra, 2007-07-30, 1 file, -0/+1)

    Ingo noticed that the SLUB code does not include the lock debugging free
    check.

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
* mm: Remove slab destructors from kmem_cache_create(). (Paul Mundt, 2007-07-20, 1 file, -3/+1)

    Slab destructors were no longer supported after Christoph's
    c59def9f222d44bb7e2f0a559f2906191a0862d7 change.  They've been BUGs for
    both slab and slub, and slob never supported them either.

    This rips out support for the dtor pointer from kmem_cache_create()
    completely and fixes up every single callsite in the kernel (there were
    about 224, not including the slab allocator definitions themselves, or
    the documentation references).

    Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* slub: fix ksize() for zero-sized pointers (Linus Torvalds, 2007-07-19, 1 file, -1/+1)

    The slab and slob allocators already did this right, but slub would call
    "get_object_page()" on the magic ZERO_SIZE_PTR, with all kinds of nasty
    end results.

    Noted by Ingo Molnar.

    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Christoph Lameter <clameter@sgi.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Fix CONFIG_SLUB_DEBUG use for CONFIG_NUMA (Christoph Lameter, 2007-07-17, 1 file, -0/+4)

    We currently cannot disable CONFIG_SLUB_DEBUG for CONFIG_NUMA.  Now that
    embedded systems start to use NUMA we may need this.

    Put an #ifdef around places where NUMA only code uses fields only valid
    for CONFIG_SLUB_DEBUG.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Move sysfs operations outside of slub_lock (Christoph Lameter, 2007-07-17, 1 file, -13/+15)

    Sysfs can do a gazillion things when called.  Make sure that we do not
    call any sysfs functions while holding the slub_lock.

    Just protect the essentials:

    1. The list of all slab caches
    2. The kmalloc_dma array
    3. The ref counters of the slabs.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Do not allocate object bit array on stack (Christoph Lameter, 2007-07-17, 1 file, -14/+25)

    The objects per slab increase with the current patches in mm since we
    allow up to order 3 allocs by default.  More patches in mm actually allow
    the use of 2M or higher sized slabs.  For slab validation we need per
    object bitmaps in order to check a slab.  We end up with up to 64k
    objects per slab, resulting in a potential requirement of 8K stack space.
    That does not look good.

    Allocate the bit arrays via kmalloc.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Slab allocators: Cleanup zeroing allocations (Christoph Lameter, 2007-07-17, 1 file, -11/+0)

    It now becomes easy to support the zeroing allocs with generic inline
    functions in slab.h.  Provide inline definitions to allow the continued
    use of kzalloc, kmem_cache_zalloc etc, but remove other definitions of
    zeroing functions from the slab allocators and util.c.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Do not use length parameter in slab_alloc() (Christoph Lameter, 2007-07-17, 1 file, -11/+9)

    We can get to the length of the object through the kmem_cache structure.
    The additional parameter does no good and causes the compiler to generate
    bad code.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Style fix up the loop to disable small slabs (Christoph Lameter, 2007-07-17, 1 file, -1/+1)

    Do proper spacing and we only need to do this in steps of 8.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* mm/slub.c: make code static (Adrian Bunk, 2007-07-17, 1 file, -3/+3)

    Signed-off-by: Adrian Bunk <bunk@stusta.de>
    Cc: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Simplify dma index -> size calculation (Christoph Lameter, 2007-07-17, 1 file, -9/+1)

    There is no need to calculate the dma slab size ourselves.  We can simply
    look up the size of the corresponding non dma slab.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: faster more efficient slab determination for __kmalloc (Christoph Lameter, 2007-07-17, 1 file, -7/+64)

    kmalloc_index is a long series of comparisons.  The attempt to replace
    kmalloc_index with something more efficient like ilog2 failed due to
    compiler issues with constant folding on gcc 3.3 / powerpc.

    kmalloc_index()'s long list of comparisons works fine for constant
    folding since all the comparisons are optimized away.  However, SLUB also
    uses kmalloc_index to determine the slab to use for the __kmalloc_xxx
    functions.  This leads to a large set of comparisons in get_slab().

    The patch here lets us get rid of that list of comparisons in get_slab():

    1. If the requested size is larger than 192 then we can simply use fls
       to determine the slab index since all larger slabs are of the
       power-of-two type.

    2. If the requested size is smaller then we cannot use fls since there
       are non power of two caches to be considered.  However, the sizes are
       in a manageable range.  So we divide the size by 8.  Then we have only
       24 possibilities left and we simply look up the kmalloc index in a
       table.

    Code size of slub.o decreases by more than 200 bytes through this patch.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
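    A standalone sketch of that lookup (the cache sizes and index values
    below are illustrative and self-consistent, not the kernel's exact
    tables): requests up to 192 bytes go through an 8-byte-step table,
    larger requests use fls() over power-of-two caches.

        #include <stdio.h>

        static const unsigned int cache_size[] = {
                0, 96, 192, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096
        };

        /* One entry per 8-byte step up to 192 bytes: (size - 1) / 8 -> index. */
        static const unsigned char size_index[24] = {
                3, 4, 5, 5, 6, 6, 6, 6,         /*   1 -  64 */
                1, 1, 1, 1, 7, 7, 7, 7,         /*  65 - 128 */
                2, 2, 2, 2, 2, 2, 2, 2          /* 129 - 192 */
        };

        static int fls_uint(unsigned int x)
        {
                int r = 0;

                while (x) {
                        r++;
                        x >>= 1;
                }
                return r;
        }

        static int get_index(unsigned int size)
        {
                if (size <= 192)
                        return size_index[(size - 1) / 8];
                return fls_uint(size - 1);      /* power-of-two caches above 192 */
        }

        int main(void)
        {
                unsigned int sizes[] = { 8, 24, 100, 192, 200, 4000 };

                for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                        printf("request %4u -> %4u-byte cache\n",
                               sizes[i], cache_size[get_index(sizes[i])]);
                return 0;
        }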
* SLUB: do proper locking during dma slab creation (Christoph Lameter, 2007-07-17, 1 file, -2/+9)

    We modify the kmalloc_cache_dma[] array without proper locking.  Do the
    proper locking and undo the dma cache creation if another processor has
    already created it.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: extract dma_kmalloc_cache from get_cache. (Christoph Lameter, 2007-07-17, 1 file, -30/+36)

    The rarely used dma functionality in get_slab() makes the function too
    complex.  The compiler begins to spill variables from the working set
    onto the stack.  The created function is only used in extremely rare
    cases so make sure that the compiler does not decide on its own to merge
    it back into get_slab().

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG (Christoph Lameter, 2007-07-17, 1 file, -6/+7)

    Add #ifdefs around data structures only needed if debugging is compiled
    into SLUB.  Add inlines to small functions to reduce code size.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Slab allocators: support __GFP_ZERO in all allocators (Christoph Lameter, 2007-07-17, 1 file, -9/+15)

    A kernel convention for many allocators is that if __GFP_ZERO is passed
    to an allocator then the allocated memory should be zeroed.  This is
    currently not supported by the slab allocators.  The inconsistency makes
    it difficult to implement in derived allocators such as in the uncached
    allocator and the pool allocators.

    In addition, the support for zeroed allocations in the slab allocators
    does not have a consistent API.  There are no zeroing allocator functions
    for NUMA node placement (kmalloc_node, kmem_cache_alloc_node).  The
    zeroing allocations are only provided for default allocs (kzalloc,
    kmem_cache_zalloc_node).  __GFP_ZERO will make zeroing universally
    available and does not require any additional functions.

    So add the necessary logic to all slab allocators to support __GFP_ZERO.

    The code is added to the hot path.  The gfp flags are on the stack and so
    the cacheline is readily available for checking if we want a zeroed
    object.

    Zeroing while allocating is now a frequent operation and we seem to be
    gradually approaching a 1-1 parity between zeroing and not zeroing
    allocs.  The current tree has 3476 uses of kmalloc vs 2731 uses of
    kzalloc.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
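    A minimal userspace sketch of the convention (stand-in names, not the
    kernel API): once the flag word is at hand, the zeroing is a single bit
    test plus a memset in the allocation path.

        #include <stdlib.h>
        #include <string.h>

        #define MY_GFP_ZERO 0x1u        /* stand-in for __GFP_ZERO */

        struct my_cache { size_t objsize; };

        /* Allocate one object; zero it only when the caller asked for it. */
        static void *cache_alloc(struct my_cache *c, unsigned int flags)
        {
                void *obj = malloc(c->objsize);

                if (obj && (flags & MY_GFP_ZERO))
                        memset(obj, 0, c->objsize);
                return obj;
        }

        int main(void)
        {
                struct my_cache cache = { .objsize = 64 };
                void *obj = cache_alloc(&cache, MY_GFP_ZERO);

                free(obj);
                return 0;
        }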
* Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics (Christoph Lameter, 2007-07-17, 1 file, -13/+16)

    Define ZERO_OR_NULL_PTR macro to be able to remove the checks from the
    allocators.  Move ZERO_SIZE_PTR related stuff into slab.h.

    Make ZERO_SIZE_PTR work for all slab allocators and get rid of the
    WARN_ON_ONCE(size == 0) that is still remaining in SLAB.

    Make slub return NULL like the other allocators if a too large memory
    segment is requested via __kmalloc.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
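    A sketch of the two definitions and how callers use them; the value 16
    matches the kernel's choice in include/linux/slab.h, but treat the
    snippet as illustrative rather than a copy of the header.

        #include <stdio.h>

        #define ZERO_SIZE_PTR       ((void *)16)
        #define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
                                     (unsigned long)ZERO_SIZE_PTR)

        int main(void)
        {
                int obj;

                /* Both NULL and ZERO_SIZE_PTR mean "nothing to free/inspect". */
                printf("%d %d %d\n", ZERO_OR_NULL_PTR(NULL),
                       ZERO_OR_NULL_PTR(ZERO_SIZE_PTR), ZERO_OR_NULL_PTR(&obj));
                return 0;
        }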
* Slab allocators: consolidate code for krealloc in mm/util.c (Christoph Lameter, 2007-07-17, 1 file, -37/+0)

    The size of a kmalloc object is readily available via ksize().  ksize is
    provided by all allocators and thus we can implement krealloc in a
    generic way.

    Implement krealloc in mm/util.c and drop slab specific implementations of
    krealloc.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
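    A userspace analogue of such a generic krealloc, with glibc's
    malloc_usable_size() standing in for ksize(); this is only a sketch of
    the idea, not the kernel code, and it ignores the new_size == 0 case.

        #include <malloc.h>     /* malloc_usable_size() (glibc) */
        #include <stdlib.h>
        #include <string.h>

        static void *generic_realloc(void *p, size_t new_size)
        {
                size_t ks;
                void *ret;

                if (!p)
                        return malloc(new_size);

                ks = malloc_usable_size(p);     /* ksize() in the kernel */
                if (new_size <= ks)
                        return p;               /* existing object is big enough */

                ret = malloc(new_size);
                if (ret) {
                        memcpy(ret, p, ks);     /* ks < new_size here */
                        free(p);
                }
                return ret;
        }

        int main(void)
        {
                char *buf = malloc(16);

                buf = generic_realloc(buf, 64);
                free(buf);
                return 0;
        }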
* SLUB Debug: fix initial object debug state of NUMA bootstrap objects (Christoph Lameter, 2007-07-17, 1 file, -1/+2)

    The function we are calling to initialize object debug state during early
    NUMA bootstrap sets up an inactive object giving it the wrong redzone
    signature.  The bootstrap nodes are active objects and should have active
    redzone signatures.

    Currently slab validation complains and reverts the object to active
    state.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: ensure that the number of objects per slab stays low for high orders (Christoph Lameter, 2007-07-17, 1 file, -2/+19)

    Currently SLUB has no provision to deal with too high page orders that
    may be specified on the kernel boot line.  If an order higher than 6 (on
    a 4k platform) is generated then we will BUG() because slabs get more
    than 65535 objects.

    Add some logic that decreases order for slabs that have too many objects.
    This allows booting with slab sizes up to MAX_ORDER.

    For example slub_min_order=10 will boot with a default slab size of 4M
    and reduce slab sizes for small object sizes to lower orders if the
    number of objects becomes too big.  Large slab sizes like that allow a
    concentration of objects of the same slab cache under as few as possible
    TLB entries and thus potentially reduce TLB pressure.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
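    A minimal sketch of that clamping logic (4k pages and the 65535-object
    limit are taken from the changelog; the code itself is illustrative, not
    the kernel implementation):

        #include <stdio.h>

        #define PAGE_SIZE   4096u
        #define MAX_OBJECTS 65535u      /* object count must fit in 16 bits */

        /* Lower the order until the slab holds an acceptable number of objects. */
        static unsigned int clamp_order(unsigned int order, unsigned int obj_size)
        {
                while (order > 0 && (PAGE_SIZE << order) / obj_size > MAX_OBJECTS)
                        order--;
                return order;
        }

        int main(void)
        {
                /* slub_min_order=10 with 8-byte objects would mean 524288 objects */
                printf("order %u\n", clamp_order(10, 8));       /* prints "order 6" */
                return 0;
        }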
* SLUB slab validation: Move tracking information alloc outside of lock (Christoph Lameter, 2007-07-17, 1 file, -10/+7)

    We currently have to do a GFP_ATOMIC allocation because the list_lock is
    already taken when we first allocate memory for tracking allocation
    information.  It would be better if we could avoid atomic allocations.

    Allocate a size of the tracking table that is usually sufficient (one
    page) before we take the list lock.  We will then only do the atomic
    allocation if we need to resize the table to become larger than a page
    (mostly only needed under large NUMA because of the tracking of cpus and
    nodes, otherwise the table stays small).

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: use list_for_each_entry for loops over all slabs (Christoph Lameter, 2007-07-17, 1 file, -38/+13)

    Use list_for_each_entry() instead of list_for_each().

    Get rid of for_all_slabs().  It had only one user.  So fold it into the
    callback.  This also gets rid of cpu_slab_flush.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: change error reporting format to follow lockdep loosely (Christoph Lameter, 2007-07-17, 1 file, -123/+154)

    Changes the error reporting format to loosely follow lockdep.

    If data corruption is detected then we generate the following lines:

        ============================================
        BUG <slab-cache>: <problem>
        --------------------------------------------

        INFO: <more information> [possibly multiple times]

        <object dump>

        FIX <slab-cache>: <remedial action>

    This also adds some more intelligence to the data corruption detection.
    It's now capable of figuring out the start and end.

    Add a comment on how to configure SLUB so that a production system may
    continue to operate even though occasional slab corruption occurs through
    a misbehaving kernel component.  See "Emergency operations" in
    Documentation/vm/slub.txt.

    [akpm@linux-foundation.org: build fix]
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: support slub_debug on by default (Christoph Lameter, 2007-07-16, 1 file, -28/+51)

    Add a new configuration variable

        CONFIG_SLUB_DEBUG_ON

    If set then the kernel will be booted by default with slab debugging
    switched on.  Similar to CONFIG_SLAB_DEBUG.  By default slab debugging is
    available but must be enabled by specifying "slub_debug" as a kernel
    parameter.

    Also add support to switch off slab debugging for a kernel that was built
    with CONFIG_SLUB_DEBUG_ON.  This works by specifying

        slub_debug=-

    as a kernel parameter.

    Dave Jones wanted this feature.
    http://marc.info/?l=linux-kernel&m=118072189913045&w=2

    [akpm@linux-foundation.org: clean up switch statement]
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
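    Example kernel command-line usage for the two cases described above
    (appended to whatever boot line the system already uses):

        slub_debug      # kernel built without CONFIG_SLUB_DEBUG_ON: enable debugging
        slub_debug=-    # kernel built with CONFIG_SLUB_DEBUG_ON=y: disable debugging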
* slub: remove useless EXPORT_SYMBOL (Christoph Lameter, 2007-07-06, 1 file, -1/+0)

    kmem_cache_open is static.  EXPORT_SYMBOL was leftover from some earlier
    time period where kmem_cache_open was usable outside of slub.

    (Fixes powerpc build error)

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: Johannes Berg <johannes@sipsolutions.net>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Make lockdep happy by not calling add_partial with interrupts enabled during bootstrap (Christoph Lameter, 2007-07-03, 1 file, -2/+6)

    If we move the local_irq_enable() to the end of the function then
    add_partial() in early_kmem_cache_node_alloc() will be called with
    interrupts disabled like during regular operations.  This makes lockdep
    happy.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Tested-by: Andre Noll <maan@systemlinux.org>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: fix behavior if the text output of list_locations overflows PAGE_SIZE (Christoph Lameter, 2007-06-24, 1 file, -2/+4)

    If slabs are allocated or freed from a large set of call sites (typical
    for the kmalloc area) then we may create more output than fits into a
    single PAGE and sysfs only gives us one page.  The output should be
    truncated.  This patch fixes the checks to do the truncation properly.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: minimum alignment fixes (Christoph Lameter, 2007-06-16, 1 file, -5/+15)

    If ARCH_KMALLOC_MINALIGN is set to a value greater than 8 (SLUB's
    smallest kmalloc cache) then SLUB may generate duplicate slabs in sysfs
    (yes again) because the object size is padded to reach
    ARCH_KMALLOC_MINALIGN.  Thus the size of the small slabs is all the same.

    No arch sets ARCH_KMALLOC_MINALIGN larger than 8 though, except mips
    which for some reason wants a 128 byte alignment.

    This patch increases the size of the smallest cache if
    ARCH_KMALLOC_MINALIGN is greater than 8.  In that case more and more of
    the smallest caches are disabled.

    If we do that then the count of the active general caches that is
    displayed on boot is not correct anymore since we may skip elements of
    the kmalloc array.  So count them separately.

    This approach was tested by Havard yesterday.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB slab validation: Alloc while interrupts are disabled must use GFP_ATOMIC (Christoph Lameter, 2007-06-16, 1 file, -1/+1)

    The data structure to manage the information gathered about functions
    allocating and freeing objects is allocated when the list_lock has
    already been taken.  We need to allocate with GFP_ATOMIC instead of
    GFP_KERNEL.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: Mel Gorman <mel@csn.ul.ie>
    Cc: Andy Whitcroft <apw@shadowen.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: return ZERO_SIZE_PTR for kmalloc(0) (Christoph Lameter, 2007-06-08, 1 file, -8/+18)

    Instead of returning the smallest available object return ZERO_SIZE_PTR.

    A ZERO_SIZE_PTR can be legitimately used as an object pointer as long as
    it is not dereferenced.  The dereference of ZERO_SIZE_PTR causes a
    distinctive fault.  kfree can handle a ZERO_SIZE_PTR in the same way as
    NULL.

    This enables functions to use zero-sized objects, e.g. with n = number of
    objects:

        objects = kmalloc(n * sizeof(object));

        for (i = 0; i < n; i++)
                objects[i].x = y;

        kfree(objects);

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: fix locking for hotplug callbacks (Christoph Lameter, 2007-06-01, 1 file, -1/+14)

    Hotplug callbacks are performed with interrupts enabled.  Slub requires
    interrupts to be disabled for flushing caches.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: Michal Piotrowski <michal.k.k.piotrowski@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Fix NUMA / SYSFS bootstrap issue (Christoph Lameter, 2007-05-31, 1 file, -0/+7)

    We need this patch in ASAP.  Patch fixes the mysterious hang that
    remained on some particular configurations with lockdep on after the
    first fix that moved the #ifdef CONFIG_SLUB_DEBUG to the right location.
    See http://marc.info/?t=117963072300001&r=1&w=2

    The kmem_cache_node cache is very special because it is needed for NUMA
    bootstrap.  Under certain conditions (like for example if lockdep is
    enabled and significantly increases the size of spinlock_t) the structure
    may become exactly the same size as one of the larger caches in the
    kmalloc array.

    That early during bootstrap we cannot perform merging properly.  The
    unique id for the kmem_cache_node cache will match one of the kmalloc
    array entries.  Sysfs will complain about a duplicate directory entry.
    All of this occurs while the console is not yet fully operational.  Thus
    boot may appear to be silently failing.

    The kmem_cache_node cache is very special.  During early bootstrap the
    main allocation function is not operational yet and so we have to run our
    own small special alloc function during early boot.  It is also special
    in that it is never freed.  We really do not want any merging on that
    cache.  Set the refcount to -1 and forbid merging of slabs that have a
    negative refcount.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB Debug: fix check for super sized slabs (>512k 64bit, >256k 32bit) (Christoph Lameter, 2007-05-23, 1 file, -1/+1)

    The check for super sized slabs, where we can no longer move the free
    pointer behind the object for debugging purposes etc, is accessing a
    field that is not set up yet.  We must use objsize here since the size of
    the slab has not been determined yet.

    The effect of this is that a global slab shrink via "slabinfo -s" will
    show errors about offsets being wrong if booted with slub_debug.
    Potentially there are other troubles with huge slabs under slub_debug
    because the calculated free pointer offset is truncated.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB Debug: Fix object size calculation (Christoph Lameter, 2007-05-23, 1 file, -1/+1)

    The object size calculation is wrong if !CONFIG_SLUB_DEBUG because the
    #ifdef CONFIG_SLUB_DEBUG is now switching off the size adjustments for
    DESTROY_BY_RCU and ctor.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Acked-by: Hugh Dickins <hugh@veritas.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Simplify debug code (Christoph Lameter, 2007-05-17, 1 file, -55/+57)

    Consolidate functionality into the #ifdef section.

    Extract tracing into one subroutine.

    Move object debug processing into the #ifdef section so that the code in
    __slab_alloc and __slab_free becomes minimal.

    Reduce the number of functions we need to provide stubs for in the
    !SLUB_DEBUG case.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Remove SLAB_CTOR_CONSTRUCTOR (Christoph Lameter, 2007-05-17, 1 file, -1/+1)

    SLAB_CTOR_CONSTRUCTOR is always specified.  No point in checking it.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Cc: Steven French <sfrench@us.ibm.com>
    Cc: Michael Halcrow <mhalcrow@us.ibm.com>
    Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Cc: Miklos Szeredi <miklos@szeredi.hu>
    Cc: Steven Whitehouse <swhiteho@redhat.com>
    Cc: Roman Zippel <zippel@linux-m68k.org>
    Cc: David Woodhouse <dwmw2@infradead.org>
    Cc: Dave Kleikamp <shaggy@austin.ibm.com>
    Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
    Cc: "J. Bruce Fields" <bfields@fieldses.org>
    Cc: Anton Altaparmakov <aia21@cantab.net>
    Cc: Mark Fasheh <mark.fasheh@oracle.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Jan Kara <jack@ucw.cz>
    Cc: David Chinner <dgc@sgi.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Do our own flags based on PG_active and PG_error (Christoph Lameter, 2007-05-17, 1 file, -14/+14)

    The atomicity when handling flags in SLUB is not necessary since both
    flags used by SLUB are not updated in a racy way.  Flag updates are
    either done during slab creation or destruction or under slab_lock.

    Some of these flags do not have the non atomic variants that we need.  So
    define our own.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: Define functions for cpu slab handling instead of using PageActive (Christoph Lameter, 2007-05-17, 1 file, -19/+38)

    Use inline functions to access the per cpu bit.  Introduce the notion of
    "freezing" a slab to make things more understandable.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
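    A userspace sketch of such helpers (bit choice, struct and names are
    illustrative, not the kernel's): the plain, non-atomic read-modify-write
    is safe only because every update happens under the slab lock or while
    no one else can see the page, as the two commits above explain.

        #include <stdio.h>

        #define SLAB_FROZEN (1UL << 0)  /* stand-in for a reused page flag bit */

        struct fake_page { unsigned long flags; };

        static inline void SetSlabFrozen(struct fake_page *page)
        {
                page->flags |= SLAB_FROZEN;             /* non-atomic on purpose */
        }

        static inline void ClearSlabFrozen(struct fake_page *page)
        {
                page->flags &= ~SLAB_FROZEN;
        }

        static inline int SlabFrozen(struct fake_page *page)
        {
                return (page->flags & SLAB_FROZEN) != 0;
        }

        int main(void)
        {
                struct fake_page page = { .flags = 0 };

                SetSlabFrozen(&page);
                printf("frozen=%d\n", SlabFrozen(&page));
                ClearSlabFrozen(&page);
                printf("frozen=%d\n", SlabFrozen(&page));
                return 0;
        }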
* Slab allocators: Drop support for destructors (Christoph Lameter, 2007-05-17, 1 file, -31/+14)

    There is no user of destructors left.  There is no reason why we should
    keep checking for destructor calls in the slab allocators.  The RFC for
    this patch was discussed at
    http://marc.info/?l=linux-kernel&m=117882364330705&w=2

    Destructors were mainly used for list management which required them to
    take a spinlock.  Taking a spinlock in a destructor is a bit risky since
    the slab allocators may run the destructors anytime they decide a slab is
    no longer needed.

    Patch drops destructor support.  Any attempt to use a destructor will
    BUG().

    Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Acked-by: Paul Mundt <lethal@linux-sh.org>
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* slub: don't confuse ctor and dtor (Hugh Dickins, 2007-05-16, 1 file, -1/+1)

    kmem_cache_create() was swapping ctor and dtor in calling
    find_mergeable(): though it caused no bug, and probably never would, even
    if destructors are retained; but fix it so as not to generate anxiety ;)

    Signed-off-by: Hugh Dickins <hugh@veritas.com>
    Cc: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: remove nr_cpu_ids hack (Christoph Lameter, 2007-05-10, 1 file, -3/+2)

    This was in SLUB in order to head off trouble while the nr_cpu_ids
    functionality was not merged.  It's merged now so there is no need to
    still have this.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* slub: support concurrent local and remote frees and allocs on a slab (Christoph Lameter, 2007-05-10, 1 file, -36/+118)

    Avoid atomic overhead in slab_alloc and slab_free.

    SLUB needs to use the slab_lock for the per cpu slabs to synchronize with
    potential kfree operations.  This patch avoids that need by moving all
    free objects onto a lockless_freelist.  The regular freelist continues to
    exist and will be used to free objects.  So while we consume the
    lockless_freelist the regular freelist may build up objects.

    If we are out of objects on the lockless_freelist then we may check the
    regular freelist.  If it has objects then we move those over to the
    lockless_freelist and do this again.  There are significant savings in
    terms of atomic operations that have to be performed.

    We can even free directly to the lockless_freelist if we know that we are
    running on the same processor.  So this speeds up short lived objects.
    They may be allocated and freed without taking the slab_lock.  This is
    particularly good for netperf.

    In order to maximize the effect of the new faster hotpath we extract the
    hottest performance pieces into inlined functions.  These are then
    inlined into kmem_cache_alloc and kmem_cache_free.  So hotpath allocation
    and freeing no longer requires a subroutine call within SLUB.

    [I am not sure that it is worth doing this because it changes the easy to
    read structure of slub just to reduce atomic ops.  However, there is
    someone out there with a benchmark on 4 way and 8 way processor systems
    that seems to show a 5% regression vs. Slab.  It seems that the
    regression is due to increased atomic operations use vs. SLAB in SLUB.
    I wonder if this is applicable or discernable at all in a real workload?]

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
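    A single-threaded toy model of the two freelists (all real
    synchronization, per-cpu handling and page bookkeeping are omitted; names
    mirror the changelog, not the actual SLUB code): the hot path pops from
    lockless_freelist, and only when that is empty does the slow path, which
    in the real code takes slab_lock, pull over whatever accumulated on the
    regular freelist.

        #include <stdio.h>

        struct object { struct object *next; };

        struct toy_slab {
                struct object *lockless_freelist;   /* consumed without locking */
                struct object *freelist;            /* refilled by frees */
        };

        static void *toy_alloc(struct toy_slab *s)
        {
                struct object *obj = s->lockless_freelist;

                if (!obj) {
                        /* slow path: the real code takes slab_lock here */
                        s->lockless_freelist = s->freelist;
                        s->freelist = NULL;
                        obj = s->lockless_freelist;
                        if (!obj)
                                return NULL;         /* slab exhausted */
                }
                s->lockless_freelist = obj->next;
                return obj;
        }

        int main(void)
        {
                struct object objs[3] = { { &objs[1] }, { &objs[2] }, { NULL } };
                struct toy_slab s = { .lockless_freelist = NULL, .freelist = &objs[0] };

                for (int i = 0; i < 4; i++)
                        printf("alloc -> %p\n", toy_alloc(&s));
                return 0;
        }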
* Move remote node draining out of slab allocators (Christoph Lameter, 2007-05-09, 1 file, -84/+0)

    Currently the slab allocators contain callbacks into the page allocator
    to perform the draining of pagesets on remote nodes.  This requires SLUB
    to have a whole subsystem in order to be compatible with SLAB.  Moving
    node draining out of the slab allocators avoids a section of code in
    SLUB.

    Move the node draining so that it is done when the vm statistics are
    updated.  At that point we are already touching all the cachelines with
    the pagesets of a processor.

    Add an expire counter there.  If we have to update per zone or global vm
    statistics then assume that the pageset will require subsequent draining.
    The expire counter will be decremented on each vm stats update pass until
    it reaches zero.  Then we will drain one batch from the pageset.  The
    draining will cause vm counter updates which will then cause another
    expiration until the pcp is empty.  So we will drain a batch every 3
    seconds.

    Note that remote node draining is a somewhat esoteric feature that is
    required on large NUMA systems because otherwise significant portions of
    system memory can become trapped in pcp queues.  The number of pcp is
    determined by the number of processors and nodes in a system.  A system
    with 4 processors and 2 nodes has 8 pcps which is okay.  But a system
    with 1024 processors and 512 nodes has 512k pcps with a high potential
    for large amounts of memory being caught in them.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
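    A toy model of the expire counter (the batch handling and the 3-pass
    period follow the changelog; everything else is simplified and the code
    is not the kernel implementation):

        #include <stdio.h>

        struct pcp { int count, batch, expire; };

        /* One vm-stats update pass: re-arm on counter activity, otherwise
         * count down and drain one batch when the counter hits zero. */
        static void vmstat_pass(struct pcp *p, int counters_changed)
        {
                if (counters_changed) {
                        p->expire = 3;
                } else if (p->expire && --p->expire == 0 && p->count) {
                        int n = p->count < p->batch ? p->count : p->batch;

                        p->count -= n;  /* give one batch back to the buddy lists */
                        p->expire = 3;  /* the drain itself changed counters */
                }
        }

        int main(void)
        {
                struct pcp p = { .count = 10, .batch = 4, .expire = 0 };

                vmstat_pass(&p, 1);     /* something touched this pageset */
                for (int i = 0; i < 12; i++) {
                        vmstat_pass(&p, 0);
                        printf("pass %2d: count=%d expire=%d\n", i, p.count, p.expire);
                }
                return 0;
        }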
* vmstat: use our own timer events (Christoph Lameter, 2007-05-09, 1 file, -1/+0)

    vmstat is currently using the cache reaper to periodically bring the
    statistics up to date.  The cache reaper only exists in SLUB as a way to
    provide compatibility with SLAB.  This patch removes the vmstat calls
    from the slab allocators and provides its own handling.

    The advantage is also that we can use a different frequency for the
    updates.  Refreshing vm stats is a pretty fast job so we can run this
    every second and stagger this by only one tick.  This will lead to some
    overlap in large systems.  F.e. a system running at 250 HZ with 1024
    processors will have 4 vm updates occurring at once.

    However, the vm stats update only accesses per node information.  It is
    only necessary to stagger the vm statistics updates per processor in each
    node.  Vm counter updates occurring on distant nodes will not cause
    cacheline contention.

    We could implement an alternate approach that runs the first processor on
    each node at the second and then each of the other processors on a node
    on a subsequent tick.  That may be useful to keep a large amount of the
    second free of timer activity.  Maybe the timer folks will have some
    feedback on this one?

    [jirislaby@gmail.com: add missing break]
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
    Cc: Oleg Nesterov <oleg@tv-sign.ru>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Add suspend-related notifications for CPU hotplug (Rafael J. Wysocki, 2007-05-09, 1 file, -0/+2)

    Since nonboot CPUs are now disabled after tasks and devices have been
    frozen and the CPU hotplug infrastructure is used for this purpose, we
    need special CPU hotplug notifications that will help the
    CPU-hotplug-aware subsystems distinguish normal CPU hotplug events from
    CPU hotplug events related to a system-wide suspend or resume operation
    in progress.  This patch introduces such notifications and causes them to
    be used during suspend and resume transitions.  It also changes all of
    the CPU-hotplug-aware subsystems to take these notifications into
    consideration (for now they are handled in the same way as the
    corresponding "normal" ones).

    [oleg@tv-sign.ru: cleanups]
    Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
    Cc: Gautham R Shenoy <ego@in.ibm.com>
    Cc: Pavel Machek <pavel@ucw.cz>
    Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* krealloc: fix kerneldoc comments (Pekka J Enberg, 2007-05-09, 1 file, -1/+0)

    No "blank" (or "*") line is allowed between the function name and the
    lines for its parameter(s).

    Cc: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Cc: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: rework slab order determination (Christoph Lameter, 2007-05-09, 1 file, -14/+52)

    In some cases SLUB is uselessly creating slabs that are larger than
    slub_max_order.  Also the layout of some of the slabs was not
    satisfactory.

    Go to an iterative approach.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: include lifetime stats and sets of cpus / nodes in tracking output (Christoph Lameter, 2007-05-09, 1 file, -15/+81)

    We have information about how long an object existed and about the nodes
    and cpus where the allocations and frees took place.  Add that
    information to the tracking output in /sys/slab/xx/alloc_calls and
    /sys/slab/free_calls.

    This will then enable slabinfo to output nice reports like this:

        christoph@qirst:~/slub$ ./slabinfo kmalloc-128

        Slabcache: kmalloc-128      Aliases:  0 Order :  0

        Sizes (bytes)     Slabs          Debug                Memory
        ------------------------------------------------------------------------
        Object :     128  Total  :   12  Sanity Checks : On   Total:    49152
        SlabObj:     200  Full   :    7  Redzoning     : On   Used :    24832
        SlabSiz:    4096  Partial:    4  Poisoning     : On   Loss :    24320
        Loss   :      72  CpuSlab:    1  Tracking      : On   Lalig:    13968
        Align  :       8  Objects:   20  Tracing       : Off  Lpadd:     1152

        kmalloc-128 has no kmem_cache operations

        kmalloc-128: Kernel object allocation
        -----------------------------------------------------------------------
            6 param_sysfs_setup+0x71/0x130 age=284512/284512/284512 pid=1 nodes=0-1,3
           11 percpu_populate+0x39/0x80 age=283914/284428/284512 pid=1 nodes=0
           21 __register_chrdev_region+0x31/0x170 age=282896/284347/284473 pid=1-1705 nodes=0-2
            1 sys_inotify_init+0x76/0x1c0 age=283423 pid=1004 nodes=0
           19 as_get_io_context+0x32/0xd0 age=6/247567/283988 pid=1-11782 nodes=0,2
           10 ida_pre_get+0x4a/0x80 age=277666/283773/284526 pid=0-2177 nodes=0,2
           24 kobject_kset_add_dir+0x37/0xb0 age=282727/283860/284472 pid=1-1723 nodes=0-2
            1 acpi_ds_build_internal_buffer_obj+0xd3/0x11d age=284508 pid=1 nodes=0
           24 con_insert_unipair+0xd7/0x110 age=284438/284438/284438 pid=1 nodes=0,2
            1 uart_open+0x2d2/0x4b0 age=283896 pid=1 nodes=0
           26 dma_pool_create+0x73/0x1a0 age=282762/282833/282916 pid=1705-1723 nodes=0
            1 neigh_table_init_no_netlink+0xd2/0x210 age=284461 pid=1 nodes=0
            2 neigh_parms_alloc+0x2b/0xe0 age=284410/284411/284412 pid=1 nodes=2
            2 neigh_resolve_output+0x1e1/0x280 age=276289/276291/276293 pid=0-2443 nodes=0
            1 netlink_kernel_create+0x90/0x170 age=284472 pid=1 nodes=0
            4 xt_alloc_table_info+0x39/0xf0 age=283958/283958/283959 pid=1 nodes=1
            3 fn_hash_insert+0x473/0x720 age=277653/277661/277666 pid=2177-2185 nodes=0
            1 get_mtrr_state+0x285/0x2a0 age=284526 pid=0 nodes=0
            1 cacheinfo_cpu_callback+0x26d/0x3e0 age=284458 pid=1 nodes=0
           29 kernel_param_sysfs_setup+0x25/0x90 age=284511/284511/284512 pid=1 nodes=0-1,3
            5 process_zones+0x5e/0x170 age=284546/284546/284546 pid=0 nodes=0
            1 drm_core_init+0x48/0x160 age=284421 pid=1 nodes=2

        kmalloc-128: Kernel object freeing
        ------------------------------------------------------------------------
          163 <not-available> age=4295176847 pid=0 nodes=0-3
            1 __vunmap+0x6e/0xf0 age=282907 pid=1723 nodes=0
           28 free_as_io_context+0x12/0x90 age=9243/262197/283474 pid=42-11754 nodes=0
            1 acpi_get_object_info+0x1b7/0x1d4 age=284475 pid=1 nodes=0
            1 do_acpi_find_child+0x45/0x4e age=284475 pid=1 nodes=0

        NUMA nodes       :    0    1    2    3
        ------------------------------------------
        All slabs             7    2    2    1
        Partial slabs         2    2    0    0

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* SLUB: add CONFIG_SLUB_DEBUG (Christoph Lameter, 2007-05-09, 1 file, -75/+114)

    CONFIG_SLUB_DEBUG can be used to switch off the debugging and sysfs
    components of SLUB.  Thus SLUB will be able to replace SLOB.  SLUB can
    arrange objects in a denser way than SLOB and the code size should be
    minimal without debugging and sysfs support.

    Note that CONFIG_SLUB_DEBUG is materially different from
    CONFIG_SLAB_DEBUG.  CONFIG_SLAB_DEBUG is used to enable slab debugging in
    SLAB.  SLUB enables debugging via a boot parameter.  SLUB debug code
    should always be present.

    CONFIG_SLUB_DEBUG can be modified in the embedded config section.

    Signed-off-by: Christoph Lameter <clameter@sgi.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>