path: root/kernel/cpuset.c
Commit log (most recent first). Each entry lists the commit subject, author, date, and diffstat (files changed, lines -removed/+added).
* [PATCH] cpusets: re-enable "dynamic sched domains" (John Hawkes, 2005-09-07; 1 file, -12/+0)

Revert the hack introduced last week.

Signed-off-by: John Hawkes <hawkes@sgi.com>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] cpusets: confine oom_killer to mem_exclusive cpuset (Paul Jackson, 2005-09-07; 1 file, -0/+33)

Now the real motivation for this cpuset mem_exclusive patch series seems trivial. This patch keeps a task in or under one mem_exclusive cpuset from provoking an oom kill of a task under a non-overlapping mem_exclusive cpuset. Since only interrupt and GFP_ATOMIC allocations are allowed to escape mem_exclusive containment, there is little to gain from oom killing a task under a non-overlapping mem_exclusive cpuset, as almost all kernel and user memory allocation must come from disjoint memory nodes.

This patch enables configuring a system so that a runaway job under one mem_exclusive cpuset cannot cause the killing of a job in another such cpuset that might be using very high compute and memory resources for a prolonged time.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
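Editor's note: the check described above amounts to asking whether the allocating task and the OOM candidate share any memory nodes once each is widened to its nearest mem_exclusive ancestor cpuset. The following is a hedged sketch of that idea, not the actual kernel code: the helper names (nearest_exclusive_ancestor(), is_mem_exclusive()), the task->cpuset layout, and the omission of cpuset_sem locking are all simplifications/assumptions based on the 2.6.13-era cpuset code.

    /* Do 'current' and candidate 'p' share memory nodes once widened to
     * their nearest mem_exclusive ancestor cpusets?  If not, killing 'p'
     * is unlikely to free memory that 'current' can use. */
    static const struct cpuset *nearest_exclusive_ancestor(const struct cpuset *cs)
    {
        while (cs->parent && !is_mem_exclusive(cs))   /* assumed helper */
            cs = cs->parent;
        return cs;
    }

    int cpuset_excl_nodes_overlap(const struct task_struct *p)
    {
        const struct cpuset *cs1 = nearest_exclusive_ancestor(current->cpuset);
        const struct cpuset *cs2 = nearest_exclusive_ancestor(p->cpuset);

        /* Disjoint mems_allowed => the OOM scan skips this candidate. */
        return nodes_intersects(cs1->mems_allowed, cs2->mems_allowed);
    }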
* [PATCH] cpusets: formalize intermediate GFP_KERNEL containment (Paul Jackson, 2005-09-07; 1 file, -8/+72)

This patch makes use of the previously underutilized cpuset flag 'mem_exclusive' to provide what amounts to another layer of memory placement resolution. With this patch, there are now the following four layers of memory placement available:

 1) The whole system (interrupt and GFP_ATOMIC allocations can use this),
 2) The nearest enclosing mem_exclusive cpuset (GFP_KERNEL allocations can use this),
 3) The current task's cpuset (GFP_USER allocations are constrained to here), and
 4) Specific node placement, using mbind and set_mempolicy.

These nest: each layer is a subset (same or within) of the previous. Layer (2) above is new with this patch.

The call used to check whether a zone (its node, actually) is in a cpuset (in its mems_allowed, actually) is extended to take a gfp_mask argument, and its logic is extended, in the case that __GFP_HARDWALL is not set in the flag bits, to look up the cpuset hierarchy for the nearest enclosing mem_exclusive cpuset, to determine if placement is allowed. The definition of GFP_USER, which used to be identical to GFP_KERNEL, was changed to also set the __GFP_HARDWALL bit, in the previous cpuset_gfp_hardwall_flag patch.

GFP_ATOMIC and GFP_KERNEL allocations will stay within the current task's cpuset, so long as any node therein is not too tight on memory, but will escape to the larger layer if need be.

The intended use is to allow something like a batch manager to handle several jobs, each job in its own cpuset, but using common kernel memory for caches and such. Swapper and oom_kill activity is also constrained to Layer (2). A task in or below one mem_exclusive cpuset should not cause swapping on nodes in another non-overlapping mem_exclusive cpuset, nor provoke oom_killing of a task in another such cpuset. Heavy use of kernel memory for i/o caching and such by one job should not impact the memory available to jobs in other non-overlapping mem_exclusive cpusets.

This patch enables providing hardwall, inescapable cpusets for memory allocations of each job, while sharing kernel memory allocations between several jobs, in an enclosing mem_exclusive cpuset.

Like Dinakar's earlier patch enabling the administration of sched domains using the cpu_exclusive flag, this patch also provides a useful meaning to a cpuset flag that had previously done little other than restrict what cpuset configurations were allowed.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
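Editor's note: a hedged sketch of the layered check described above, written against the 2.6.13-era interfaces (gfp flags were still a plain unsigned int). The is_mem_exclusive() helper, the task->cpuset pointer, and the function name are assumptions; the real zone-allowed check also takes cpuset_sem around the ancestor walk, which is omitted here for brevity.

    /* May this zone's node be used for an allocation with these flags? */
    int cpuset_zone_allowed_sketch(struct zone *z, unsigned int gfp_mask)
    {
        int node = z->zone_pgdat->node_id;
        const struct cpuset *cs;

        if (in_interrupt())
            return 1;                       /* layer 1: whole system */
        if (!(gfp_mask & __GFP_WAIT))
            return 1;                       /* GFP_ATOMIC: any node */
        if (node_isset(node, current->mems_allowed))
            return 1;                       /* layer 3: task's own cpuset */
        if (gfp_mask & __GFP_HARDWALL)
            return 0;                       /* GFP_USER: no escape allowed */

        /* layer 2: nearest enclosing mem_exclusive cpuset (GFP_KERNEL) */
        for (cs = current->cpuset; cs->parent && !is_mem_exclusive(cs); )
            cs = cs->parent;                /* is_mem_exclusive(): assumed helper */
        return node_isset(node, cs->mems_allowed);
    }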
* [PATCH] completely disable cpu_exclusive sched domain (Paul Jackson, 2005-08-26; 1 file, -0/+13)

At the suggestion of Nick Piggin and Dinakar, totally disable the facility to allow cpu_exclusive cpusets to define dynamic sched domains in Linux 2.6.13, in order to avoid problems first reported by John Hawkes (corrupt sched data structures and kernel oops).

This has been built for ppc64, i386, ia64, x86_64, sparc and alpha. It has been built, booted and tested for cpuset functionality on an SN2 (ia64).

Dinakar or Nick - could you verify that it for sure does avoid the problems Hawkes reported? Hawkes is out of town, and I don't have the recipe to reproduce what he found.

Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] undo partial cpu_exclusive sched domain disabling (Paul Jackson, 2005-08-26; 1 file, -19/+0)

The partial disabling of Dinakar's new facility to allow cpu_exclusive cpusets to define dynamic sched domains doesn't go far enough. At the suggestion of Nick Piggin and Dinakar, let us instead totally disable this facility for 2.6.13, in order to avoid problems first reported by John Hawkes (corrupt sched data structures and kernel oops).

This patch removes the partial disabling code in 2.6.13-rc7, in anticipation of the next patch, which will totally disable it instead.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] cpu_exclusive sched domains build fix (Paul Jackson, 2005-08-24; 1 file, -1/+3)

As reported by Paul Mackerras <paulus@samba.org>, the previous patch "cpu_exclusive sched domains fix" broke the ppc64 build with CONFIG_CPUSETS, yielding error messages:

    kernel/cpuset.c: In function 'update_cpu_domains':
    kernel/cpuset.c:648: error: invalid lvalue in unary '&'
    kernel/cpuset.c:648: error: invalid lvalue in unary '&'

On some architectures, node_to_cpumask() is a function returning a cpumask_t, but for_each_cpu_mask() requires an lvalue mask. The following patch fixes this build failure by making a copy of the cpumask_t on the stack.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
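Editor's note: a minimal sketch of the kind of fix described above. The surrounding helper function and its name are purely illustrative; the point is the stack copy of the node_to_cpumask() result so that for_each_cpu_mask() has an lvalue to work on.

    /* Collect the CPUs of a node; illustrative only. */
    static cpumask_t cpus_of_node_sketch(int node)
    {
        cpumask_t nodemask = node_to_cpumask(node);  /* stack copy: an lvalue */
        cpumask_t result = CPU_MASK_NONE;
        int cpu;

        for_each_cpu_mask(cpu, nodemask)             /* now legal to take &nodemask */
            cpu_set(cpu, result);
        return result;
    }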
* [PATCH] cpu_exclusive sched domains on partial nodes temp fix (Paul Jackson, 2005-08-23; 1 file, -0/+17)

This keeps the kernel/cpuset.c routine update_cpu_domains() from invoking the sched.c routine partition_sched_domains() if the cpuset in question doesn't fall on node boundaries.

I have boot tested this on an SN2, and with the help of a couple of ad hoc printk's, determined that it does indeed avoid calling the partition_sched_domains() routine on partial nodes. I did not directly verify that this avoids setting up bogus sched domains or avoids the oops that Hawkes saw.

This patch imposes a silent artificial constraint on which cpusets can be used to define dynamic sched domains. It should allow proceeding with this new feature in 2.6.13 for the configurations in which it is useful (node-aligned sched domains) while avoiding the attempt to set up sched domains in the less useful cases that can cause the kernel corruption and oops.

Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Dinakar Guniguntala <dino@in.ibm.com>
Acked-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
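Editor's note: one way to express the "falls on node boundaries" test described above is sketched below. The function name is made up for illustration and the actual kernel check may differ in detail; the idea is that a cpuset qualifies only if its cpus_allowed covers every node it touches completely.

    /* Does this cpumask consist of whole nodes only (no partial nodes)? */
    static int cpus_match_whole_nodes(const cpumask_t *cpus)
    {
        cpumask_t covered = CPU_MASK_NONE;
        int node;

        for_each_online_node(node) {
            cpumask_t nodemask = node_to_cpumask(node);

            if (cpus_intersects(*cpus, nodemask))
                cpus_or(covered, covered, nodemask);
        }
        /* Equal only if no node is partially included in *cpus. */
        return cpus_equal(covered, *cpus);
    }

In this sketch, update_cpu_domains() would simply return early when cpus_match_whole_nodes() reports a partial node, which is the silent constraint the commit message describes.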
* [PATCH] cpuset release ABBA deadlock fix (Paul Jackson, 2005-08-09; 1 file, -20/+48)

Fix a possible cpuset_sem ABBA deadlock if 'notify_on_release' is set.

For a particular usage pattern, creating and destroying cpusets fairly frequently using notify_on_release, on a very large system, this deadlock can be seen every few days. If you are not using the cpuset notify_on_release feature, you will never see this deadlock.

The existing code, on task exit (or cpuset deletion) did:

    get cpuset_sem
    if cpuset marked notify_on_release and is ready to release:
        compute cpuset path relative to /dev/cpuset mount point
        call_usermodehelper() forks /sbin/cpuset_release_agent with path
    drop cpuset_sem

Unfortunately, the fork in call_usermodehelper can allocate memory, and allocating memory can require cpuset_sem, if the mems_generation values changed in the interim. This results in an ABBA deadlock, trying to obtain cpuset_sem when it is already held by the current task.

To fix this, I put the cpuset path (which must be computed while holding cpuset_sem) in a temporary buffer, to be used in the call_usermodehelper call of /sbin/cpuset_release_agent only _after_ dropping cpuset_sem. So the new logic is:

    get cpuset_sem
    if cpuset marked notify_on_release and is ready to release:
        compute cpuset path relative to /dev/cpuset mount point
        stash path in kmalloc'd buffer
    drop cpuset_sem
    call_usermodehelper() forks /sbin/cpuset_release_agent with path
    free path

The sharp-eyed reader might notice that this patch does not contain any calls to kmalloc. The existing code in the check_for_release() routine was already kmalloc'ing a buffer to hold the cpuset path. In the old code, it just held the buffer for a few lines, over the cpuset_release_agent() call that in turn invoked call_usermodehelper(). In the new code, with this patch applied, it returns that buffer via the new char **ppathbuf parameter, for later use and freeing in cpuset_release_agent(), which is called after cpuset_sem is dropped.

Whereas the old code had just one call to cpuset_release_agent(), right in the check_for_release() routine, the new code has three calls to cpuset_release_agent(), from the various places that a cpuset can be released.

This patch has been built and booted on SN2, and passed a stress test that previously hit the deadlock within a few seconds.

Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
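Editor's note: a hedged C sketch of the new call ordering described above. The argv/envp setup and variable names are simplified and illustrative; the key point is that call_usermodehelper(), which may allocate memory, runs only after cpuset_sem has been dropped, with the kmalloc'd path handed over via the ppathbuf parameter.

    /* Invoked only after cpuset_sem has been released. */
    static void cpuset_release_agent(char *pathbuf)
    {
        char *argv[3], *envp[3];

        if (!pathbuf)
            return;
        argv[0] = "/sbin/cpuset_release_agent";
        argv[1] = pathbuf;
        argv[2] = NULL;
        envp[0] = "HOME=/";
        envp[1] = "PATH=/sbin:/bin:/usr/sbin:/usr/bin";
        envp[2] = NULL;
        call_usermodehelper(argv[0], argv, envp, 0);  /* fork may allocate memory */
        kfree(pathbuf);
    }

    /* Caller side (task exit or cpuset removal); 'cs' is the cpuset in question: */
    char *pathbuf = NULL;

    down(&cpuset_sem);
    check_for_release(cs, &pathbuf);     /* kmallocs and fills pathbuf if needed */
    up(&cpuset_sem);
    cpuset_release_agent(pathbuf);       /* safe: cpuset_sem is no longer held */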
* [PATCH] kernel/cpuset.c: add kerneldoc, fix typos (Randy Dunlap, 2005-07-27; 1 file, -7/+19)

Add kerneldoc to kernel/cpuset.c.
Fix cpuset typos in init/Kconfig.

Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
Acked-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
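Editor's note: for reference, the kerneldoc comment format that such a patch adds looks like the following. This particular function and its comment are a generic illustration, not one of the comments actually added by the patch.

    /**
     * cpuset_example_op - one-line summary of what the function does
     * @cs: the cpuset being operated on
     * @buf: user-supplied buffer holding the new value
     *
     * Longer description of behaviour, locking requirements and return
     * value, e.g. "Must be called with cpuset_sem held."
     */
    static int cpuset_example_op(struct cpuset *cs, const char *buf)
    {
        /* ... */
        return 0;
    }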
* [PATCH] Dynamic sched domains: cpuset changes (Dinakar Guniguntala, 2005-06-25; 1 file, -13/+76)

Adds the core update_cpu_domains code and updates the cpusets documentation.

Signed-off-by: Dinakar Guniguntala <dino@in.ibm.com>
Acked-by: Paul Jackson <pj@sgi.com>
Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
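Editor's note: the core idea of update_cpu_domains(), sketched under the assumption that partition_sched_domains() in this era took two cpumask pointers describing the two partitions. The handling of sibling exclusive cpusets, error cases, cpu-hotplug interaction and the is_cpu_exclusive() helper are assumptions or omissions, so treat this as illustrative only.

    /* Callers are assumed to hold cpuset_sem. */
    static void update_cpu_domains_sketch(struct cpuset *cs)
    {
        struct cpuset *parent = cs->parent;
        cpumask_t cspan = cs->cpus_allowed;
        cpumask_t pspan;

        if (!parent || !is_cpu_exclusive(cs))   /* assumed helper */
            return;

        /* The parent's sched domain keeps the CPUs not claimed exclusively
         * by the child; the child's CPUs form their own sched domain. */
        cpus_andnot(pspan, parent->cpus_allowed, cspan);

        partition_sched_domains(&pspan, &cspan);
    }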
* [PATCH] remove duplicate get_dentry functions in various places (Christoph Hellwig, 2005-06-23; 1 file, -7/+1)

Various filesystem drivers have grown a get_dentry() function that's a duplicate of lookup_one_len, except that it doesn't take a maximum length argument and doesn't check for \0 or / in the passed-in filename.

Switch all these places to use lookup_one_len.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Greg KH <greg@kroah.com>
Cc: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
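Editor's note: a short usage sketch of the conversion described above, assuming the caller already holds the parent directory's i_sem (as lookup_one_len() required in this era). The variable names and the surrounding error handling are illustrative.

    /* Where a driver used to call its own get_dentry(name): */
    struct dentry *dentry;

    dentry = lookup_one_len(name, parent, strlen(name));  /* 'parent' is a struct dentry * */
    if (IS_ERR(dentry))
        return PTR_ERR(dentry);
    /* ... use dentry, then dput(dentry) when done ... */

Besides removing duplicated code, this gets the length check and the rejection of '\0' and '/' in the name for free.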
* [PATCH] cpuset exit NULL dereference fix (Paul Jackson, 2005-05-27; 1 file, -5/+19)

There is a race in the kernel cpuset code between the code to handle notify_on_release and the code to remove a cpuset. The notify_on_release code can end up trying to access a cpuset that has been removed. In the most common case, this causes a NULL pointer dereference from the routine cpuset_path. However all manner of bad things are possible, in theory at least.

The existing code decrements the cpuset use count, and if the count goes to zero, processes the notify_on_release request, if appropriate. However, once the count goes to zero, unless we are holding the global cpuset_sem semaphore, there is nothing to stop another task from immediately removing the cpuset entirely, and recycling its memory.

The obvious fix would be to always hold the cpuset_sem semaphore while decrementing the use count and dealing with notify_on_release. However we don't want to force a global semaphore into the mainline task exit path, as that might create a scaling problem.

The actual fix is almost as easy: since this is only an issue for cpusets using notify_on_release, which the top-level big cpusets don't normally need to use, take the cpuset_sem only for cpusets using notify_on_release.

This code has been run for hours without a hiccup, while running a cpuset create/destroy stress test that could crash the existing kernel in seconds. This patch applies to the current -linus git kernel.

Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Simon Derr <simon.derr@bull.net>
Acked-by: Dinakar Guniguntala <dino@in.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
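Editor's note: the fix described above amounts to exit-path logic of roughly the following shape. This is a hedged sketch with 2.6.12-era names; notify_on_release() and check_for_release() are the cpuset.c helpers the text refers to, and their exact signatures here are assumed. (The later "ABBA deadlock fix" above further defers the release-agent call until after cpuset_sem is dropped.)

    void cpuset_exit_sketch(struct task_struct *tsk)
    {
        struct cpuset *cs = tsk->cpuset;

        tsk->cpuset = NULL;

        if (notify_on_release(cs)) {
            /* Rare case: hold cpuset_sem so nobody can remove and recycle
             * the cpuset between the final count drop and the
             * notify_on_release processing. */
            down(&cpuset_sem);
            if (atomic_dec_and_test(&cs->count))
                check_for_release(cs);
            up(&cpuset_sem);
        } else {
            /* Common case: no global semaphore on the task exit path. */
            atomic_dec(&cs->count);
        }
    }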
* [PATCH] cpuset: remove function attribute const (Benoit Boissinot, 2005-04-16; 1 file, -1/+1)

gcc-4 warns with:

    include/linux/cpuset.h:21: warning: type qualifiers ignored on function return type

cpuset_cpus_allowed is declared with const:

    extern const cpumask_t cpuset_cpus_allowed(const struct task_struct *p);

The first const would have to be __attribute__((const)), but the gcc manual explains that: "Note that a function that has pointer arguments and examines the data pointed to must not be declared const. Likewise, a function that calls a non-const function usually must not be const. It does not make sense for a const function to return void."

The following patch removes const from the function declaration.

Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
Acked-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
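Editor's note: the resulting one-line change, as implied by the commit text (shown here as a before/after pair for clarity):

    /* Before: the const on the return value is ignored by gcc-4, and the
     * function cannot legally be __attribute__((const)) either. */
    extern const cpumask_t cpuset_cpus_allowed(const struct task_struct *p);

    /* After: drop the qualifier on the return type; the const on the
     * argument pointer is unaffected. */
    extern cpumask_t cpuset_cpus_allowed(const struct task_struct *p);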
* Linux-2.6.12-rc2 (tag v2.6.12-rc2; Linus Torvalds, 2005-04-16; 1 file, -0/+1564)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!