* Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu (Ingo Molnar, 2012-02-28, 29 files, -583/+2906)

    The major features of this series are:

    - making RCU more aggressive about entering dyntick-idle mode in order to improve energy efficiency
    - converting a few more call_rcu()s to kfree_rcu()s
    - applying a number of rcutree fixes and cleanups to rcutiny
    - removing CONFIG_SMP #ifdefs from treercu
    - allowing RCU CPU stall times to be set via sysfs
    - adding CPU-stall capability to rcutorture
    - adding more RCU-abuse diagnostics
    - updating documentation
    - fixing yet more issues located by the still-ongoing top-to-bottom inspection of RCU, this time with a special focus on the CPU-hotplug code path.

    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * rcu: Stop spurious warnings from synchronize_sched_expedited (Hugh Dickins, 2012-02-21, 1 file, -1/+1)

    synchronize_sched_expedited() is spamming CONFIG_DEBUG_PREEMPT=y users with an unintended warning from the cpu_is_offline() check: use raw_smp_processor_id() instead of smp_processor_id() there.

    Because the warning is under a get_online_cpus(), it is not possible for any CPUs to go offline, though it is quite possible that the task might migrate between the raw_smp_processor_id() and the check of cpu_is_offline(). This is not a problem because the task cannot migrate from an offline CPU to an online one or vice versa. The point of the check is to verify that synchronize_sched_expedited() is not called from an offline CPU, for example, from a CPU_DYING notifier, or, more important, from an outgoing CPU making its way from its CPU_DYING notifiers to the idle loop.

    Signed-off-by: Hugh Dickins <hughd@google.com>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
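A minimal sketch of the pattern described above, assuming kernel context; example_offline_check() is an invented name and this is not the actual synchronize_sched_expedited() code:

    #include <linux/bug.h>
    #include <linux/cpumask.h>
    #include <linux/smp.h>

    static void example_offline_check(void)
    {
            /*
             * smp_processor_id() would emit the CONFIG_DEBUG_PREEMPT warning
             * here because the caller may be preemptible; raw_smp_processor_id()
             * does not.  The caller holds get_online_cpus(), so no CPU can
             * actually go offline, making a possibly stale CPU number harmless
             * for this check.
             */
            WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));
    }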
| * rcu: Hold off RCU_FAST_NO_HZ after timer posted (Paul E. McKenney, 2012-02-21, 1 file, -1/+1)

    This commit handles workloads that transition quickly between idle and non-idle, and where the CPU's callbacks cannot be invoked, but where RCU does not have anything immediate for the CPU to do. Without this patch, the RCU_FAST_NO_HZ code can be invoked repeatedly on each entry to idle. The commit sets the per-CPU rcu_dyntick_holdoff variable to hold off further attempts for a tick.

    Reported-by: "Abou Gazala, Neven M" <neven.m.abou.gazala@intel.com>
    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
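The per-CPU rcu_dyntick_holdoff variable is named in the message above; the function wrapped around it here is an invented illustration of a one-attempt-per-jiffy holdoff, not the rcutree implementation:

    #include <linux/jiffies.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, example_dyntick_holdoff);

    static bool example_try_dyntick_idle(int cpu)
    {
            /* Only one attempt per tick: skip if we already tried this jiffy. */
            if (per_cpu(example_dyntick_holdoff, cpu) == jiffies)
                    return false;
            per_cpu(example_dyntick_holdoff, cpu) = jiffies;

            /* ... try to flush callbacks and enter dyntick-idle mode ... */
            return true;
    }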
| * rcu: Eliminate softirq-mediated RCU_FAST_NO_HZ idle-entry loop (Paul E. McKenney, 2012-02-21, 1 file, -1/+2)

    If a softirq is pending, the current CPU has RCU callbacks pending, and RCU does not immediately need anything from this CPU, then the current code resets the RCU_FAST_NO_HZ state machine. This means that upon exit from the subsequent softirq handler, RCU_FAST_NO_HZ will try really hard to force RCU into dyntick-idle mode. And if the same conditions hold after a few tries (determined by RCU_IDLE_OPT_FLUSHES), the same situation can repeat, possibly endlessly. This scenario is not particularly good for battery lifetime.

    This commit therefore suppresses the early exit from the RCU_FAST_NO_HZ state machine in the case where there is a softirq pending. This change forces the state machine to retain its memory, and to enter holdoff if this condition persists.

    Reported-by: "Abou Gazala, Neven M" <neven.m.abou.gazala@intel.com>
    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Add RCU_NONIDLE() for idle-loop RCU read-side critical sections (Paul E. McKenney, 2012-02-21, 3 files, -0/+31)

    RCU, RCU-bh, and RCU-sched read-side critical sections are forbidden in the inner idle loop, that is, between the rcu_idle_enter() and the rcu_idle_exit() -- RCU will happily ignore any such read-side critical sections. However, things like powertop need tracepoints in the inner idle loop.

    This commit therefore provides an RCU_NONIDLE() macro that can be used to wrap code in the idle loop that requires RCU read-side critical sections.

    Suggested-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    Acked-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
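A minimal usage sketch of RCU_NONIDLE(); trace_cpu_idle_example() and example_enter_idle_state() are hypothetical names, not code from the patch:

    #include <linux/rcupdate.h>

    void trace_cpu_idle_example(int state);     /* hypothetical tracepoint */

    static void example_enter_idle_state(int state)
    {
            /*
             * The tracepoint uses RCU internally, so momentarily tell RCU
             * that this CPU is non-idle while it runs.
             */
            RCU_NONIDLE(trace_cpu_idle_example(state));

            /* ... architecture-specific idle entry would follow ... */
    }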
| * rcu: Allow nesting of rcu_idle_enter() and rcu_idle_exit() (Paul E. McKenney, 2012-02-21, 3 files, -12/+46)

    Use of RCU in the idle loop is incorrect, but quite a few instances of just that have made their way into mainline, primarily in event tracing. The problem with RCU read-side critical sections on CPUs that RCU believes to be idle is that RCU is completely ignoring the CPU, along with any attempted RCU read-side critical sections. The approaches of eliminating the offending uses and of pushing the definition of idle down beyond the offending uses have both proved impractical.

    The new approach is to encapsulate offending uses of RCU with rcu_idle_exit() and rcu_idle_enter(), but this requires nesting for code that is invoked both during idle and during normal execution. Therefore, this commit modifies rcu_idle_enter() and rcu_idle_exit() to permit nesting.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Reviewed-by: Josh Triplett <josh@joshtriplett.org>
    Acked-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
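A sketch of the nesting semantics only (the outermost enter and the matching outermost exit are the calls that change RCU's view of the CPU); the names and the per-CPU counter are invented, and the real rcutree code tracks this differently:

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(long, example_idle_nesting);

    static void example_rcu_idle_enter(void)
    {
            if (this_cpu_inc_return(example_idle_nesting) == 1) {
                    /* Outermost entry: mark this CPU idle from RCU's viewpoint. */
            }
    }

    static void example_rcu_idle_exit(void)
    {
            if (this_cpu_dec_return(example_idle_nesting) == 0) {
                    /* Matching outermost exit: mark this CPU non-idle again. */
            }
    }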
| * rcu: Remove redundant check for rcu_head misalignment (Paul E. McKenney, 2012-02-21, 1 file, -1/+0)

    There is now an unconditional check for rcu_head misalignment in __call_rcu(), so remove the old conditional one in debug_rcu_head_queue().

    Reported-by: Josh Triplett <josh@joshtriplett.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * PTR_ERR should be called before its argument is cleared. (Julia Lawall, 2012-02-21, 1 file, -1/+4)

    The semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/)

    // <smpl>
    @@
    expression e,e1;
    constant c;
    @@

    *e = c
    ... when != e = e1
        when != &e
        when != true IS_ERR(e)
    *PTR_ERR(e)
    // </smpl>

    Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
    Reported-by: Josh Triplett <josh@joshtriplett.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
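An invented C illustration of the bug class the semantic patch flags; struct foo and example_create() are placeholders. PTR_ERR() must read the encoded error before the pointer is overwritten:

    #include <linux/err.h>

    struct foo;
    struct foo *example_create(void);

    static long example_use(void)
    {
            struct foo *p = example_create();

            if (IS_ERR(p)) {
                    long err = PTR_ERR(p);  /* read the error code first ...   */

                    p = NULL;               /* ... then it is safe to clear p. */
                    return err;
            }
            return 0;
    }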
| * rcu: Convert WARN_ON_ONCE() in rcu_lock_acquire() to lockdep (Heiko Carstens, 2012-02-21, 2 files, -2/+16)

    The WARN_ON_ONCE() in rcu_lock_acquire() results in infinite recursion on S390, and also doesn't print very much information. Remove it, and instead add lockdep-RCU assertions to RCU's read-side primitives.

    Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Trace only after NULL-pointer check (Paul E. McKenney, 2012-02-21, 1 file, -2/+2)

    Fix a bonehead error introduced when adding event tracing to rcutorture. Move the traces to follow the NULL-pointer checks.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Call out dangers of expedited RCU primitives (Paul E. McKenney, 2012-02-21, 5 files, -22/+77)

    The expedited RCU primitives can be quite useful, but they have some high costs as well. This commit updates and creates docbook comments calling out the costs, and updates the RCU documentation as well.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Rework detection of use of RCU by offline CPUs (Paul E. McKenney, 2012-02-21, 5 files, -71/+87)

    Because newly offlined CPUs continue executing after completing the CPU_DYING notifiers, they legitimately enter the scheduler and use RCU while appearing to be offline. This calls for a more sophisticated approach as follows:

    1. RCU marks the CPU online during the CPU_UP_PREPARE phase.

    2. RCU marks the CPU offline during the CPU_DEAD phase.

    3. Diagnostics regarding use of read-side RCU by offline CPUs use RCU's accounting rather than the cpu_online_map. (Note that __call_rcu() still uses cpu_online_map to detect illegal invocations within CPU_DYING notifiers.)

    4. Offline CPUs are prevented from hanging the system by force_quiescent_state(), which pays attention to cpu_online_map. Some additional work (in a later commit) will be needed to guarantee that force_quiescent_state() waits a full jiffy before assuming that a CPU is offline, for example, when called from idle entry. (This commit also makes the one-jiffy wait explicit, since the old-style implicit wait can now be defeated by RCU_FAST_NO_HZ and by rcutorture.)

    This approach avoids the false positives encountered when attempting to use more exact classification of CPU online/offline state.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * lockdep: Add CPU-idle/offline warning to lockdep-RCU splat (Paul E. McKenney, 2012-02-21, 1 file, -1/+7)

    It is illegal to use RCU from a CPU that has reported idleness or offlinedness to RCU. However, it can be quite difficult to determine from a stack trace whether or not a given CPU is idle or offline. Therefore, this commit adds idle/offline diagnostics to the lockdep-RCU error message.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: No interrupt disabling for rcu_prepare_for_idle() (Paul E. McKenney, 2012-02-21, 1 file, -17/+1)

    The rcu_prepare_for_idle() function is always called with interrupts disabled, so there is no reason to disable interrupts again within rcu_prepare_for_idle(). Therefore, this commit removes all of the interrupt disabling, also removing a latent disabling-unbalance bug.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Move synchronize_sched_expedited() to rcutree.c (Paul E. McKenney, 2012-02-21, 2 files, -116/+117)

    Now that TREE_RCU and TREE_PREEMPT_RCU no longer do anything different for the single-CPU case, there is no need for multiple definitions of synchronize_sched_expedited(). It is no longer in any sense a plug-in, so move it from kernel/rcutree_plugin.h to kernel/rcutree.c.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Check for illegal use of RCU from offlined CPUs (Paul E. McKenney, 2012-02-21, 5 files, -4/+61)

    Although it is legal to use RCU during early boot, it is anything but legal to use RCU at runtime from an offlined CPU. After all, RCU explicitly ignores offlined CPUs. This commit therefore adds checks for runtime use of RCU from offlined CPUs.

    These checks are not perfect; in particular, they can be subverted through use of things like rcu_dereference_raw(). Note that it is not possible to put checks in rcu_read_lock() and friends due to the fact that these primitives are used in code that might be used under either RCU or lock-based protection, which means that checking rcu_read_lock() gets you fat piles of false positives.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Update stall-warning documentation (Paul E. McKenney, 2012-02-21, 1 file, -7/+80)

    Add documentation of CONFIG_RCU_CPU_STALL_VERBOSE, CONFIG_RCU_CPU_STALL_INFO, and RCU_STALL_DELAY_DELTA. Describe multiple stall-warning messages from a single stall, and the timing of the subsequent messages. Add headings. Remove RCU_SECONDS_TILL_STALL_RECHECK because this value is now computed at runtime from RCU_CPU_STALL_TIMEOUT, so that sysfs changes to the timeout value now directly affect the RCU_SECONDS_TILL_STALL_RECHECK value.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Add CPU-stall capability to rcutorture (Paul E. McKenney, 2012-02-21, 2 files, -0/+84)

    Add module parameters to rcutorture that induce a CPU stall. The stall_cpu parameter specifies how long to stall in seconds, defaulting to zero, which indicates no stalling is to be undertaken. The stall_cpu_holdoff parameter specifies how many seconds after insmod (or boot, if rcutorture is built into the kernel) that this stall is to start. The default value for stall_cpu_holdoff is ten seconds.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Make documentation give more realistic rcutorture duration (Paul E. McKenney, 2012-02-21, 1 file, -1/+1)

    The torture.txt documentation gives an example rcutorture run with a 100-second duration. This is ridiculously short, unless maybe testing a fix for an egregious bug. Use a more-realistic one-hour duration for the example.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcutorture: Permit holding off CPU-hotplug operations during boot (Paul E. McKenney, 2012-02-21, 2 files, -4/+21)

    When rcutorture is started automatically at boot time, it might well also start CPU-hotplug operations at that time, which might not be desirable. This commit therefore adds an rcutorture parameter that allows CPU-hotplug operations to be held off for the specified number of seconds after the start of boot.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Print scheduling-clock information on RCU CPU stall-warning messages (Paul E. McKenney, 2012-02-21, 4 files, -14/+194)

    There have been situations where RCU CPU stall warnings were caused by issues in scheduling-clock timer initialization. To make it easier to track these down, this commit causes the RCU CPU stall-warning messages to print out the number of scheduling-clock interrupts taken in the current grace period for each stalled CPU.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Set RCU CPU stall times via sysfs (Paul E. McKenney, 2012-02-21, 2 files, -11/+26)

    The default CONFIG_RCU_CPU_STALL_TIMEOUT value of 60 seconds has served Linux users well for production use for quite some time. However, for debugging, there will be more than three minutes between subsequent stall-warning messages. This can be an annoyingly long wait if you are trying to work out where the offending infinite loop is hiding. Therefore, this commit provides an rcu_cpu_stall_timeout sysfs parameter that may be adjusted at boot time and at runtime to speed up debugging.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Remove #ifdef CONFIG_SMP from TREE_RCU (Paul E. McKenney, 2012-02-21, 2 files, -31/+0)

    Now that both TINY_RCU and TINY_PREEMPT_RCU have been in place for a while, it is time to remove UP support from TREE_RCU, which is what this commit does.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Note that rcu_access_pointer() can be used for teardown (Paul E. McKenney, 2012-02-21, 1 file, -0/+7)

    There is no convenient expression for rcu_dereference_protected() when it is used in tearing down multilinked structures following a grace period. For example, suppose that an element containing an RCU-protected pointer to a second element is removed from an enclosing RCU-protected data structure, then the write-side lock is released, and finally synchronize_rcu() is invoked to wait for a grace period. Then it is necessary to traverse the pointer in order to free up the second element. But we are not in an RCU read-side critical section and we are holding no locks, so the usual rcu_dereference_check() and rcu_dereference_protected() primitives are not appropriate. Neither is rcu_dereference_raw(), as it is intended for use in data structures where the user defines the locking design (for example, list_head). So this responsibility is added to rcu_access_pointer()'s list, and this commit updates rcu_access_pointer()'s header comment accordingly.

    Suggested-by: David Howells <dhowells@redhat.com>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Acked-by: David Howells <dhowells@redhat.com>
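A sketch of the teardown case being documented, with invented structure names; after synchronize_rcu() there can be no pre-existing readers and no locks are held, so rcu_access_pointer() is the accessor that fits:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct inner { int data; };
    struct outer { struct inner __rcu *child; };

    static void example_teardown(struct outer *o)
    {
            struct inner *child;

            /* o was already unlinked from the enclosing RCU-protected structure. */
            synchronize_rcu();                      /* wait for pre-existing readers */

            child = rcu_access_pointer(o->child);   /* no readers, no locks held */
            kfree(child);
            kfree(o);
    }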
| * rcu: Make rcu_sleep_check() also check rcu_lock_map (Paul E. McKenney, 2012-02-21, 1 file, -0/+14)

    Although it is OK to be preempted in an RCU read-side critical section for TREE_PREEMPT_RCU, it is definitely not OK to be preempted, to block, or to call might_sleep() within an RCU read-side critical section for TREE_RCU. Unfortunately, rcu_sleep_check() currently only checks for RCU-bh and RCU-sched read-side critical sections. This commit therefore makes rcu_sleep_check() also check for RCU read-side critical sections, but only in TREE_RCU builds.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Check for idle-loop entry while in RCU read-side critical section (Paul E. McKenney, 2012-02-21, 1 file, -0/+11)

    The inner idle loop is an extended quiescent state for all flavors of RCU, but there have been recent bugs involving use of RCU read-side primitives from within the idle loop. Therefore, this commit enlists lockdep-RCU to detect attempts to enter the inner idle loop while in an RCU read-side critical section, emitting a lockdep-RCU splat if so.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Clean up straggling rcu_preempt_needs_cpu() name (Paul E. McKenney, 2012-02-21, 3 files, -6/+6)

    The recent updates to RCU_FAST_NO_HZ have an rcu_needs_cpu() that does more than just check for callbacks, so make the rcu_preempt_needs_cpu() name consistent with that change by renaming the function to rcu_preempt_cpu_has_callbacks().

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Simplify unboosting checks (Paul E. McKenney, 2012-02-21, 2 files, -6/+7)

    This is a port of commit #82e78d80 from TREE_PREEMPT_RCU to TINY_PREEMPT_RCU.

    This commit uses the fact that current->rcu_boost_mutex is set any time that the RCU_READ_UNLOCK_BOOSTED flag is set in the current->rcu_read_unlock_special bitmask. This allows tests of the bit to be changed to tests of the pointer, which in turn allows the RCU_READ_UNLOCK_BOOSTED flag to be eliminated. Please note that the check of current->rcu_read_unlock_special need not change because any time that RCU_READ_UNLOCK_BOOSTED was set, so was RCU_READ_UNLOCK_BLOCKED. Therefore, __rcu_read_unlock() can continue testing current->rcu_read_unlock_special for non-zero, as before.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Inform RCU of irq_exit() activity (Paul E. McKenney, 2012-02-21, 1 file, -1/+1)

    This is a port to TINY_RCU of Peter Zijlstra's commit #ec433f0c5.

    The rcu_read_unlock_special() function relies on in_irq() to exclude scheduler activity from interrupt level. This fails because irq_exit() can invoke the scheduler after clearing the preempt_count() bits that in_irq() uses to determine that it is at interrupt level. This situation can result in failures as follows:

      $task                         IRQ                             SoftIRQ
      rcu_read_lock()
      /* do stuff */
      <preempt> |= UNLOCK_BLOCKED
      rcu_read_unlock()
        --t->rcu_read_lock_nesting
                                    irq_enter();
                                    /* do stuff, don't use RCU */
                                    irq_exit();
                                      sub_preempt_count(IRQ_EXIT_OFFSET);
                                    invoke_softirq()
                                                                    ttwu();
                                                                      spin_lock_irq(&pi->lock)
                                                                      rcu_read_lock();
                                                                      /* do stuff */
                                                                      rcu_read_unlock();
                                                                        rcu_read_unlock_special()
                                                                          rcu_report_exp_rnp()
                                                                            ttwu()
                                                                              spin_lock_irq(&pi->lock) /* deadlock */

        rcu_read_unlock_special(t);

    This can be triggered 'easily' because invoke_softirq() immediately does a ttwu() of ksoftirqd/# instead of doing the in-place softirq stuff first, but even without that the above happens.

    Cure this by also excluding softirqs from the rcu_read_unlock_special() handler and ensuring the force_irqthreads ksoftirqd/# wakeup is done from full softirq context.

    It is also necessary to delay the ->rcu_read_lock_nesting decrement until after rcu_read_unlock_special(). This delay is handled by the commit "Protect __rcu_read_unlock() against scheduler-using irq handlers".

    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Prevent RCU callbacks from executing before scheduler initialized (Paul E. McKenney, 2012-02-21, 2 files, -7/+12)

    This is a port of commit #b0d3041 from TREE_RCU to TREE_PREEMPT_RCU.

    Under some rare but real combinations of configuration parameters, RCU callbacks are posted during early boot that use kernel facilities that are not yet initialized. Therefore, when these callbacks are invoked, hard hangs and crashes ensue. This commit therefore prevents RCU callbacks from being invoked until after the scheduler is fully up and running, as in after multiple tasks have been spawned.

    It might well turn out that a better approach is to identify the specific RCU callbacks that are causing this problem, but that discussion will wait until such time as someone really needs an RCU callback to be invoked (as opposed to merely registered) during early boot.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Streamline code produced by __rcu_read_unlock() (Paul E. McKenney, 2012-02-21, 1 file, -1/+1)

    This is a port of commit #be0e1e21 to TINY_PREEMPT_RCU. This uses noinline to prevent rcu_read_unlock_special() from being inlined into __rcu_read_unlock().

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Protect __rcu_read_unlock() against scheduler-using irq handlers (Paul E. McKenney, 2012-02-21, 1 file, -8/+35)

    This commit ports commit #10f39bb1b2 (rcu: protect __rcu_read_unlock() against scheduler-using irq handlers) from TREE_PREEMPT_RCU to TINY_PREEMPT_RCU. The following is a corresponding port of that commit message.

    The addition of RCU read-side critical sections within runqueue and priority-inheritance critical sections introduced some deadlocks, for example, involving interrupts from __rcu_read_unlock() where the interrupt handlers call wake_up(). This situation can cause the instance of __rcu_read_unlock() invoked from interrupt to do some of the processing that would otherwise have been carried out by the task-level instance of __rcu_read_unlock(). When the interrupt-level instance of __rcu_read_unlock() is called with a scheduler lock held from interrupt-entry/exit situations where in_irq() returns false, deadlock can result. Of course, in a UP kernel, there are not really any deadlocks, but the upper-level critical section can still be fatally confused by the lower-level critical section changing things out from under it.

    This commit resolves these deadlocks by using negative values of the per-task ->rcu_read_lock_nesting counter to indicate that an instance of __rcu_read_unlock() is in flight, which in turn prevents instances from interrupt handlers from doing any special processing. Note that nested rcu_read_lock()/rcu_read_unlock() pairs are still permitted, but they will never see ->rcu_read_lock_nesting go to zero, and will therefore never invoke rcu_read_unlock_special(), thus preventing them from seeing the RCU_READ_UNLOCK_BLOCKED bit should it be set in ->rcu_read_unlock_special. This patch also adds a check for ->rcu_read_lock_nesting being negative in rcu_check_callbacks(), thus preventing the RCU_READ_UNLOCK_NEED_QS bit from being set should a scheduling-clock interrupt occur while __rcu_read_unlock() is exiting from an outermost RCU read-side critical section.

    Of course, __rcu_read_unlock() can be preempted during the time that ->rcu_read_lock_nesting is negative. This could result in the setting of the RCU_READ_UNLOCK_BLOCKED bit after __rcu_read_unlock() checks it, and would also result in this task being queued on the corresponding rcu_node structure's blkd_tasks list. Therefore, some later RCU read-side critical section would enter rcu_read_unlock_special() to clean up -- which could result in deadlock (OK, OK, fatal confusion) if that RCU read-side critical section happened to be in the scheduler where the runqueue or priority-inheritance locks were held.

    To prevent the possibility of fatal confusion that might result from preemption during the time that ->rcu_read_lock_nesting is negative, this commit also makes rcu_preempt_note_context_switch() check for negative ->rcu_read_lock_nesting, thus refraining from queuing the task (and from setting RCU_READ_UNLOCK_BLOCKED) if we are already exiting from the outermost RCU read-side critical section (in other words, we really are no longer actually in that RCU read-side critical section). In addition, rcu_preempt_note_context_switch() invokes rcu_read_unlock_special() to carry out the cleanup in this case, which clears out the ->rcu_read_unlock_special bits and dequeues the task (if necessary), in turn avoiding needless delay of the current RCU grace period and needless RCU priority boosting.

    It is still illegal to call rcu_read_unlock() while holding a scheduler lock if the prior RCU read-side critical section has ever had both preemption and irqs enabled. However, the common use case is legal, namely where the entire RCU read-side critical section executes with irqs disabled, for example, when the scheduler lock is held across the entire lifetime of the RCU read-side critical section.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
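A rough sketch of the negative-nesting idea for the outermost __rcu_read_unlock(); EXAMPLE_NEST_BIAS and example_rcu_read_unlock() are invented stand-ins, and the actual TINY_PREEMPT_RCU code differs in detail:

    #include <linux/kernel.h>
    #include <linux/sched.h>

    #define EXAMPLE_NEST_BIAS INT_MIN   /* stand-in for the real bias value */

    static void example_rcu_read_unlock(struct task_struct *t)
    {
            if (t->rcu_read_lock_nesting != 1) {
                    --t->rcu_read_lock_nesting;     /* still nested: nothing special */
            } else {
                    /*
                     * Park the counter at a negative value so that irq handlers
                     * doing rcu_read_lock()/rcu_read_unlock() while the special
                     * processing runs never see it reach zero.
                     */
                    t->rcu_read_lock_nesting = EXAMPLE_NEST_BIAS;
                    barrier();
                    if (t->rcu_read_unlock_special) {
                            /* ... invoke rcu_read_unlock_special(t) here ... */
                    }
                    barrier();
                    t->rcu_read_lock_nesting = 0;   /* outermost unlock complete */
            }
    }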
| * rcu: Remove single-rcu_node optimization in rcu_start_gp() (Paul E. McKenney, 2012-02-21, 1 file, -18/+0)

    The grace-period initialization sequence in rcu_start_gp() has a special case for systems where the rcu_node tree is a single rcu_node structure. This made sense some years ago when systems were smaller and up to 64 CPUs could share a single rcu_node structure, but now that large systems are common and a given leaf rcu_node structure can support only 16 CPUs (due to lock contention on the rcu_node's ->lock field), this optimization is almost never taken. And even the small mobile platforms that might make use of it might rather have the kernel text reduction. Therefore, this commit removes the check for single-rcu_node trees.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Reviewed-by: Josh Triplett <josh@joshtriplett.org>
| * rcu: Don't make callbacks go through second full grace period (Paul E. McKenney, 2012-02-21, 1 file, -6/+46)

    RCU's current CPU-offline code path dumps all of the outgoing CPU's callbacks onto the RCU_NEXT_TAIL portion of the surviving CPU's callback list. This means that all the ready-to-invoke callbacks from the outgoing CPU must wait for another full RCU grace period. This was just fine when CPU-hotplug events were rare, but there is increasing evidence that users are planning to make increasing use of CPU hotplug.

    Therefore, this commit changes the callback-dumping procedure so that callbacks that are ready to invoke are moved to the RCU_DONE_TAIL portion of the surviving CPU's callback list. This avoids running these callbacks through a second unnecessary grace period.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Check for callback invocation from offline CPUs (Paul E. McKenney, 2012-02-21, 1 file, -0/+1)

    Because quiescent states are now reported from offline CPUs in CPU_DYING state, there is some possibility that such a CPU might note the end of a grace period and attempt to start invoking callbacks. This would be a very bad thing, and is supposed to be prevented by the fact that the CPU_DYING CPU gets rid of all its callbacks before reporting the quiescent state. However, there is other CPU-offline code in the kernel, and it is quite possible that someone will invoke RCU core processing from that code. Therefore, this commit adds a warning for this case.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Limit lazy-callback duration (Paul E. McKenney, 2012-02-21, 1 file, -1/+11)

    Currently, a given CPU is permitted to remain in dyntick-idle mode indefinitely if it has only lazy RCU callbacks queued. This is vulnerable to corner cases in NUMA systems, so limit the time to six seconds by default. (Currently controlled by a cpp macro.)

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Make rcutorture flag online/offline failures (Paul E. McKenney, 2012-02-21, 1 file, -0/+4)

    Make rcutorture check for CPU-hotplug failures and complain if there were any.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Simplify offline processing (Paul E. McKenney, 2012-02-21, 3 files, -99/+90)

    Move ->qsmaskinit and blkd_tasks[] manipulation to the CPU_DYING notifier. This simplifies the code by eliminating a potential deadlock and by reducing the responsibilities of force_quiescent_state(). Also rename functions to make their connection to the CPU-hotplug stages explicit.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * mac80211: Convert call_rcu() to kfree_rcu(), drop mesh_gate_node_reclaim() (Paul E. McKenney, 2012-02-21, 1 file, -7/+1)

    The call_rcu() in mesh_gate_del() invokes mesh_gate_node_reclaim(), which simply calls kfree(). So convert the call_rcu() to kfree_rcu(), allowing mesh_gate_node_reclaim() to be eliminated.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: "John W. Linville" <linville@tuxdriver.com>
    Cc: Johannes Berg <johannes@sipsolutions.net>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: linux-wireless@vger.kernel.org
    Cc: netdev@vger.kernel.org
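The same shape recurs in the ipv4, tcm_fc, and s390 conversions below. This is an invented before/after illustration; struct gate_node and these function names are not the actual mac80211 code:

    #include <linux/kernel.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct gate_node {
            int id;
            struct rcu_head rcu;
    };

    /* Before: a dedicated RCU callback exists only to call kfree(). */
    static void gate_node_reclaim(struct rcu_head *rp)
    {
            kfree(container_of(rp, struct gate_node, rcu));
    }

    static void gate_node_del_before(struct gate_node *node)
    {
            call_rcu(&node->rcu, gate_node_reclaim);
    }

    /* After: kfree_rcu() takes the object and the name of its rcu_head field. */
    static void gate_node_del_after(struct gate_node *node)
    {
            kfree_rcu(node, rcu);
    }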
| * ipv4: Convert call_rcu() to kfree_rcu(), drop opt_kfree_rcu (Paul E. McKenney, 2012-02-21, 1 file, -6/+1)

    The call_rcu() in do_ip_setsockopt() invokes opt_kfree_rcu(), which just calls kfree(). So convert the call_rcu() to kfree_rcu(), which allows opt_kfree_rcu() to be eliminated.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Acked-by: David S. Miller <davem@davemloft.net>
    Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
    Cc: James Morris <jmorris@namei.org>
    Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
    Cc: Patrick McHardy <kaber@trash.net>
    Cc: netdev@vger.kernel.org
| * ipv4: Convert call_rcu() to kfree_rcu(), drop opt_kfree_rcu() (Paul E. McKenney, 2012-02-21, 1 file, -8/+3)

    Because opt_kfree_rcu() just calls kfree(), all call_rcu() uses of it may be converted to kfree_rcu(). This permits opt_kfree_rcu() to be eliminated.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Acked-by: David S. Miller <davem@davemloft.net>
    Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
    Cc: James Morris <jmorris@namei.org>
    Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
    Cc: Patrick McHardy <kaber@trash.net>
    Cc: netdev@vger.kernel.org
| * tcm_fc: Convert call_rcu() to kfree_rcu(), drop ft_tport_rcu_free() (Paul E. McKenney, 2012-02-21, 1 file, -11/+1)

    The call_rcu() in ft_tport_delete() invokes ft_tport_rcu_free(), which just does a kfree(). So convert the call_rcu() to kfree_rcu(), allowing ft_tport_rcu_free() to be eliminated.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
    Cc: Jiri Kosina <jkosina@suse.cz>
    Cc: Jesper Juhl <jj@chaosbits.net>
    Cc: linux-scsi@vger.kernel.org
    Cc: target-devel@vger.kernel.org
| * s390: Convert call_rcu() to kfree_rcu(), drop ext_int_hash_update() (Paul E. McKenney, 2012-02-21, 1 file, -8/+1)

    The call_rcu() in unregister_external_interrupt() invokes ext_int_hash_update(), which just does a kfree(). Convert the call_rcu() to kfree_rcu(), allowing ext_int_hash_update() to be eliminated.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
| * rcu: Move RCU_TRACE to lib/Kconfig.debug (Paul E. McKenney, 2012-02-21, 2 files, -9/+10)

    The RCU_TRACE kernel configuration parameter has always been intended for debugging, not for production use. Formalize this by moving RCU_TRACE from init/Kconfig to lib/Kconfig.debug.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Avoid waking up CPUs having only kfree_rcu() callbacks (Paul E. McKenney, 2012-02-21, 10 files, -47/+153)

    When CONFIG_RCU_FAST_NO_HZ is enabled, RCU will allow a given CPU to enter dyntick-idle mode even if it still has RCU callbacks queued. RCU avoids system hangs in this case by scheduling a timer for several jiffies in the future. However, if all of the callbacks on that CPU are from kfree_rcu(), there is no reason to wake the CPU up, as it is not a problem to defer freeing of memory.

    This commit therefore tracks the number of callbacks on a given CPU that are from kfree_rcu(), and avoids scheduling the timer if all of a given CPU's callbacks are from kfree_rcu().

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Add diagnostic for misaligned rcu_head structures (Paul E. McKenney, 2012-02-21, 1 file, -0/+1)

    The push for energy efficiency will require that RCU tag rcu_head structures to indicate whether or not their invocation is time critical. This tagging is best carried out in the bottom bits of the ->next pointers in the rcu_head structures. This tagging requires that the rcu_head structures be properly aligned, so this commit adds the required diagnostics.

    Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
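A sketch of the kind of diagnostic being added; the exact check in __call_rcu() may differ, and example_call_rcu_entry() is an invented wrapper:

    #include <linux/bug.h>
    #include <linux/rcupdate.h>

    static void example_call_rcu_entry(struct rcu_head *head)
    {
            /*
             * The two low-order bits of the ->next pointer are the candidate
             * tag bits, so the rcu_head must be at least four-byte aligned.
             */
            WARN_ON_ONCE((unsigned long)head & 0x3);

            /* ... normal callback queuing would follow ... */
    }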
| * rcu: Add lockdep-RCU checks for simple self-deadlock (Paul E. McKenney, 2012-02-21, 5 files, -0/+27)

    It is illegal to have a grace period within a same-flavor RCU read-side critical section, so this commit adds lockdep-RCU checks to splat when such abuse is encountered. This commit does not detect more elaborate RCU deadlock situations. These situations might be a job for lockdep enhancements.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
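An illustration of the simple self-deadlock these checks catch; waiting for a grace period of a given RCU flavor from inside that same flavor's read-side critical section can never complete:

    #include <linux/rcupdate.h>

    static void example_self_deadlock(void)
    {
            rcu_read_lock();
            /*
             * Illegal: this grace period cannot end until the enclosing
             * read-side critical section ends, which it never will.
             * The new lockdep-RCU check emits a splat here.
             */
            synchronize_rcu();
            rcu_read_unlock();
    }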
| * rcu: Improve synchronize_rcu() diagnostics (Frederic Weisbecker, 2012-02-21, 1 file, -0/+1)

    Although TREE_PREEMPT_RCU indirectly uses might_sleep() to detect illegal use of synchronize_sched() and synchronize_rcu_bh() from within an RCU read-side critical section, this might_sleep() check is bypassed when there is only a single CPU (for example, when running an SMP kernel on a single-CPU system). This patch therefore adds a might_sleep() call to the rcu_blocking_is_gp() check that is unconditionally invoked from both synchronize_sched() and synchronize_rcu_bh().

    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * rcu: Bring RTFP.txt up to date (Paul E. McKenney, 2012-02-21, 1 file, -98/+1686)

    Add publications from 2010 and 2011 to RTFP.txt.

    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* | Merge tag 'ktest-fix-make-min-failed-build-for-real' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest (Linus Torvalds, 2012-02-27, 1 file, -3/+5)

    While demoing ktest at ELC in 2012, it was embarrassing that the make_min_config test failed to work because the snowball board I was testing it against had a config that would not build. But the make_min_config test only tested the testing part and ignored build failures. The end result was a config file that would not boot. This time, for real.

    * tag 'ktest-fix-make-min-failed-build-for-real' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest:
      ktest: Fix make_min_config test when build fails