path: root/kernel/rcu
Commit message   Author   Age   Files   Lines
* rcutorture: Don't do busted forward-progress testingPaul E. McKenney2018-12-011-1/+2
| | | | | | | | | | | The "busted" rcutorture type is an intentionally broken implementation of RCU. Doing forward-progress testing on this implementation is not particularly meaningful on the one hand and can result in fatal abuse of the memory allocator on the other. This commit therefore disables forward-progress testing of the "busted" rcutorture type. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* rcutorture: Use 100ms buckets for forward-progress callback histogramsPaul E. McKenney2018-12-011-3/+5
| | | | | | | | | This commit narrows the scope of each bucket of the forward-progress callback-invocation histograms from one second to 100 milliseconds, which aids debugging of forward-progress problems by making shorter-duration callback-invocation stalls visible. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
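For illustration, a minimal sketch of the narrower bucketing, assuming a histogram array indexed by time elapsed since callback flooding started (the array and index names here are stand-ins, not necessarily those in rcutorture.c):

    /* Old: one-second buckets hide sub-second callback-invocation stalls. */
    i = (jiffies - rcu_fwd_startat) / HZ;

    /* New: 100-millisecond buckets. */
    i = (jiffies - rcu_fwd_startat) / (HZ / 10);
    if (i >= ARRAY_SIZE(n_launders_hist))
            i = ARRAY_SIZE(n_launders_hist) - 1;
    n_launders_hist[i]++;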
* rcutorture: Recover from OOM during forward-progress testsPaul E. McKenney2018-12-011-11/+49
| | | | | | | | | | This commit causes the OOM handler to do rcu_barrier() calls and to free up forward-progress callbacks in order to recover from OOM events. The current test is terminated, but subsequent forward-progress tests can proceed. This allows a long test to result in multiple forward-progress failures, greatly reducing the required testing time. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* rcutorture: Print forward-progress test age upon failurePaul E. McKenney2018-12-011-1/+2
| | | | | | | | This commit prints the age of the forward-progress test in jiffies, in order to allow better interpretation of the callback-invocation histograms. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* rcutorture: Print time since GP end upon forward-progress failurePaul E. McKenney2018-12-012-1/+6
| | | | | | | | | If rcutorture's forward-progress tests fail while a grace period is not in progress, it is useful to print the time since the last grace period ended as a way to detect failure to launch a new grace period. This commit therefore makes this change. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* rcutorture: Print histogram of CB invocation at OOM timePaul E. McKenney2018-12-011-8/+16
| | | | | | | | One reason why a forward-progress test might fail would be if something prevented or delayed callback invocation. This commit therefore adds a callback-invocation histogram printout when OOM is reported to rcutorture. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* rcutorture: Print GP age upon forward-progress failurePaul E. McKenney2018-12-011-0/+2
| | | | Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* rcu: Print per-CPU callback counts for forward-progress failuresPaul E. McKenney2018-12-011-0/+18
| | | | | | | | This commit prints out the non-zero per-CPU callback counts when a forward-progress error (OOM event) occurs. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com> [ paulmck: Fix a pair of uninitialized locals spotted by kbuild test robot. ]
* rcu: Account for nocb-CPU callback counts in RCU CPU stall warningsPaul E. McKenney2018-12-013-9/+35
| | | | | | | | | | | | | | | The RCU CPU stall warnings print an estimate of the total number of RCU callbacks queued in the system, but this estimate leaves out the callbacks queued for nocbs CPUs. This commit therefore introduces rcu_get_n_cbs_cpu(), which gives an accurate callback estimate for both nocbs and normal CPUs, and uses this new function as needed. This commit also introduces a rcu_get_n_cbs_nocb_cpu() helper function that returns the number of callbacks for nocbs CPUs or zero otherwise, and also uses this function in place of direct access to ->nocb_q_count while in the area (fewer characters, you see). Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
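A plausible shape for the new helper, sketched from the description; it assumes the consolidated rcu_data per-CPU structure, and the nocbs case defers to the new rcu_get_n_cbs_nocb_cpu(), which returns zero when callback offloading is not in effect:

    static long rcu_get_n_cbs_cpu(int cpu)
    {
            struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);

            if (rcu_segcblist_is_enabled(&rdp->cblist))     /* Normal CPU. */
                    return rcu_segcblist_n_cbs(&rdp->cblist);
            return rcu_get_n_cbs_nocb_cpu(rdp);             /* nocbs CPU, else zero. */
    }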
* rcutorture: Dump grace-period diagnostics upon forward-progress OOMPaul E. McKenney2018-12-013-3/+50
| | | | | | | | This commit adds an OOM notifier during rcutorture forward-progress testing. If this notifier is invoked, it dumps out some grace-period state to help debug the forward-progress problem. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* rcutorture: Prepare for asynchronous access to rcu_fwd_startatPaul E. McKenney2018-12-011-2/+2
| | | | | | | | | Because rcutorture's forward-progress checking will trigger from an OOM notifier, this notifier will introduce asynchronous concurrent access to the rcu_fwd_startat variable. This commit therefore prepares for this by converting updates to WRITE_ONCE(). Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
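This is the usual pattern for a variable that gains a lockless reader: the updater's plain stores become WRITE_ONCE(), and the asynchronous reader (here, the OOM notifier) pairs them with READ_ONCE(). A minimal sketch:

    /* Updater: forward-progress kthread marks the start of a test. */
    WRITE_ONCE(rcu_fwd_startat, jiffies);

    /* Reader: OOM notifier, possibly running concurrently. */
    pr_info("%lu jiffies into forward-progress test\n",
            jiffies - READ_ONCE(rcu_fwd_startat));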
* rcutorture: Affinity forward-progress test to avoid housekeeping CPUsPaul E. McKenney2018-12-013-0/+14
| | | | | | | | | | This commit sets the affinity of the forward-progress tests to avoid hogging a housekeeping CPU, on the theory that the offloaded callbacks will be running on those housekeeping CPUs. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com> [ paulmck: Fix NULL-pointer issue located by kbuild test robot. ] Tested-by: Rong Chen <rong.a.chen@intel.com>
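One way to express that binding, as a hedged sketch: move the kthread onto the online CPUs that are not designated for housekeeping. The helper name is hypothetical, and HK_FLAG_RCU is assumed to be the relevant housekeeping flag in this kernel version.

    #include <linux/sched/isolation.h>

    static void rcu_torture_fwd_prog_setaff(void)   /* hypothetical helper */
    {
            cpumask_var_t cm;

            if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
                    return;
            /* Prefer CPUs that are not doing RCU housekeeping work. */
            cpumask_andnot(cm, cpu_online_mask, housekeeping_cpumask(HK_FLAG_RCU));
            if (cpumask_empty(cm))  /* Everything is a housekeeping CPU. */
                    cpumask_copy(cm, cpu_online_mask);
            set_cpus_allowed_ptr(current, cm);
            free_cpumask_var(cm);
    }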
* rcutorture: Break up too-long rcu_torture_fwd_prog() functionPaul E. McKenney2018-12-011-119/+135
| | | | | | | | | | | | This commit splits rcu_torture_fwd_prog_nr() and rcu_torture_fwd_prog_cr() functions out of rcu_torture_fwd_prog() in order to reduce indentation pain and because rcu_torture_fwd_prog() was getting a bit too long. In addition, this will enable easier conditional execution of the rcu_torture_fwd_prog_cr() function, which can give false-positive failures in some NO_HZ_FULL configurations due to overloading the housekeeping CPUs. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcutorture: Remove cbflood facilityPaul E. McKenney2018-12-011-85/+1
| | | | | | | | | | | Now that the forward-progress code does a full-bore continuous callback flood lasting multiple seconds, there is little point in also posting a mere 60,000 callbacks every second or so. This commit therefore removes the old cbflood testing. Over time, it may be desirable to concurrently do full-bore continuous callback floods on all CPUs simultaneously, but one dragon at a time. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcutorture: Add call_rcu() flooding forward-progress testsPaul E. McKenney2018-12-011-2/+127
| | | | | | | | | | This commit adds a call_rcu() flooding loop to the forward-progress test. This emulates tight userspace loops that force call_rcu() invocations, for example, the infamous loop containing close(open()) that instigated the addition of blimit. If RCU does not make sufficient forward progress in invoking the resulting flood of callbacks, rcutorture emits a warning. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
*-----. Merge branches 'bug.2018.11.12a', 'consolidate.2018.12.01a', ↵Paul E. McKenney2018-12-018-361/+373
|\ \ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 'doc.2018.11.12a', 'fixes.2018.11.12a', 'initrd.2018.11.08b', 'sil.2018.11.12a' and 'srcu.2018.11.27a' into HEAD bug.2018.11.12a: Get rid of BUG_ON() and friends consolidate.2018.12.01a: Continued RCU flavor-consolidation cleanup doc.2018.11.12a: Documentation updates fixes.2018.11.12a: Miscellaneous fixes initrd.2018.11.08b: Automate creation of rcutorture initrd sil.2018.11.12a: Remove more spin_unlock_wait() calls
| | | | * srcu: Use "ssp" instead of "sp" for srcu_struct pointerPaul E. McKenney2018-11-272-304/+304
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In RCU, the distinction between "rsp", "rnp", and "rdp" has served well for a great many years, but in SRCU, "sp" vs. "sdp" has proven confusing. This commit therefore renames SRCU's "sp" pointers to "ssp", so that there is "ssp" for srcu_struct pointer, "snp" for srcu_node pointer, and "sdp" for srcu_data pointer. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
| | | | * srcu: Lock srcu_data structure in srcu_gp_start()Dennis Krein2018-11-271-0/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The srcu_gp_start() function is called with the srcu_struct structure's ->lock held, but not with the srcu_data structure's ->lock. This is problematic because this function accesses and updates the srcu_data structure's ->srcu_cblist, which is protected by that lock. Failing to hold this lock can result in corruption of the SRCU callback lists, which in turn can result in arbitrarily bad results. This commit therefore makes srcu_gp_start() acquire the srcu_data structure's ->lock across the calls to rcu_segcblist_advance() and rcu_segcblist_accelerate(), thus preventing this corruption. Reported-by: Bart Van Assche <bvanassche@acm.org> Reported-by: Christoph Hellwig <hch@infradead.org> Reported-by: Sebastian Kuzminsky <seb.kuzminsky@gmail.com> Signed-off-by: Dennis Krein <Dennis.Krein@netapp.com> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com> Tested-by: Dennis Krein <Dennis.Krein@netapp.com> Cc: <stable@vger.kernel.org> # 4.16.x
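The shape of the fix, sketched from the description; the lock wrapper names are assumed to follow the *_rcu_node() convention used elsewhere in srcutree.c, and the surrounding code is elided:

    sdp = this_cpu_ptr(ssp->sda);
    spin_lock_rcu_node(sdp);        /* Added: ->srcu_cblist needs this lock. */
    rcu_segcblist_advance(&sdp->srcu_cblist,
                          rcu_seq_current(&ssp->srcu_gp_seq));
    (void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
                                   rcu_seq_snap(&ssp->srcu_gp_seq));
    spin_unlock_rcu_node(sdp);      /* Added. */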
| | | | * srcu: Prevent __call_srcu() counter wrap with read-side critical sectionPaul E. McKenney2018-11-081-0/+3
| | | |/ | | |/| Ever since cdf7abc4610a ("srcu: Allow use of Tiny/Tree SRCU from both process and interrupt context"), it has been permissible to use SRCU read-side critical sections in interrupt context. This allows __call_srcu() to use SRCU read-side critical sections to prevent a new SRCU grace period from ending before the call to either srcu_funnel_gp_start() or srcu_funnel_exp_start() completes, thus preventing SRCU grace-period counter overflow during that time. Note that this does not permit removal of the counter-wrap checks in srcu_gp_end(). These checks are necessary to handle the case where a given CPU does not interact at all with SRCU for an extended time period. This commit therefore adds an SRCU read-side critical section to __call_srcu() in order to prevent grace period counter wrap during the funnel-locking process. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
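Conceptually, the change brackets the funnel-locking sequence with an SRCU read-side critical section so that the current grace period cannot complete (and the counters cannot wrap) underneath it. A sketch, with the enqueue and snapshot code elided:

    idx = srcu_read_lock(ssp);      /* Added: pin the current grace period. */
    /* ... enqueue callback, snapshot ->srcu_gp_seq into s ... */
    if (needgp)
            srcu_funnel_gp_start(ssp, sdp, s, do_norm);
    else if (needexp)
            srcu_funnel_exp_start(ssp, sdp->mynode, s);
    srcu_read_unlock(ssp, idx);     /* Added. */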
| | | * rcu: Avoid signed integer overflow in rcu_preempt_deferred_qs()Paul E. McKenney2018-11-121-8/+13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Subtracting INT_MIN can be interpreted as unconditional signed integer overflow, which according to the C standard is undefined behavior. Therefore, kernel build arguments notwithstanding, it would be good to future-proof the code. This commit therefore substitutes INT_MAX for INT_MIN in order to avoid undefined behavior. While in the neighborhood, this commit also creates some meaningful names for INT_MAX and friends in order to improve readability, as suggested by Joel Fernandes. Reported-by: Ran Rozenstein <ranro@mellanox.com> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
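The point is that subtracting INT_MIN from a non-negative int overflows signed arithmetic (undefined behavior in C), whereas biasing by INT_MAX achieves the same "make the nesting count look enormous" effect without overflow. A sketch of the substitution; the constant name follows the message's suggestion and may differ from the final code:

    #define RCU_NEST_BIAS INT_MAX   /* Bias applied while reporting a deferred QS. */

    /* Before: the result exceeds INT_MAX, so this is undefined behavior. */
    t->rcu_read_lock_nesting -= INT_MIN;
    ...
    t->rcu_read_lock_nesting += INT_MIN;

    /* After: same trick, no overflow. */
    t->rcu_read_lock_nesting -= RCU_NEST_BIAS;
    ...
    t->rcu_read_lock_nesting += RCU_NEST_BIAS;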
| | | * rcu: Replace this_cpu_ptr() with __this_cpu_read()Paul E. McKenney2018-11-121-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | Because __this_cpu_read() can be lighter weight than equivalent uses of this_cpu_ptr(), this commit replaces the latter with the former. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
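The transformation in the abstract (the structure and field shown are stand-ins for whichever per-CPU datum the patch touches):

    /* Before: materializes a per-CPU pointer, then dereferences one field. */
    if (this_cpu_ptr(&rcu_data)->some_flag)
            do_something();

    /* After: a single per-CPU load of just that field. */
    if (__this_cpu_read(rcu_data.some_flag))
            do_something();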
| | | * rcu: Speed up expedited GPs when interrupting RCU readerPaul E. McKenney2018-11-122-4/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In PREEMPT kernels, an expedited grace period might send an IPI to a CPU that is executing an RCU read-side critical section. In that case, it would be nice if the rcu_read_unlock() directly interacted with the RCU core code to immediately report the quiescent state. And this does happen in the case where the reader has been preempted. But it would also be a nice performance optimization if immediate reporting also happened in the preemption-free case. This commit therefore adds an ->exp_hint field to the task_struct structure's ->rcu_read_unlock_special field. The IPI handler sets this hint when it has interrupted an RCU read-side critical section, and this causes the outermost rcu_read_unlock() call to invoke rcu_read_unlock_special(), which, if preemption is enabled, reports the quiescent state immediately. If preemption is disabled, then the report is required to be deferred until preemption (or bottom halves or interrupts or whatever) is re-enabled. Because this is a hint, it does nothing for more complicated cases. For example, if the IPI interrupts an RCU reader, but interrupts are disabled across the rcu_read_unlock(), but another rcu_read_lock() is executed before interrupts are re-enabled, the hint will already have been cleared. If you do crazy things like this, reporting will be deferred until some later RCU_SOFTIRQ handler, context switch, cond_resched(), or similar. Reported-by: Joel Fernandes <joel@joelfernandes.org> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com> Acked-by: Joel Fernandes (Google) <joel@joelfernandes.org>
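A sketch of the hint's life cycle, assuming the existing bitfield layout of ->rcu_read_unlock_special (the deferred-reporting paths are elided):

    /* Expedited IPI handler: note that it interrupted an RCU reader. */
    if (t->rcu_read_lock_nesting > 0)
            WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, true);

    /* Outermost rcu_read_unlock(): any special bit forces the slow path... */
    if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
            rcu_read_unlock_special(t);

    /* ...which clears the hint and, if preemption is enabled, reports the
     * quiescent state immediately rather than deferring it. */
    WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false);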
| | | * rcu: Trace end of grace period before end of grace periodPaul E. McKenney2018-11-121-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, rcu_gp_cleanup() traces the end of the old grace period after the old grace period has officially ended. This might make intuitive sense, but it also makes for confusing event-trace output because the "end" trace displays not the old but instead the new grace-period number. This commit therefore traces the end of an old grace period just before that grace period officially ends. Reported-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
| | | * rcu: Adjust the comment of function rcu_is_watchingZhouyi Zhou2018-11-121-3/+3
| | | | Because RCU avoids interrupting idle CPUs, rcu_is_watching() is used to test whether or not it is currently legal to run RCU read-side critical sections on this CPU. However, the first and last sentences of the current comment for rcu_is_watching() say the opposite of what is intended. This commit therefore fixes this header comment. Signed-off-by: Zhouyi Zhou <zhouzhouyi@gmail.com> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
| | | * rcu: Add jiffies-since-GP-activity to show_rcu_gp_kthreads()Paul E. McKenney2018-11-121-3/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit adds a printout of the number of jiffies since the last time that the RCU grace-period kthread did any processing. This can be useful when tracking down forward-progress issues. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
| | | * rcu: Add state name to show_rcu_gp_kthreads() outputPaul E. McKenney2018-11-121-12/+13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit adds the name of the RCU grace-period state to the show_rcu_gp_kthreads() output in order to ease debugging. This commit also moves gp_state_getname() up in the code so that show_rcu_gp_kthreads() can use it. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
| | | * rcu: Parameterize rcu_check_gp_start_stall()Paul E. McKenney2018-11-121-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In order to debug forward-progress stalls, it is necessary to check for excessively delayed grace-period starts. This is currently done for RCU CPU stall warnings by rcu_check_gp_start_stall(), which checks to see if the start of a requested grace period has been delayed by an RCU CPU stall warning period. Because rcutorture will need to check for the time consumed by an RCU forward-progress delay, this commit promotes gpssdelay from a local variable to a formal parameter. It is not necessary to export rcu_check_gp_start_stall() because rcutorture will access it via a wrapper function. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
| | | * rcu: Avoid double multiply by HZPaul E. McKenney2018-11-121-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The rcu_check_gp_start_stall() function multiplies the return value from rcu_jiffies_till_stall_check() by HZ, but the units are already in jiffies. This commit therefore avoids the need for introduction of a jiffies-squared unit by removing the extraneous multiplication. Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
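Because rcu_jiffies_till_stall_check() already returns jiffies, the extra multiplication yields a jiffies-squared quantity; the fix is just to drop it. A sketch (the comparison context and the lastactivity/complain names are stand-ins):

    gpssdelay = rcu_jiffies_till_stall_check();     /* Already in jiffies. */

    /* Before: effectively scales the delay by HZ a second time. */
    if (time_after(jiffies, lastactivity + gpssdelay * HZ))
            complain();

    /* After: compare in plain jiffies. */
    if (time_after(jiffies, lastactivity + gpssdelay))
            complain();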
| | | * rcu: Stop expedited grace periods from relying on stop-machinePaul E. McKenney2018-11-111-2/+4
| | |/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The CPU-selection code in sync_rcu_exp_select_cpus() disables preemption to prevent the cpu_online_mask from changing. However, this relies on the stop-machine mechanism in the CPU-hotplug offline code, which is not desirable (it would be good to someday remove the stop-machine mechanism). This commit therefore instead uses the relevant leaf rcu_node structure's ->ffmask, which has a bit set for all CPUs that are fully functional. A given CPU's bit is cleared very early during offline processing by rcutree_offline_cpu() and set very late during online processing by rcutree_online_cpu(). Therefore, if a CPU's bit is set in this mask, and preemption is disabled, we have to be before the synchronize_sched() in the CPU-hotplug offline code, which means that the CPU is guaranteed to be workqueue-ready throughout the duration of the enclosing preempt_disable() region of code. This also has the side-effect of using WORK_CPU_UNBOUND if all the CPUs for this leaf rcu_node structure are offline, which is an acceptable difference in behavior. Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Consolidate the RCU update functions invoked by sync.cPaul E. McKenney2018-11-081-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit retains all the various gp_ops[] entries, but makes their update functions all be synchronize_rcu(), call_rcu() and rcu_barrier(). The read-side checks remain consistent with the various RCU flavors, which still exist on the read side. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
| * | rcu: Eliminate synchronize_rcu_mult()Paul E. McKenney2018-11-081-4/+2
| | | | | | | | | Now that synchronize_rcu() waits for both RCU read-side critical sections and preempt-disabled regions of code, the sole caller of synchronize_rcu_mult() can be replaced by synchronize_rcu(). This patch makes this change and removes synchronize_rcu_mult(). Note that _wait_rcu_gp() still supports synchronize_rcu_mult(), and thus might be simplified in the future to take only a single call_rcu() function rather than the current list of them. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | rcu: Fix rcu_{node,data} comments about gp_seq_neededJoel Fernandes (Google)2018-11-081-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Recent changes have removed the old ->gp_seq_needed field from the rcu_state structure, which in turn obsoleted a couple of comments in the rcu_node and rcu_data structures. This commit therefore updates these comments accordingly. Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org> Cc: <kernel-team@android.com> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
| * | rcu: Remove unused rcu_state externsJoel Fernandes (Google)2018-11-081-11/+0
| |/ | | | | | | | | | | | | | | | | | | The rcu_bh_state and rcu_sched_state variables were removed during the RCU flavor consolidations, but external declarations remain in tree.h. This commit therefore removes these obsolete declarations. Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org> Cc: <kernel-team@android.com> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* | rcu: Eliminate BUG_ON() for kernel/rcu/update.cPaul E. McKenney2018-11-121-1/+2
| | | | | | | | | | | | | | | | | | | | The update.c file has a number of calls to BUG_ON(), which panics the kernel, which is not a good strategy for devices (like embedded) that don't have a way to capture console output. This commit therefore converts these BUG_ON() calls to WARN_ON_ONCE() and WARN_ONCE(). Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
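The conversion pattern used in this and the following BUG_ON()-removal patches: keep the check, but warn once and try to continue instead of panicking, adding an early return or similar where the old code relied on never coming back. For example:

    /* Before: a failed sanity check takes down the whole kernel. */
    BUG_ON(!t);

    /* After: complain loudly (once), then attempt to limp along. */
    if (WARN_ON_ONCE(!t))
            return;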
* | rcu: Eliminate BUG_ON() for kernel/rcu/tree_plugin.hPaul E. McKenney2018-11-121-3/+6
| | | | | | | | | | | | | | | | | | | | | | The tree_plugin.h file has a number of calls to BUG_ON(), which panics the kernel, which is not a good strategy for devices (like embedded) that don't have a way to capture console output. This commit therefore converts these BUG_ON() calls to WARN_ON_ONCE() and WARN_ONCE(). Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com> [ paulmck: Fix typo: s/rcuo/rcub/. ]
* | rcu: Eliminate BUG_ON() for kernel/rcu/tree.cPaul E. McKenney2018-11-081-2/+3
| | | | | | | | | | | | | | | | | | | | The tree.c file has a number of calls to BUG_ON(), which panics the kernel, which is not a good strategy for devices (like embedded) that don't have a way to capture console output. This commit therefore converts these BUG_ON() calls to WARN_ON_ONCE() and WARN_ONCE(). Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
* | rcu: Eliminate BUG_ON() for sync.cPaul E. McKenney2018-11-081-7/+6
|/ | | | | | | | | | | | The sync.c file has a number of calls to BUG_ON(), which panics the kernel, which is not a good strategy for devices (like embedded) that don't have a way to capture console output. This commit therefore changes these BUG_ON() calls to WARN_ON_ONCE(), but does so quite naively. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com> Acked-by: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
*---. Merge branches 'doc.2018.08.30a', 'dynticks.2018.08.30b', 'srcu.2018.08.30b' ↵Paul E. McKenney2018-08-3012-2385/+2004
|\ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | and 'torture.2018.08.29a' into HEAD doc.2018.08.30a: Documentation updates dynticks.2018.08.30b: RCU flavor consolidation updates and cleanups srcu.2018.08.30b: SRCU updates torture.2018.08.29a: Torture-test updates
| | | * rcutorture: Maintain self-propagating CB only during forward-progress testPaul E. McKenney2018-08-291-5/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The current forward-progress testing maintains a self-propagating callback during the full test. This could result in false negatives for stutter-end checking, where it might appear that RCU was clearing out old callbacks only because it was being continually motivated by the self-propagating callback. This commit therefore shuts down the self-propagating callback at the end of each forward-progress test interval. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Check GP completion at stutter endPaul E. McKenney2018-08-291-1/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The rcu_torture_writer() function invokes stutter_wait() at the end of each writer pass, which occasionally blocks for an extended time period in order to ensure that RCU can handle intermittent loads. But part of handling a busy period is invoking all the callbacks before the end of the idle period induced by stutter_wait(). This commit therefore adds a return value to stutter_wait() indicating whether stutter_wait() actually waited. In addition, this commit causes rcu_torture_writer() to test this value and if set, checks that all the elements of the rcu_tortures[] array have been freed up. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Print forward-progress test interval on errorPaul E. McKenney2018-08-291-5/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This commit prints the duration of the forward-progress test interval in the case that no forward progress was observed as an aid to debugging. When forward progress does happen, it prints out the number of rcu_torture_writer() versions and grace periods that elapsed during the forward-progress test. At the end of the run, it also prints the number of attempted and actual forward-progress tests. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Adjust number of reader kthreads per CPU-hotplug operationsPaul E. McKenney2018-08-291-2/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, rcutorture provisions rcu_torture_reader() kthreads based on the initial number of CPUs. This can be problematic when CPU hotplug is enabled, as a system with a very large number of CPUs will provision a very large number of rcu_torture_reader() kthreads. All of these kthreads will continue running even if the CPU-hotplug operations result in only one remaining online CPU. This can result in all sorts of strange artifacts due simply to massive overload. This commit therefore causes the rcu_torture_reader() kthreads to start blocking as the number of online CPUs decreases. This is accomplished by numbering these kthreads, and having each check to make sure that the number of online CPUs is at least as large as its assigned number. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
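A sketch of the throttling check inside the reader's main loop, assuming each rcu_torture_reader() kthread is handed its index myid at creation time (the wait interval shown is illustrative):

    /* Reader myid parks itself whenever fewer than myid + 1 CPUs are online. */
    while (num_online_cpus() < myid + 1 && !torture_must_stop())
            schedule_timeout_interruptible(HZ / 5);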
| | | * rcutorture: Reduce priority of forward-progress testingPaul E. McKenney2018-08-291-0/+2
| | | | | | | | | | | | On !SMP tests, the forward-progress kthread might prevent RCU's grace-period kthread from running, which would defeat RCU's forward-progress measures. On PREEMPT tests without RCU priority boosting, the forward-progress kthread might preempt a reader for an extended time period, which would also defeat RCU's forward-progress measures. This commit therefore reduces rcutorture's forward-progress kthread's priority in those cases. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Limit reader duration if irq or bh disabledPaul E. McKenney2018-08-291-1/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | There are debug checks in some environments that will complain if the duration of a bh-disabled region of code exceeds about 50 milliseconds. Because rcu_read_delay() can produce a 50-millisecond delay and because there could be up to eight reader segments with such delays, this commit limits the maximum delay to 10 milliseconds if either interrupts or softirqs are disabled. Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Increase rcu_read_delay() longdelay_msPaul E. McKenney2018-08-291-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | RCU now takes certain actions 100 and 200 milliseconds into a grace period by default, but rcutorture only runs RCU read-side critical sections with durations up to 50 milliseconds. This commit therefore increases test coverage by increasing the maximum critical-section duration to 300 milliseconds. Note that the existing code automatically dials down the probability of long delays based on the maximum duration, which means that this change should not significantly change the rate of execution of RCU read-side critical sections. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Add self-propagating callback to forward-progress testingPaul E. McKenney2018-08-291-0/+36
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If rcutorture is run on a quiet system with the rcutorture.stutter module parameter set high, then there can legitimately be an extended period during which no RCU forward progress takes place. This can result in false-positive no-forward-progress splats. This commit therefore makes rcu_torture_fwd_prog() create a self-propagating RCU callback to ensure that grace periods are in progress for the duration of the forward-progress test. Note that the RCU flavor under test must define ->call(), ->sync(), and ->cb_barrier() for this self-propagating callback to be created. If one or more of those rcu_torture_ops fields are NULL, then the rcu_torture_fwd_prog() function will silently proceed without creating the self-propagating callback. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
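A self-propagating callback is simply a callback that re-posts itself until told to stop. A sketch close to the description (the structure and field names are assumptions):

    struct fwd_cb_state {
            struct rcu_head rh;
            int stop;
    };

    static void rcu_torture_fwd_prog_cb(struct rcu_head *rhp)
    {
            struct fwd_cb_state *fcsp = container_of(rhp, struct fwd_cb_state, rh);

            if (READ_ONCE(fcsp->stop)) {
                    WRITE_ONCE(fcsp->stop, 2);      /* Acknowledge shutdown request. */
                    return;
            }
            cur_ops->call(&fcsp->rh, rcu_torture_fwd_prog_cb);      /* Repost. */
    }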
| | | * rcutorture: Vary forward-progress test intervalPaul E. McKenney2018-08-291-1/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Some of the Linux kernel's RCU implementations provide several mechanisms to promote forward progress that operate over different timeframes. This commit therefore causes rcu_torture_fwd_prog() to vary the duration of its forward-progress testing in order to test each such mechanism. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Avoid no-test complaint if too few forward-progress triesPaul E. McKenney2018-08-291-1/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In a too-short test, random delays can cause each attempt to do forward-progress testing to fail to complete, thus resulting in spurious splats. This commit therefore requires at least five tries before complaining about rcutorture runs that failed to produce at least one valid forward-progress testing attempt. Note that actual forward-progress failures will splat regardless of the number of tries. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Also use GP sequence to judge forward progressPaul E. McKenney2018-08-291-4/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, rcutorture relies solely on the progress of rcu_torture_writer() to judge grace-period forward progress. In theory, this is the gold standard of forward progress, but in practice rcutorture separately detects and reports rcu_torture_writer() stalls. This commit therefore adds the grace-period sequence number (when provided) to the judgment of grace-period forward progress, which makes it easier to distinguish between failure of actual grace periods to progress on the one hand and downstream forward-progress failures on the other. For example, given this change, if rcu_torture_writer() stalls, but rcu_torture_fwd_prog() does not complain, then the grace-period computation is working, which is a hint that the failure lies in callback processing, wakeup of the rcu_torture_writer() kthread, or similar. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| | | * rcutorture: Add forward-progress tests for RCU grace periodsPaul E. McKenney2018-08-292-1/+73
| | | | | | | | | | | | | This commit adds a kthread that loops going into and out of RCU read-side critical sections, but also including a cond_resched(), optionally guarded by a check of need_resched(), in that same loop. This commit relies solely on rcu_torture_writer() progress to judge the forward progress of grace periods. Note that Tasks RCU and SRCU are exempted from forward-progress testing due to their (intentionally) less-robust forward-progress guarantees. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>