author    Jason Low <jason.low2@hp.com>      2015-03-06 23:45:31 -0800
committer Ingo Molnar <mingo@kernel.org>     2015-03-07 09:50:49 +0100
commit    9198f6edfd9ced74fd90b238d5a354aeac89bdfa (patch)
tree      2985a9d40507f742620b25c4c6e267165f40031e /kernel/locking
parent    4d3199e4ca8e6670b54dc5ee070ffd54385988e9 (diff)
locking/rwsem: Fix lock optimistic spinning when owner is not running
Ming reported soft lockups occurring when running xfstests, caused by the following tip:locking/core commit:

  b3fd4f03ca0b ("locking/rwsem: Avoid deceiving lock spinners")

When doing optimistic spinning in rwsem, threads should stop spinning when the lock owner is not running. While a thread is spinning on the owner, if the owner reschedules, owner->on_cpu returns false and we stop spinning.

However, the commit above essentially caused that check to be ignored: when we break out of the spin loop due to !on_cpu, we keep spinning as long as sem->owner != NULL.

Fix this by making sure we stop spinning whenever the owner is not running. Furthermore, just as with mutexes, refactor the code so that there is no separate owner_running() check. This makes it more straightforward to see why we exit the spin-on-owner loop and avoids having to "guess" why we broke out of it.

Reported-and-tested-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jason Low <jason.low2@hp.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Link: http://lkml.kernel.org/r/1425714331.2475.388.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
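For illustration only, here is a minimal userspace sketch of the check this patch introduces: keep spinning only while the candidate owner is still the recorded owner and still on a CPU, and bail out as soon as either condition fails or the spinner itself needs to reschedule. The types task_model/rwsem_model and the helper should_resched() are hypothetical stand-ins, not kernel APIs; volatile stands in for the kernel's barrier()/RCU protection, and the real rwsem_spin_on_owner() additionally inspects sem->owner and sem->count after the loop, which is omitted here.

  /* Hypothetical userspace model of the spin-on-owner check; not kernel code. */
  #include <stdbool.h>
  #include <stdio.h>

  struct task_model {
          volatile bool on_cpu;                  /* models task_struct::on_cpu */
  };

  struct rwsem_model {
          struct task_model * volatile owner;    /* models sem->owner */
  };

  static bool should_resched(void)               /* stand-in for need_resched() */
  {
          return false;
  }

  /* Returns true if it is still worth spinning for the lock. */
  static bool spin_on_owner(struct rwsem_model *sem, struct task_model *owner)
  {
          while (sem->owner == owner) {
                  /* Stop as soon as the owner sleeps or we must yield. */
                  if (!owner->on_cpu || should_resched())
                          return false;
          }
          /* Owner changed (or dropped the lock): simplified "keep trying". */
          return true;
  }

  int main(void)
  {
          struct task_model owner = { .on_cpu = false };
          struct rwsem_model sem  = { .owner = &owner };

          /* The owner has been scheduled out: spinning stops immediately. */
          printf("keep spinning? %s\n", spin_on_owner(&sem, &owner) ? "yes" : "no");
          return 0;
  }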
Diffstat (limited to 'kernel/locking')
-rw-r--r--  kernel/locking/rwsem-xadd.c  31
1 file changed, 11 insertions(+), 20 deletions(-)
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 06e2214edf98..3417d0172a5d 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -324,32 +324,23 @@ done:
         return ret;
 }
 
-static inline bool owner_running(struct rw_semaphore *sem,
-                                 struct task_struct *owner)
-{
-        if (sem->owner != owner)
-                return false;
-
-        /*
-         * Ensure we emit the owner->on_cpu, dereference _after_ checking
-         * sem->owner still matches owner, if that fails, owner might
-         * point to free()d memory, if it still matches, the rcu_read_lock()
-         * ensures the memory stays valid.
-         */
-        barrier();
-
-        return owner->on_cpu;
-}
-
 static noinline
 bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
 {
         long count;
 
         rcu_read_lock();
-        while (owner_running(sem, owner)) {
-                /* abort spinning when need_resched */
-                if (need_resched()) {
+        while (sem->owner == owner) {
+                /*
+                 * Ensure we emit the owner->on_cpu, dereference _after_
+                 * checking sem->owner still matches owner, if that fails,
+                 * owner might point to free()d memory, if it still matches,
+                 * the rcu_read_lock() ensures the memory stays valid.
+                 */
+                barrier();
+
+                /* abort spinning when need_resched or owner is not running */
+                if (!owner->on_cpu || need_resched()) {
                         rcu_read_unlock();
                         return false;
                 }