| author | Jason Low <jason.low2@hp.com> | 2018-04-26 11:34:22 +0100 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2018-04-27 09:48:49 +0200 |
| commit | 7f56b58a92aaf2cab049f32a19af7cc57a3972f2 | |
| tree | 7b211626cbb2ec12d0e504b359f486f33e4a37ba /kernel/locking | |
| parent | f9c811fac48cfbbfb452b08d1042386947868d07 | |
locking/mcs: Use smp_cond_load_acquire() in MCS spin loop
For qspinlocks on ARM64, we would like to use WFE instead of purely spinning. Qspinlocks internally have lock contenders spin on an MCS lock.

Update arch_mcs_spin_lock_contended() such that it uses the new smp_cond_load_acquire() so that ARM64 can also override this spin loop with its own implementation using WFE.

On x86, this can also be cheaper than spinning on smp_load_acquire().
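For reference, the generic fallback for smp_cond_load_acquire() spins with cpu_relax() until the condition expression holds and then upgrades the resulting control dependency to acquire ordering; the sketch below is abridged from the asm-generic barrier code of that era, not verbatim. The special variable VAL names the freshly loaded value inside the condition:

```c
/*
 * Abridged sketch of the generic smp_cond_load_acquire() fallback.
 * cond_expr may refer to the just-loaded value through VAL; the
 * final barrier gives the load acquire semantics. Architectures
 * such as ARM64 can override this to wait with WFE instead of
 * spinning.
 */
#define smp_cond_load_acquire(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*ptr) VAL;				\
	for (;;) {					\
		VAL = READ_ONCE(*__PTR);		\
		if (cond_expr)				\
			break;				\
		cpu_relax();				\
	}						\
	smp_acquire__after_ctrl_dep();			\
	VAL;						\
})
```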
Signed-off-by: Jason Low <jason.low2@hp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-9-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/locking')
-rw-r--r-- | kernel/locking/mcs_spinlock.h | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
```diff
diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
index f046b7ce9dd6..5e10153b4d3c 100644
--- a/kernel/locking/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -23,13 +23,15 @@ struct mcs_spinlock {
 
 #ifndef arch_mcs_spin_lock_contended
 /*
- * Using smp_load_acquire() provides a memory barrier that ensures
- * subsequent operations happen after the lock is acquired.
+ * Using smp_cond_load_acquire() provides the acquire semantics
+ * required so that subsequent operations happen after the
+ * lock is acquired. Additionally, some architectures such as
+ * ARM64 would like to do spin-waiting instead of purely
+ * spinning, and smp_cond_load_acquire() provides that behavior.
  */
 #define arch_mcs_spin_lock_contended(l)					\
 do {									\
-	while (!(smp_load_acquire(l)))					\
-		cpu_relax();						\
+	smp_cond_load_acquire(l, VAL);					\
 } while (0)
 #endif
 
```
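In the new body, passing plain VAL as the condition means "spin until the value loaded from *l is non-zero", which matches the check the removed while (!(smp_load_acquire(l))) loop performed. For context, here is an abridged sketch (not the verbatim source) of the caller in this same header, mcs_spin_lock(), which waits on its own queue node's locked flag:

```c
/*
 * Abridged sketch of the caller in this header (not verbatim).
 * Each CPU spins only on its own node->locked flag, which the
 * previous lock holder sets when passing the lock down.
 */
static inline
void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
{
	struct mcs_spinlock *prev;

	node->locked = 0;			/* our private wait flag */
	node->next   = NULL;

	prev = xchg(lock, node);		/* atomically become the tail */
	if (likely(prev == NULL))
		return;				/* queue was empty: lock acquired */

	WRITE_ONCE(prev->next, node);		/* link behind the old tail */

	/* Wait until the previous owner stores a non-zero value here. */
	arch_mcs_spin_lock_contended(&node->locked);
}
```

With the ARM64 override described in the commit message, that wait can be implemented with WFE rather than a busy loop, so a contended CPU effectively sleeps until the unlocker's store to node->locked wakes it.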