path: root/kernel/locking/qspinlock.c
Commit message                                                               | Author                 | Age        | Files | Lines
locking/qspinlock (and treewide/locking) commit history, newest first:
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157           | Thomas Gleixner        | 2019-05-30 | 1 | -10/+1
locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs        | Waiman Long            | 2019-04-10 | 1 | -4/+4
locking/qspinlock: Remove unnecessary BUG_ON() call                          | Waiman Long            | 2019-02-28 | 1 | -3/+0
locking/qspinlock_stat: Track the no MCS node available case                 | Waiman Long            | 2019-02-04 | 1 | -1/+2
locking/qspinlock: Handle > 4 slowpath nesting levels                        | Waiman Long            | 2019-02-04 | 1 | -0/+15
locking/pvqspinlock: Extend node size when pvqspinlock is configured         | Waiman Long            | 2018-10-17 | 1 | -8/+26
locking/qspinlock_stat: Count instances of nested lock slowpaths             | Waiman Long            | 2018-10-17 | 1 | -0/+5
locking/qspinlock, x86: Provide liveness guarantee                           | Peter Zijlstra         | 2018-10-16 | 1 | -1/+15
locking/qspinlock: Rework some comments                                      | Peter Zijlstra         | 2018-10-16 | 1 | -10/+26
locking/qspinlock: Re-order code                                             | Peter Zijlstra         | 2018-10-16 | 1 | -29/+27
locking/qspinlock: Add stat tracking for pending vs. slowpath                | Waiman Long            | 2018-04-27 | 1 | -3/+11
locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking       | Will Deacon            | 2018-04-27 | 1 | -10/+9
locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb()      | Will Deacon            | 2018-04-27 | 1 | -16/+17
locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node         | Will Deacon            | 2018-04-27 | 1 | -4/+2
locking/qspinlock: Use atomic_cond_read_acquire()                            | Will Deacon            | 2018-04-27 | 1 | -6/+6
locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue | Will Deacon            | 2018-04-27 | 1 | -11/+8
locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath     | Will Deacon            | 2018-04-27 | 1 | -44/+58
locking/qspinlock: Bound spinning on pending->locked transition in slowpath  | Will Deacon            | 2018-04-27 | 1 | -3/+17
locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock'        | Will Deacon            | 2018-04-27 | 1 | -43/+3
locking/qspinlock: Ensure node->count is updated before initialising node    | Will Deacon            | 2018-02-13 | 1 | -0/+8
locking/qspinlock: Ensure node is initialised before updating prev->next     | Will Deacon            | 2018-02-13 | 1 | -6/+7
locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()  | Paul E. McKenney       | 2017-12-04 | 1 | -7/+5
locking: Remove spin_unlock_wait() generic definitions                       | Paul E. McKenney       | 2017-08-17 | 1 | -117/+0
locking/qspinlock: Explicitly include asm/prefetch.h                         | Stafford Horne         | 2017-07-08 | 1 | -0/+1
locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec() | Pan Xinhui             | 2016-06-27 | 1 | -1/+1
locking/barriers: Introduce smp_acquire__after_ctrl_dep()                    | Peter Zijlstra         | 2016-06-14 | 1 | -1/+1
locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()    | Peter Zijlstra         | 2016-06-14 | 1 | -6/+6
locking/qspinlock: Add comments                                              | Peter Zijlstra         | 2016-06-08 | 1 | -0/+57
locking/qspinlock: Clarify xchg_tail() ordering                              | Peter Zijlstra         | 2016-06-08 | 1 | -2/+13
locking/qspinlock: Fix spin_unlock_wait() some more                          | Peter Zijlstra         | 2016-06-08 | 1 | -0/+60
locking/qspinlock: Use smp_cond_acquire() in pending code                    | Waiman Long            | 2016-02-29 | 1 | -4/+3
locking/pvqspinlock: Queue node adaptive spinning                            | Waiman Long            | 2015-12-04 | 1 | -2/+3
locking/pvqspinlock: Allow limited lock stealing                             | Waiman Long            | 2015-12-04 | 1 | -6/+20
locking, sched: Introduce smp_cond_acquire() and use it                      | Peter Zijlstra         | 2015-12-04 | 1 | -2/+1
locking/qspinlock: Avoid redundant read of next pointer                      | Waiman Long            | 2015-11-23 | 1 | -3/+6
locking/qspinlock: Prefetch the next node cacheline                          | Waiman Long            | 2015-11-23 | 1 | -0/+10
locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg()    | Waiman Long            | 2015-11-23 | 1 | -5/+24
locking/qspinlock/x86: Fix performance regression under unaccelerated VMs    | Peter Zijlstra         | 2015-09-11 | 1 | -1/+1
locking/pvqspinlock: Only kick CPU at unlock time                            | Waiman Long            | 2015-08-03 | 1 | -3/+3
locking/pvqspinlock: Implement simple paravirt support for the qspinlock     | Waiman Long            | 2015-05-08 | 1 | -1/+67
locking/qspinlock: Revert to test-and-set on hypervisors                     | Peter Zijlstra (Intel) | 2015-05-08 | 1 | -0/+3
locking/qspinlock: Use a simple write to grab the lock                       | Waiman Long            | 2015-05-08 | 1 | -16/+50
locking/qspinlock: Optimize for smaller NR_CPUS                              | Peter Zijlstra (Intel) | 2015-05-08 | 1 | -1/+68
locking/qspinlock: Extract out code snippets for the next patch              | Waiman Long            | 2015-05-08 | 1 | -31/+48
locking/qspinlock: Add pending bit                                           | Peter Zijlstra (Intel) | 2015-05-08 | 1 | -21/+98
locking/qspinlock: Introduce a simple generic 4-byte queued spinlock         | Waiman Long            | 2015-05-08 | 1 | -0/+209