path: root/arch/sparc/include/asm/backoff.h
Commit message | Author | Age | Files | Lines
* sparc64: Improve documentation and readability of atomic backoff code. (David S. Miller, 2012-10-28; 1 file changed, +41/-1)
  Document what's going on in asm/backoff.h with a large and descriptive comment. Refer to it above the cpu_relax() definition in asm/processor_64.h.

  Rename the pause patching section to have "3insn" in its name like the other patching sections do.

  Based upon feedback from Sam Ravnborg.

  Signed-off-by: David S. Miller <davem@davemloft.net>
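  For context, the comment added here documents how these macros are consumed by the cas-based retry loops in arch/sparc/lib. Below is a minimal sketch of such a caller, modeled on atomic_64.S; the register choices and labels are illustrative rather than copied verbatim from the tree:

	! Illustrative atomic add: on a cas failure, branch into the
	! backoff spin section instead of immediately re-hitting the line.
	ENTRY(atomic_add)			! %o0 = increment, %o1 = &counter
		BACKOFF_SETUP(%o2)		! start the backoff counter at 1
	1:	lduw	[%o1], %g1
		add	%g1, %o0, %g7
		cas	[%o1], %g1, %g7		! attempt the update
		cmp	%g1, %g7
		bne,pn	%icc, BACKOFF_LABEL(2f, 1b)	! SMP: go spin; UP: just retry
		 nop
		retl
		 nop
	2:	BACKOFF_SPIN(%o2, %o3, 1b)	! delay, grow the counter, retry at 1b
	ENDPROC(atomic_add)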
* sparc64: Use pause instruction when available. (David S. Miller, 2012-10-27; 1 file changed, +19/-13)
  In atomic backoff and cpu_relax(), use the pause instruction found on SPARC-T4 and later. It makes the cpu strand unselectable for the given number of cycles, unless an intervening disrupting trap occurs.

  Signed-off-by: David S. Miller <davem@davemloft.net>
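  Roughly, this works by boot-time patching of the three-instruction delay body: the default kernel keeps the %ccr reads, while a patch section records their address so that on SPARC-T4 and later they can be overwritten with a pause (a write to %asr27). The fragment below is a simplified sketch of that shape, not a verbatim copy of backoff.h; "tmp" stands for whatever scratch register the macro is handed:

	! Default delay body: three harmless %ccr reads that give up the strand.
	88:	rd	%ccr, %g0
		rd	%ccr, %g0
		rd	%ccr, %g0
		! Patch record: on SPARC-T4+ the boot-time patcher rewrites the
		! three instructions at 88b with this pause-based sequence.
		.section	.pause_3insn_patch, "ax"
		.word		88b
		sllx		tmp, 7, tmp	! scale the backoff counter into cycles
		wr		tmp, 0, %asr27	! pause: strand unselectable for 'tmp' cycles
		clr		tmp
		.previous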
* sparc64: Fix cpu strand yielding. (David S. Miller, 2012-10-27; 1 file changed, +4/-1)
  For atomic backoff, we just loop over an exponentially backed off counter. This is extremely ineffective as it doesn't actually yield the cpu strand so that other competing strands can use the cpu core.

  On cpus prior to SPARC-T4 we have to do this in a slightly hackish way, by doing an operation with no side effects that also happens to mark the strand as unavailable. The mechanism we choose for this is three reads of the %ccr (condition-code) register into %g0 (the zero register).

  SPARC-T4 has an explicit "pause" instruction, and we'll make use of that in a subsequent commit.

  Also yield the strand in cpu_relax(). We really should have done this a very long time ago.

  Signed-off-by: David S. Miller <davem@davemloft.net>
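  Concretely, the pre-T4 cpu_relax() this change describes boils down to something like the following (a simplified sketch; the exact inline asm lives in asm/processor_64.h):

	/* Burn a few cycles in a way that marks this hardware strand as
	 * unselectable, letting sibling strands make use of the core.	*/
	#define cpu_relax()					\
		asm volatile("rd	%%ccr, %%g0\n\t"	\
			     "rd	%%ccr, %%g0\n\t"	\
			     "rd	%%ccr, %%g0"		\
			     : : : "memory")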
* sparc64: Make lock backoff really a NOP on UP builds. (David S. Miller, 2010-08-18; 1 file changed, +8/-3)
  As noticed by Mikulas Patocka, the backoff macros don't completely nop out for UP builds; we still get a branch-always and a delay slot nop. Fix this by making the branch to the backoff spin loop selective, so that the spin loop can be nopped out completely.

  Signed-off-by: David S. Miller <davem@davemloft.net>
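  A condensed sketch of the shape this gives the header (simplified; the real SMP spin body is elided, the UP definitions are the point here):

	#ifdef CONFIG_SMP
	#define BACKOFF_SETUP(reg)	mov	1, reg
	#define BACKOFF_LABEL(spin_label, continue_label)	spin_label
	#define BACKOFF_SPIN(reg, tmp, label)	/* exponential spin loop ... */
	#else
	/* UP: no lock contention is possible, so every backoff hook compiles
	 * away and the failure branch goes straight back to the retry label. */
	#define BACKOFF_SETUP(reg)
	#define BACKOFF_LABEL(spin_label, continue_label)	continue_label
	#define BACKOFF_SPIN(reg, tmp, label)
	#endif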
* sparc, sparc64: use arch/sparc/include (Sam Ravnborg, 2008-07-27; 1 file changed, +31/-0)
  The majority of this patch was created by the following script:

	***
	ASM=arch/sparc/include/asm
	mkdir -p $ASM
	git mv include/asm-sparc64/ftrace.h $ASM
	git rm include/asm-sparc64/*
	git mv include/asm-sparc/* $ASM
	sed -ie 's/asm-sparc64/asm/g' $ASM/*
	sed -ie 's/asm-sparc/asm/g' $ASM/*
	***

  The rest was an update of the top-level Makefile to use sparc for header files when sparc64 is being built, and a small fixlet to pick up the correct unistd.h from sparc64 code.

  Signed-off-by: Sam Ravnborg <sam@ravnborg.org>