author | Milton Miller <miltonm@bga.com> | 2011-05-10 19:29:46 +0000
committer | Benjamin Herrenschmidt <benh@kernel.crashing.org> | 2011-05-19 15:31:31 +1000
commit | 714542721b4a53a3ebbdd5f0619ac0f66e7df610 (patch)
tree | 50f79e4a44c0fe056e2a69e6347e7c8ae2722eff /arch/powerpc/kernel/.gitignore
parent | 1ece355b6825b7c61d1dc39a5c6cf49dc746e193 (diff)
powerpc: Use bytes instead of bitops in smp ipi multiplexing
Since there are only 4 messages, we can replace the atomic bit set
(which uses an atomic load-reserve and store-conditional sequence) with
byte stores to separate bytes. We still have to perform a load-reserve
and store-conditional sequence to avoid losing messages on reception,
but we can do that with a single call to xchg.
The do {} while and the __BIG_ENDIAN-specific mask testing were chosen
by looking at the generated asm code. On gcc-4.4, the bit masking
becomes a simple mask and test of the register returned from xchg,
without storing and loading the value on the stack as happened with
attempts using a union of bytes and an int (or worse, loading single-bit
constants from the constant pool into non-volatile registers that had
to be preserved on the stack). The do {} while avoids the unconditional
branch to the end of the loop that a while loop needs to test its
entry / repeat condition, and instead optimises for the expected single
iteration of the loop.
We have a full mb() at the beginning to cover ordering between send,
ipi, and receive, so we can use xchg_local and forgo the additional
acquire and release barriers of xchg.
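The scheme above can be sketched in userspace C, assuming the commit's general shape: four message slots packed into one word, plain byte stores on the send side, and a single atomic exchange on the receive side. The names below (`ipi_messages`, `muxed_ipi_set_sketch`, `ipi_demux_sketch`) are illustrative, not the kernel's, and `__atomic_exchange_n` stands in for the kernel's xchg_local; indexing the exchanged word as bytes sidesteps the __BIG_ENDIAN mask detail discussed above.

```c
#include <stdint.h>

/* Four IPI message kinds, one per byte slot (illustrative names). */
enum { MSG_CALL_FUNCTION, MSG_RESCHEDULE, MSG_CALL_FUNC_SINGLE, MSG_DEBUG, NMSG };

/* All pending messages for one CPU, viewable as one word or four bytes. */
union ipi_messages {
    uint32_t all;
    uint8_t  msg[NMSG];
};

static union ipi_messages inbox;
static int handled[NMSG];

/* Send side: a plain store to an independent byte.  No load-reserve /
 * store-conditional loop is needed because no other message's byte can
 * be clobbered by this store. */
static void muxed_ipi_set_sketch(int msg)
{
    inbox.msg[msg] = 1;
}

/* Receive side: one atomic exchange claims every pending message at
 * once, then each byte of the returned word is tested.  The do {}
 * while re-checks for messages that arrived during processing. */
static void ipi_demux_sketch(void)
{
    union ipi_messages got;

    do {
        got.all = __atomic_exchange_n(&inbox.all, 0, __ATOMIC_SEQ_CST);
        for (int i = 0; i < NMSG; i++)
            if (got.msg[i])
                handled[i]++;
    } while (inbox.all);
}
```

Because sender and receiver view the same byte array, the per-byte test works the same on big- and little-endian hosts; the commit instead masks the raw register value, which is why it needs the __BIG_ENDIAN-specific constants.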
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Diffstat (limited to 'arch/powerpc/kernel/.gitignore')
0 files changed, 0 insertions, 0 deletions