author      Will Deacon <will.deacon@arm.com>    2019-02-22 17:14:59 +0000
committer   Will Deacon <will.deacon@arm.com>    2019-04-08 12:01:02 +0100
commit      fb24ea52f78e0d595852e09e3a55697c8f442189
tree        00ca29c7b0b8df6258a1ad1faf34f6e838ada26c /drivers/infiniband/hw/qib/qib_sd7220.c
parent      949b8c72768e3a7c69d270962b8a142ee8deec1b
drivers: Remove explicit invocations of mmiowb()
mmiowb() is now implied by spin_unlock() on architectures that require
it, so there is no reason to call it from driver code. This patch was
generated using coccinelle:
@mmiowb@
@@
- mmiowb();
and invoked as:
$ for d in drivers include/linux/qed sound; do \
spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done
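As an illustration of what the semantic patch does in practice, the sketch below shows a hypothetical driver critical section (struct foo_dev, FOO_DB_REG and FOO_DOORBELL are invented names, not taken from any real driver): the explicit mmiowb() simply disappears, because spin_unlock_irqrestore() now provides that barrier on the architectures that need it.

    /*
     * Hypothetical example of the transformation; none of these names
     * come from a real driver.
     */
    #include <linux/io.h>
    #include <linux/spinlock.h>

    struct foo_dev {
            spinlock_t      lock;
            void __iomem    *regs;
    };

    #define FOO_DB_REG      0x10    /* made-up doorbell register offset */
    #define FOO_DOORBELL    0x1     /* made-up doorbell value */

    static void foo_ring_doorbell(struct foo_dev *fd)
    {
            unsigned long flags;

            spin_lock_irqsave(&fd->lock, flags);
            writel(FOO_DOORBELL, fd->regs + FOO_DB_REG);
            /* mmiowb();  <- line removed by the semantic patch above */
            spin_unlock_irqrestore(&fd->lock, flags);      /* now implies mmiowb() where required */
    }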
NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
spin_unlock(). However, pairing each mmiowb() removal in this patch with
the corresponding call to spin_unlock() is not at all trivial, so there
is a small chance that this change may regress any drivers incorrectly
relying on mmiowb() to order MMIO writes between CPUs using lock-free
synchronisation. If you've ended up bisecting to this commit, you can
reintroduce the mmiowb() calls using wmb() instead, which should restore
the old behaviour on all architectures other than some esoteric ia64
systems.
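For a driver that does turn out to need the workaround described above, the fallback looks roughly like the sketch below; foo_ring and FOO_TAIL_REG are invented names, and wmb() simply goes where the removed mmiowb() used to sit in the lock-free path.

    #include <linux/io.h>
    #include <linux/compiler.h>
    #include <linux/types.h>
    #include <asm/barrier.h>

    #define FOO_TAIL_REG    0x20            /* made-up register offset */

    struct foo_ring {                       /* made-up lock-free producer state */
            void __iomem    *regs;
            bool            posted;
    };

    static void foo_post_descriptor(struct foo_ring *ring, u32 tail)
    {
            writel(tail, ring->regs + FOO_TAIL_REG);
            /*
             * Was: mmiowb();
             * Per the note above, wmb() should restore the old behaviour on
             * all architectures other than some esoteric ia64 systems.
             */
            wmb();
            WRITE_ONCE(ring->posted, true); /* lock-free hand-off to another CPU */
    }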
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'drivers/infiniband/hw/qib/qib_sd7220.c')
-rw-r--r--   drivers/infiniband/hw/qib/qib_sd7220.c   |   4 ----
1 file changed, 0 insertions, 4 deletions
diff --git a/drivers/infiniband/hw/qib/qib_sd7220.c b/drivers/infiniband/hw/qib/qib_sd7220.c
index 12caf3db8c34..4f4a09c2dbcd 100644
--- a/drivers/infiniband/hw/qib/qib_sd7220.c
+++ b/drivers/infiniband/hw/qib/qib_sd7220.c
@@ -1068,7 +1068,6 @@ static int qib_sd_setvals(struct qib_devdata *dd)
         for (idx = 0; idx < NUM_DDS_REGS; ++idx) {
                 data = ((dds_reg_map & 0xF) << 4) | TX_FAST_ELT;
                 writeq(data, iaddr + idx);
-                mmiowb();
                 qib_read_kreg32(dd, kr_scratch);
                 dds_reg_map >>= 4;
                 for (midx = 0; midx < DDS_ROWS; ++midx) {
@@ -1076,7 +1075,6 @@ static int qib_sd_setvals(struct qib_devdata *dd)
 
                         data = dds_init_vals[midx].reg_vals[idx];
                         writeq(data, daddr);
-                        mmiowb();
                         qib_read_kreg32(dd, kr_scratch);
                 } /* End inner for (vals for this reg, each row) */
         } /* end outer for (regs to be stored) */
@@ -1098,13 +1096,11 @@ static int qib_sd_setvals(struct qib_devdata *dd)
                 didx = idx + min_idx;
                 /* Store the next RXEQ register address */
                 writeq(rxeq_init_vals[idx].rdesc, iaddr + didx);
-                mmiowb();
                 qib_read_kreg32(dd, kr_scratch);
                 /* Iterate through RXEQ values */
                 for (vidx = 0; vidx < 4; vidx++) {
                         data = rxeq_init_vals[idx].rdata[vidx];
                         writeq(data, taddr + (vidx << 6) + idx);
-                        mmiowb();
                         qib_read_kreg32(dd, kr_scratch);
                 }
         } /* end outer for (Reg-writes for RXEQ) */