author     Mike Snitzer <snitzer@redhat.com>  2014-10-04 10:55:32 -0600
committer  Jens Axboe <axboe@fb.com>          2014-10-04 10:55:32 -0600
commit     b277da0a8a594308e17881f4926879bd5fca2a2d (patch)
tree       1af7df6ade218a4b246dd43a0771701a672c6cb8 /drivers/md/bcache
parent     7b7b7f7e024460cb7d77f8f96b6eb1a8803f94d9 (diff)
block: disable entropy contributions for nonrot devices
Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
QUEUE_FLAG_NONROT.
Historically, all block devices have automatically made entropy
contributions. But as previously stated in commit e2e1a148 ("block: add
sysfs knob for turning off disk entropy contributions"):
- On SSD disks, the completion times aren't as random as they
are for rotational drives. So it's questionable whether they
should contribute to the random pool in the first place.
- Calling add_disk_randomness() has a lot of overhead.
There are more reliable sources for randomness than non-rotational block
devices. From a security perspective it is better to err on the side of
caution than to allow entropy contributions from unreliable "random"
sources.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Diffstat (limited to 'drivers/md/bcache')
-rw-r--r--  drivers/md/bcache/super.c  1
1 file changed, 1 insertion, 0 deletions
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index d4713d098a39..4dd2bb7167f0 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -842,6 +842,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned block_size,
 	q->limits.logical_block_size = block_size;
 	q->limits.physical_block_size = block_size;
 	set_bit(QUEUE_FLAG_NONROT, &d->disk->queue->queue_flags);
+	clear_bit(QUEUE_FLAG_ADD_RANDOM, &d->disk->queue->queue_flags);
 	set_bit(QUEUE_FLAG_DISCARD, &d->disk->queue->queue_flags);
 	blk_queue_flush(q, REQ_FLUSH|REQ_FUA);
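
For illustration only, not part of the patch: a minimal sketch of the pattern this series applies to each driver that marks its queue non-rotational. The function name my_driver_init_queue() is hypothetical; the flag operations mirror the bcache hunk above, using the same direct set_bit()/clear_bit() style on queue_flags that this era's drivers use.

	/* Hypothetical driver helper; not taken from the patch. */
	#include <linux/blkdev.h>

	static void my_driver_init_queue(struct request_queue *q)
	{
		/* Non-rotational media: I/O completion times are poor entropy. */
		set_bit(QUEUE_FLAG_NONROT, &q->queue_flags);
		/* So opt out of add_disk_randomness() contributions. */
		clear_bit(QUEUE_FLAG_ADD_RANDOM, &q->queue_flags);
	}

The sketch follows the diff's own style of flipping bits directly on queue_flags; the same flag remains user-visible through the queue's add_random sysfs attribute introduced by commit e2e1a148.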