author    Guoqing Jiang <jgq516@gmail.com>    2019-07-24 11:09:19 +0200
committer Song Liu <songliubraving@fb.com>    2019-08-07 10:25:02 -0700
commit    9a567843f7ce0037bfd4d5fdc58a09d0a527b28b (patch)
tree      aa29a87219000763dc42b289f2141fec5bc053f6 /drivers/md/md.h
parent    cf89160793c439dca00e2563d0b7f153c274027b (diff)
md: allow last device to be forcibly removed from RAID1/RAID10.
When the 'last' device in a RAID1 or RAID10 reports an error, we do not
mark it as failed. This would serve little purpose, as there is no risk
of losing data beyond what is obviously lost (as there is with RAID5),
and there could be other sectors on the device which are readable, and
only readable from this device. In general, this maximises access to
data.

However, the current implementation also stops an admin from removing
the last device by direct action. This is rarely useful, but in many
cases it is not harmful, and it can make automation easier by removing
special cases. Also, if an attempt to write metadata fails, the device
must be marked as faulty, else an infinite loop will result, repeatedly
attempting to update the metadata on all non-faulty devices.

So add a 'fail_last_dev' member to 'struct mddev'; when set, it
bypasses the 'last disk' checks for RAID1 and RAID10, and the behavior
can be controlled per array through a sysfs node.

Signed-off-by: NeilBrown <neilb@suse.de>
[add sysfs node for fail_last_dev by Guoqing]
Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
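The diff below shows only the md.h side of the change; the sysfs node
itself is added in md.c, outside this diffstat. As a rough sketch of how
such an entry is typically wired up with md's md_sysfs_entry pattern
(illustrative only, not the commit's verbatim md.c hunk):

    /*
     * Illustrative sketch only -- the actual md.c change is not shown
     * in this diff. Assumes md.c context and the usual md sysfs
     * helper pattern.
     */
    static ssize_t
    fail_last_dev_show(struct mddev *mddev, char *page)
    {
            return sprintf(page, "%d\n", mddev->fail_last_dev);
    }

    static ssize_t
    fail_last_dev_store(struct mddev *mddev, const char *buf, size_t len)
    {
            bool value;
            int ret;

            /* Accept the usual boolean spellings: 0/1, y/n, on/off. */
            ret = kstrtobool(buf, &value);
            if (ret)
                    return ret;

            mddev->fail_last_dev = value;
            return len;
    }

    static struct md_sysfs_entry md_fail_last_dev =
    __ATTR(fail_last_dev, S_IRUGO | S_IWUSR, fail_last_dev_show,
           fail_last_dev_store);

Once registered in md's default attribute list, the flag would normally
be toggled from userspace through the array's sysfs directory, e.g. by
writing 1 to /sys/block/mdX/md/fail_last_dev.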
Diffstat (limited to 'drivers/md/md.h')
-rw-r--r--  drivers/md/md.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 10f98200e2f8..b742659150a2 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -487,6 +487,7 @@ struct mddev {
 	unsigned int			good_device_nr;	/* good device num within cluster raid */

 	bool	has_superblocks:1;
+	bool	fail_last_dev:1;
 };

 enum recovery_flags {
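The struct change above is only the switch; the checks it bypasses live
in the RAID1/RAID10 error handlers, which are not part of this md.h
diff. A condensed, illustrative sketch of the idea in a raid1-style
error handler (locking and details omitted; not the verbatim raid1.c
change):

    /*
     * Illustrative sketch -- the real raid1.c/raid10.c changes are not
     * part of this md.h diff. Shows the idea: the "act like a plain
     * single drive" escape hatch for the last working device is
     * skipped when fail_last_dev is set, so the device can be marked
     * Faulty.
     */
    static void raid1_error_sketch(struct mddev *mddev, struct md_rdev *rdev)
    {
            struct r1conf *conf = mddev->private;

            if (test_bit(In_sync, &rdev->flags) && !mddev->fail_last_dev &&
                (conf->raid_disks - mddev->degraded) == 1) {
                    /*
                     * Last working device: don't fail it, keep serving
                     * whatever data is still readable, but disable
                     * recovery from it.
                     */
                    conf->recovery_disabled = mddev->recovery_disabled;
                    return;
            }
            /* Otherwise mark the device Faulty and request a metadata update. */
            set_bit(Faulty, &rdev->flags);
            set_mask_bits(&mddev->sb_flags, 0,
                          BIT(MD_SB_CHANGE_DEVS) | BIT(MD_SB_CHANGE_PENDING));
    }

With fail_last_dev clear (the default), the last working device keeps
serving readable sectors as before; setting it lets direct admin action,
and failed metadata writes, actually mark the device Faulty, avoiding
the infinite metadata-update loop described in the commit message.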