path: root/drivers/md
Commit message [Author, Date, Files changed, Lines -deleted/+added]
* md/bitmap: make sure reshape requests are reflected in superblock. [NeilBrown, 2012-05-22, 1 file, -0/+3]
  As a reshape may change the sync_size and/or chunk_size, we need to update these whenever we write out the bitmap superblock.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: add bitmap_resize function to allow bitmap resizing. [NeilBrown, 2012-05-22, 2 files, -30/+172]
  This function will allocate the new data structures and copy bits across from old to new, allowing for the possibility that the chunksize has changed. Use the same function for performing the initial allocation of the structures. This improves test coverage.
  When bitmap_resize is used to resize an existing bitmap, it only copies '1' bits in, not '0' bits. So when allocating the bitmap, ensure everything is initialised to ZERO.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: use DIV_ROUND_UP instead of open-coding it. [NeilBrown, 2012-05-22, 1 file, -3/+2]
  Also take the opportunity to simplify CHUNK_BLOCK_RATIO.
  Signed-off-by: NeilBrown <neilb@suse.de>
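  For reference, DIV_ROUND_UP(n, d) from <linux/kernel.h> is ((n) + (d) - 1) / (d). A minimal sketch of the substitution (variable names illustrative, not the patch's exact lines):
      unsigned long pages, chunks = 1000, bits_per_page = PAGE_SIZE * 8;

      /* open-coded round-up division, the style this patch removes */
      pages = (chunks + bits_per_page - 1) / bits_per_page;

      /* same result via the standard helper */
      pages = DIV_ROUND_UP(chunks, bits_per_page);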
* md/bitmap: create a 'struct bitmap_counts' substructure of 'struct bitmap' [NeilBrown, 2012-05-22, 2 files, -77/+84]
  The new "struct bitmap_counts" contains all the fields that are related to counting the number of active writes in each bitmap chunk. Having this separate will make it easier to change the chunksize or overall size of a bitmap atomically.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: make bitmap bitops atomic. [NeilBrown, 2012-05-22, 1 file, -4/+2]
  This allows us to remove spinlock protection, which is more heavy-weight than simple atomics.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: make _page_attr bitops atomic. [NeilBrown, 2012-05-22, 1 file, -32/+23]
  Using e.g. set_bit instead of __set_bit, and using test_and_clear_bit, allows us to remove some locking and contract other locked ranges. It is rare that we set or clear a lot of these bits, so the gain should outweigh any cost.
  Signed-off-by: NeilBrown <neilb@suse.de>
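  A minimal sketch of the pattern (helper names illustrative): a locked test-then-clear collapses into one atomic read-modify-write:
      int need_write = 0;

      /* before: a spinlock protects a non-atomic test followed by a clear */
      spin_lock_irq(&bitmap->lock);
      if (test_page_attr(bitmap, page, BITMAP_PAGE_NEEDWRITE)) {
              clear_page_attr(bitmap, page, BITMAP_PAGE_NEEDWRITE);
              need_write = 1;
      }
      spin_unlock_irq(&bitmap->lock);

      /* after: one atomic op, no lock needed for this step */
      need_write = test_and_clear_page_attr(bitmap, page,
                                            BITMAP_PAGE_NEEDWRITE);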
* md/bitmap: merge bitmap_file_unmap and bitmap_file_put. [NeilBrown, 2012-05-22, 1 file, -24/+10]
  These functions really do one thing together: release the 'bitmap_storage'. So make them just one function.
  Since we removed the locking (previous patch), we don't need to zero any fields before freeing them, so it all becomes a bit simpler.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: remove async freeing of bitmap file. [NeilBrown, 2012-05-22, 1 file, -12/+6]
  There is no real value in freeing things the moment there is an error. It is just as good to free the bitmap file and pages when the bitmap is explicitly removed (and replaced?) or at shutdown.
  With this gone, the bitmap will only disappear when the array is quiescent, so we can remove some locking. As the 'filemap' doesn't disappear now, include extra checks before trying to write any of it out. Also remove the check for "has it disappeared" in bitmap_daemon_write().
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: convert some spin_lock_irqsave to spin_lock_irq [NeilBrown, 2012-05-22, 1 file, -18/+14]
  All of these sites can only be called from process context with irqs enabled, so using irqsave/irqrestore just adds noise. Remove it.
  Signed-off-by: NeilBrown <neilb@suse.de>
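  The conversion is mechanical; a sketch, valid only because these call sites run in process context with interrupts enabled:
      unsigned long flags;

      /* before: saves and restores an irq state we already know is 'enabled' */
      spin_lock_irqsave(&bitmap->lock, flags);
      /* ... critical section ... */
      spin_unlock_irqrestore(&bitmap->lock, flags);

      /* after: unconditionally disable and re-enable interrupts */
      spin_lock_irq(&bitmap->lock);
      /* ... critical section ... */
      spin_unlock_irq(&bitmap->lock);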
* md/bitmap: use set_bit, test_bit, etc for operations on bitmap->flags. [NeilBrown, 2012-05-22, 2 files, -28/+24]
  We currently use '&' and '|', which isn't the norm in the kernel and doesn't allow easy atomicity. So change to bit numbers and {set,clear,test}_bit. This allows us to remove a spinlock/unlock (which was dubious anyway) and some other simplifications.
  Signed-off-by: NeilBrown <neilb@suse.de>
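  A sketch of the conversion; the flag becomes a bit number rather than a mask (call sites illustrative):
      /* before: open-coded mask operations, not atomic */
      bitmap->flags |= BITMAP_STALE;
      stale = (bitmap->flags & BITMAP_STALE) != 0;
      bitmap->flags &= ~BITMAP_STALE;

      /* after: standard atomic bitops on a bit number */
      set_bit(BITMAP_STALE, &bitmap->flags);
      stale = test_bit(BITMAP_STALE, &bitmap->flags);
      clear_bit(BITMAP_STALE, &bitmap->flags);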
* md/bitmap: remove single-bit manipulation on sb->state [NeilBrown, 2012-05-22, 1 file, -2/+2]
  Just do single-bit manipulations on bitmap->flags and copy the whole value between that and sb->state. This will allow the next patch, which changes how bit manipulations are performed on bitmap->flags.
  This does result in BITMAP_STALE not being set in sb by bitmap_read_sb; however, as the setting is determined by other information in the 'sb', we do not lose information this way. Normally, bitmap_load will be called shortly, which will clear BITMAP_STALE anyway.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: remove bitmap_mask_state [NeilBrown, 2012-05-22, 1 file, -34/+3]
  This function isn't really needed. It sets or clears a flag in both bitmap->flags and sb->state. However, both times it is called, bitmap_update_sb is called soon afterwards, which copies bitmap->flags to sb->state. So just make changes to bitmap->flags, and open-code those rather than hiding in a function.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: move storage allocation from bitmap_load to bitmap_create. [NeilBrown, 2012-05-22, 1 file, -5/+6]
  We should allocate memory for the storage-bitmap at create-time, not load time.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: separate bitmap file allocation to its own function. [NeilBrown, 2012-05-22, 1 file, -46/+67]
  This will allow allocation before swapping in a new bitmap.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: store bytes in file rather than just in last page. [NeilBrown, 2012-05-22, 2 files, -8/+10]
  This number is more generally useful, and bytes-in-last-page is easily extracted from it.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: move some fields of 'struct bitmap' into a 'storage' substruct. [NeilBrown, 2012-05-22, 3 files, -96/+110]
  This new 'struct bitmap_storage' reflects the external storage of the bitmap. Having this clearly defined will make it easier to change the storage used while the array is active.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: change *_page_attr() to take a page number, not a page. [NeilBrown, 2012-05-22, 1 file, -29/+26]
  Most often we have the page number, not the page, and that is what the *_page_attr() functions really want. So change the arguments to take that number.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: centralise allocation of bitmap file pages. [NeilBrown, 2012-05-22, 1 file, -81/+68]
  Instead of allocating pages in read_sb_page, read_page and bitmap_read_sb, allocate them all in bitmap_init_from_disk.
  Also replace the hack of calling "attach_page_buffers(page, NULL)" to ensure that free_buffer() won't complain, by putting a test for PagePrivate in free_buffer().
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: allow a bitmap with no backing storage. [NeilBrown, 2012-05-22, 2 files, -62/+79]
  An md bitmap comprises two parts:
    - internal counting of active writes per 'chunk';
    - external storage of whether there are any active writes on each chunk.
  The second requires the first, but the first doesn't require the second. Not having backing storage means that the bitmap cannot expedite resync after a crash, but it still allows us to expedite the recovery of a recently-removed device.
  So: allow a bitmap to exist even if there is no backing device. In that case we default to 128M chunks.
  A particular value of this is that we can remove and re-add a bitmap (possibly of a different granularity) on a degraded array, and not lose the information needed to fast-recover the missing device. We don't actually activate these bitmaps yet - that will come in a later patch.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: add new 'space' attribute for bitmaps. [NeilBrown, 2012-05-22, 3 files, -2/+73]
  If we are to allow bitmaps to be resized when the array is resized, we need to know how much space there is. So create an attribute to store this information and set appropriate defaults.
  It can be set more precisely via sysfs, or future metadata extensions may allow it to be recorded.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: disentangle two different 'pending' flags. [NeilBrown, 2012-05-22, 2 files, -102/+118]
  There are two different 'pending' concepts in the handling of the write intent bitmap.
  Firstly, a 'page' from the bitmap (which contains PAGE_SIZE*8 bits) may have changes (bits cleared) that should be written in due course. There is no hurry for these, and the page will transition from PENDING to NEEDWRITE and will then be written, though if it ever becomes DIRTY it will be written much sooner and PENDING will be cleared.
  Secondly, a page of counters - which contains PAGE_SIZE/2 counters, one for each bit - can usefully have a 'pending' flag which indicates if any of the counters are low (2 or 1) and ready to be processed by bitmap_daemon_work(). If this flag is clear we can skip the whole page.
  These two concepts are currently combined in the bitmap-file flag. This causes a tighter connection between the counters and the bitmap file than I would like, as I want to add some flexibility to the bitmap file. So introduce a new flag with the page-of-counters, and rewrite bitmap_daemon_work() so that it handles the two different 'pending' concepts separately.
  This also allows us to clear BITMAP_PAGE_PENDING when we write out a dirty page, which may occasionally reduce the number of times we write a page.
  Signed-off-by: NeilBrown <neilb@suse.de>
* raid5: support sync request [Shaohua Li, 2012-05-22, 2 files, -2/+11]
  REQ_SYNC is ignored in the current raid5 code, but the block layer does use it for policy decisions, for example in the I/O scheduler. This patch passes it through.
  Signed-off-by: Shaohua Li <shli@fusionio.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
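  A sketch of the idea, assuming a per-device flag such as R5_SyncIO carries the hint from the incoming bio to the bios raid5 later issues:
      /* when accepting the request: remember the caller asked for REQ_SYNC */
      if (bi->bi_rw & REQ_SYNC)
              set_bit(R5_SyncIO, &sh->dev[dd_idx].flags);

      /* when issuing the per-device bio: propagate the hint */
      if (test_bit(R5_SyncIO, &sh->dev[i].flags))
              rw |= REQ_SYNC;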
* raid5: remove unused variables [Shaohua Li, 2012-05-22, 1 file, -4/+0]
  The two variables are useless.
  Signed-off-by: Shaohua Li <shli@fusionio.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid10: Fix memleak in r10buf_pool_alloc [majianpeng, 2012-05-22, 1 file, -3/+4]
  If the allocation of rep1_bio fails, we currently don't free the 'bio' of the same dev. Reported by kmemleak.
  Signed-off-by: majianpeng <majianpeng@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid1: allow fix_read_error to read from recovering device. [majianpeng, 2012-05-22, 1 file, -1/+3]
  When attempting to fix a read error, it is acceptable to read from a device that is recovering, provided the recovery has got past the place we are reading from. This makes the test for "can we read from here" the same as the test in read_balance.
  Signed-off-by: majianpeng <majianpeng@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
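  A sketch of the relaxed test, following the shape of the raid1 code: a device is readable if it is in_sync, or if it is recovering and its recovery_offset already covers the range being read:
      if (rdev &&
          (test_bit(In_sync, &rdev->flags) ||
           (!test_bit(Faulty, &rdev->flags) &&
            rdev->recovery_offset >= sect + s)) &&
          is_badblock(rdev, sect, s, &first_bad, &bad_sectors) == 0)
              success = 1;  /* safe to read 's' sectors at 'sect' here */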
* md: move freeing of badblocks.page into md_rdev_clear [NeilBrown, 2012-05-22, 1 file, -3/+2]
  This ensures that it is always freed - there were cases where we failed to free the page.
  Reported-by: majianpeng <majianpeng@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
* md: dm-raid should call helper function to clear rdev. [NeilBrown, 2012-05-22, 3 files, -8/+6]
  dm-raid currently open-codes the freeing of some members of an rdev. It is more maintainable to have it call common code from md.c which does this for all call-sites.
  So rename free_disk_sb to md_rdev_clear, export it, and use it in dm-raid.c.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid10: add reshape support [NeilBrown, 2012-05-22, 2 files, -23/+872]
  A 'near' or 'offset' layout RAID10 array can be reshaped to a different 'near' or 'offset' layout, a different chunk size, and a different number of devices. However the number of copies cannot change.
  Unlike RAID5/6, we do not support having user-space backup data that is being relocated during a 'critical section'. Rather, the data_offset of each device must change so that when writing any block to a new location, it will not over-write any data that is still 'live'. This means that RAID10 reshape is not supportable on v0.90 metadata.
  The difference between the old data_offset and the new_offset must be at least the larger of the chunksize multiplied by offset copies of each of the old and new layout (for 'near' mode, offset_copies == 1). A larger difference of around 64M seems useful for in-place reshapes, as more data can be moved between metadata updates. Very large differences (e.g. 512M) seem to slow the process down due to lots of long seeks (on oldish consumer-grade devices at least).
  Metadata needs to be updated whenever the place we are about to write to is considered - by the current metadata - to still contain data in the old layout.
  [unbalanced locking fix from Dan Carpenter <dan.carpenter@oracle.com>]
  Signed-off-by: NeilBrown <neilb@suse.de>
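  In outline, the constraint on the offset change reads (a sketch; variable names illustrative, offset_copies is 1 for 'near' layouts):
      /* minimum required gap between old and new data_offset */
      min_offset_diff = max(old_chunk_sectors * old_offset_copies,
                            new_chunk_sectors * new_offset_copies);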
* md/raid10: split out interpretation of layout to separate function. [NeilBrown, 2012-05-21, 1 file, -18/+49]
  We will soon be interpreting the layout (and chunksize etc) from multiple places to support reshape, so split it out into a separate function.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid10: Introduce 'prev' geometry to support reshape. [NeilBrown, 2012-05-21, 2 files, -23/+92]
  When RAID10 supports reshape it will need a 'previous' and a 'current' geometry, so introduce that here. Use the 'prev' geometry when before the reshape_position, and the current 'geo' when beyond it. At other times, use both as appropriate.
  For now, both are identical (and reshape_position is never set). When we use the 'prev' geometry, we must use the old data_offset. When we use the current (and a reshape is happening), we must use the new_data_offset.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md: use resync_max_sectors for reshape as well as resync. [NeilBrown, 2012-05-21, 1 file, -3/+5]
  Some resync type operations need to act on the address space of the device, others on the address space of the array. This only affects RAID10, so it sets resync_max_sectors to the array size (it defaults to the device size), and that is currently used for resync only.
  However reshape of a RAID10 must be done against the array size, not the device size, so change the code to use resync_max_sectors for both the resync and the reshape cases. This does not affect RAID5 or RAID1, just RAID10.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md: teach sync_page_io about new_data_offset. [NeilBrown, 2012-05-21, 1 file, -0/+4]
  Some code in raid1 and raid10 uses sync_page_io to read/write pages when responding to read errors. As we will shortly support changing data_offset for raid10, this function must understand new_data_offset. So add that understanding.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid10: collect some geometry fields into a dedicated structure. [NeilBrown, 2012-05-21, 2 files, -108/+115]
  We will shortly be adding reshape support for RAID10, which will require it having 2 concurrent geometries (before and after). To make that easier, collect most geometry fields into 'struct geom' and access them from there. Then we will more easily be able to add a second set of fields.
  Note that 'copies' is not in this struct and so cannot be changed. There is little need to change this number, and doing so is a lot more difficult as it requires reallocating more things. So leave it out for now.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: allow for change in data_offset while managing a reshape. [NeilBrown, 2012-05-21, 2 files, -33/+82]
  The important issue here is incorporating the difference in data_offset into calculations concerning when we might need to over-write data that is still thought to be valid.
  To this end we find the minimum offset difference across all devices and add that where appropriate.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: Use correct data_offset for all IO. [NeilBrown, 2012-05-21, 1 file, -13/+59]
  As there can now be two different data_offsets - an 'old' and a 'new' - we need to carefully choose between them.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md: add possibility to change data-offset for devices. [NeilBrown, 2012-05-21, 5 files, -32/+214]
  When reshaping we can avoid costly intermediate backup by changing the 'start' address of the array on the device (if there is enough room).
  So as a first step, allow such a change to be requested through sysfs, and recorded in v1.x metadata. (As we didn't previously check that all 'pad' fields were zero, we need a new FEATURE flag for this. We (belatedly) check that all remaining 'pad' fields are zero, to avoid a repeat of this.)
  The new data offset must be requested separately for each device. This allows each to have a different change in the data offset. This is not likely to be used often, but as data_offset can be set per-device, new_data_offset should be too.
  This patch also removes the 'acknowledged' arg to rdev_set_badblocks, as it is never used and never will be. At the same time we add a new arg ('in_new') which is currently always zero but will be used more soon.
  When a reshape finishes we will need to update the data_offset and rdev->sectors. So provide an exported function to do that.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md: allow a reshape operation to be reversed. [NeilBrown, 2012-05-21, 3 files, -13/+78]
  Currently a reshape operation always progresses from the start of the array to the end, unless the number of devices is being reduced, in which case it progresses in the opposite direction.
  To reverse a partial reshape which changes the number of devices, you can stop the array and re-assemble with the raid-disks numbers reversed, and it will undo. However for a reshape that does not change the number of devices, it is not possible to reverse the reshape in the middle - you have to wait until it completes.
  So add a 'reshape_direction' attribute which is either 'forwards' or 'backwards' and can be explicitly set when delta_disks is zero. This will become more important when we allow the data_offset to change in a reshape. Then the explicit statement of what direction is being used will be more useful.
  This can be enabled in raid5 trivially, as it already supports reverse reshape and just needs to use a different trigger to request it.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md: use GFP_NOIO to allocate bio for flush request [Shaohua Li, 2012-05-21, 1 file, -1/+1]
  A flush request is usually issued in the transaction commit code path, so using GFP_KERNEL to allocate memory for the flush request bio falls into the classic deadlock issue. This is suitable for any -stable kernel to which it applies, as it avoids a possible deadlock.
  Cc: stable@vger.kernel.org
  Signed-off-by: Shaohua Li <shli@fusionio.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
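  The fix itself is a one-line change of allocation flags; a sketch, assuming md's bio_alloc_mddev helper in md_flush_request:
      /* before: GFP_KERNEL may recurse into writeback, which can issue
       * another flush and deadlock in a transaction-commit path */
      bio = bio_alloc_mddev(GFP_KERNEL, 0, mddev);

      /* after: GFP_NOIO forbids the allocator from initiating I/O */
      bio = bio_alloc_mddev(GFP_NOIO, 0, mddev);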
* md/raid10: fix transcription error in calc_sectors conversion. [NeilBrown, 2012-05-19, 1 file, -1/+1]
  The old code was
      sector_div(stride, fc);
  the new code was
      sector_div(size, conf->near_copies);
  'size' is right (the stride variable wasn't really needed), but 'fc' means 'far_copies', and that is an important difference.
  Signed-off-by: NeilBrown <neilb@suse.de>
* MD: Add del_timer_sync to mddev_suspend (fix nasty panic) [Jonathan Brassow, 2012-05-17, 1 file, -0/+2]
  Use del_timer_sync to remove the timer before mddev_suspend finishes.
  We don't want a timer going off after an mddev_suspend is called. This is especially true with device-mapper, since it can call the destructor function immediately following a suspend. This results in the removal (kfree) of the structures upon which the timer depends, resulting in a very ugly panic. Therefore, we add a del_timer_sync to mddev_suspend to prevent this.
  Cc: stable@vger.kernel.org
  Signed-off-by: NeilBrown <neilb@suse.de>
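  A sketch of the shape of the change, assuming the safemode timer is the one at issue (function body abridged):
      void mddev_suspend(struct mddev *mddev)
      {
              /* ... mark suspended, wait for in-flight I/O, quiesce ... */

              /* added: guarantee no timer handler is running, or will run,
               * once suspend returns, since dm may free mddev right after */
              del_timer_sync(&mddev->safemode_timer);
      }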
* md/raid10: set dev_sectors properly when resizing devices in array. [NeilBrown, 2012-05-17, 1 file, -24/+32]
  raid10 stores dev_sectors in 'conf' separately from the one in 'mddev' because it can have a very significant effect on block addressing, and so needs to be updated carefully. However raid10_resize isn't updating it at all!
  To update it correctly, we need to make sure it is a proper multiple of the chunksize, taking various details of the layout into account. This calculation is currently done in setup_conf, so split it out from there and call it from raid10_resize as well. Then set conf->dev_sectors properly.
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/bitmap: fix calculation of 'chunks' - missing shift. [NeilBrown, 2012-05-04, 2 files, -5/+1]
  Commit 61a0d80c ("md/bitmap: discard CHUNK_BLOCK_SHIFT macro") replaced CHUNK_BLOCK_RATIO() by the same text that was replacing CHUNK_BLOCK_SHIFT() - which is clearly wrong. The result is that 'chunks' is often too small by 1, which can sometimes result in a crash (not sure how).
  So use the correct replacement, and get rid of CHUNK_BLOCK_RATIO, which is no longer used.
  Reported-by: Karl Newman <siliconfiend@gmail.com>
  Tested-by: Karl Newman <siliconfiend@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
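  In outline, the bug and the fix (a sketch of the arithmetic, not the exact source):
      /* buggy: rounds up by the shift count itself, so a partial final
       * chunk is usually not counted and 'chunks' comes out one too small */
      chunks = (blocks + bitmap->chunkshift - 1) >> bitmap->chunkshift;

      /* fixed: round up by a whole chunk, i.e. 1 << chunkshift blocks */
      chunks = (blocks + (1 << bitmap->chunkshift) - 1) >> bitmap->chunkshift;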
* md: fix possible corruption of array metadata on shutdown. [NeilBrown, 2012-04-24, 1 file, -1/+2]
  Commit c744a65c1e2d59acc54333ce8 ("md: don't set md arrays to readonly on shutdown") removed the possibility of a 'BUG' when data is written to an array that has just been switched to read-only, but also introduced the possibility that the array metadata could be corrupted.
  If, when md_notify_reboot gets the mddev lock, the array is in a state where it is assembled but hasn't been started (as can happen if the personality module is not available, or in other unusual situations), then incorrect metadata will be written out, making it impossible to re-assemble the array.
  So only call __md_stop_writes() if the array has actually been activated.
  This patch is needed for any stable kernel which has had the above commit applied.
  Cc: stable@vger.kernel.org
  Reported-by: Christoph Nelles <evilazrael@evilazrael.de>
  Signed-off-by: NeilBrown <neilb@suse.de>
* md: don't call ->add_disk unless there is good reason. [NeilBrown, 2012-04-24, 1 file, -2/+2]
  Commit 7bfec5f35c68121e7b18 ("md/raid5: If there is a spare and a want_replacement device, start replacement") caused md_check_recovery to call ->add_disk much more often. Instead of only when the array is degraded, it is now called whenever md_check_recovery finds anything useful to do, which includes updating the metadata for a clean<->dirty transition. This causes unnecessary work, and causes info messages from ->add_disk to be reported much too often.
  So refine md_check_recovery to only do any actual recovery checking (including ->add_disk) if MD_RECOVERY_NEEDED is set.
  This fix is suitable for 3.3.y.
  Cc: stable@vger.kernel.org
  Reported-by: Jan Ceuleers <jan.ceuleers@computer.org>
  Signed-off-by: NeilBrown <neilb@suse.de>
* DM RAID: Use safe version of rdev_for_each [Jonathan Brassow, 2012-04-24, 1 file, -2/+2]
  Fix a segfault caused by using rdev_for_each instead of rdev_for_each_safe. Commit dafb20fa34320a472deb7442f25a0c086e0feb33 mistakenly replaced a safe iterator with an unsafe one when making some macro changes.
  Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
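  The distinction matters because the loop body can unlink the entry it is visiting; a minimal sketch (remove_rdev is illustrative):
      struct md_rdev *rdev, *tmp;

      /* unsafe: the next pointer is read from 'rdev' after the body
       * may have freed or unlinked it */
      rdev_for_each(rdev, mddev)
              remove_rdev(rdev);

      /* safe: 'tmp' caches the next entry before the body runs */
      rdev_for_each_safe(rdev, tmp, mddev)
              remove_rdev(rdev);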
* md/bitmap: prevent bitmap_daemon_work running while initialising bitmap [NeilBrown, 2012-04-12, 1 file, -0/+2]
  If a bitmap is added while the array is active, it is possible for bitmap_daemon_work to run while the bitmap is being initialised. This is particularly a problem if bitmap_daemon_work sees bitmap->filemap as non-NULL before it has been filled in properly. So hold bitmap_info.mutex while filling in ->filemap to prevent problems.
  This patch is suitable for any -stable kernel, though it might not apply cleanly before about 3.1.
  Cc: stable@vger.kernel.org
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid1,raid10: Fix calculation of 'vcnt' when processing error recovery. [majianpeng, 2012-04-12, 2 files, -3/+4]
  If r1bio->sectors % 8 != 0, then the memcmp and a later memcpy will omit the last bio_vec.
  This is suitable for any stable kernel since 3.1, when bad-block management was introduced.
  Cc: stable@vger.kernel.org
  Signed-off-by: majianpeng <majianpeng@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
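  A sketch of the corrected count: round the sector count up to whole pages instead of truncating (512-byte sectors, so PAGE_SIZE >> 9 sectors per page):
      /* before: truncates, so a trailing partial page gets no bio_vec */
      vcnt = r1_bio->sectors >> (PAGE_SHIFT - 9);

      /* after: round up so the final partial page is still covered */
      vcnt = (r1_bio->sectors + (PAGE_SIZE >> 9) - 1) >> (PAGE_SHIFT - 9);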
* MD: Bitmap version cleanup. [Andrei Warkentin, 2012-04-12, 1 file, -3/+0]
  bitmap_new_disk_sb() would still create a v3 bitmap superblock with host-endian layout. Perhaps I'm confused, but shouldn't bitmap_new_disk_sb() be creating a v4 bitmap superblock instead, which is portable, as per the comment in bitmap.h?
  Signed-off-by: Andrei Warkentin <andrey.warkentin@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid1,raid10: don't compare excess bytes during consistency check. [NeilBrown, 2012-04-03, 2 files, -2/+2]
  When comparing two pages read from different legs of a mirror, only compare the bytes that were read, not the whole page. In most cases we read a whole page, but in some cases, with bad blocks or odd-sized devices, we might read fewer than that.
  This bug has been present "forever", but at worst it might cause a report of too many mismatches and generate a little extra resync IO, so there is no need to back-port to -stable kernels.
  Reported-by: majianpeng <majianpeng@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
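  A sketch of the idea: clamp the comparison to the bytes actually read rather than assuming a full page (page variables illustrative):
      /* compare only what was read: a whole page, or the shorter tail
       * of an odd-sized read */
      int len = min_t(int, sectors << 9, PAGE_SIZE);

      if (memcmp(page_address(ppage), page_address(spage), len))
              mismatch = 1;  /* a genuine difference within the bytes read */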
* md/raid5: Fix a bug about judging if the operation is syncing or replacing [majianpeng, 2012-04-03, 1 file, -1/+3]
  When creating a raid5 array with assume-clean and then echoing 'check' or 'repair' to sync_action, the component disks performed no IO, yet the check/resync ran faster than normal. The cause is this judgement in analyse_stripe():
      if (do_recovery || sh->sector >= conf->mddev->recovery_cp)
          s->syncing = 1;
      else
          s->replacing = 1;
  During a check or repair, recovery_cp == MaxSector, so 'syncing' ended up zero rather than one.
  This bug was introduced by commit 9a3e1101b827 ("md/raid5: detect and handle replacements during recovery"), so this patch is suitable for 3.3-stable.
  Cc: stable@vger.kernel.org
  Signed-off-by: majianpeng <majianpeng@gmail.com>
  Signed-off-by: NeilBrown <neilb@suse.de>
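  A sketch of the corrected condition, assuming the fix consults MD_RECOVERY_REQUESTED to recognise an explicitly requested check/repair:
      if (do_recovery ||
          sh->sector >= conf->mddev->recovery_cp ||
          test_bit(MD_RECOVERY_REQUESTED, &conf->mddev->recovery))
              s->syncing = 1;
      else
              s->replacing = 1;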