path: root/drivers/md
Commit message (Author, Date; Files changed, -Lines/+Lines)
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm (Linus Torvalds, 2009-04-03; 17 files, -620/+877)

    * git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm: (36 commits)
      dm: set queue ordered mode
      dm: move wait queue declaration
      dm: merge pushback and deferred bio lists
      dm: allow uninterruptible wait for pending io
      dm: merge __flush_deferred_io into caller
      dm: move bio_io_error into __split_and_process_bio
      dm: rename __split_bio
      dm: remove unnecessary struct dm_wq_req
      dm: remove unnecessary work queue context field
      dm: remove unnecessary work queue type field
      dm: bio list add bio_list_add_head
      dm snapshot: persistent fix dtr cleanup
      dm snapshot: move status to exception store
      dm snapshot: move ctr parsing to exception store
      dm snapshot: use DMEMIT macro for status
      dm snapshot: remove dm_snap header
      dm snapshot: remove dm_snap header use
      dm exception store: move cow pointer
      dm exception store: move chunk_fields
      dm exception store: move dm_target pointer
      ...
* dm: set queue ordered mode (Mikulas Patocka, 2009-04-02; 1 file, -0/+1)

    Set queue ordered mode. It doesn't really matter what we set here
    because we don't ever put any requests on the queue. But we need to
    set something other than QUEUE_ORDERED_NONE so that
    __generic_make_request passes barrier requests to us.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
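    A minimal sketch of the call this describes, using the 2.6.29-era
    block API. The QUEUE_ORDERED_DRAIN flag is an assumption, not taken
    from this log; any mode other than QUEUE_ORDERED_NONE has the stated
    effect:

        #include <linux/blkdev.h>

        /* Sketch: make __generic_make_request pass barrier requests
         * down to dm instead of failing them. */
        static void dm_enable_barriers(struct request_queue *q)
        {
            blk_queue_ordered(q, QUEUE_ORDERED_DRAIN, NULL);
        }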
* dm: move wait queue declaration (Mikulas Patocka, 2009-04-02; 1 file, -7/+7)

    Move the wait queue declaration and unplug into dm_wait_for_completion.
    The purpose is to minimize duplicate code in later patches. The patch
    reorders functions a little bit. It doesn't change any functionality.

    For proper non-deadlock operation, add_wait_queue must happen before
    set_current_state(interruptible) and before the test for
    !atomic_read(&md->pending).

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
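    That ordering is the classic wait-loop pattern. A hedged sketch of
    its shape (illustrative body, not the patch's actual code; only the
    ordering constraint comes from the message above):

        #include <linux/wait.h>
        #include <linux/sched.h>

        static int wait_for_pending(wait_queue_head_t *wq, atomic_t *pending)
        {
            DECLARE_WAITQUEUE(wait, current);
            int r = 0;

            /* 1. Register on the wait queue first, so any wake_up()
             *    from here on will find this task. */
            add_wait_queue(wq, &wait);

            while (1) {
                /* 2. Then mark the task sleeping, so a wakeup between
                 *    the condition test and the schedule is not lost. */
                set_current_state(TASK_INTERRUPTIBLE);

                /* 3. Only now is it safe to test the condition. */
                if (!atomic_read(pending))
                    break;

                if (signal_pending(current)) {
                    r = -EINTR;
                    break;
                }

                io_schedule();
            }
            set_current_state(TASK_RUNNING);
            remove_wait_queue(wq, &wait);

            return r;
        }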
* dm: merge pushback and deferred bio lists (Mikulas Patocka, 2009-04-02; 1 file, -21/+16)

    Merge pushback and deferred lists into one list - use the deferred
    list for both deferred and pushed-back bios.

    This will be needed for proper support of barrier bios: it is
    impossible to support ordering correctly with two lists because the
    requests on both lists will be mixed up.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: allow uninterruptible wait for pending io (Mikulas Patocka, 2009-04-02; 1 file, -4/+5)

    Allow uninterruptible wait for pending IOs. Add the argument
    "interruptible" to dm_wait_for_completion that specifies either
    interruptible or uninterruptible waiting.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: merge __flush_deferred_io into caller (Mikulas Patocka, 2009-04-02; 1 file, -11/+7)

    Merge __flush_deferred_io() into the only caller, dm_wq_work().
    There's no need to have a function that has only one caller.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: move bio_io_error into __split_and_process_bio (Mikulas Patocka, 2009-04-02; 1 file, -11/+10)

    Move the bio_io_error() calls directly into __split_and_process_bio().
    This avoids some code duplication in later patches.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: rename __split_bio (Mikulas Patocka, 2009-04-02; 1 file, -4/+4)

    Rename __split_bio() to __split_and_process_bio() because it not only
    splits the bio into several parts, but also submits them to target
    drivers.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: remove unnecessary struct dm_wq_req (Mikulas Patocka, 2009-04-02; 1 file, -17/+7)

    Remove struct dm_wq_req and move "work" directly into struct
    mapped_device. In the revised implementation, the thread will do just
    one type of work (processing the queue).

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: remove unnecessary work queue context field (Mikulas Patocka, 2009-04-02; 1 file, -8/+5)

    Remove the context field from struct dm_wq_req because we will no
    longer need it.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: remove unnecessary work queue type field (Mikulas Patocka, 2009-04-02; 1 file, -17/+6)

    Remove the "type" field from struct dm_wq_req because we no longer
    need it to have more than one value.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: bio list add bio_list_add_head (Mikulas Patocka, 2009-04-02; 1 file, -0/+10)

    Introduce a function that adds a bio to the head of the list, for use
    by the patch that will support barriers.

    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
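    A sketch consistent with the singly linked head/tail bio_list layout;
    this is the natural implementation of the helper, shown here as an
    illustration rather than a quote of the patch:

        static inline void bio_list_add_head(struct bio_list *bl,
                                             struct bio *bio)
        {
            bio->bi_next = bl->head;
            bl->head = bio;
            if (!bl->tail)          /* list was empty: */
                bl->tail = bio;     /* the new bio is also the tail */
        }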
* dm snapshot: persistent fix dtr cleanup (Jonathan Brassow, 2009-04-02; 1 file, -5/+15)

    The persistent exception store destructor does not properly account
    for all conditions in which it can be called. If it is called after
    'ctr' but before 'read_metadata' (e.g. if something else in
    'snapshot_ctr' fails) then it will attempt to free areas of memory
    that haven't been allocated yet.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: move status to exception store (Jonathan Brassow, 2009-04-02; 4 files, -16/+29)

    Let the exception store types print out their status through the new
    API, rather than having the snapshot code do it. Adjust the buffer
    position to allow for the preceding DMEMIT in the arguments to
    type->status().

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: move ctr parsing to exception store (Jonathan Brassow, 2009-04-02; 3 files, -129/+132)

    First step of having the exception stores parse their own arguments -
    generalizing the interface.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: use DMEMIT macro for status (Jonathan Brassow, 2009-04-02; 1 file, -9/+10)

    Use DMEMIT in place of snprintf. This makes it easier later when
    other modules are helping to populate our status output.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
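    DMEMIT appends to a shared status buffer while tracking how much of
    it is already used, which is what lets several modules take turns
    writing into one 'result' buffer. Its definition in the
    device-mapper headers of this era looks like the following (quoted
    from memory; verify against include/linux/device-mapper.h):

        #define DMEMIT(x...) sz += ((sz >= maxlen) ? \
                          0 : scnprintf(result + sz, maxlen - sz, x))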
* dm snapshot: remove dm_snap header (Jonathan Brassow, 2009-04-02; 2 files, -87/+71)

    Move some of the last bits from dm-snap.h into dm-snap.c, where they
    belong, and remove dm-snap.h.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: remove dm_snap header use (Jonathan Brassow, 2009-04-02; 5 files, -57/+54)

    Move useful functions out of dm-snap.h and stop using dm-snap.h.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm exception store: move cow pointer (Jonathan Brassow, 2009-04-02; 6 files, -23/+30)

    Move the COW device from the snapshot to the exception store.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm exception store: move chunk_fields (Jonathan Brassow, 2009-04-02; 6 files, -53/+67)

    Move the chunk fields from the snapshot to the exception store.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm exception store: move dm_target pointer (Jonathan Brassow, 2009-04-02; 4 files, -7/+7)

    Move the target pointer from the snapshot to the exception store.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm exception store: introduce registry (Jonathan Brassow, 2009-04-02; 6 files, -49/+307)

    Move exception stores into a registry.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm raid1: add is_remote_recovering hook for clusters (Jonathan Brassow, 2009-04-02; 1 file, -2/+23)

    The logging API needs an extra function to make cluster mirroring
    possible. This new function allows us to check whether a mirror
    region is being recovered on another machine in the cluster. This
    helps us prevent simultaneous recovery I/O and process I/O to the
    same locations on disk.

    Cluster-aware log modules will implement this function. Single
    machine log modules will not. So, there is no performance penalty
    for single machine mirrors.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Acked-by: Heinz Mauelshagen <heinzm@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
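    The shape of such an optional hook, sketched with assumed
    surrounding code (only the hook name comes from this entry; the
    wrapper function is illustrative):

        /* New optional op in the dirty-log type: */
        struct dm_dirty_log_type {
            /* ... existing ctr/dtr/mark/clear ops ... */
            int (*is_remote_recovering)(struct dm_dirty_log *log,
                                        region_t region);
        };

        /* Callers pay only a pointer test when a single-machine
         * module leaves the hook NULL: */
        static int region_remote_recovering(struct dm_dirty_log *log,
                                            region_t region)
        {
            return log->type->is_remote_recovering ?
                   log->type->is_remote_recovering(log, region) : 0;
        }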
* dm exception store: separate type from instance (Jonathan Brassow, 2009-04-02; 4 files, -25/+35)

    Introduce struct dm_exception_store_type.

    Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm log: remove struct dm_dirty_log_internal (Mike Snitzer, 2009-04-02; 1 file, -44/+14)

    Remove the 'dm_dirty_log_internal' structure. The resulting cleanup
    eliminates extra memory allocations, so exposing the internal
    list_head in the external 'dm_dirty_log_type' structure is a
    worthwhile compromise.

    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm log: use standard kernel module refcount (Mike Snitzer, 2009-04-02; 1 file, -16/+3)

    Avoid private module usage accounting by removing 'use' from
    dm_dirty_log_internal. The standard module reference counting is
    sufficient.

    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm crypt: use kzfree (Johannes Weiner, 2009-04-02; 1 file, -4/+2)

    Use kzfree() instead of memset() + kfree().

    Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
    Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
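    The substitution this describes, in sketch form (variable names are
    illustrative, not dm-crypt's):

        /* Before: zero the sensitive buffer, then free it. */
        memset(key, 0, key_len);
        kfree(key);

        /* After: kzfree() zeroes the whole allocation before freeing,
         * so key material cannot linger in freed memory. */
        kzfree(key);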
* dm target: remove struct tt_internal (Cheng Renquan, 2009-04-02; 2 files, -61/+31)

    tt_internal is really just a list_head used to manage registered
    target_types in a doubly linked list. Embed the list_head into
    target_type directly: this avoids a kmalloc/kfree per registration,
    and tt_internal itself becomes unnecessary.

    Cc: stable@kernel.org
    Signed-off-by: Cheng Renquan <crquan@gmail.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    Reviewed-by: Alasdair G Kergon <agk@redhat.com>
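    Before and after, sketched (the other target_type fields are
    elided, so this is a shape illustration, not the full struct):

        #include <linux/list.h>

        /* Before: a wrapper struct allocated at registration time
         * just to obtain a list node. */
        struct tt_internal {
            struct target_type tt;
            struct list_head list;
        };

        /* After: the node lives in target_type itself, so
         * registration is a plain list_add with no allocation. */
        struct target_type {
            /* ... name, module, ctr/dtr/map ops ... */
            struct list_head list;
        };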
* dm table: fix upgrade mode race (Alasdair G Kergon, 2009-04-02; 1 file, -12/+14)

    upgrade_mode() sets bdev to NULL temporarily, and does not have any
    locking to exclude anything from seeing that NULL. In
    dm_table_any_congested(), bdev_get_queue() can dereference that NULL
    and cause a reported oops.

    Fix this by not changing that field during the mode upgrade.

    Cc: stable@kernel.org
    Cc: Neil Brown <neilb@suse.de>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm: path selector use module refcount directly (Jun'ichi Nomura, 2009-04-02; 1 file, -18/+3)

    Fix refcount corruption in dm-path-selector. Refcounting with
    non-atomic ops under a shared lock will corrupt the counter on a
    multi-processor system and may trigger BUG_ON(). Use the module
    refcount.

    # same approach as dm-target-use-module-refcount-directly.patch here
    # https://www.redhat.com/archives/dm-devel/2008-December/msg00075.html

    Typical oops:

    kernel BUG at linux-2.6.29-rc3/drivers/md/dm-path-selector.c:90!
    Pid: 11148, comm: dmsetup Not tainted 2.6.29-rc3-nm #1
    dm_put_path_selector+0x4d/0x61 [dm_multipath]
    Call Trace:
    [<ffffffffa031d3f9>] free_priority_group+0x33/0xb3 [dm_multipath]
    [<ffffffffa031d4aa>] free_multipath+0x31/0x67 [dm_multipath]
    [<ffffffffa031d50d>] multipath_dtr+0x2d/0x32 [dm_multipath]
    [<ffffffffa015d6c2>] dm_table_destroy+0x64/0xd8 [dm_mod]
    [<ffffffffa015b73a>] __unbind+0x46/0x4b [dm_mod]
    [<ffffffffa015b79f>] dm_swap_table+0x60/0x14d [dm_mod]
    [<ffffffffa015f963>] dev_suspend+0xfd/0x177 [dm_mod]
    [<ffffffffa0160250>] dm_ctl_ioctl+0x24c/0x29c [dm_mod]
    [<ffffffff80288cd3>] ? get_page_from_freelist+0x49c/0x61d
    [<ffffffffa015f866>] ? dev_suspend+0x0/0x177 [dm_mod]
    [<ffffffff802bf05c>] vfs_ioctl+0x2a/0x77
    [<ffffffff802bf4f1>] do_vfs_ioctl+0x448/0x4a0
    [<ffffffff802bf5a0>] sys_ioctl+0x57/0x7a
    [<ffffffff8020c05b>] system_call_fastpath+0x16/0x1b

    Cc: stable@kernel.org
    Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm target: use module refcount directly (Cheng Renquan, 2009-04-02; 1 file, -17/+3)

    The tt_internal's 'use' field is superfluous: the module's refcount
    can do the work properly. An acceptable side-effect is that this
    increases the reference counts reported by 'lsmod'.

    Remove the superfluous test when removing a target module.
    [Crash possible without this on SMP - agk]

    Cc: stable@kernel.org
    Signed-off-by: Cheng Renquan <crquan@gmail.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
    Reviewed-by: Alasdair G Kergon <agk@redhat.com>
    Reviewed-by: Jonathan Brassow <jbrassow@redhat.com>
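    A hedged sketch of the replacement: pin the target's module itself
    instead of bumping a private 'use' counter under a shared lock
    (function names assumed, not taken from the patch):

        #include <linux/module.h>

        static struct target_type *get_target_type(struct target_type *tt)
        {
            /* Fails if the module is already being unloaded. */
            if (!try_module_get(tt->module))
                return NULL;
            return tt;
        }

        static void put_target_type(struct target_type *tt)
        {
            module_put(tt->module);
        }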
* dm snapshot: avoid having two exceptions for the same chunk (Mikulas Patocka, 2009-04-02; 1 file, -0/+13)

    We need to check whether the exception was completed after dropping
    the lock.

    After regaining the lock, __find_pending_exception checks whether the
    exception was already placed into the &s->pending hash. But we don't
    check whether the exception was already completed and placed into the
    &s->complete hash. If the process waiting in alloc_pending_exception
    was delayed at this point because of a scheduling latency and the
    exception was completed in the meantime, we'd miss that and allocate
    another pending exception for an already-completed chunk.

    It would lead to a situation where two records for the same chunk
    exist, and to potential data corruption, because multiple snapshot
    I/Os to the affected chunk could be redirected to different locations
    in the snapshot.

    Cc: stable@kernel.org
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: avoid dropping lock in __find_pending_exception (Mikulas Patocka, 2009-04-02; 1 file, -18/+24)

    It is uncommon and bug-prone to drop a lock in a function that is
    called with the lock held, so this is moved to the caller.

    Cc: stable@kernel.org
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm snapshot: refactor __find_pending_exception (Mikulas Patocka, 2009-04-02; 1 file, -24/+28)

    Move the looking-up of a pending exception from
    __find_pending_exception to another function.

    Cc: stable@kernel.org
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm io: make sync_io uninterruptible (Mikulas Patocka, 2009-04-02; 1 file, -4/+1)

    If someone sends a signal to a process performing a synchronous dm-io
    call, the kernel may crash.

    The function sync_io attempts to exit with -EINTR if it has a pending
    signal; however, the structure "io" is allocated on the stack, so
    already-submitted io requests end up touching unallocated stack space
    and corrupting kernel memory.

    sync_io sets its state to TASK_UNINTERRUPTIBLE, so the signal can't
    break out of io_schedule() --- however, if the signal was pending
    before sync_io entered the while (1) loop, the corruption of kernel
    memory will happen.

    There is no way to cancel in-progress IOs, so the best solution is to
    ignore signals at this point.

    Cc: stable@kernel.org
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* dm raid1: switch read_record from kmalloc to slab to save memory (Mikulas Patocka, 2009-04-02; 1 file, -4/+21)

    With my previous patch to save bi_io_vec, the size of
    dm_raid1_read_record is significantly increased (the vector list
    takes 3072 bytes on 32-bit machines and 4096 bytes on 64-bit
    machines).

    The structure dm_raid1_read_record used to be allocated with kmalloc,
    but kmalloc aligns the size on the next power of two, so an object
    slightly greater than 4096 will allocate 8192 bytes of memory and
    half of that memory will be wasted.

    This patch turns kmalloc into a slab cache which doesn't have this
    padding, so it will reduce the memory consumed.

    Cc: stable@kernel.org
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
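    The fix in sketch form. The cache and struct names follow the
    message above, but the surrounding code is assumed and error
    handling is simplified:

        #include <linux/slab.h>

        static struct kmem_cache *read_record_cache;

        static int init_read_record_cache(void)
        {
            /* A cache sized exactly to the object avoids kmalloc's
             * round-up to the next power of two, which would waste
             * nearly half of an 8192-byte allocation here. */
            read_record_cache =
                kmem_cache_create("dm-raid1-read-record",
                                  sizeof(struct dm_raid1_read_record),
                                  0, 0, NULL);
            return read_record_cache ? 0 : -ENOMEM;
        }

        /* Allocation and free then become:
         *   rec = kmem_cache_alloc(read_record_cache, GFP_NOIO);
         *   kmem_cache_free(read_record_cache, rec);
         */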
* dm: preserve bi_io_vec when resubmitting bios (Mikulas Patocka, 2009-04-02; 1 file, -0/+26)

    Device mapper saves and restores various fields in the bio, but it
    doesn't save bi_io_vec. If the device driver modifies this after a
    partially successful request, dm-raid1 and dm-multipath may attempt
    to resubmit a bio that has bi_size inconsistent with the size of the
    vector.

    To make requests resubmittable in dm-raid1 and dm-multipath, we must
    save and restore the bio vector as well. To reduce the memory
    overhead involved in this, we do not save the pages in the vector,
    and we use a 16-bit field size if the page size is less than 65536.

    Cc: stable@kernel.org
    Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
    Signed-off-by: Alasdair G Kergon <agk@redhat.com>
* Merge branch 'for-linus' of git://neil.brown.name/md (Linus Torvalds, 2009-04-03; 31 files, -837/+3324)

    * 'for-linus' of git://neil.brown.name/md: (53 commits)
      md/raid5 revise rules for when to update metadata during reshape
      md/raid5: minor code cleanups in make_request.
      md: remove CONFIG_MD_RAID_RESHAPE config option.
      md/raid5: be more careful about write ordering when reshaping.
      md: don't display meaningless values in sysfs files resync_start and sync_speed
      md/raid5: allow layout and chunksize to be changed on active array.
      md/raid5: reshape using largest of old and new chunk size
      md/raid5: prepare for allowing reshape to change layout
      md/raid5: prepare for allowing reshape to change chunksize.
      md/raid5: clearly differentiate 'before' and 'after' stripes during reshape.
      Documentation/md.txt update
      md: allow number of drives in raid5 to be reduced
      md/raid5: change reshape-progress measurement to cope with reshaping backwards.
      md: add explicit method to signal the end of a reshape.
      md/raid5: enhance raid5_size to work correctly with negative delta_disks
      md/raid5: drop qd_idx from r6_state
      md/raid6: move raid6 data processing to raid6_pq.ko
      md: raid5 run(): Fix max_degraded for raid level 4.
      md: 'array_size' sysfs attribute
      md: centralize ->array_sectors modifications
      ...
* md/raid5 revise rules for when to update metadata during reshape (NeilBrown, 2009-03-31; 2 files, -6/+30)

    We currently update the metadata:
     1/ every 3 megabytes
     2/ when the place we will write new-layout data to is recorded in
        the metadata as still containing old-layout data.

    Rule one exists to avoid having to re-do too much reshaping in the
    face of a crash/restart. So it should really be time based rather
    than size based. So change it to "every 10 seconds".

    Rule two turns out to be too harsh when restriping an array
    'in-place', as in that case the metadata must be updated for every
    stripe. For the in-place update, it can only possibly be safe from a
    crash if some user-space program takes a backup of every few hundred
    stripes before allowing them to be reshaped. In that case, the
    constant metadata update is pointless.

    So only update the metadata if the new metadata will report that the
    end of the 'old-layout' data is beyond where we are currently
    writing 'new-layout' data.

    Signed-off-by: NeilBrown <neilb@suse.de>
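    A sketch of the time-based variant of rule one; the field name
    reshape_checkpoint is assumed, and the body is illustrative:

        #include <linux/jiffies.h>

        /* Only checkpoint the reshape if at least 10 seconds have
         * passed since the last metadata update. */
        if (time_after(jiffies, conf->reshape_checkpoint + 10 * HZ)) {
            /* ... mark the superblock dirty, wait for the write ... */
            conf->reshape_checkpoint = jiffies;
        }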
* md/raid5: minor code cleanups in make_request. (NeilBrown, 2009-03-31; 1 file, -9/+7)

    ... and, to be certain that make_request doesn't wait forever, add a
    'wake_up' when ->reshape_progress has been set to MaxSector.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md: remove CONFIG_MD_RAID_RESHAPE config option. (NeilBrown, 2009-03-31; 2 files, -39/+0)

    This was only needed when the code was experimental. Most of it is
    well tested now, so the option is no longer useful.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: be more careful about write ordering when reshaping. (NeilBrown, 2009-03-31; 1 file, -2/+47)

    When we are reshaping an array, it is very important that we read the
    data from a particular sector offset before writing new data at that
    offset.

    In most cases when growing or shrinking an array we read long before
    we even consider writing. But when restriping an array without
    changing its size, there is a small possibility that we might have
    some data available to write before the read has happened at the same
    location. This would require some stripes to be in cache already.

    To guard against this small possibility, we check, before writing,
    that the 'old' stripe at the same location is not in the process of
    being read. And we ensure that we mark all 'source' stripes as such
    before allowing new 'destination' stripes to proceed.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md: don't display meaningless values in sysfs files resync_start and sync_speed (NeilBrown, 2009-03-31; 1 file, -0/+4)

    When no resync is happening, both of these files currently have
    meaningless values (in slightly different ways). Change them to
    "none" in that case.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: allow layout and chunksize to be changed on active array. (NeilBrown, 2009-03-31; 1 file, -20/+56)

    If an array has 3 or more devices, we allow the chunksize or layout
    to be changed, and when a reshape starts, we use these as the 'new'
    values.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: reshape using largest of old and new chunk size (NeilBrown, 2009-03-31; 1 file, -12/+22)

    This ensures that even when old and new stripes are overlapping, we
    will try to read all of the old before having to write any of the
    new.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: prepare for allowing reshape to change layout (NeilBrown, 2009-03-31; 2 files, -14/+20)

    Add prev_algo to raid5_conf_t along the same lines as prev_chunk and
    previous_raid_disks.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: prepare for allowing reshape to change chunksize. (NeilBrown, 2009-03-31; 2 files, -16/+28)

    Add "prev_chunk" to raid5_conf_t, similar to "previous_raid_disks",
    to remember what the chunk size was before the reshape that is
    currently underway.

    This seems like duplication with "chunk_size" and "new_chunk" in
    mddev_t, and to some extent it is, but there are differences. The
    values in mddev_t are always defined and often the same. The prev*
    values are only defined if a reshape is underway.

    Also (and more significantly) the raid5_conf_t values will be changed
    at the same time (inside an appropriate lock) that the reshape is
    started by setting reshape_position. In contrast, the new_chunk value
    is set when the sysfs file is written, which could be well before the
    reshape starts.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: clearly differentiate 'before' and 'after' stripes during reshape. (NeilBrown, 2009-03-31; 2 files, -5/+10)

    During a raid5 reshape, we have some stripes in the cache that are
    'before' the reshape (and are still to be processed) and some that
    are 'after'. They are currently differentiated by having different
    ->disks values, as the only reshape currently supported involves
    changing the number of disks.

    However we will soon support reshapes that do not change the number
    of disks (chunk parity or chunk size). So make the difference more
    explicit with a 'generation' number.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md: allow number of drives in raid5 to be reduced (NeilBrown, 2009-03-31; 1 file, -37/+87)

    When reshaping a raid5 to have fewer devices, we work from the end of
    the array to the beginning. md_do_sync gives addresses to
    sync_request that go from the beginning to the end. So we largely
    ignore them and use the internal state variable "reshape_progress" to
    keep track of what to do next.

    Never allow the size to be reduced below the minimum (4 for raid6, 3
    otherwise).

    We require that the size of the array has already been reduced before
    the array is reshaped to a smaller size. This is because simply
    reducing the size is an easily reversible operation, while the
    reshape is immediately destructive and so is not reversible for the
    blocks at the ends of the devices. Thus to reshape an array to have
    fewer devices, you must first write an appropriately small size to
    md/array_size.

    When the reshape finishes, we remove any drives that are no longer
    needed and fix up ->degraded.

    Signed-off-by: NeilBrown <neilb@suse.de>
* md/raid5: change reshape-progress measurement to cope with reshaping backwards. (NeilBrown, 2009-03-31; 2 files, -38/+71)

    When reducing the number of devices in a raid4/5/6, the reshape
    process has to start at the end of the array and work down to the
    beginning. So we need to handle expand_progress and expand_lo
    differently.

    This patch renames "expand_progress" and "expand_lo" to avoid the
    implication that anything is getting bigger (expand->reshape) and,
    every place they are used, we make sure that they are used the right
    way depending on whether delta_disks is positive or negative.

    Signed-off-by: NeilBrown <neilb@suse.de>