path: root/drivers/md
Commit message (Author, Age; Files, Lines)
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial (Linus Torvalds, 2013-11-15; 1 file, -1/+1)

    Pull trivial tree updates from Jiri Kosina:
     "Usual earth-shaking, news-breaking, rocket science pile from
      trivial.git"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (23 commits)
      doc: usb: Fix typo in Documentation/usb/gadget_configs.txt
      doc: add missing files to timers/00-INDEX
      timekeeping: Fix some trivial typos in comments
      mm: Fix some trivial typos in comments
      irq: Fix some trivial typos in comments
      NUMA: fix typos in Kconfig help text
      mm: update 00-INDEX
      doc: Documentation/DMA-attributes.txt fix typo
      DRM: comment: `halve' -> `half'
      Docs: Kconfig: `devlopers' -> `developers'
      doc: typo on word accounting in kprobes.c in mutliple architectures
      treewide: fix "usefull" typo
      treewide: fix "distingush" typo
      mm/Kconfig: Grammar s/an/a/
      kexec: Typo s/the/then/
      Documentation/kvm: Update cpuid documentation for steal time and pv eoi
      treewide: Fix common typo in "identify"
      __page_to_pfn: Fix typo in comment
      Correct some typos for word frequency
      clk: fixed-factor: Fix a trivial typo
      ...
| * treewide: fix "distingush" typo (Michael Opdenacker, 2013-10-14; 1 file, -1/+1)

    Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | Merge branch 'for-linus' of git://git.kernel.dk/linux-block (Linus Torvalds, 2013-11-15; 25 files, -3005/+2587)

    Pull second round of block driver updates from Jens Axboe:
     "As mentioned in the original pull request, the bcache bits were
      pulled because of their dependency on the immutable bio vecs. Kent
      re-did this part and resubmitted it, so here's the 2nd round of
      (mostly) driver updates for 3.13. It contains:

      - The bcache work from Kent.

      - Conversion of virtio-blk to blk-mq. This removes the bio and
        request path, and substitutes the blk-mq path instead. The end
        result is almost 200 deleted lines. Patch is acked by Asias and
        Christoph, who both did a bunch of testing.

      - Removal of the bootmem.h include from Grygorii Strashko, part of a
        larger series of his killing the dependency on that header file.

      - Removal of __cpuinit from blk-mq from Paul Gortmaker"

    * 'for-linus' of git://git.kernel.dk/linux-block: (56 commits)
      virtio_blk: blk-mq support
      blk-mq: remove newly added instances of __cpuinit
      bcache: defensively handle format strings
      bcache: Bypass torture test
      bcache: Delete some slower inline asm
      bcache: Use ida for bcache block dev minor
      bcache: Fix sysfs splat on shutdown with flash only devs
      bcache: Better full stripe scanning
      bcache: Have btree_split() insert into parent directly
      bcache: Move spinlock into struct time_stats
      bcache: Kill sequential_merge option
      bcache: Kill bch_next_recurse_key()
      bcache: Avoid deadlocking in garbage collection
      bcache: Incremental gc
      bcache: Add make_btree_freeing_key()
      bcache: Add btree_node_write_sync()
      bcache: PRECEDING_KEY()
      bcache: bch_(btree|extent)_ptr_invalid()
      bcache: Don't bother with bucket refcount for btree node allocations
      bcache: Debug code improvements
      ...
| * | bcache: defensively handle format strings (Kees Cook, 2013-11-10; 1 file, -1/+1)

    Just to be safe, call the error reporting function with "%s" to avoid
    any possible future format string leak.

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
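    A minimal sketch of the pattern being hardened, assuming a
    printk()-style sink (the function names are illustrative, not
    bcache's):

        #include <linux/printk.h>

        /* Risky: if msg ever contains '%', printk parses it as a format. */
        static void report_error_unsafe(const char *msg)
        {
                printk(msg);
        }

        /* Safe: msg is passed as data and never parsed as a format string. */
        static void report_error_safe(const char *msg)
        {
                printk("%s\n", msg);
        }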
| * | bcache: Bypass torture test (Kent Overstreet, 2013-11-10; 5 files, -8/+28)

    More testing ftw! Also, now verify mode doesn't break if you read
    dirty data.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Delete some slower inline asm (Kent Overstreet, 2013-11-10; 1 file, -8/+0)

    Never saw a profile of bset_search_tree() where it wasn't bottlenecked
    on memory until I got my new Haswell machine, but when I tried it
    there it was suddenly burning 20% of the cpu in the inner loop on
    shrd...

    Turns out, the version of shrd that takes 64 bit operands has a 9
    cycle latency. hah.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
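    For context, shrd is x86's double-precision shift: it funnels bits
    from a second register into the one being shifted. A portable-C
    equivalent of the 64-bit form is sketched below as a generic
    illustration (this is not the deleted bcache routine); plain
    compiler-generated shifts and ors avoid the high-latency instruction.

        #include <stdint.h>

        /*
         * Equivalent of "shrd" with 64-bit operands: shift the 128-bit
         * pair (high:low) right by n and keep the low 64 bits.
         * Valid for 0 < n < 64.
         */
        static inline uint64_t shrd64(uint64_t low, uint64_t high, unsigned int n)
        {
                return (low >> n) | (high << (64 - n));
        }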
| * | bcache: Use ida for bcache block dev minor (Kent Overstreet, 2013-11-10; 1 file, -6/+20)

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
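    The stock kernel facility for handing out small unique integers such
    as minor numbers is the IDA allocator; a minimal sketch of the
    pattern, with the bcache-specific bookkeeping omitted and names
    invented for illustration:

        #include <linux/gfp.h>
        #include <linux/idr.h>

        static DEFINE_IDA(example_minor_ida);

        static int get_minor(void)
        {
                /* Smallest free id >= 0, or a negative errno on failure. */
                return ida_simple_get(&example_minor_ida, 0, 0, GFP_KERNEL);
        }

        static void put_minor(int minor)
        {
                ida_simple_remove(&example_minor_ida, minor);
        }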
| * | bcache: Fix sysfs splat on shutdown with flash only devs (Kent Overstreet, 2013-11-10; 6 files, -33/+30)

    Whoops.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Better full stripe scanning (Kent Overstreet, 2013-11-10; 5 files, -57/+99)

    The old scanning-by-stripe code burned too much CPU; this should be
    better.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Have btree_split() insert into parent directly (Kent Overstreet, 2013-11-10; 1 file, -46/+39)

    The flow control in btree_insert_node() was... fragile... before.
    This'll use more stack (but since our btrees are never more than
    depth 1, that shouldn't matter), and it should be significantly
    clearer and less fragile.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Move spinlock into struct time_stats (Kent Overstreet, 2013-11-10; 6 files, -16/+17)

    Minor cleanup.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
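    This is the usual "the lock travels with the data it protects"
    cleanup. Roughly, assuming illustrative field names (not the actual
    bcache layout):

        #include <linux/spinlock.h>
        #include <linux/types.h>

        struct time_stats {
                spinlock_t      lock;   /* protects the counters below */
                u64             count;
                u64             last_duration_ns;
        };

        /* Caller does spin_lock_init(&stats->lock) once at setup. */
        static void time_stats_update(struct time_stats *stats, u64 ns)
        {
                spin_lock(&stats->lock);
                stats->count++;
                stats->last_duration_ns = ns;
                spin_unlock(&stats->lock);
        }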
| * | bcache: Kill sequential_merge option (Kent Overstreet, 2013-11-10; 4 files, -31/+18)

    It never really made sense to expose this, so just kill it.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Kill bch_next_recurse_key() (Kent Overstreet, 2013-11-10; 3 files, -21/+11)

    This dates from before the btree iterator, and now it's finally gone.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Avoid deadlocking in garbage collection (Kent Overstreet, 2013-11-10; 3 files, -12/+13)

    Not a complete fix - we could still deadlock if btree_insert_node()
    has to split...

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Incremental gc (Kent Overstreet, 2013-11-10; 4 files, -167/+226)

    Big garbage collection rewrite; now, garbage collection uses the same
    mechanisms as used elsewhere for inserting/updating btree node
    pointers, instead of rewriting interior btree nodes in place.

    This makes the code significantly cleaner and less fragile, and means
    we can now make garbage collection incremental - it doesn't have to
    hold a write lock on the root of the btree for the entire duration of
    garbage collection.

    This means that there's less of a latency hit for doing garbage
    collection, which means we can gc more frequently (and do a better
    job of reclaiming from the cache), and we can coalesce across more
    btree nodes (improving our space efficiency).

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add make_btree_freeing_key() (Kent Overstreet, 2013-11-10; 1 file, -13/+18)

    Refactoring, prep work for incremental garbage collection.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add btree_node_write_sync() (Kent Overstreet, 2013-11-10; 1 file, -19/+16)

    More refactoring - mostly making the interfaces more explicit about
    what we actually want to do.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: PRECEDING_KEY() (Kent Overstreet, 2013-11-10; 2 files, -7/+20)

    btree_insert_key() was open coding this; this is just refactoring.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: bch_(btree|extent)_ptr_invalid() (Kent Overstreet, 2013-11-10; 4 files, -21/+49)

    Trying to treat btree pointers and leaf node pointers the same way
    was a mistake - going to start being more explicit about the type of
    key/pointer we're dealing with. This is the first part of that
    refactoring; this patch shouldn't change any actual behaviour.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Don't bother with bucket refcount for btree node allocations (Kent Overstreet, 2013-11-10; 4 files, -27/+9)

    The bucket refcount (dropped with bkey_put()) is only needed to
    prevent the newly allocated bucket from being garbage collected until
    we've added a pointer to it somewhere. But for btree node allocations,
    the fact that we have btree nodes locked is enough to guard against
    races with garbage collection.

    Eventually the per bucket refcount is going to be replaced with
    something specific to bch_alloc_sectors().

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Debug code improvements (Kent Overstreet, 2013-11-10; 11 files, -186/+162)

    Couple of changes:

    * Consolidate bch_check_keys() and bch_check_key_order(), and move
      the checks that only check_key_order() could do to
      bch_btree_iter_next().

    * Get rid of CONFIG_BCACHE_EDEBUG - now, all that code is compiled in
      when CONFIG_BCACHE_DEBUG is enabled, and there's now a sysfs file
      to flip on the EDEBUG checks at runtime.

    * Dropped an old, not terribly useful check in rw_unlock(), and
      refactored/improved some of the other debug code.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Fix bch_ptr_bad() (Kent Overstreet, 2013-11-10; 1 file, -34/+33)

    Previously, bch_ptr_bad() could return false when there was a pointer
    to a nonexistent device... it only filtered out keys with
    PTR_CHECK_DEV pointers.

    This behaviour was intended for multiple cache device support; for
    that, just because the device for one of the pointers has gone away
    doesn't mean we want to filter out the rest of the pointers.

    But we don't yet explicitly filter/check individual pointers, so
    without that this behaviour was wrong - a corrupt bkey with a bad
    device pointer could cause us to deref a bad pointer. Doh.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Pull on disk data structures out into a separate header (Kent Overstreet, 2013-11-10; 9 files, -340/+14)

    Now, the on disk data structures are in a header that can be exported
    to userspace - and having them all centralized is nice too.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
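    Headers shared with userspace conventionally describe on-disk data
    with the fixed-width, fixed-endian types from <linux/types.h>. A
    minimal sketch of the style (fields invented for illustration, not
    the real bcache superblock):

        #include <linux/types.h>

        struct example_ondisk_sb {
                __le64  csum;
                __le64  offset;         /* sector this sb was read from */
                __le64  version;
                __u8    magic[16];
                __le64  flags;
        } __attribute__((packed));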
| * | bcache: Move sector allocator to alloc.c (Kent Overstreet, 2013-11-10; 4 files, -186/+189)

    Just reorganizing things a bit.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Break up struct search (Kent Overstreet, 2013-11-10; 8 files, -385/+370)

    With all the recent refactoring around struct btree_op, struct search
    has gotten rather large. But we can now easily break it up in a
    different way - we break out struct btree_insert_op, which is for
    inserting data into the cache, and that's now what the copying gc
    code uses - struct search is now specific to request.c.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Convert bch_btree_insert() to bch_btree_map_leaf_nodes() (Kent Overstreet, 2013-11-10; 5 files, -52/+43)

    Last of the btree_map() conversions. The main visible effect is that
    bch_btree_insert() no longer takes a struct btree_op argument -
    there's no fancy state machine stuff going on; it's just a normal
    function.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Don't use op->insert_collision (Kent Overstreet, 2013-11-10; 5 files, -7/+16)

    When we convert bch_btree_insert() to bch_btree_map_leaf_nodes(), we
    won't be passing struct btree_op to bch_btree_insert() anymore - so
    we need a different way of returning whether there was a collision
    (really, a replace collision).

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Kill op->replace (Kent Overstreet, 2013-11-10; 7 files, -73/+71)

    This is prep work for converting bch_btree_insert to
    bch_btree_map_leaf_nodes() - we have to convert all its arguments to
    actual arguments. Bunch of churn, but should be straightforward.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Drop some closure stuff (Kent Overstreet, 2013-11-10; 3 files, -250/+40)

    With the recent bcache refactoring, some of the closure code isn't
    needed anymore.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Kill op->cl (Kent Overstreet, 2013-11-10; 8 files, -81/+63)

    This isn't used for waiting asynchronously anymore - so this is a
    fairly trivial refactoring.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Prune struct btree_op (Kent Overstreet, 2013-11-10; 11 files, -171/+179)

    Eventual goal is for struct btree_op to contain only what is
    necessary for traversing the btree.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Clean up cache_lookup_fn (Kent Overstreet, 2013-11-10; 1 file, -62/+46)

    There was some looping in submit_partial_cache_hit() and
    submit_partial_cache_miss() that isn't needed anymore - originally,
    we wouldn't necessarily process the full hit or miss all at once
    because when splitting the bio, we took into account the restrictions
    of the device we were sending it to.

    But device bio size restrictions are now handled elsewhere, with a
    wrapper around generic_make_request() - so that looping has been
    unnecessary for a while now and we can do quite a bit of cleanup.

    And if we trim the key we're reading from to match the subset we're
    actually reading, we don't have to explicitly calculate bi_sector
    anymore. Neat.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Convert bch_btree_read_async() to bch_btree_map_keys() (Kent Overstreet, 2013-11-10; 5 files, -168/+125)

    This is a fairly straightforward conversion, mostly reshuffling -
    op->lookup_done goes away, replaced by MAP_DONE/MAP_CONTINUE. And the
    code for handling cache hits and misses wasn't really btree code, so
    it gets moved to request.c.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Move some stuff to btree.c (Kent Overstreet, 2013-11-10; 3 files, -97/+96)

    With the new btree_map() functions, we don't need to export the stuff
    needed for traversing the btree anymore.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add btree_map() functions (Kent Overstreet, 2013-11-10; 5 files, -97/+186)

    Lots of stuff has been open coding its own btree traversal - which is
    generally pretty simple code, but there are a few subtleties.

    This adds two new functions, bch_btree_map_nodes() and
    bch_btree_map_keys(), which do the traversal for you. Everything
    that's open coding btree traversal now (with the exception of garbage
    collection) is slowly going to be converted to these two functions;
    being able to write other code at a higher level of abstraction is a
    big improvement w.r.t. overall code quality.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
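    The shape is a visitor-style traversal: the map function owns the
    recursion (and, in bcache, the locking and IO) while callers supply a
    callback. A simplified in-memory analogy with hypothetical names (the
    real signatures differ), reusing the MAP_DONE/MAP_CONTINUE convention
    mentioned in the bch_btree_map_keys() conversion above:

        enum map_ret { MAP_DONE, MAP_CONTINUE };

        struct node {
                int key;
                struct node *left, *right;
        };

        typedef enum map_ret (*map_fn)(struct node *n, void *ctx);

        /* In-order walk; stops early once the callback returns MAP_DONE. */
        static enum map_ret map_keys(struct node *n, map_fn fn, void *ctx)
        {
                if (!n)
                        return MAP_CONTINUE;
                if (map_keys(n->left, fn, ctx) == MAP_DONE)
                        return MAP_DONE;
                if (fn(n, ctx) == MAP_DONE)
                        return MAP_DONE;
                return map_keys(n->right, fn, ctx);
        }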
| * | bcache: Convert writeback to a kthread (Kent Overstreet, 2013-11-10; 4 files, -206/+203)

    This simplifies the writeback flow control quite a bit - previously,
    it was conceptually two coroutines, refill_dirty() and read_dirty().
    This makes the code quite a bit more straightforward.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Convert gc to a kthread (Kent Overstreet, 2013-11-10; 8 files, -60/+74)

    We needed a dedicated rescuer workqueue for gc anyways... and gc was
    conceptually a dedicated thread, just one that wasn't running all the
    time. Switch it to a dedicated thread to make the code a bit more
    straightforward.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
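    Both this and the writeback conversion above follow the standard
    kthread pattern; a minimal sketch, with the actual gc work elided and
    names invented for illustration:

        #include <linux/err.h>
        #include <linux/kthread.h>
        #include <linux/sched.h>

        static struct task_struct *gc_task;

        static int gc_thread_fn(void *data)
        {
                while (!kthread_should_stop()) {
                        /* ... do one pass of work ... */

                        set_current_state(TASK_INTERRUPTIBLE);
                        if (kthread_should_stop()) {
                                __set_current_state(TASK_RUNNING);
                                break;
                        }
                        schedule();     /* sleep until someone wakes us */
                }
                return 0;
        }

        static int start_gc(void)
        {
                gc_task = kthread_run(gc_thread_fn, NULL, "example_gc");
                return IS_ERR(gc_task) ? PTR_ERR(gc_task) : 0;
        }

        /* wake_up_process(gc_task) kicks a pass; kthread_stop() exits. */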
| * | bcache: Convert bucket_wait to wait_queue_head_t (Kent Overstreet, 2013-11-10; 6 files, -67/+70)

    At one point we did do fancy asynchronous waiting stuff with
    bucket_wait, but that's all gone (and bucket_wait is used a lot less
    than it used to be). So use the standard primitives.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
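    "The standard primitives" here are the stock wait-queue API; roughly
    as below, with the condition and names purely illustrative:

        #include <linux/wait.h>

        static DECLARE_WAIT_QUEUE_HEAD(example_bucket_wait);
        static int buckets_free;        /* illustrative condition */

        static void wait_for_bucket(void)
        {
                /* Sleeps until the condition holds; rechecked per wakeup. */
                wait_event(example_bucket_wait, buckets_free > 0);
                /* ... take a bucket ... */
        }

        static void release_bucket(void)
        {
                buckets_free++;         /* real code updates this under a lock */
                wake_up(&example_bucket_wait);
        }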
| * | bcache: Convert try_wait to wait_queue_head_t (Kent Overstreet, 2013-11-10; 4 files, -99/+75)

    We never waited on c->try_wait asynchronously, so just use the
    standard primitives.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Move keylist out of btree_op (Kent Overstreet, 2013-11-10; 6 files, -28/+36)

    Slowly working on pruning struct btree_op - the aim is for it to only
    contain things that are actually necessary for traversing the btree.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Refactor journalling flow control (Kent Overstreet, 2013-11-10; 7 files, -179/+207)

    Make things that don't need to be asynchronous less so - bch_journal()
    only has to block when the journal or journal entry is full, which is
    emphatically not a fast path. So make it a normal function that just
    returns when it finishes, to make the code and control flow easier to
    follow.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Refactor read request code a bit (Kent Overstreet, 2013-11-10; 1 file, -36/+35)

    More refactoring, and renaming.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Refactor request_write() (Kent Overstreet, 2013-11-10; 2 files, -187/+183)

    Try to improve some of the naming a bit to be more consistent, and
    also improve the flow of control in request_write() a bit.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Clean up keylist code (Kent Overstreet, 2013-11-10; 5 files, -52/+57)

    More random refactoring.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add explicit keylist arg to btree_insert() (Kent Overstreet, 2013-11-10; 5 files, -16/+18)

    Some refactoring - better to explicitly pass stuff around instead of
    having it all in the "big bag of state", struct btree_op. Going to
    prune struct btree_op quite a bit over time.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
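    The shape of this kind of change, with hypothetical prototypes (not
    the exact bcache signatures):

        struct btree_op;
        struct keylist;

        /* Before: the keys ride along inside the big bag of state. */
        int btree_insert_old(struct btree_op *op);      /* reads op->keys */

        /* After: the keylist is an explicit parameter. */
        int btree_insert_new(struct btree_op *op, struct keylist *keys);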
| * | bcache: Convert btree_insert_check_key() to btree_insert_node() (Kent Overstreet, 2013-11-10; 4 files, -72/+79)

    This was the main point of all this refactoring - now,
    btree_insert_check_key() won't fail just because the leaf node
    happened to be full.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Insert multiple keys at a time (Kent Overstreet, 2013-11-10; 1 file, -17/+16)

    We'll often end up with a list of adjacent keys to insert - because
    bch_data_insert() may have to fragment the data it writes.

    Originally, to simplify things and avoid having to deal with corner
    cases, bch_btree_insert() would pass keys from this list one at a
    time to btree_insert_recurse() - mainly because the list of keys
    might span leaf nodes, so it was easier this way.

    With the btree_insert_node() refactoring, it's now a lot easier to
    just pass down the whole list and have btree_insert_recurse() iterate
    over leaf nodes until it's done.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Add btree_insert_node() (Kent Overstreet, 2013-11-10; 3 files, -66/+105)

    The flow of control in the old btree insertion code was rather
    backwards; we'd recurse down the btree (in btree_insert_recurse()),
    and then if we needed to split, the keys to be inserted into the
    parent node would be effectively returned up to
    btree_insert_recurse(), which would notice there was more work to do
    and finish the insertion.

    The main problem with this was that the full logic for btree
    insertion could only be used by calling btree_insert_recurse; if
    you'd gotten to a btree leaf some other way and had a key to insert,
    and it turned out that node needed to be split, you were SOL.

    This inverts the flow of control so btree_insert_node() does _full_
    btree insertion, including splitting - and takes a (leaf) btree node
    to insert into as a parameter. This means we can now _correctly_
    handle cache misses - for cache misses, we need to insert a fake
    "check" key into the btree when we discover we have a cache miss -
    while we still have the btree locked. Previously, if the btree node
    was full, inserting a cache miss would just fail.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Explicitly track btree node's parent (Kent Overstreet, 2013-11-10; 2 files, -10/+20)

    This is prep work for the reworked btree insertion code.

    The way we set b->parent is ugly and hacky... the problem is, when
    btree_split() or garbage collection splits or rewrites a btree node,
    the parent changes for all its (potentially already cached) children.

    I may change this later and add some code to look through the btree
    node cache and find all our cached child nodes and change the parent
    pointer then...

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
| * | bcache: Remove unnecessary check in should_split() (Kent Overstreet, 2013-11-10; 1 file, -3/+2)

    Checking i->seq was redundant, because since ages ago we always
    initialize the new bset when advancing b->written.

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>