path: root/fs
Commit message    Author    Age    Files    Lines
* Merge branch 'for-linus' of git://oss.sgi.com/xfs/xfs    Linus Torvalds    2011-12-02    5    -25/+74
      * 'for-linus' of git://oss.sgi.com/xfs/xfs:
        xfs: fix attr2 vs large data fork assert
        xfs: force buffer writeback before blocking on the ilock in inode reclaim
        xfs: validate acl count
| * xfs: fix attr2 vs large data fork assert    Christoph Hellwig    2011-11-29    1    -25/+39
      With Dmitry's fsstress updates I've seen very reproducible crashes in xfs_attr_shortform_remove because xfs_attr_shortform_bytesfit claims that the attributes would not fit inline into the inode after removing an attribute. It turns out that we were operating on an inode with lots of delalloc extents, and thus an if_bytes value for the data fork that is larger than the biggest possible on-disk storage for it, which utterly confuses the code near the end of xfs_attr_shortform_bytesfit. Fix this by always allowing the current attribute fork, like we already do for the attr1 format, given that delalloc conversion will take care of moving either the data or attribute area out of line if it doesn't fit at that point - or make the point moot by merging extents at that point. Also document the function better, and clean up some loose bits.
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
| * xfs: force buffer writeback before blocking on the ilock in inode reclaim    Christoph Hellwig    2011-11-29    3    -0/+33
      If we are doing synchronous inode reclaim we block the VM from making progress in memory reclaim. So if we encounter a flush-locked inode, promote it in the delwri list and wake up xfsbufd to write it out now. Without this we can get hangs of up to 30 seconds during workloads hitting synchronous inode reclaim. The scheme is copied from what we do for dquot reclaims.
      Reported-by: Simon Kirby <sim@hostway.ca>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Simon Kirby <sim@hostway.ca>
      Signed-off-by: Ben Myers <bpm@sgi.com>
| * xfs: validate acl count    Christoph Hellwig    2011-11-28    1    -0/+2
      This prevents in-memory corruption and possible panics if the on-disk ACL is badly corrupted.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ben Myers <bpm@sgi.com>
* | Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2    Linus Torvalds    2011-12-01    31    -536/+995
      * 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2: (31 commits)
        ocfs2: avoid unaligned access to dqc_bitmap
        ocfs2: Use filemap_write_and_wait() instead of write_inode_now()
        ocfs2: honor O_(D)SYNC flag in fallocate
        ocfs2: Add a missing journal credit in ocfs2_link_credits() -v2
        ocfs2: send correct UUID to cleancache initialization
        ocfs2: Commit transactions in error cases -v2
        ocfs2: make direntry invalid when deleting it
        fs/ocfs2/dlm/dlmlock.c: free kmem_cache_zalloc'd data using kmem_cache_free
        ocfs2: Avoid livelock in ocfs2_readpage()
        ocfs2: serialize unaligned aio
        ocfs2: Implement llseek()
        ocfs2: Fix ocfs2_page_mkwrite()
        ocfs2: Add comment about orphan scanning
        ocfs2: Clean up messages in the fs
        ocfs2/cluster: Cluster up now includes network connections too
        ocfs2/cluster: Add new function o2net_fill_node_map()
        ocfs2/cluster: Fix output in file elapsed_time_in_ms
        ocfs2/dlm: dlmlock_remote() needs to account for remastery
        ocfs2/dlm: Take inflight reference count for remotely mastered resources too
        ocfs2/dlm: Cleanup dlm_wait_for_node_death() and dlm_wait_for_node_recovery()
        ...
| * | ocfs2: avoid unaligned access to dqc_bitmap    Akinobu Mita    2011-12-01    2    -5/+52
      The dqc_bitmap field of struct ocfs2_local_disk_chunk is 32-bit aligned, but not 64-bit aligned. The dqc_bitmap is accessed by ocfs2_set_bit(), ocfs2_clear_bit(), ocfs2_test_bit(), or ocfs2_find_next_zero_bit(). These are wrapper macros for ext2_*_bit(), which need to take an unsigned-long-aligned address (though some architectures are able to handle unaligned addresses correctly), so some 64-bit architectures may not be able to access the dqc_bitmap correctly. This patch avoids such unaligned access by using other wrapper functions for ext2_*_bit(). The code is taken from fs/ext4/mballoc.c, which also needs to handle unaligned bitmap access.
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Acked-by: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | ocfs2: Use filemap_write_and_wait() instead of write_inode_now()    Jan Kara    2011-11-17    1    -1/+1
      Since ocfs2 has no ->write_inode method, there's no point in calling write_inode_now() from ocfs2_cleanup_delete_inode(). Use filemap_write_and_wait() instead. This helps us to clean up the inode writing interfaces.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | ocfs2: honor O_(D)SYNC flag in fallocate    Mark Fasheh    2011-11-17    1    -0/+3
      We need to sync the transaction which updates i_size if the file is marked as needing sync semantics.
      Signed-off-by: Mark Fasheh <mfasheh@suse.de>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | ocfs2: Add a missing journal credit in ocfs2_link_credits() -v2    Xiaowei.Hu    2011-11-17    1    -2/+3
      With indexed_dir enabled, ocfs2 maintains a list of dirblocks having space. The credit calculation in ocfs2_link_credits() did not correctly account for adding an entry that exactly fills a dirblock, which triggers removing that dirblock by changing the pointer in the previous block in the list. The credit calculation did not account for that previous block.
      To expose, do:
        mkfs.ocfs2 -b 512 -M local /dev/sdX
        mount /dev/sdX /ocfs2
        mkdir /ocfs2/linkdir
        touch /ocfs2/linkdir/file1
        for i in `seq 1 29` ; do link /ocfs2/linkdir/file1 /ocfs2/linkdir/linklinklinklinklinklink$i ; done
        rm -f /ocfs2/linkdir/linklinklinklinklinklink10
        sleep 8
        link /ocfs2/linkdir/file1 /ocfs2/linkdir/linklinklinklinklinklinkaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
      Note: The link names have been crafted for a 512 byte blocksize. Reproducing with a larger blocksize will require longer (or more) links. The sleep is important. We want jbd2 to commit the transaction so that the missing block does not piggy-back on account of the previous transaction.
      Signed-off-by: XiaoweiHu <xiaowei.hu at oracle.com>
      Reviewed-by: WengangWang <wen.gang.wang at oracle.com>
      Reviewed-by: Sunil.Mushran <sunil.mushran at oracle.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | ocfs2: send correct UUID to cleancache initialization    Dan Magenheimer    2011-11-17    1    -1/+1
      Fix the cleancache initialization call to correctly pass the uuid. As reported by Steven Whitehouse in https://lkml.org/lkml/2011/5/27/221, the ocfs2 volume UUID is incorrectly passed to cleancache. As a result, shared-ephemeral tmem pools will not actually be created; instead they will be private (unshared), which misses out on a major benefit of tmem.
      Reported-by: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | ocfs2: Commit transactions in error cases -v2    Wengang Wang    2011-11-17    3    -6/+9
      There are three identified error cases in which journal transactions are neither committed nor aborted. We should take care of these cases by committing the transactions. Otherwise a journal handle is left behind, which causes a subsequent ocfs2_start_trans() in the same process context to get the wrong credits.
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | ocfs2: make direntry invalid when deleting it    Wengang Wang    2011-11-17    1    -2/+1
      When we delete a direntry from a directory, if it is the first one in a block we invalidate it by setting its inode to 0; otherwise, we merge the deleted one into the prior contiguous direntry. And we don't truncate directories. There is a problem in the latter case since the inode is not set to 0. The problem shows up when the caller passes a file position as a parameter to ocfs2_dir_foreach_blk(). If the position happens to point to a stale direntry (not the first in a block, deleted in between calls to ocfs2_dir_foreach_blk()), we are not able to recognize its staleness and wrongly treat it as a live one. The fix is to set the inode to 0 in both cases, indicating that the direntry is stale. This won't introduce additional IOs.
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | fs/ocfs2/dlm/dlmlock.c: free kmem_cache_zalloc'd data using kmem_cache_free    Julia Lawall    2011-11-17    1    -1/+1
      Memory allocated using kmem_cache_zalloc should be freed using kmem_cache_free, not kfree. The semantic patch that fixes this problem is as follows: (http://coccinelle.lip6.fr/)
      // <smpl>
      @@
      expression x,e,e1,e2;
      @@
      x = kmem_cache_zalloc(e1,e2)
      ... when != x = e
      ?-kfree(x)
      +kmem_cache_free(e1,x)
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
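      A minimal sketch of the pairing rule the semantic patch enforces (illustrative only, not taken from dlmlock.c; the struct and cache names are hypothetical):
        #include <linux/slab.h>

        /* hypothetical example types -- not the actual dlm structures */
        struct my_lock { int id; };
        static struct kmem_cache *my_lock_cache;    /* created elsewhere with kmem_cache_create() */

        static int alloc_and_release(void)
        {
                struct my_lock *lock;

                lock = kmem_cache_zalloc(my_lock_cache, GFP_NOFS);  /* zeroed object from the cache */
                if (!lock)
                        return -ENOMEM;
                /* ... use the object ... */
                kmem_cache_free(my_lock_cache, lock);  /* correct: return it to the same cache */
                /* kfree(lock) here would be the mismatch the semantic patch removes */
                return 0;
        }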
| * | Merge branch 'mw-3.1-jul25' of git://oss.oracle.com/git/smushran/linux-2.6 into ocfs2-fixes    Joel Becker    2011-08-21    389    -10005/+9983
| | * | ocfs2: Implement llseek()    Sunil Mushran    2011-07-25    3    -2/+151
      ocfs2 implements its own llseek() to provide the SEEK_HOLE/SEEK_DATA functionality. SEEK_HOLE sets the file pointer to the start of either a hole or an unwritten (preallocated) extent that is greater than or equal to the supplied offset. SEEK_DATA sets the file pointer to the start of an allocated extent (not unwritten) that is greater than or equal to the supplied offset. If the supplied offset is on a desired region, then the file pointer is set to it. Offsets greater than or equal to the file size return -ENXIO. Unwritten (preallocated) extents are considered holes because the file system treats reads to such regions in the same way as it does to holes.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
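      A minimal userspace sketch of the SEEK_HOLE/SEEK_DATA semantics described above (illustrative, not part of the commit; assumes a libc and kernel that expose the two flags):
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
                if (argc < 2) {
                        fprintf(stderr, "usage: %s <file>\n", argv[0]);
                        return 1;
                }
                int fd = open(argv[1], O_RDONLY);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                /* first allocated (written) extent at or after offset 0 */
                off_t data = lseek(fd, 0, SEEK_DATA);
                /* first hole or unwritten extent at or after offset 0 (EOF offset if none) */
                off_t hole = lseek(fd, 0, SEEK_HOLE);
                if (data == (off_t)-1 || hole == (off_t)-1)
                        perror("lseek");        /* ENXIO once the offset reaches EOF */
                else
                        printf("data at %lld, hole at %lld\n",
                               (long long)data, (long long)hole);
                close(fd);
                return 0;
        }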
| | * | ocfs2: Fix ocfs2_page_mkwrite()    Wengang Wang    2011-07-24    2    -37/+67
      This patch addresses two shortcomings in ocfs2_page_mkwrite():
      1. It makes the function return better VM_FAULT_* errors.
      2. It handles an error that is triggered when a page is dropped from the mapping due to memory pressure. This patch locks the page to prevent that.
      [Patch was cleaned up by Sunil Mushran.]
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2: Add comment about orphan scanning    Sunil Mushran    2011-07-24    1    -0/+14
      Add a comment that explains why the orphan scan scans all the slots.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2: Clean up messages in the fs    Sunil Mushran    2011-07-24    4    -14/+22
      Convert useful messages from ML_NOTICE to KERN_NOTICE to improve readability.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/cluster: Cluster up now includes network connections too    Sunil Mushran    2011-07-24    2    -13/+61
      The cluster-up check only checks whether the node is heartbeating. If it is, it continues, assuming that the node is connected to all the nodes. But if that is not the case, the cluster join aborts with a stack of errors that are not easy to comprehend. This patch adds the network connect check upfront and prints the nodes that the node is not yet connected to before aborting.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/cluster: Add new function o2net_fill_node_map()    Sunil Mushran    2011-07-24    3    -33/+90
      This patch adds the function o2net_fill_node_map(), which returns the bitmap of nodes that the node is connected to. This bitmap is also accessible by the user via the debugfs file /sys/kernel/debug/o2net/connected_nodes.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/cluster: Fix output in file elapsed_time_in_ms    Sunil Mushran    2011-07-24    1    -3/+6
      The o2hb debugfs file, elapsed_time_in_ms, should return values only after the timer has been armed at least once.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/dlm: dlmlock_remote() needs to account for remastery    Sunil Mushran    2011-07-24    1    -10/+8
      In dlmlock_remote(), we wait for the resource to stop being active before setting the inprogress flag. Active includes recovery, migration, etc. The problem here is that if the resource was being recovered or migrated, the new owner could very well be that node itself (and thus not a remote node). This problem was observed in Oracle bug#12583620. The error messages observed were as follows:
        dlm_send_remote_lock_request:337 ERROR: Error -40 (ELOOP) when sending message 503 (key 0xd6d8c7) to node 2
        dlmlock_remote:271 ERROR: dlm status = DLM_BADARGS
        dlmlock:751 ERROR: dlm status = DLM_BADARGS
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/dlm: Take inflight reference count for remotely mastered resources too    Sunil Mushran    2011-07-24    3    -39/+32
      The inflight reference count, in the lock resource, is taken to pin the resource in memory. We take it when a new resource is created and release it after a lock is attached to it. We do this to prevent the resource from getting purged prematurely. Earlier this reference count was being taken for locally mastered resources only. This patch extends the same functionality to remotely mastered ones. We are doing this because the same premature purging could occur for remotely mastered resources if the remote node were to die before completion of the create lock. Fix for Oracle bug#12405575.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/dlm: Cleanup dlm_wait_for_node_death() and dlm_wait_for_node_recovery()    Sunil Mushran    2011-07-24    2    -26/+24
      dlm_wait_for_node_death() and dlm_wait_for_node_recovery() needed a facelift.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/dlm: Trace insert/remove of resource to/from hash    Sunil Mushran    2011-07-24    3    -11/+15
      Add mlog to trace adding and removing the resource to/from the hash table.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/dlm: Clean up refmap helpers    Sunil Mushran    2011-07-24    3    -79/+66
      Patch cleans up helpers that set/clear refmap bits and grab/drop inflight lock ref counts.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/dlm: Clean up dlm_finish_local_lockres_recovery()    Sunil Mushran    2011-07-24    1    -32/+25
      dlm_finish_local_lockres_recovery() needed a facelift.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2: Clean up messages in stack_o2cb.c    Sunil Mushran    2011-07-24    1    -2/+2
      o2cb messages needed a facelift.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/dlm: Clean up messages in o2dlm    Sunil Mushran    2011-07-24    5    -70/+71
      o2dlm messages needed a facelift.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/cluster: Clean up messages in o2net    Sunil Mushran    2011-07-24    1    -66/+53
      o2net messages needed a facelift.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| | * | ocfs2/cluster: Abort heartbeat start on hard-ro devices    Sunil Mushran    2011-07-24    1    -69/+116
      Currently if the heartbeat device is hard-ro, the o2hb thread keeps chugging along and dumping errors along the way, and the user needs to stop the heartbeat manually. The patch addresses this shortcoming by adding a limit to the number of times the hb thread will iterate in an unsteady state. If the hb thread does not reach steady state within that many iterations, the start is aborted.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
| * | | ocfs2: Avoid livelock in ocfs2_readpage()    Jan Kara    2011-07-28    1    -0/+8
      When someone writes to an inode, readers accessing the same inode via ocfs2_readpage() just busyloop trying to get ip_alloc_sem, because do_generic_file_read() looks up the page again and retries ->readpage() when the previous attempt failed with AOP_TRUNCATED_PAGE. When there are enough readers, they can occupy all CPUs, and on a non-preempt kernel the system is deadlocked because the writer holding ip_alloc_sem is never run to release the semaphore. Fix the problem by making the reader block on ip_alloc_sem to break the busy loop.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | | ocfs2: serialize unaligned aio    Mark Fasheh    2011-07-28    5    -2/+73
      Fix a corruption that can happen when we have (two or more) outstanding aio's to an overlapping unaligned region. Ext4 (e9e3bcecf44c04b9e6b505fd8e2eb9cea58fb94d) and xfs recently had to fix similar issues. In our case what happens is that we can have an outstanding aio on a region, and if a write comes in with some bytes overlapping the original aio we may decide to read that region into a page before continuing (typically because of buffered-io fallback). Since we have no ordering guarantees with the aio, we can read stale or bad data into the page and then write it back out. If the i/o is page and block aligned, then we avoid this issue as there won't be any need to read data from disk. I took the same approach as Eric in the ext4 patch and introduced some serialization of unaligned async direct i/o. I don't expect this to have an effect on the most common cases of AIO. Unaligned aio will be slower though, but that's far more acceptable than data corruption.
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | | ocfs2: use proper little-endian bitops    Akinobu Mita    2011-05-31    1    -2/+2
      Using __test_and_{set,clear}_bit_le() while ignoring its return value can be replaced with __{set,clear}_bit_le().
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: ocfs2-devel@oss.oracle.com
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
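      A short sketch of the substitution described above (a generic illustration with a made-up bitmap, not the actual ocfs2 call sites):
        #include <linux/bitops.h>

        unsigned long bitmap[2] = { 0 };        /* hypothetical little-endian bitmap */
        int nr = 5;

        /* before: the "test" half of the operation is wasted work when the result is ignored */
        __test_and_set_bit_le(nr, bitmap);

        /* after: same effect, no needless test */
        __set_bit_le(nr, bitmap);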
| * | | ocfs2: null deref on allocation error    Dan Carpenter    2011-05-31    1    -4/+4
      The original code had a null dereference in the error handling.
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | | ocfs2: checking the wrong variable in ocfs2_move_extent()    Dan Carpenter    2011-05-31    1    -1/+1
      "new_phys_cpos" is always a valid pointer here. ocfs2_probe_alloc_group() allocates "*new_phys_cpos".
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
| * | | ocfs2: Bugfix for hard readonly mount    Tiger Yang    2011-05-31    2    -7/+17
      ocfs2 cannot currently mount a device that is readonly at the media ("hard readonly"). Fix the broken places. See details: http://oss.oracle.com/bugzilla/show_bug.cgi?id=1322 [ Description edited -- Joel ]
      Signed-off-by: Tiger Yang <tiger.yang@oracle.com>
      Reviewed-by: Sunil Mushran <sunil.mushran@oracle.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs    Linus Torvalds    2011-12-01    8    -23/+58
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
        Btrfs: fix meta data raid-repair merge problem
        Btrfs: skip allocation attempt from empty cluster
        Btrfs: skip block groups without enough space for a cluster
        Btrfs: start search for new cluster at the beginning
        Btrfs: reset cluster's max_size when creating bitmap
        Btrfs: initialize new bitmaps' list
        Btrfs: fix oops when calling statfs on readonly device
        Btrfs: Don't error on resizing FS to same size
        Btrfs: fix deadlock on metadata reservation when evicting a inode
        Fix URL of btrfs-progs git repository in docs
        btrfs scrub: handle -ENOMEM from init_ipath()
| * | | | Btrfs: fix meta data raid-repair merge problem    Jan Schmidt    2011-12-01    1    -7/+20
      Commit 4a54c8c16 introduced raid-repair, killing the individual readpage_io_failed_hook entries from inode.c and disk-io.c. Commit 4bb31e92 introduced new readahead code, adding a readpage_io_failed_hook to disk-io.c. The raid-repair commit had logic to disable raid-repair if readpage_io_failed_hook is set. Thus, the readahead commit effectively disabled raid-repair for meta data. This commit changes the logic to always attempt raid-repair when needed and to call the readpage_io_failed_hook in case raid-repair fails. This is much more straightforward and should have been like that from the beginning.
      Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
      Reported-by: Stefan Behrens <sbehrens@giantdisaster.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: skip allocation attempt from empty cluster    Alexandre Oliva    2011-11-30    1    -3/+3
      If we don't have a cluster, don't bother trying to allocate from it, jumping right away to the attempt to allocate a new cluster.
      Signed-off-by: Alexandre Oliva <oliva@lsd.ic.unicamp.br>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: skip block groups without enough space for a cluster    Alexandre Oliva    2011-11-30    1    -1/+1
      We test whether a block group has enough free space to hold the requested block, but when we're doing clustered allocation, we can save some cycles by testing whether it has enough room for the cluster upfront; otherwise we end up attempting to set up a cluster and failing. Only in the NO_EMPTY_SIZE loop do we attempt an unclustered allocation, and by then we'll have zeroed the cluster size, so this patch won't stop us from using the block group as a last resort.
      Signed-off-by: Alexandre Oliva <oliva@lsd.ic.unicamp.br>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: start search for new cluster at the beginning    Alexandre Oliva    2011-11-30    1    -4/+2
      Instead of starting at zero (offset is always zero), request a cluster starting at search_start, that denotes the beginning of the current block group.
      Signed-off-by: Alexandre Oliva <oliva@lsd.ic.unicamp.br>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: reset cluster's max_size when creating bitmap    Alexandre Oliva    2011-11-30    1    -0/+1
      The field that indicates the size of the largest contiguous chunk of free space in the cluster is not initialized when setting up bitmaps; it's only increased when we find a larger contiguous chunk. We end up retaining a larger value than appropriate for highly-fragmented clusters, which may cause pointless searches for large contiguous groups, and even cause clusters that do not meet the density requirements to be set up.
      Signed-off-by: Alexandre Oliva <oliva@lsd.ic.unicamp.br>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
| * | | | Btrfs: initialize new bitmaps' list    Alexandre Oliva    2011-11-30    1    -0/+1
      We're failing to create clusters with bitmaps because setup_cluster_no_bitmap checks that the list is empty before inserting the bitmap entry in the list for setup_cluster_bitmap, but the list field is only initialized when it is restored from the on-disk free space cache, or when it is written out to disk. Besides a potential race condition due to the multiple use of the list field, filesystem performance severely degrades over time: as we use up all non-bitmap free extents, the try-to-set-up-cluster dance is done at every metadata block allocation. For every block group, we fail to set up a cluster, and after failing on them all up to twice, we fall back to the much slower unclustered allocation. To make matters worse, before the unclustered allocation, we try to create new block groups until we reach the 1% threshold, which introduces additional bitmaps and thus block groups that we'll iterate over at each metadata block request.
| * | | | Btrfs: fix oops when calling statfs on readonly device    Li Zefan    2011-11-30    1    -3/+3
      To reproduce this bug:
        # dd if=/dev/zero of=img bs=1M count=256
        # mkfs.btrfs img
        # losetup -r /dev/loop1 img
        # mount /dev/loop1 /mnt
      OOPS!! It triggered BUG_ON(!nr_devices) in btrfs_calc_avail_data_space(). To fix this, instead of checking write-only devices, we check all open devices:
        # df -h /dev/loop1
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/loop1            250M   28K  238M   1% /mnt
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
| * | | | Btrfs: Don't error on resizing FS to same size    Mike Fleetwood    2011-11-30    1    -1/+1
      It seems overly harsh to fail a resize of a btrfs file system to the same size when a shrink or grow would succeed. User app GParted trips over this error. Allow it by bypassing the shrink or grow operation.
      Signed-off-by: Mike Fleetwood <mike.fleetwood@googlemail.com>
| * | | | Btrfs: fix deadlock on metadata reservation when evicting a inode    Miao Xie    2011-11-30    3    -5/+22
      When I ran the xfstests, I found the test tasks were blocked on meta-data reservation. By debugging, I found the reason for this bug:
        start transaction
                |
                v
        reserve meta-data space
                |
                v
        flush delay allocation -> iput inode -> evict inode
                ^                                   |
                |                                   v
        wait for delay allocation flush  <-  reserve meta-data space
      Besides that, the flush on evicting the inode will block the thread which is reclaiming memory, and make oom happen easily. Fix this bug by skipping the flush step when evicting an inode.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
| * | | | btrfs scrub: handle -ENOMEM from init_ipath()    Dan Carpenter    2011-11-30    1    -0/+5
      init_ipath() can return an ERR_PTR(-ENOMEM).
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
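      The fix presumably follows the usual kernel error-pointer pattern; a hedged sketch (argument names illustrative, not the exact scrub code):
        #include <linux/err.h>

        struct inode_fs_paths *ipath;

        ipath = init_ipath(size, fs_root, path);   /* may return ERR_PTR(-ENOMEM) */
        if (IS_ERR(ipath))
                return PTR_ERR(ipath);             /* propagate the error instead of dereferencing it */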
* | | | | Merge branch 'dev' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4    Linus Torvalds    2011-11-29    1    -1/+1
      * 'dev' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
        ext4: fix racy use-after-free in ext4_end_io_dio()
| * | | | | ext4: fix racy use-after-free in ext4_end_io_dio()    Tejun Heo    2011-11-24    1    -1/+1
      ext4_end_io_dio() queues io_end->work and then clears iocb->private; however, io_end->work calls aio_complete(), which frees the iocb object. If that slab object gets reallocated, then ext4_end_io_dio() can end up clearing someone else's iocb->private, and this use-after-free can cause a leak of a struct ext4_io_end_t structure. Detected and tested with slab poisoning.
      [ Note: Can also reproduce using 12 fio's against 12 file systems with the following configuration file:
        [global]
        direct=1
        ioengine=libaio
        iodepth=1
        bs=4k
        ba=4k
        size=128m
        [create]
        filename=${TESTDIR}
        rw=write
      -- tytso ]
      Google-Bug-Id: 5354697
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Reported-by: Kent Overstreet <koverstreet@google.com>
      Tested-by: Kent Overstreet <koverstreet@google.com>
      Cc: stable@kernel.org