path: root/fs
Commit message (Author, Age, Files, Lines)
* eCryptfs: Fix lockdep-reported AB-BA mutex issue (Roland Dreier, 2009-09-23, 2 files, -10/+9)

    Lockdep reports the following valid-looking possible AB-BA deadlock
    with global_auth_tok_list_mutex and keysig_list_mutex:

      ecryptfs_new_file_context()
       -> ecryptfs_copy_mount_wide_sigs_to_inode_sigs()
          -> mutex_lock(&mount_crypt_stat->global_auth_tok_list_mutex);
          -> ecryptfs_add_keysig()
             -> mutex_lock(&crypt_stat->keysig_list_mutex);

    vs

      ecryptfs_generate_key_packet_set()
       -> mutex_lock(&crypt_stat->keysig_list_mutex);
       -> ecryptfs_find_global_auth_tok_for_sig()
          -> mutex_lock(&mount_crypt_stat->global_auth_tok_list_mutex);

    i.e. the two mutexes are taken in opposite orders in the two different
    code paths.  I'm not sure if this is a real bug where two threads could
    actually hit the two paths in parallel and deadlock, but it at least
    makes lockdep impossible to use with ecryptfs, since this report
    triggers every time and disables future lockdep reporting.

    Since ecryptfs_add_keysig() is called only from the single callsite in
    ecryptfs_copy_mount_wide_sigs_to_inode_sigs(), the simplest fix seems
    to be to move the lock of keysig_list_mutex back up outside of where
    global_auth_tok_list_mutex is taken.  This patch does that, and fixes
    the lockdep report on my system (and ecryptfs still works OK).

    The full output of lockdep fixed by this patch is:

      =======================================================
      [ INFO: possible circular locking dependency detected ]
      2.6.31-2-generic #14~rbd2
      -------------------------------------------------------
      gdm/2640 is trying to acquire lock:
       (&mount_crypt_stat->global_auth_tok_list_mutex){+.+.+.}, at: [<ffffffff8121591e>] ecryptfs_find_global_auth_tok_for_sig+0x2e/0x90
      but task is already holding lock:
       (&crypt_stat->keysig_list_mutex){+.+.+.}, at: [<ffffffff81217728>] ecryptfs_generate_key_packet_set+0x58/0x2b0
      which lock already depends on the new lock.

      the existing dependency chain (in reverse order) is:

      -> #1 (&crypt_stat->keysig_list_mutex){+.+.+.}:
       [<ffffffff8108c897>] check_prev_add+0x2a7/0x370
       [<ffffffff8108cfc1>] validate_chain+0x661/0x750
       [<ffffffff8108d2e7>] __lock_acquire+0x237/0x430
       [<ffffffff8108d585>] lock_acquire+0xa5/0x150
       [<ffffffff815526cd>] __mutex_lock_common+0x4d/0x3d0
       [<ffffffff81552b56>] mutex_lock_nested+0x46/0x60
       [<ffffffff8121526a>] ecryptfs_add_keysig+0x5a/0xb0
       [<ffffffff81213299>] ecryptfs_copy_mount_wide_sigs_to_inode_sigs+0x59/0xb0
       [<ffffffff81214b06>] ecryptfs_new_file_context+0xa6/0x1a0
       [<ffffffff8120e42a>] ecryptfs_initialize_file+0x4a/0x140
       [<ffffffff8120e54d>] ecryptfs_create+0x2d/0x60
       [<ffffffff8113a7d4>] vfs_create+0xb4/0xe0
       [<ffffffff8113a8c4>] __open_namei_create+0xc4/0x110
       [<ffffffff8113d1c1>] do_filp_open+0xa01/0xae0
       [<ffffffff8112d8d9>] do_sys_open+0x69/0x140
       [<ffffffff8112d9f0>] sys_open+0x20/0x30
       [<ffffffff81013132>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

      -> #0 (&mount_crypt_stat->global_auth_tok_list_mutex){+.+.+.}:
       [<ffffffff8108c675>] check_prev_add+0x85/0x370
       [<ffffffff8108cfc1>] validate_chain+0x661/0x750
       [<ffffffff8108d2e7>] __lock_acquire+0x237/0x430
       [<ffffffff8108d585>] lock_acquire+0xa5/0x150
       [<ffffffff815526cd>] __mutex_lock_common+0x4d/0x3d0
       [<ffffffff81552b56>] mutex_lock_nested+0x46/0x60
       [<ffffffff8121591e>] ecryptfs_find_global_auth_tok_for_sig+0x2e/0x90
       [<ffffffff812177d5>] ecryptfs_generate_key_packet_set+0x105/0x2b0
       [<ffffffff81212f49>] ecryptfs_write_headers_virt+0xc9/0x120
       [<ffffffff8121306d>] ecryptfs_write_metadata+0xcd/0x200
       [<ffffffff8120e44b>] ecryptfs_initialize_file+0x6b/0x140
       [<ffffffff8120e54d>] ecryptfs_create+0x2d/0x60
       [<ffffffff8113a7d4>] vfs_create+0xb4/0xe0
       [<ffffffff8113a8c4>] __open_namei_create+0xc4/0x110
       [<ffffffff8113d1c1>] do_filp_open+0xa01/0xae0
       [<ffffffff8112d8d9>] do_sys_open+0x69/0x140
       [<ffffffff8112d9f0>] sys_open+0x20/0x30
       [<ffffffff81013132>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

      other info that might help us debug this:

      2 locks held by gdm/2640:
       #0: (&sb->s_type->i_mutex_key#11){+.+.+.}, at: [<ffffffff8113cb8b>] do_filp_open+0x3cb/0xae0
       #1: (&crypt_stat->keysig_list_mutex){+.+.+.}, at: [<ffffffff81217728>] ecryptfs_generate_key_packet_set+0x58/0x2b0

      stack backtrace:
      Pid: 2640, comm: gdm Tainted: G C 2.6.31-2-generic #14~rbd2
      Call Trace:
       [<ffffffff8108b988>] print_circular_bug_tail+0xa8/0xf0
       [<ffffffff8108c675>] check_prev_add+0x85/0x370
       [<ffffffff81094912>] ? __module_text_address+0x12/0x60
       [<ffffffff8108cfc1>] validate_chain+0x661/0x750
       [<ffffffff81017275>] ? print_context_stack+0x85/0x140
       [<ffffffff81089c68>] ? find_usage_backwards+0x38/0x160
       [<ffffffff8108d2e7>] __lock_acquire+0x237/0x430
       [<ffffffff8108d585>] lock_acquire+0xa5/0x150
       [<ffffffff8121591e>] ? ecryptfs_find_global_auth_tok_for_sig+0x2e/0x90
       [<ffffffff8108b0b0>] ? check_usage_backwards+0x0/0xb0
       [<ffffffff815526cd>] __mutex_lock_common+0x4d/0x3d0
       [<ffffffff8121591e>] ? ecryptfs_find_global_auth_tok_for_sig+0x2e/0x90
       [<ffffffff8121591e>] ? ecryptfs_find_global_auth_tok_for_sig+0x2e/0x90
       [<ffffffff8108c02c>] ? mark_held_locks+0x6c/0xa0
       [<ffffffff81125b0d>] ? kmem_cache_alloc+0xfd/0x1a0
       [<ffffffff8108c34d>] ? trace_hardirqs_on_caller+0x14d/0x190
       [<ffffffff81552b56>] mutex_lock_nested+0x46/0x60
       [<ffffffff8121591e>] ecryptfs_find_global_auth_tok_for_sig+0x2e/0x90
       [<ffffffff812177d5>] ecryptfs_generate_key_packet_set+0x105/0x2b0
       [<ffffffff81212f49>] ecryptfs_write_headers_virt+0xc9/0x120
       [<ffffffff8121306d>] ecryptfs_write_metadata+0xcd/0x200
       [<ffffffff81210240>] ? ecryptfs_init_persistent_file+0x60/0xe0
       [<ffffffff8120e44b>] ecryptfs_initialize_file+0x6b/0x140
       [<ffffffff8120e54d>] ecryptfs_create+0x2d/0x60
       [<ffffffff8113a7d4>] vfs_create+0xb4/0xe0
       [<ffffffff8113a8c4>] __open_namei_create+0xc4/0x110
       [<ffffffff8113d1c1>] do_filp_open+0xa01/0xae0
       [<ffffffff8129a93e>] ? _raw_spin_unlock+0x5e/0xb0
       [<ffffffff8155410b>] ? _spin_unlock+0x2b/0x40
       [<ffffffff81139e9b>] ? getname+0x3b/0x240
       [<ffffffff81148a5a>] ? alloc_fd+0xfa/0x140
       [<ffffffff8112d8d9>] do_sys_open+0x69/0x140
       [<ffffffff81553b8f>] ? trace_hardirqs_on_thunk+0x3a/0x3f
       [<ffffffff8112d9f0>] sys_open+0x20/0x30
       [<ffffffff81013132>] system_call_fastpath+0x16/0x1b

    Signed-off-by: Roland Dreier <rolandd@cisco.com>
    Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>

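    A minimal userspace sketch of the inversion and its fix (pthreads; the
    names only echo the kernel paths, this is not the patch itself): the
    fix is simply that both paths now take the two locks in the same
    order, keysig_list_mutex first.

        #include <pthread.h>

        static pthread_mutex_t keysig_list_mutex = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t global_auth_tok_list_mutex = PTHREAD_MUTEX_INITIALIZER;

        /* Before the fix, one path locked global_auth_tok_list_mutex first and
         * the other locked keysig_list_mutex first: a classic AB-BA inversion. */

        static void new_file_context_path(void)
        {
            pthread_mutex_lock(&keysig_list_mutex);           /* moved up by the fix */
            pthread_mutex_lock(&global_auth_tok_list_mutex);
            /* ... copy mount-wide sigs, add keysigs ... */
            pthread_mutex_unlock(&global_auth_tok_list_mutex);
            pthread_mutex_unlock(&keysig_list_mutex);
        }

        static void generate_key_packet_set_path(void)
        {
            pthread_mutex_lock(&keysig_list_mutex);           /* same order as above */
            pthread_mutex_lock(&global_auth_tok_list_mutex);
            /* ... find the global auth tok for each keysig ... */
            pthread_mutex_unlock(&global_auth_tok_list_mutex);
            pthread_mutex_unlock(&keysig_list_mutex);
        }

    With a single global order, lockdep's circular-dependency check has
    nothing left to report.
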
* ecryptfs: Remove unneeded locking that triggers lockdep false positives (Roland Dreier, 2009-09-23, 2 files, -4/+0)

    In ecryptfs_destroy_inode(), inode_info->lower_file_mutex is locked,
    and just after the mutex is unlocked, the code does:

      kmem_cache_free(ecryptfs_inode_info_cache, inode_info);

    This means that if another context could possibly try to take the same
    mutex as ecryptfs_destroy_inode(), then it could end up getting the
    mutex just before the data structure containing the mutex is freed.
    So any such use would be an obvious use-after-free bug (catchable with
    slab poisoning or mutex debugging), and therefore the locking in
    ecryptfs_destroy_inode() is not needed and can be dropped.

    Similarly, in ecryptfs_destroy_crypt_stat(),
    crypt_stat->keysig_list_mutex is locked, and then the mutex is
    unlocked just before the code does:

      memset(crypt_stat, 0, sizeof(struct ecryptfs_crypt_stat));

    Therefore taking this mutex is similarly not necessary.

    Removing this locking fixes false-positive lockdep reports such as the
    following (and they are false positives for exactly the same reason
    that the locking is not needed):

      =================================
      [ INFO: inconsistent lock state ]
      2.6.31-2-generic #14~rbd3
      ---------------------------------
      inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
      kswapd0/323 [HC0[0]:SC0[0]:HE1:SE1] takes:
       (&inode_info->lower_file_mutex){+.+.?.}, at: [<ffffffff81210d34>] ecryptfs_destroy_inode+0x34/0x100
      {RECLAIM_FS-ON-W} state was registered at:
       [<ffffffff8108c02c>] mark_held_locks+0x6c/0xa0
       [<ffffffff8108c10f>] lockdep_trace_alloc+0xaf/0xe0
       [<ffffffff81125a51>] kmem_cache_alloc+0x41/0x1a0
       [<ffffffff8113117a>] get_empty_filp+0x7a/0x1a0
       [<ffffffff8112dd46>] dentry_open+0x36/0xc0
       [<ffffffff8121a36c>] ecryptfs_privileged_open+0x5c/0x2e0
       [<ffffffff81210283>] ecryptfs_init_persistent_file+0xa3/0xe0
       [<ffffffff8120e838>] ecryptfs_lookup_and_interpose_lower+0x278/0x380
       [<ffffffff8120f97a>] ecryptfs_lookup+0x12a/0x250
       [<ffffffff8113930a>] real_lookup+0xea/0x160
       [<ffffffff8113afc8>] do_lookup+0xb8/0xf0
       [<ffffffff8113b518>] __link_path_walk+0x518/0x870
       [<ffffffff8113bd9c>] path_walk+0x5c/0xc0
       [<ffffffff8113be5b>] do_path_lookup+0x5b/0xa0
       [<ffffffff8113bfe7>] user_path_at+0x57/0xa0
       [<ffffffff811340dc>] vfs_fstatat+0x3c/0x80
       [<ffffffff8113424b>] vfs_stat+0x1b/0x20
       [<ffffffff81134274>] sys_newstat+0x24/0x50
       [<ffffffff81013132>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff
      irq event stamp: 7811
      hardirqs last enabled at (7811): [<ffffffff810c037f>] call_rcu+0x5f/0x90
      hardirqs last disabled at (7810): [<ffffffff810c0353>] call_rcu+0x33/0x90
      softirqs last enabled at (3764): [<ffffffff810631da>] __do_softirq+0x14a/0x220
      softirqs last disabled at (3751): [<ffffffff8101440c>] call_softirq+0x1c/0x30

      other info that might help us debug this:

      2 locks held by kswapd0/323:
       #0: (shrinker_rwsem){++++..}, at: [<ffffffff810f67ed>] shrink_slab+0x3d/0x190
       #1: (&type->s_umount_key#35){.+.+..}, at: [<ffffffff811429a1>] prune_dcache+0xd1/0x1b0

      stack backtrace:
      Pid: 323, comm: kswapd0 Tainted: G C 2.6.31-2-generic #14~rbd3
      Call Trace:
       [<ffffffff8108ad6c>] print_usage_bug+0x18c/0x1a0
       [<ffffffff8108aff0>] ? check_usage_forwards+0x0/0xc0
       [<ffffffff8108bac2>] mark_lock_irq+0xf2/0x280
       [<ffffffff8108bd87>] mark_lock+0x137/0x1d0
       [<ffffffff81164710>] ? fsnotify_clear_marks_by_inode+0x30/0xf0
       [<ffffffff8108bee6>] mark_irqflags+0xc6/0x1a0
       [<ffffffff8108d337>] __lock_acquire+0x287/0x430
       [<ffffffff8108d585>] lock_acquire+0xa5/0x150
       [<ffffffff81210d34>] ? ecryptfs_destroy_inode+0x34/0x100
       [<ffffffff8108d2e7>] ? __lock_acquire+0x237/0x430
       [<ffffffff815526ad>] __mutex_lock_common+0x4d/0x3d0
       [<ffffffff81210d34>] ? ecryptfs_destroy_inode+0x34/0x100
       [<ffffffff81164710>] ? fsnotify_clear_marks_by_inode+0x30/0xf0
       [<ffffffff81210d34>] ? ecryptfs_destroy_inode+0x34/0x100
       [<ffffffff8129a91e>] ? _raw_spin_unlock+0x5e/0xb0
       [<ffffffff81552b36>] mutex_lock_nested+0x46/0x60
       [<ffffffff81210d34>] ecryptfs_destroy_inode+0x34/0x100
       [<ffffffff81145d27>] destroy_inode+0x87/0xd0
       [<ffffffff81146b4c>] generic_delete_inode+0x12c/0x1a0
       [<ffffffff81145832>] iput+0x62/0x70
       [<ffffffff811423c8>] dentry_iput+0x98/0x110
       [<ffffffff81142550>] d_kill+0x50/0x80
       [<ffffffff81142623>] prune_one_dentry+0xa3/0xc0
       [<ffffffff811428b1>] __shrink_dcache_sb+0x271/0x290
       [<ffffffff811429d9>] prune_dcache+0x109/0x1b0
       [<ffffffff81142abf>] shrink_dcache_memory+0x3f/0x50
       [<ffffffff810f68dd>] shrink_slab+0x12d/0x190
       [<ffffffff810f9377>] balance_pgdat+0x4d7/0x640
       [<ffffffff8104c4c0>] ? finish_task_switch+0x40/0x150
       [<ffffffff810f63c0>] ? isolate_pages_global+0x0/0x60
       [<ffffffff810f95f7>] kswapd+0x117/0x170
       [<ffffffff810777a0>] ? autoremove_wake_function+0x0/0x40
       [<ffffffff810f94e0>] ? kswapd+0x0/0x170
       [<ffffffff810773be>] kthread+0x9e/0xb0
       [<ffffffff8101430a>] child_rip+0xa/0x20
       [<ffffffff81013c90>] ? restore_args+0x0/0x30
       [<ffffffff81077320>] ? kthread+0x0/0xb0
       [<ffffffff81014300>] ? child_rip+0x0/0x20

    Signed-off-by: Roland Dreier <roland@digitalvampire.org>
    Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>

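    The argument generalizes to any object whose lock is taken right
    before the object is freed; a hedged userspace sketch of the
    anti-pattern being removed:

        #include <pthread.h>
        #include <stdlib.h>

        struct obj {
            pthread_mutex_t lock;
            /* ... payload ... */
        };

        static void destroy_obj(struct obj *o)
        {
            /* This lock/unlock pair protects nothing: if another thread were
             * blocked on o->lock here, it would acquire the mutex exactly as
             * the memory is handed back to the allocator, and then touch freed
             * memory anyway.  Correctness must come from guaranteeing that no
             * other users exist by the time destroy runs, so the locking can
             * (and should) be dropped. */
            pthread_mutex_lock(&o->lock);
            pthread_mutex_unlock(&o->lock);
            free(o);
        }
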
* Merge branch 'cputime' of git://git390.marist.edu/pub/scm/linux-2.6 (Linus Torvalds, 2009-09-24, 1 file, -1/+6)

    * 'cputime' of git://git390.marist.edu/pub/scm/linux-2.6:
      [PATCH] Fix idle time field in /proc/uptime

| * [PATCH] Fix idle time field in /proc/uptime (Michael Abbott, 2009-09-24, 1 file, -1/+6)

    Git commit 79741dd changes idle cputime accounting, but unfortunately
    the /proc/uptime file hasn't caught up.  Here the idle time calculation
    from /proc/stat is copied over.

    Signed-off-by: Michael Abbott <michael.abbott@diamond.ac.uk>
    Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>

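    The mismatch is observable from userspace. A small sketch (assuming
    the usual /proc layouts, where the aggregate "cpu" line of /proc/stat
    carries idle time in clock ticks as its fourth field) that compares
    the two figures this patch brings back into agreement:

        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            double up, idle_uptime;
            unsigned long long user, nice, system, idle_ticks;
            FILE *f;

            f = fopen("/proc/uptime", "r");
            if (!f || fscanf(f, "%lf %lf", &up, &idle_uptime) != 2)
                return 1;
            fclose(f);

            f = fopen("/proc/stat", "r");
            if (!f || fscanf(f, "cpu %llu %llu %llu %llu",
                             &user, &nice, &system, &idle_ticks) != 4)
                return 1;
            fclose(f);

            /* After the fix, both come from the same per-CPU idle sums. */
            printf("/proc/uptime idle: %.2f s\n", idle_uptime);
            printf("/proc/stat   idle: %.2f s\n",
                   (double)idle_ticks / sysconf(_SC_CLK_TCK));
            return 0;
        }
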
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable (Linus Torvalds, 2009-09-24, 31 files, -1981/+2740)

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable: (42 commits)
      Btrfs: hash the btree inode during fill_super
      Btrfs: relocate file extents in clusters
      Btrfs: don't rename file into dummy directory
      Btrfs: check size of inode backref before adding hardlink
      Btrfs: fix releasepage to avoid unlocking extents we haven't locked
      Btrfs: Fix test_range_bit for whole file extents
      Btrfs: fix errors handling cached state in set/clear_extent_bit
      Btrfs: fix early enospc during balancing
      Btrfs: deal with NULL space info
      Btrfs: account for space used by the super mirrors
      Btrfs: fix extent entry threshold calculation
      Btrfs: remove dead code
      Btrfs: fix bitmap size tracking
      Btrfs: don't keep retrying a block group if we fail to allocate a cluster
      Btrfs: make balance code choose more wisely when relocating
      Btrfs: fix arithmetic error in clone ioctl
      Btrfs: add snapshot/subvolume destroy ioctl
      Btrfs: change how subvolumes are organized
      Btrfs: do not reuse objectid of deleted snapshot/subvol
      Btrfs: speed up snapshot dropping
      ...

| * Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable into for-linus (Chris Mason, 2009-09-24, 31 files, -1981/+2740)

    Conflicts:
      fs/btrfs/super.c

| | * Btrfs: hash the btree inode during fill_super (Yan Zheng, 2009-09-24, 1 file, -0/+1)

    The snapshot deletion patches dropped this line, but the inode needs
    to be hashed.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: relocate file extents in clusters (Yan, Zheng, 2009-09-24, 1 file, -89/+148)

    The extent relocation code copies file extents one by one when
    relocating a data block group.  This is inefficient if the file
    extents are small.  This patch makes the relocation code copy file
    extents in clusters, so we can make better use of read-ahead.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: don't rename file into dummy directory (Yan, Zheng, 2009-09-24, 1 file, -0/+3)

    A recent change enforces only one access point to each subvolume.
    The first directory entry (the one added when the subvolume/snapshot
    was created) is treated as the valid access point; all other subvolume
    links are linked to dummy empty directories.  The dummy directories
    are temporary inodes that exist only in memory, so we cannot rename
    files into them.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: check size of inode backref before adding hardlink (Yan, Zheng, 2009-09-24, 3 files, -24/+37)

    For every hardlink in btrfs, there is a corresponding inode back
    reference.  All inode back references for hardlinks in a given
    directory are stored in a single b-tree item.  The size of a b-tree
    item is limited by the size of a b-tree leaf, so we can only create a
    limited number of hardlinks to a given file in a directory.

    The original code lacked this check and oopsed when the number of
    hardlinks went over the limit.  This patch fixes the issue by adding
    the check to btrfs_link and btrfs_rename.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

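    A sketch of the kind of guard this implies (illustrative only; the
    names and the exact limit are not the patch's literal code): before
    inserting another back reference, verify the grown item still fits in
    a leaf and fail with EMLINK otherwise.

        #include <stdint.h>

        /* All back references for hardlinks in one directory share a single
         * b-tree item, so the item may never outgrow the leaf holding it. */
        static int can_add_inode_ref(uint32_t item_size, uint32_t new_ref_size,
                                     uint32_t leaf_data_size)
        {
            /* Caller returns -EMLINK when this is false. */
            return (uint64_t)item_size + new_ref_size <= leaf_data_size;
        }
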
| | * Btrfs: fix releasepage to avoid unlocking extents we haven't locked (Chris Mason, 2009-09-23, 1 file, -2/+7)

    During releasepage, we try to drop any extent_state structs for the
    byte offsets of the page we're releasing.  But the code was
    incorrectly telling clear_extent_bit to delete the state struct
    unconditionally.

    Normally this would be fine because we have the page locked, but other
    parts of btrfs will lock down an entire extent, the most common place
    being IO completion.  releasepage was deleting the extent state
    without first locking the extent, which may result in removing a state
    struct that another process had locked down.

    The fix here is to leave the NODATASUM and EXTENT_LOCKED bits alone in
    releasepage.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: Fix test_range_bit for whole file extents (Chris Mason, 2009-09-23, 1 file, -0/+4)

    If test_range_bit finds an extent that goes all the way to (u64)-1, it
    can incorrectly wrap the u64 instead of treating it like the end of
    the address space.  This just adds a check for the highest possible
    offset so we don't wrap.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

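    The wrap is a one-line unsigned-overflow bug; a self-contained
    demonstration of the guard the fix effectively adds:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t end = UINT64_MAX;   /* extent reaching the end of the address space */

            /* Advancing past the extent wraps to 0 instead of terminating,
             * so a naive walk restarts at the beginning of the tree. */
            printf("naive next start: %llu\n", (unsigned long long)(end + 1));

            /* The fix: treat the highest possible offset as "no next offset". */
            if (end == UINT64_MAX)
                puts("stop: whole-file extent, nothing beyond it");
            return 0;
        }
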
| | * Btrfs: fix errors handling cached state in set/clear_extent_bit (Chris Mason, 2009-09-23, 1 file, -8/+8)

    Both set and clear_extent_bit allow passing a cached state struct to
    reduce rbtree search times.  clear_extent_bit was improperly bypassing
    some of the checks around making sure the extent state fields were
    correct for a given operation.

    The fix used here (from Yan Zheng) is to use the hit_next goto target
    instead of jumping all the way down to start clearing bits without
    making sure the cached state was exactly correct for the operation we
    were doing.

    This also fixes up the setting of the start variable for both ops in
    the case where we find an overlapping extent that begins before the
    range we want to change.  In both cases we were incorrectly going
    backwards from the original requested change.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: fix early enospc during balancing (Chris Mason, 2009-09-22, 1 file, -13/+7)

    We now do extra checks before a balance to make sure there is room for
    the balance to take place.  One of the checks was testing to see if we
    were trying to balance away the last block group of a given type.

    If there is no space available for new chunks, we should not try to
    balance away the last block group of a given type.  But the code
    wasn't checking for available chunk space, and so it was exiting too
    soon.

    The fix here is to combine some of the checks and make sure we try to
    allocate new chunks when we're balancing the last block group.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: deal with NULL space info (Chris Mason, 2009-09-22, 1 file, -2/+16)

    After a balance it is briefly possible for the space info field in the
    inode to be NULL.  This adds some checks to make sure things properly
    deal with the NULL value.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: account for space used by the super mirrors (Josef Bacik, 2009-09-21, 2 files, -2/+20)

    As we get closer to proper -ENOSPC handling in btrfs, we need more
    accurate space accounting for the space infos.  Currently we exclude
    the free space for the super mirrors, but the space they take up isn't
    accounted for in any of the counters.

    This patch introduces bytes_super, which keeps track of the amount of
    bytes used for a super mirror in the block group cache and space info.
    This makes sure that our free space calculations will be completely
    accurate.

    Signed-off-by: Josef Bacik <jbacik@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: fix extent entry threshold calculation (Josef Bacik, 2009-09-21, 1 file, -14/+21)

    There is a slight problem with the extent entry threshold calculation
    for the free space cache.  We only adjust the threshold down as we add
    bitmaps, but never adjust it back up as bitmaps are freed.  This means
    we could fragment the free space so badly that we end up using all
    bitmaps to describe the free space, use all the free space (which
    would result in the bitmaps being freed), but then go to add free
    space again as we delete things and immediately add bitmaps, since the
    extent threshold would still be 0.

    Now as we free bitmaps the extent threshold will be ratcheted up to
    allow more extent entries to be added.

    Signed-off-by: Josef Bacik <jbacik@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: remove dead code (Josef Bacik, 2009-09-21, 1 file, -706/+0)

    This patch removes a bunch of dead code from the snapshot removal
    stuff.  It was confusing me when doing the metadata ENOSPC stuff so I
    killed it.

    Signed-off-by: Josef Bacik <jbacik@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: fix bitmap size tracking (Josef Bacik, 2009-09-21, 1 file, -0/+1)

    When we first go to add free space, we allocate a new info and set the
    offset and bytes to the space we are adding.  This is fine, except we
    actually set the size of a bitmap as we set the bits in it, so if we
    add space to a bitmap, we'd end up counting the same space twice.

    This isn't a huge deal; it just makes the allocator behave weirdly,
    since it will think that a bitmap entry has more space than it ends up
    actually having.  I used a BUG_ON() to catch when this problem
    happened, and with this patch I no longer get the BUG_ON().

    Signed-off-by: Josef Bacik <jbacik@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

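    The accounting rule implied here can be shown generically
    (illustrative, not the patch's code): when adding space that lands in
    a bitmap, only bits that actually flip from clear to set may
    contribute to the entry's byte count; otherwise overlapping adds count
    the same space twice.

        #include <stdint.h>

        #define BLOCK_BYTES 4096u

        struct bitmap_entry {
            uint64_t bytes;        /* must mirror the number of set bits */
            uint8_t  bits[512];    /* 4096 bits, one per block */
        };

        static void bitmap_add(struct bitmap_entry *e, unsigned first, unsigned count)
        {
            for (unsigned i = first; i < first + count; i++) {
                uint8_t mask = (uint8_t)(1u << (i & 7));
                if (!(e->bits[i >> 3] & mask)) {   /* count only 0 -> 1 flips */
                    e->bits[i >> 3] |= mask;
                    e->bytes += BLOCK_BYTES;
                }
            }
        }
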
| | * Btrfs: don't keep retrying a block group if we fail to allocate a cluster (Josef Bacik, 2009-09-21, 1 file, -8/+17)

    The box can get locked up in the allocator if we happen upon a block
    group under these conditions:

      1) During a commit, so caching threads cannot make progress
      2) Our block group currently is in the middle of being cached
      3) Our block group currently has plenty of free space in it
      4) Our block group is so fragmented that it ends up having no free
         space chunks larger than min_bytes calculated by
         btrfs_find_space_cluster

    What happens is we try to do btrfs_find_space_cluster, which fails
    because it is unable to find enough free space chunks that are larger
    than min_bytes and are close enough together.  Since the block group
    is not cached, we do a wait_block_group_cache_progress, which waits
    for the number of bytes we need, except the block group already has
    _plenty_ of free space; it is just severely fragmented, so we loop and
    try again, ad infinitum.

    This patch keeps us from waiting on the block group to finish caching
    if we failed to find a free space cluster before.  It also makes sure
    that we don't even try to find a free space cluster if we are on our
    last loop in the allocator, since we will have tried everything at
    this point and it is futile.

    Signed-off-by: Josef Bacik <jbacik@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: make balance code choose more wisely when relocating (Josef Bacik, 2009-09-21, 4 files, -18/+148)

    Currently, we can panic the box if the first block group we go to move
    is of a type where there is no space left to move those extents.  For
    example, if we fill the disk up with data, and then we try to balance
    and we have no room to move the data nor room to allocate new chunks,
    we will panic.

    Change this by checking to see if we have room to move this chunk
    around, and if not, return -ENOSPC and move on to the next chunk.
    This will make sure we remove block groups that are movable, like if
    we have a lot of empty metadata block groups, and then that way we
    make room to be able to balance our data chunks as well.

    Tested this with an fs that would panic on btrfs-vol -b normally, but
    no longer panics with this patch.

    V1->V2:
      - actually search for a free extent on the device to make sure we
        can allocate a chunk if need be
      - fix btrfs_shrink_device to make sure we actually try to relocate
        all the chunks, and then if we can't return -ENOSPC, so if we are
        doing a btrfs-vol -r we don't remove the device with data still
        on it
      - check to make sure the block group we are going to relocate isn't
        the last one in that particular space
      - fix a bug in btrfs_shrink_device where we would change the
        device's size and not fix it if we fail to do our relocate

    Signed-off-by: Josef Bacik <jbacik@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: fix arithmetic error in clone ioctl (Sage Weil, 2009-09-21, 1 file, -2/+1)

    Fix an arithmetic error that was breaking extents cloned via the clone
    ioctl starting in the second half of a file.

    Signed-off-by: Sage Weil <sage@newdream.net>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: add snapshot/subvolume destroy ioctl (Yan, Zheng, 2009-09-21, 12 files, -233/+605)

    This patch adds a snapshot/subvolume destroy ioctl.  A subvolume that
    isn't being used and doesn't contain links to other subvolumes can be
    destroyed.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: change how subvolumes are organized (Yan, Zheng, 2009-09-21, 9 files, -168/+459)

    btrfs allows subvolumes and snapshots anywhere in the directory tree.
    If we snapshot a subvolume that contains a link to another subvolume
    called subvolA, subvolA can be accessed through both the original
    subvolume and the snapshot.  This is similar to creating a hard link
    to a directory, and has very similar problems.

    The aim of this patch is to enforce that there is only one access
    point to each subvolume.  Only the first directory entry (the one
    added when the subvolume/snapshot was created) is treated as a valid
    access point.  The first directory entry is distinguished by checking
    the root forward reference.  If the corresponding root forward
    reference is missing, we know the entry is not the first one.

    This patch also adds snapshot/subvolume rename support; the code
    allows renaming a subvolume link across subvolumes.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: do not reuse objectid of deleted snapshot/subvol (Yan, Zheng, 2009-09-21, 6 files, -116/+31)

    The new back reference format does not allow reusing the objectid of a
    deleted snapshot/subvol, so we use ++highest_objectid to allocate the
    objectid for a new snapshot/subvol.  Now we use ++highest_objectid to
    allocate the objectid for both new inodes and new snapshots/subvolumes,
    so this patch removes the 'find hole' code in
    btrfs_find_free_objectid.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: speed up snapshot dropping (Yan, Zheng, 2009-09-21, 2 files, -49/+207)

    This patch contains two changes to avoid unnecessary tree block reads
    during snapshot dropping.

    First, check the tree block's reference count and flags before reading
    the tree block.  If the reference count > 1 and there is no need to
    update backrefs, we can avoid reading the tree block.

    Second, save when the snapshot was created in root_key.offset.  We can
    compare a block pointer's generation with the snapshot's creation
    generation while updating backrefs.  If a given block was created
    before the snapshot was created, the snapshot can't be the tree
    block's owner, so we can avoid reading the block.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: search for an allocation hint while filling file COW (Chris Mason, 2009-09-18, 3 files, -1/+59)

    The allocator has some nice knobs for sending hints about where to try
    and allocate new blocks, but when we're doing file allocations we're
    not sending any hint at all.

    This commit adds a simple extent map search to see if we can quickly
    and easily find a hint for the allocator.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: properly honor wbc->nr_to_write changes (Chris Mason, 2009-09-18, 1 file, -11/+27)

    When btrfs fills a delayed allocation, it tries to increase the wbc
    nr_to_write to cover a big part of allocation.  The theory is that
    we're doing contiguous IO and writing a few more blocks will save
    seeks overall at a very low cost.

    The problem is that extent_write_cache_pages could ignore the new
    higher nr_to_write if nr_to_write had already gone down to zero.  We
    fix that by rechecking the nr_to_write for every page that is
    processed in the pagevec.

    This updates the math around bumping the nr_to_write value to make
    sure we don't leave a tiny amount of IO hanging around for the very
    end of a new extent.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

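    A hedged sketch of the loop shape described above (names loosely
    follow the writeback code; this is not the actual patch): the budget
    has to be re-read for every page, because the filesystem may have
    raised it after it reached zero.

        /* wbc->nr_to_write may be bumped by the filesystem from inside
         * writepage(), so test it per page rather than once per batch. */
        struct writeback_budget { long nr_to_write; };

        static void write_pagevec(struct writeback_budget *wbc,
                                  void (*writepage)(int page, struct writeback_budget *wbc),
                                  int nr_pages)
        {
            for (int i = 0; i < nr_pages; i++) {
                writepage(i, wbc);           /* may increase wbc->nr_to_write */
                if (--wbc->nr_to_write <= 0)
                    break;                   /* budget re-checked after every page */
            }
        }
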
| | * Btrfs: improve async block group caching (Yan Zheng, 2009-09-17, 5 files, -259/+382)

    This patch gets rid of two limitations of async block group caching.
    The old code delays handling pinned extents when a block group is in
    caching.  To allocate logged file extents, the old code needs to wait
    until the block group is fully cached.

    To get rid of the limitations, this patch introduces a data structure
    to track the progress of caching.  Based on the caching progress, we
    know which extents should be added to the free space cache when
    handling the pinned extents.  The logged file extents are also handled
    in a similar way.

    This patch also changes how pinned extents are tracked.  The old code
    uses one tree to track pinned extents, and copies the pinned extents
    tree at transaction commit time.  This patch makes it use two trees to
    track pinned extents: one tree for extents that are pinned in the
    running transaction, one tree for extents that can be unpinned.  At
    transaction commit time, we swap the two trees.

    Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: Fix async thread shutdown race (Chris Mason, 2009-09-15, 1 file, -6/+10)

    It was possible for an async worker thread to be selected to receive a
    new work item, but exit before the work item was actually placed into
    that thread's work list.

    This commit fixes the race by incrementing the num_pending counter
    earlier, and making sure to check the number of pending work items
    before a thread exits.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

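    The race has the usual check-then-exit shape; a minimal C11 sketch of
    the ordering fix (hypothetical names): the pending counter is
    incremented before the work item becomes visible, so a worker can
    never observe "no work" while an item is in flight toward its list.

        #include <stdatomic.h>

        struct worker_pool {
            atomic_long num_pending;
            /* ... per-thread work lists, locks ... */
        };

        static void queue_work(struct worker_pool *p /* , struct work *w */)
        {
            /* Count the item as pending *before* handing it to a thread. */
            atomic_fetch_add(&p->num_pending, 1);
            /* ... select a thread and append to its work list ... */
        }

        static int worker_may_exit(struct worker_pool *p)
        {
            /* Exit only while the pending count is provably zero. */
            return atomic_load(&p->num_pending) == 0;
        }
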
| | * Btrfs: fix worker thread double spin_lock_irq (Chris Mason, 2009-09-15, 1 file, -2/+2)

    The exit-on-idle code for async worker threads was incorrectly calling
    spin_lock_irq with interrupts already off.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Btrfs: fix async worker startup race (Chris Mason, 2009-09-15, 1 file, -3/+11)

    After a new worker thread starts, it is placed into the list of idle
    threads.  But this may race with a check for idle done by the worker
    thread itself, resulting in a double list_add operation.

    This fix adds a check to make sure the idle thread addition is done
    properly.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | * Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable (Chris Mason, 2009-09-11, 18 files, -317/+580)

| | | * Btrfs: zero page past end of inline file items (Chris Mason, 2009-09-11, 1 file, -0/+5)

    When btrfs_get_extent is reading inline file items for readpage, it
    needs to copy the inline extent into the page.  If the inline extent
    doesn't cover all of the page, that means there is a hole in the file,
    or that our file is smaller than one page.

    readpage does zeroing for the case where the file is smaller than one
    page, but nobody is currently zeroing for the case where there is a
    hole after the inline item.

    This commit changes btrfs_get_extent to zero fill the page past the
    end of the inline item.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

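    The effect of the fix, in sketch form (PAGE_BYTES and the names are
    illustrative, not the btrfs code): after copying the inline bytes,
    everything from the end of the item to the end of the page must be
    zeroed so stale page contents never show up in the hole.

        #include <string.h>
        #include <stddef.h>

        enum { PAGE_BYTES = 4096 };   /* stand-in for PAGE_SIZE */

        static void fill_page_from_inline(char *page, const char *inline_data,
                                          size_t copied)
        {
            memcpy(page, inline_data, copied);       /* the inline item's bytes */
            if (copied < PAGE_BYTES)                 /* hole past the item */
                memset(page + copied, 0, PAGE_BYTES - copied);
        }
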
| | | * Btrfs: fix btrfs page_mkwrite to return locked page (Chris Mason, 2009-09-11, 1 file, -0/+3)

    This closes a hole where the page may be written before the
    page_mkwrite caller has a chance to dirty it (thanks to Nick Piggin).

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: Fix extent replacement race (Chris Mason, 2009-09-11, 7 files, -13/+80)

    Data COW means that whenever we write to a file, we replace any old
    extent pointers with new ones.  There was a window where a readpage
    might find the old extent pointers on disk and cache them in the
    extent_map tree in ram in the middle of a given write replacing them.

    Even though both the readpage and the write had their respective bytes
    in the file locked, the extent readpage inserts may cover more bytes
    than it had locked down.

    This commit closes the race by keeping the new extent pinned in the
    extent map tree until after the on-disk btree is properly setup with
    the new extent pointers.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: Use PagePrivate2 to track pages in the data=ordered code. (Chris Mason, 2009-09-11, 5 files, -55/+62)

    Btrfs writes go through delalloc to the data=ordered code.  This makes
    sure that all of the data is on disk before the metadata that
    references it.  The tracking means that we have to make sure each page
    in an extent is fully written before we add that extent into the
    on-disk btree.

    This was done in the past by setting the EXTENT_ORDERED bit for the
    range of an extent when it was added to the data=ordered code, and
    then clearing the EXTENT_ORDERED bit in the extent state tree as each
    page finished IO.

    One of the reasons we had to do this was because sometimes pages are
    magically dirtied without page_mkwrite being called.  The
    EXTENT_ORDERED bit is checked at writepage time, and if it isn't
    there, our page becomes dirty without going through the proper path.

    These bit operations make for a number of rbtree searches for each
    page, and can cause considerable lock contention.

    This commit switches from the EXTENT_ORDERED bit to use PagePrivate2.
    As pages go into the ordered code, PagePrivate2 is set on each one.
    This is a cheap operation because we already have all the pages locked
    and ready to go.

    As IO finishes, the PagePrivate2 bit is cleared and the ordered
    accounting is updated for each page.

    At writepage time, if the PagePrivate2 bit is missing, we go into the
    writepage fixup code to handle improperly dirtied pages.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: use a cached state for extent state operations during delalloc (Chris Mason, 2009-09-11, 5 files, -24/+40)

    This changes the btrfs code to find delalloc ranges in the extent
    state tree to use the new state caching code from set/test bit.  It
    reduces one of the biggest causes of rbtree searches in the writeback
    path.

    test_range_bit is also modified to take the cached state as a starting
    point while searching.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: don't lock bits in the extent tree during writepage (Chris Mason, 2009-09-11, 1 file, -21/+0)

    At writepage time, we have the page locked and we have the extent_map
    entry for this extent pinned in the extent_map tree.  So, the page
    can't go away and its mapping can't change.

    There is no need for the extra extent_state lock bits during
    writepage.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: cache values for locking extents (Chris Mason, 2009-09-11, 3 files, -36/+100)

    Many of the btrfs extent state tree users follow the same pattern.
    They lock an extent range in the tree, do some operation and then
    unlock.

    This translates to at least 2 rbtree searches, and maybe more if they
    are doing operations on the extent state tree.  A locked extent in the
    tree isn't going to be merged or changed, and so we can safely return
    the extent state structure as a cached handle.

    This changes set_extent_bit to give back a cached handle, and also
    changes both set_extent_bit and clear_extent_bit to use the cached
    handle if it is available.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

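    The pattern works for any search tree; a hedged sketch (stub types,
    not the btrfs code) of the cached-handle fast path: if the handle from
    the previous call still covers the offset, the rbtree walk is skipped
    entirely.

        #include <stdint.h>

        struct state_stub {
            uint64_t start, end;   /* inclusive range covered by this state */
        };

        struct state_stub *tree_search(uint64_t offset);   /* O(log n) stand-in */

        static struct state_stub *lookup_state(uint64_t offset,
                                               struct state_stub **cached)
        {
            struct state_stub *s = *cached;

            /* A locked extent can't be merged or split, so a cached handle
             * that still covers the offset is trustworthy. */
            if (s && s->start <= offset && offset <= s->end)
                return s;

            s = tree_search(offset);   /* slow path: full rbtree search */
            *cached = s;
            return s;
        }
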
| | | * Btrfs: reduce CPU usage in the extent_state tree (Chris Mason, 2009-09-11, 3 files, -68/+28)

    Btrfs is currently mirroring some of the page state bits into its
    extent state tree.  The goal behind this was to use it in supporting
    blocksizes other than the page size.

    But, we don't currently support that, and we're using quite a lot of
    CPU on the rb tree and its spin lock.  This commit starts a series of
    cleanups to reduce the amount of work done in the extent state tree as
    part of each IO.

    This commit:

      * Adds the ability to lock an extent in the state tree and also set
        other bits.  The idea is to do locking and delalloc in one call.

      * Removes the EXTENT_WRITEBACK and EXTENT_DIRTY bits.  Btrfs is
        using a combination of the page bits and the ordered write code
        for this instead.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: Fix new state initialization order (Chris Mason, 2009-09-11, 1 file, -2/+2)

    As the extent state tree is manipulated, there are callbacks that are
    used to take extra actions when different state bits are set or
    cleared.  One example of this is a counter for the total number of
    delayed allocation bytes in a single inode and in the whole FS.

    When new states are inserted, this callback was being done before we
    properly set up the new state.  This hasn't caused problems before
    because the lock bit was always done first, and the existing callbacks
    don't care about the lock bit.

    This patch makes sure the state is properly set up before using the
    callback, which is important for later optimizations that do more work
    without using the lock bit.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: switch extent_map to a rw lock (Chris Mason, 2009-09-11, 10 files, -60/+57)

    There are two main users of the extent_map tree.  The first is regular
    file inodes, where it is evenly spread between readers and writers.

    The second is the chunk allocation tree, which maps blocks from
    logical addresses to physical ones, and it is 99.99% reads.

    The mapping tree is a point of lock contention during heavy IO
    workloads, so this commit switches things to a rw lock.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

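    For a 99.99%-read structure, a reader-writer lock lets lookups run in
    parallel and only serializes the rare update; a minimal pthreads
    sketch of the same trade-off (a flat array stands in for the mapping
    tree):

        #include <pthread.h>
        #include <stdint.h>

        static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;
        static uint64_t logical_to_physical[1024];

        /* Readers, the overwhelmingly common case, no longer serialize. */
        static uint64_t map_lookup(unsigned idx)
        {
            pthread_rwlock_rdlock(&map_lock);
            uint64_t phys = logical_to_physical[idx];
            pthread_rwlock_unlock(&map_lock);
            return phys;
        }

        /* Writers (chunk allocation) still get exclusive access. */
        static void map_insert(unsigned idx, uint64_t phys)
        {
            pthread_rwlock_wrlock(&map_lock);
            logical_to_physical[idx] = phys;
            pthread_rwlock_unlock(&map_lock);
        }
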
| | | * Btrfs: tweak congestion backoff (Chris Mason, 2009-09-11, 1 file, -1/+1)

    The btrfs io submission thread tries to back off congested devices in
    favor of rotating off to another disk.  But it tries to make sure it
    submits at least some IO before rotating on (the others may be
    congested too), and so it has a magic number of requests it tries to
    write before it hops.

    This makes the magic number smaller.  Testing shows that we're
    spending too much time on congested devices and leaving the other
    devices idle.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: use larger nr_to_write for larger extents (Chris Mason, 2009-09-11, 1 file, -5/+9)

    When btrfs fills a large delayed allocation extent, it is a good idea
    to try and convince the write_cache_pages caller to go ahead and write
    a good chunk of that extent.  The extra IO is basically free because
    we know it is contiguous.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: reduce worker thread spin_lock_irq hold times (Chris Mason, 2009-09-11, 1 file, -14/+60)

    This changes the btrfs worker threads to batch work items into a local
    list.  It allows us to pull work items in large chunks and
    significantly reduces the number of times we need to take the worker
    thread spinlock.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

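    A sketch of the batching idea (pthreads and a toy list, not the kernel
    code): splice the whole shared queue onto a local list in one critical
    section, then process the items with the lock dropped.

        #include <pthread.h>
        #include <stddef.h>

        struct work {
            struct work *next;
            void (*fn)(struct work *self);
        };

        static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
        static struct work *queue_head;

        static void worker_run_batch(void)
        {
            /* One lock acquisition moves every queued item to a local list. */
            pthread_mutex_lock(&queue_lock);
            struct work *batch = queue_head;
            queue_head = NULL;
            pthread_mutex_unlock(&queue_lock);

            /* The shared lock is taken once per batch, not once per item. */
            while (batch) {
                struct work *w = batch;
                batch = w->next;
                w->fn(w);
            }
        }
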
| | | * Btrfs: keep irqs on more often in the worker threads (Chris Mason, 2009-09-11, 2 files, -10/+16)

    The btrfs worker thread spinlock was being used both for the queueing
    of IO and for the processing of ordered events.

    The ordered events never happen from end_io handlers, and so they
    don't need to use the _irq version of spinlocks.  This adds a
    dedicated lock to the ordered lists so they don't have to run with
    irqs off.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| | | * Btrfs: optimize set extent bit (Chris Mason, 2009-09-11, 2 files, -4/+13)

    The Btrfs set_extent_bit call currently searches the rbtree every time
    it needs to find more extent_state objects to fill the requested
    operation.

    This adds a simple test with rb_next to see if the next object in the
    tree was adjacent to the one we just found.  If so, we skip the search
    and just use the next object.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

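    The same trick applies to any ordered container; a hedged sketch over
    a sorted linked list (a stand-in for the rbtree and rb_next()): if the
    element after the one just processed starts exactly where it ended,
    reuse it instead of searching from the root again.

        #include <stdint.h>
        #include <stddef.h>

        struct state {
            uint64_t start, end;      /* inclusive, kept in sorted order */
            struct state *next;       /* rb_next() analogue */
        };

        struct state *search(uint64_t offset);   /* O(log n) stand-in */

        static void set_range(uint64_t start, uint64_t end)
        {
            struct state *s = search(start);

            while (s && s->start <= end) {
                /* ... set the requested bits in s ... */
                if (s->end >= end)
                    break;                     /* range fully covered */
                if (s->next && s->next->start == s->end + 1)
                    s = s->next;               /* adjacent: skip the search */
                else
                    s = search(s->end + 1);    /* hole: fall back to search */
            }
        }
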
| | | * Btrfs: Allow worker threads to exit when idle (Chris Mason, 2009-09-11, 3 files, -32/+132)

    The Btrfs worker threads don't currently die off after they have been
    idle for a while, leading to a lot of threads sitting around doing
    nothing for each mount.

    Also, they are unable to start atomically (from end_io handlers).

    This commit reworks the worker threads so they can be started from
    end_io handlers (just setting a flag that asks for a thread to be
    added at a later date) and so they can exit if they have been idle for
    a long time.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6 (Linus Torvalds, 2009-09-24, 32 files, -425/+415)

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
      truncate: use new helpers
      truncate: new helpers
      fs: fix overflow in sys_mount() for in-kernel calls
      fs: Make unload_nls() NULL pointer safe
      freeze_bdev: grab active reference to frozen superblocks
      freeze_bdev: kill bd_mount_sem
      exofs: remove BKL from super operations
      fs/romfs: correct error-handling code
      vfs: seq_file: add helpers for data filling
      vfs: remove redundant position check in do_sendfile
      vfs: change sb->s_maxbytes to a loff_t
      vfs: explicitly cast s_maxbytes in fiemap_check_ranges
      libfs: return error code on failed attr set
      seq_file: return a negative error code when seq_path_root() fails.
      vfs: optimize touch_time() too
      vfs: optimization for touch_atime()
      vfs: split generic_forget_inode() so that hugetlbfs does not have to copy it
      fs/inode.c: add dev-id and inode number for debugging in init_special_inode()
      libfs: make simple_read_from_buffer conventional