path: root/fs
Commit message    Author    Age    Files    Lines
* /proc/stat: fix scalability of irq sum of all cpu    KAMEZAWA Hiroyuki    2010-10-27    1    -8/+2
    In /proc/stat, the number of events for each IRQ is shown by summing
    each irq's events over all cpus. But we can make use of kstat_irqs().
    kstat_irqs() does the same calculation. If !CONFIG_GENERIC_HARDIRQ,
    it's not a big cost. (Both the number of cpus and the number of irqs
    are small.) If a system is very big and CONFIG_GENERIC_HARDIRQ, it does
        for_each_irq()
            for_each_cpu()
                - look up a radix tree
                - read desc->irq_stat[cpu]
    This is not efficient. This patch adds kstat_irqs() for
    CONFIG_GENERIC_HARDIRQ and changes the calculation to
        for_each_irq()
            look up the radix tree
            for_each_cpu()
                - read desc->irq_stat[cpu]
    This reduces the cost.
    A test on a (4096 cpus, 256 nodes, 4592 irqs) host (by Jack Steiner):
        %time cat /proc/stat > /dev/null
        Before patch: 2.459 sec
        After patch:   .561 sec
    [akpm@linux-foundation.org: unexport kstat_irqs, coding-style tweaks]
    [akpm@linux-foundation.org: fix unused variable 'per_irq_sum']
    Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Tested-by: Jack Steiner <steiner@sgi.com>
    Acked-by: Jack Steiner <steiner@sgi.com>
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
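    A minimal C sketch of the reworked summation. The per-cpu field is
    named irq_stat as in the commit text above; the in-tree field name may
    differ, so treat it as an assumption:

        /* one radix-tree lookup per IRQ, then cheap per-cpu reads */
        unsigned int kstat_irqs(unsigned int irq)
        {
                struct irq_desc *desc = irq_to_desc(irq); /* radix-tree lookup */
                unsigned int sum = 0;
                int cpu;

                if (!desc)
                        return 0;
                for_each_possible_cpu(cpu)
                        sum += desc->irq_stat[cpu];       /* per-cpu counter read */
                return sum;
        }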
* /proc/stat: scalability of irq num per cpu    KAMEZAWA Hiroyuki    2010-10-27    1    -3/+1
    /proc/stat shows the total number of all interrupts to each cpu. But
    when the number of IRQs is very large, it takes a very long time and
    'cat /proc/stat' takes more than 10 secs. This is because the sum of
    all irq events is computed when /proc/stat is read. This patch adds a
    percpu "sum of all irqs" counter and reduces the read cost.
    The cost of reading /proc/stat matters because it is used by major
    applications such as 'top', 'ps', 'w', etc.
    A test on a machine (4096 cpus, 256 nodes, 4592 irqs) shows:
        %time cat /proc/stat > /dev/null
        Before patch: 12.627 sec
        After patch:   2.459 sec
    Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
    Tested-by: Jack Steiner <steiner@sgi.com>
    Acked-by: Jack Steiner <steiner@sgi.com>
    Cc: Yinghai Lu <yinghai@kernel.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
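    A sketch of the idea (names are illustrative, not the exact kernel
    symbols): maintain a per-cpu running total at interrupt time, so the
    /proc/stat reader sums NR_CPUS values instead of NR_CPUS * NR_IRQS:

        static DEFINE_PER_CPU(unsigned long, irqs_sum);

        static inline void account_irq_this_cpu(void)
        {
                __this_cpu_inc(irqs_sum);        /* hot path: one local increment */
        }

        static unsigned long sum_all_irqs(void)  /* cold path: /proc/stat read */
        {
                unsigned long sum = 0;
                int cpu;

                for_each_possible_cpu(cpu)
                        sum += per_cpu(irqs_sum, cpu);
                return sum;
        }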
* procfs: fix /proc/softirqs formatting    Davidlohr Bueso    2010-10-27    1    -2/+2
    The length of the BLOCK_IOPOLL string is making its value be printed
    too far to the right. This patch fixes this and makes the output a bit
    neater.
    Currently:
                          CPU0
        HI:                  0
        TIMER:          599792
        NET_TX:              2
        NET_RX:              6
        BLOCK:           80807
        BLOCK_IOPOLL:        0
        TASKLET:         20012
        SCHED:               0
        HRTIMER:            63
        RCU:            619279
    With patch:
                          CPU0
        HI:                  0
        TIMER:          585582
        NET_TX:              2
        NET_RX:              6
        BLOCK:           80320
        BLOCK_IOPOLL:        0
        TASKLET:         19287
        SCHED:               0
        HRTIMER:            62
        RCU:            604441
    Signed-off-by: Davidlohr Bueso <dave@gnu.org>
    Acked-by: Keika Kobayashi <kobayashi.kk@ncos.nec.co.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* /proc/pid/smaps: export amount of anonymous memory in a mapping    Nikanth Karthikesan    2010-10-27    1    -0/+6
    Export the number of anonymous pages in a mapping via smaps. Even the
    private pages in a mapping backed by a file are marked as anonymous
    when they are modified. Export this information to user-space via
    smaps.
    Exporting this count will help gdb make a better decision on which
    areas need to be dumped in its coredump; it should also be useful to
    others studying the memory usage of a process.
    Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
    Acked-by: Hugh Dickins <hughd@google.com>
    Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Cc: Matt Mackall <mpm@selenic.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
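    An illustrative excerpt of a resulting /proc/<pid>/smaps entry (the
    values are made up; the new field is the "Anonymous:" line):

        08048000-080bc000 r-xp 00000000 03:02 13130   /bin/bash
        Size:                464 kB
        Rss:                 424 kB
        Private_Dirty:        20 kB
        Anonymous:            20 kB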
* coredump: default CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y    Roland McGrath    2010-10-27    1    -2/+2
    The userland ELF tools have been coping with partial-segments core
    files for a few years now. Multiple distro builds are now setting this
    option. It behooves everyone who ever deals with core files to have
    more info dumped in there, especially as more and more people's
    compilers are producing build IDs. Make it the default.
    Anyone using older tools confused by these core files can configure
    this option off, or just change /proc/PID/coredump_filter after boot.
    Signed-off-by: Roland McGrath <roland@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* core_pattern: fix truncation by core_pattern handler with long parameters    Xiaotian Feng    2010-10-27    1    -60/+95
    We met a parameter truncation issue. Consider the following:
        > echo "|/root/core_pattern_pipe_test %p /usr/libexec/blah-blah-blah \
          %s %c %p %u %g 11 12345678901234567890123456789012345678 %t" > \
          /proc/sys/kernel/core_pattern
    This is okay because the string is less than CORENAME_MAX_SIZE.
    "cat /proc/sys/kernel/core_pattern" shows the whole string. But after
    we ran core_pattern_pipe_test from the man page, we found the last
    parameter was truncated, like below:
        argc[10]=<12807486>
    The root cause is that core_pattern allows % specifiers, which need to
    be replaced at parse time, but the replacement may expand the string to
    larger than CORENAME_MAX_SIZE. Since the replacement code uses
    snprintf(out_ptr, out_end - out_ptr, ...), a % specifier at the end of
    the pattern can write past the end of the corename array.
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
    Cc: Alexander Viro <viro@zeniv.linux.org.uk>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Reviewed-by: Neil Horman <nhorman@tuxdriver.com>
    Cc: Roland McGrath <roland@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
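    An illustrative bound check (not the exact fix, which reworks the
    whole of format_corename; the helper name is hypothetical): verify the
    snprintf result against the remaining space instead of assuming the
    expansion fit:

        /* returns bytes written, or -ENAMETOOLONG if the % expansion
         * would not fit in what is left of the corename buffer */
        static int corename_printf(char *out_ptr, char *out_end,
                                   const char *fmt, ...)
        {
                va_list args;
                int n;

                va_start(args, fmt);
                n = vsnprintf(out_ptr, out_end - out_ptr, fmt, args);
                va_end(args);

                if (n < 0 || n >= out_end - out_ptr)
                        return -ENAMETOOLONG;
                return n;
        }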
* signals: move cred_guard_mutex from task_struct to signal_struct    KOSAKI Motohiro    2010-10-27    2    -9/+9
    Oleg Nesterov pointed out that we have to prevent multiple threads
    inside exec itself, and that we can reuse ->cred_guard_mutex for it.
    Indeed, concurrent execve() has no value.
    Let's move ->cred_guard_mutex from task_struct to signal_struct. It
    naturally prevents multiple threads from being inside exec.
    Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Reviewed-by: Oleg Nesterov <oleg@redhat.com>
    Acked-by: Roland McGrath <roland@redhat.com>
    Acked-by: David Howells <dhowells@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* isofs: work-around for Rock Ridge+Joliet CDs with empty ISO root directory    Ondrej Zary    2010-10-27    1    -0/+40
    If a CD has both Rock Ridge and Joliet extensions and the ISO root
    directory is empty, no files are visible. Disable Rock Ridge extensions
    in this case and use the Joliet root directory instead.
    Signed-off-by: Ondrej Zary <linux@rainbow-software.org>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Guenter Roeck <guenter.roeck@ericsson.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6    Linus Torvalds    2010-10-26    90    -643/+760
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6: (52 commits)
        split invalidate_inodes()
        fs: skip I_FREEING inodes in writeback_sb_inodes
        fs: fold invalidate_list into invalidate_inodes
        fs: do not drop inode_lock in dispose_list
        fs: inode split IO and LRU lists
        fs: switch bdev inode bdi's correctly
        fs: fix buffer invalidation in invalidate_list
        fsnotify: use dget_parent
        smbfs: use dget_parent
        exportfs: use dget_parent
        fs: use RCU read side protection in d_validate
        fs: clean up dentry lru modification
        fs: split __shrink_dcache_sb
        fs: improve DCACHE_REFERENCED usage
        fs: use percpu counter for nr_dentry and nr_dentry_unused
        fs: simplify __d_free
        fs: take dcache_lock inside __d_path
        fs: do not assign default i_ino in new_inode
        fs: introduce a per-cpu last_ino allocator
        new helper: ihold()
        ...
| * split invalidate_inodes()    Al Viro    2010-10-25    4    -6/+51
    Pull removal of fsnotify marks into generic_shutdown_super(). Split
    umount-time work into a new function - evict_inodes(). Make sure that
    invalidate_inodes() will be able to cope with I_FREEING once we change
    locking in iput().
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: skip I_FREEING inodes in writeback_sb_inodes    Christoph Hellwig    2010-10-25    1    -2/+7
    Skip I_FREEING inodes just like I_WILL_FREE and I_NEW when walking the
    writeback lists. Currently this can't happen, but once we move from
    inode_lock to more fine grained locking we can have an inode that's
    still on the writeback lists but has I_FREEING set, and we absolutely
    need to skip it here, just like we do for all other inode list walks.
    Based on a patch from Dave Chinner.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: fold invalidate_list into invalidate_inodes    Christoph Hellwig    2010-10-25    1    -27/+16
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: do not drop inode_lock in dispose_list    Christoph Hellwig    2010-10-25    1    -18/+2
    Despite the comment above it, we can not safely drop the lock here.
    invalidate_list is called from many other places than just umount.
    Also switch to proper list macros now that we never drop the lock.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: inode split IO and LRU lists    Nick Piggin    2010-10-25    3    -37/+53
    The use of the same inode list structure (inode->i_list) for two
    different list constructs with different lifecycles and purposes makes
    it impossible to separate the locking of the different operations.
    Therefore, to enable the separation of the locking of the writeback and
    reclaim lists, split the inode->i_list into two separate lists
    dedicated to their specific tracking functions.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
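    A sketch of the resulting structure; the field names i_wb_list and
    i_lru are an assumption based on the patch description:

        struct inode {
                /* ... */
                struct list_head  i_wb_list;  /* writeback list, was i_list */
                struct list_head  i_lru;      /* inode LRU list, was i_list */
                /* ... */
        };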
| * fs: switch bdev inode bdi's correctly    Dave Chinner    2010-10-25    1    -5/+21
    bdev inodes can remain dirty even after their last close. Hence the BDI
    associated with the bdev->inode gets modified during the last close to
    point to the default BDI. However, the bdev inode still needs to be
    moved to the dirty lists of the new BDI, otherwise it will corrupt the
    writeback list it was left on.
    Add a new function bdev_inode_switch_bdi() to move all the bdi state
    from the old bdi to the new one safely. This is only a temporary
    measure until the bdev inode<->bdi lifecycle problems are sorted out.
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: fix buffer invalidation in invalidate_list    Christoph Hellwig    2010-10-25    1    -9/+7
    We must not call invalidate_inode_buffers in invalidate_list unless the
    inode can be reclaimed. If we remove the buffer association of a busy
    inode, fsync won't find the buffers anymore. As
    invalidate_inode_buffers is called from various sources other than
    umount, this actually does matter in practice.
    While at it, change the loop to a more natural form and remove the
    WARN_ON for I_NEW, which we already tested a few lines above.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fsnotify: use dget_parent    Christoph Hellwig    2010-10-25    1    -28/+5
    Use dget_parent instead of opencoding it. This simplifies the code, but
    more importantly prepares for the more complicated locking for a parent
    dget in the dcache scale patch series.
    It means we now grab a reference to the parent if it needs to be
    watched, but not with the specified mask. If this turns out to be a
    problem we'll have to revisit it, but for now let's keep as much as
    possible of the dcache internals inside dcache.[ch].
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * smbfs: use dget_parent    Christoph Hellwig    2010-10-25    2    -18/+8
    Use dget_parent instead of opencoding it. This simplifies the code, but
    more importantly prepares for the more complicated locking for a parent
    dget in the dcache scale patch series.
    Note that the d_time assignment in smb_renew_times moves out of d_lock,
    but it's a single atomic 32-bit value, and that's what other sites
    setting it do already.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * exportfs: use dget_parent    Christoph Hellwig    2010-10-25    1    -9/+8
    Use dget_parent instead of opencoding it. This simplifies the code, but
    more importantly prepares for the more complicated locking for a parent
    dget in the dcache scale patch series.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
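    The three dget_parent conversions above share the same shape; a sketch
    of the before/after, assuming the era's dcache_lock-based open coding:

        struct dentry *parent;

        /* before: open-coded, parent pointer only stable under dcache_lock */
        spin_lock(&dcache_lock);
        parent = dget(dentry->d_parent);
        spin_unlock(&dcache_lock);
        dput(parent);

        /* after: the helper encapsulates whatever locking is needed */
        parent = dget_parent(dentry);
        /* ... use parent ... */
        dput(parent);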
| * fs: use RCU read side protection in d_validate    Christoph Hellwig    2010-10-25    1    -19/+12
    d_validate does a purely read-only lookup in the dentry hash, so use
    RCU read side locking instead of dcache_lock.
    Split out from a larger patch by Nick Piggin <npiggin@suse.de>.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: clean up dentry lru modification    Christoph Hellwig    2010-10-25    1    -26/+23
    Always do a list_del_init on the LRU to make sure the list_empty
    invariant for not being on the LRU always holds true, and fold
    dentry_lru_del_init into dentry_lru_del.
    Replace the dentry_lru_add_tail primitive with a dentry_lru_move_tail
    operation that is simpler when the dentry is already on the list, which
    it always is. Move the list_empty check into dentry_lru_add to fit the
    scheme of the other lru helpers, and simplify locking once we move to a
    separate LRU lock.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: split __shrink_dcache_sb    Christoph Hellwig    2010-10-25    1    -60/+67
    Currently __shrink_dcache_sb has an extremely awkward calling
    convention because it tries to please very different callers. Split out
    the main loop into a shrink_dentry_list helper, which gets called
    directly from shrink_dcache_sb for the cases where all dentries need to
    be pruned, or from __shrink_dcache_sb for pruning only a certain number
    of dentries.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: improve DCACHE_REFERENCED usage    Nick Piggin    2010-10-25    1    -3/+6
    The dentry referenced bit is only set when installing the dentry back
    onto the LRU. However, with the lazy LRU, the dentry can already be on
    the LRU list at dput time, thus missing out on having the referenced
    bit set. Fix this.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: use percpu counter for nr_dentry and nr_dentry_unused    Christoph Hellwig    2010-10-25    1    -19/+32
    The nr_dentry stat is a globally touched cacheline and atomic operation
    twice over the lifetime of a dentry. It is used for the benefit of
    userspace only. Turn it into a per-cpu counter and always decrement it
    in d_free instead of doing various batching operations to reduce lock
    hold times in the callers.
    Based on an earlier patch from Nick Piggin <npiggin@suse.de>.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
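    A sketch with the kernel's percpu_counter API (simplified; the real
    patch also covers the unused-dentry count):

        static struct percpu_counter nr_dentry;  /* percpu_counter_init()ed at boot */

        static void d_alloc_account(void)
        {
                percpu_counter_inc(&nr_dentry);  /* cheap, CPU-local */
        }

        static void d_free_account(void)
        {
                percpu_counter_dec(&nr_dentry);  /* always in d_free, no batching */
        }

        /* only the rare /proc read pays for the cross-CPU summation */
        static long nr_dentry_read(void)
        {
                return percpu_counter_sum_positive(&nr_dentry);
        }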
| * fs: simplify __d_free    Christoph Hellwig    2010-10-25    1    -9/+5
    Remove d_callback and always call __d_free with an RCU head.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
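    A sketch of the simplified path, assuming the rcu_head lives in the
    dentry's d_u union as it did at the time:

        static void __d_free(struct rcu_head *head)
        {
                struct dentry *dentry =
                        container_of(head, struct dentry, d_u.d_rcu);

                /* external-name freeing omitted for brevity */
                kmem_cache_free(dentry_cache, dentry);
        }

        /* the single call site replacing the old d_callback indirection */
        call_rcu(&dentry->d_u.d_rcu, __d_free);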
| * fs: take dcache_lock inside __d_path    Christoph Hellwig    2010-10-25    2    -4/+4
    All callers take dcache_lock just around the call to __d_path, so move
    the lock inside it in preparation for getting rid of dcache_lock.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: do not assign default i_ino in new_inode    Christoph Hellwig    2010-10-25    16    -2/+21
    Instead of always assigning an increasing inode number in new_inode,
    move the call to assign it into those callers that actually need it.
    For now, the set of callers that need it is estimated conservatively;
    that is, the call is added to all filesystems that do not assign an
    i_ino by themselves. For a few more filesystems we can avoid assigning
    any inode number given that they aren't user visible, and for others it
    could be done lazily when an inode number is actually needed, but
    that's left for later patches.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: introduce a per-cpu last_ino allocator    Eric Dumazet    2010-10-25    1    -7/+38
    new_inode() dirties a contended cache line to get increasing inode
    numbers. This limits performance on workloads that cause significant
    parallel inode allocation.
    Solve this problem by using a per_cpu variable fed by the shared
    last_ino in batches of 1024 allocations. This reduces contention on the
    shared last_ino and gives the same spread of inode numbers as before
    (i.e. the same wraparound after 2^32 allocations).
    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
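    A sketch of the batched allocator described above (close to, but not
    guaranteed identical to, the committed code):

        #define LAST_INO_BATCH 1024
        static DEFINE_PER_CPU(unsigned int, last_ino);

        static unsigned int last_ino_get(void)
        {
                unsigned int *p = &get_cpu_var(last_ino);
                unsigned int res = *p;

                /* refill from the shared counter once per 1024 inodes */
                if (unlikely((res & (LAST_INO_BATCH - 1)) == 0)) {
                        static atomic_t shared_last_ino;
                        int next = atomic_add_return(LAST_INO_BATCH,
                                                     &shared_last_ino);

                        res = next - LAST_INO_BATCH;
                }

                *p = ++res;
                put_cpu_var(last_ino);
                return res;
        }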
| * new helper: ihold()    Al Viro    2010-10-25    35    -45/+52
    Clones an existing reference to inode; caller must already hold one.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
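    The helper is essentially a checked refcount bump; unlike igrab()
    there is no I_FREEING/I_WILL_FREE check, which is why the caller must
    already hold a reference:

        void ihold(struct inode *inode)
        {
                /* going from 1 to 2 (or higher) is the only legal move */
                WARN_ON(atomic_inc_return(&inode->i_count) < 2);
        }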
| * fs: remove inode_add_to_list/__inode_add_to_list    Christoph Hellwig    2010-10-25    2    -39/+35
    Split up inode_add_to_list/__inode_add_to_list. Locking for the two
    lists will be split soon so these helpers really don't buy us much
    anymore.
    The __ prefixes for the sb list helpers will go away soon, but until
    inode_lock is gone we'll need them to distinguish between the locked
    and unlocked variants.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: move i_count increments into find_inode/find_inode_fast    Christoph Hellwig    2010-10-25    1    -11/+6
    Now that iunique is not abusing find_inode anymore, we can move the
    i_count increment back to where it belongs.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: Stop abusing find_inode_fast in iunique    Christoph Hellwig    2010-10-25    1    -5/+25
    Stop abusing find_inode_fast for iunique and opencode the inode hash
    walk. Introduce a new iunique_lock to protect the iunique counters once
    inode_lock is removed.
    Based on a patch originally from Nick Piggin.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: Factor inode hash operations into functions    Dave Chinner    2010-10-25    1    -45/+55
    Before replacing the inode hash locking with a more scalable mechanism,
    factor the removal of the inode from the hashes rather than open coding
    it in several places.
    Based on a patch originally from Nick Piggin.
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: Implement lazy LRU updates for inodes    Nick Piggin    2010-10-25    2    -33/+64
    Convert the inode LRU to use lazy updates to reduce lock and cacheline
    traffic. We avoid moving inodes around in the LRU list during iget/iput
    operations, so these frequent operations don't need to access the LRUs.
    Instead, we defer the refcount checks to reclaim time and use a
    per-inode state flag, I_REFERENCED, to tell reclaim that iget has
    touched the inode in the past. This means that only reclaim should be
    touching the LRU with any frequency, hence significantly reducing lock
    acquisitions and the amount of contention on LRU updates.
    This also removes the inode_in_use list, which means we now only have
    one list for tracking the inode LRU status. This makes it much simpler
    to split out the LRU list operations under their own lock.
    Signed-off-by: Nick Piggin <npiggin@suse.de>
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
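    A schematic of the lazy scheme (simplified from the description above,
    not the literal patch): touches only set a flag; the pruner gives
    flagged inodes a second pass instead of reclaiming them:

        /* on iget()-style touches: no LRU manipulation, just a flag */
        inode->i_state |= I_REFERENCED;

        /* in the pruner, per candidate inode: */
        if (inode->i_state & I_REFERENCED) {
                inode->i_state &= ~I_REFERENCED;
                list_move(&inode->i_lru, &inode_lru);  /* rotate back */
        } else {
                list_del_init(&inode->i_lru);          /* really reclaim */
        }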
| * fs: Convert nr_inodes and nr_unused to per-cpu counters    Dave Chinner    2010-10-25    3    -22/+48
    The number of inodes allocated does not need to be tied to the addition
    or removal of an inode to/from a list. If we are not tied to a list
    lock, we could update the counters when inodes are initialised or
    destroyed, but to do that we need to convert the counters to be per-cpu
    (i.e. independent of a lock). This means that we have the freedom to
    change the list/locking implementation without needing to care about
    the counters.
    Based on a patch originally from Eric Dumazet.
    [AV: cleaned up a bit, fixed build breakage on weird configs]
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * vfs: fix infinite loop caused by clone_mnt race    Miklos Szeredi    2010-10-25    1    -1/+1
    If clone_mnt() happens while mnt_make_readonly() is running, the cloned
    mount might have the MNT_WRITE_HOLD flag set, which results in
    mnt_want_write() spinning forever on this mount.
    This needs CAP_SYS_ADMIN to trigger deliberately and is unlikely to
    happen accidentally. But if it does happen it can hang the machine.
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * switch hfs to hlist_add_fake()    Al Viro    2010-10-25    3    -4/+1
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * list.h: new helper - hlist_add_fake()    Al Viro    2010-10-25    2    -2/+2
    Make a node look as if it were on an hlist, with hlist_del() working
    correctly. Usable without any locking...
    Convert a couple of places where we want to do that to inode->i_hash.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
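    The helper itself is a one-liner: pointing pprev at the node's own
    next field makes hlist_unhashed() report "hashed" and lets hlist_del()
    unlink without touching any real list:

        static inline void hlist_add_fake(struct hlist_node *n)
        {
                n->pprev = &n->next;  /* self-referential: unlink writes are no-ops */
        }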
| * new helper: inode_unhashed()    Al Viro    2010-10-25    4    -6/+6
    Note: for race-free use you need inode_lock held.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * unexport invalidate_inodes    Al Viro    2010-10-25    2    -1/+5
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * smbfs never retains inodes with zero refcount in the first place    Al Viro    2010-10-25    1    -1/+0
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * ntfs: don't call invalidate_inodes()    Al Viro    2010-10-25    1    -15/+0
    We are in fill_super(); again, no inodes with zero i_count could be
    around until we set MS_ACTIVE.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * gfs2: invalidate_inodes() is a no-op there    Al Viro    2010-10-25    2    -2/+0
    In fill_super() we don't have MS_ACTIVE set yet, so there won't be any
    inodes with zero i_count sitting around. In put_super() we already have
    MS_ACTIVE removed *and* we have called invalidate_inodes() since then.
    So again there won't be any inodes with zero i_count...
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * ext2_remount: don't bother with invalidate_inodes()    Al Viro    2010-10-25    1    -3/+1
    It's pointless - we *do* have busy inodes (the root directory, for
    one), so that call will fail and the attempt to change the XIP flag
    will be ignored.
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs/buffer.c: call __block_write_begin() if we have page    Namhyung Kim    2010-10-25    1    -5/+4
    If we have the appropriate page already, call __block_write_begin()
    directly instead of releasing and regrabbing it inside of
    block_write_begin().
    Signed-off-by: Namhyung Kim <namhyung@gmail.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
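    The caller-side shape of the change, sketched; the signatures reflect
    this era of the VFS and get_block stands for the caller's mapping
    callback, so treat the details as assumptions:

        if (page) {
                /* we already hold a suitable page: write into it directly */
                status = __block_write_begin(page, pos, len, get_block);
        } else {
                /* otherwise let block_write_begin find/lock one for us */
                status = block_write_begin(mapping, pos, len, flags,
                                           &page, get_block);
        }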
| * lockdep: fixup checking of dir inode annotation    Namhyung Kim    2010-10-25    1    -1/+1
    Since the file type in inode->i_mode is encoded in the S_IFMT bits,
    S_ISDIR should be used to distinguish whether an inode is a directory
    or not.
    Signed-off-by: Namhyung Kim <namhyung@gmail.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
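    A concrete illustration of why the mask matters: S_IFDIR (0040000) is
    not a single exclusive bit, so testing it directly misfires on other
    types, e.g. block devices (S_IFBLK = 0060000); annotate_as_dir is a
    hypothetical callee:

        /* wrong: also true for a block device, since 0060000 & 0040000 != 0 */
        if (inode->i_mode & S_IFDIR)
                annotate_as_dir(inode);

        /* right: S_ISDIR masks with S_IFMT before comparing */
        if (S_ISDIR(inode->i_mode))
                annotate_as_dir(inode);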
| * aio: bump i_count instead of using igrab    Chris Mason    2010-10-25    1    -1/+14
    The aio batching code is using igrab to get an extra reference on the
    inode so it can safely batch. igrab will go ahead and take the global
    inode spinlock, which can be a bottleneck on large machines doing lots
    of AIO.
    In this case, igrab isn't required because we already have a reference
    on the file handle. It is safe to just bump the i_count directly on the
    inode.
    Benchmarking shows this patch brings IOP/s on tons of flash up by about
    2.5X.
    Signed-off-by: Chris Mason <chris.mason@oracle.com>
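    The substituted operation, sketched (i_count was a plain atomic_t at
    the time):

        /* safe only because the open file already pins the inode:
         * i_count cannot hit zero while we hold the file reference */
        atomic_inc(&inode->i_count);

    compared with igrab(), which takes inode_lock and re-checks the
    inode's state before bumping the count.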
| * fs/buffer.c: remove duplicated assignment on b_private    Namhyung Kim    2010-10-25    1    -1/+0
    bh->b_private is initialized within init_buffer(), thus the assignment
    should be redundant. Remove it.
    Signed-off-by: Namhyung Kim <namhyung@gmail.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * fs: move exportfs since it is not a networking filesystem    Randy Dunlap    2010-10-25    1    -3/+3
    Move the EXPORTFS kconfig symbol out of the NETWORK_FILESYSTEMS block
    since it provides a library function that can be (and is) used by other
    (non-network) filesystems.
    This also eliminates a kconfig dependency warning:
        warning: (XFS_FS && BLOCK || NFSD && NETWORK_FILESYSTEMS && INET &&
        FILE_LOCKING && BKL) selects EXPORTFS which has unmet direct
        dependencies (NETWORK_FILESYSTEMS)
    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Cc: Dave Chinner <david@fromorbit.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Alex Elder <aelder@sgi.com>
    Cc: xfs-masters@oss.sgi.com
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * hfs: use sync_dirty_buffer    Christoph Hellwig    2010-10-25    2    -13/+2
    Use sync_dirty_buffer instead of incorrectly opencoding it.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>