path: root/fs
Commit message (Author, Age, Files changed, Lines -/+)
* fs/proc/page.c: add PageAnon check to surely detect thp (Naoya Horiguchi, 2014-01-23, 1 file, -3/+5)

    stable_page_flags() checks !PageHuge && PageTransCompound && PageLRU to
    decide whether a given page is a thp. Sometimes this is not enough, and we
    fail to detect a thp while it is on a pagevec. This happens only for a few
    seconds after LRU list operations, but it makes it difficult to control
    applications that depend on this flag. So this patch adds another check,
    PageAnon, to detect thps on a pagevec. It may not leave room for future
    thp pagecache extensions, but it is OK at least for now.

    Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
    Cc: Wu Fengguang <fengguang.wu@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
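    For illustration: stable_page_flags() backs the per-PFN flags exported
    through /proc/kpageflags, which is where userspace sees the THP bit this
    patch makes more reliable. A minimal userspace sketch of that read path,
    assuming root privileges and that <linux/kernel-page-flags.h> provides
    KPF_THP:

        /* Hedged sketch: read /proc/kpageflags and test the THP bit for one PFN. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <linux/kernel-page-flags.h>    /* KPF_THP (assumed available) */

        int main(int argc, char **argv)
        {
            unsigned long pfn = argc > 1 ? strtoul(argv[1], NULL, 0) : 0;
            uint64_t flags;
            int fd = open("/proc/kpageflags", O_RDONLY);   /* needs CAP_SYS_ADMIN */

            if (fd < 0 ||
                pread(fd, &flags, sizeof(flags), pfn * sizeof(flags)) != (ssize_t)sizeof(flags)) {
                perror("kpageflags");
                return 1;
            }
            printf("pfn %lu: %s\n", pfn, (flags >> KPF_THP) & 1 ? "thp" : "not thp");
            close(fd);
            return 0;
        }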
*   Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs (Linus Torvalds, 2014-01-23, 3 files, -8/+6)

    Pull UDF & jbd fixes from Jan Kara:
     "A cleanup of JBD log messages and UDF fix of a lockdep warning"

    * 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
      udf: Fix lockdep warning from udf_symlink()
      jbd: Revise KERN_EMERG error messages
| * udf: Fix lockdep warning from udf_symlink() (Jan Kara, 2013-12-23, 1 file, -1/+1)

    Lockdep is complaining about UDF:

      =============================================
      [ INFO: possible recursive locking detected ]
      3.12.0+ #16 Not tainted
      ---------------------------------------------
      ln/7386 is trying to acquire lock:
       (&ei->i_data_sem){+.+...}, at: [<ffffffff8142f06d>] udf_get_block+0x8d/0x130
      but task is already holding lock:
       (&ei->i_data_sem){+.+...}, at: [<ffffffff81431a8d>] udf_symlink+0x8d/0x690

      other info that might help us debug this:
       Possible unsafe locking scenario:

            CPU0
            ----
       lock(&ei->i_data_sem);
       lock(&ei->i_data_sem);

       *** DEADLOCK ***

    This is because we hold i_data_sem of the symlink inode while calling
    udf_add_entry() for the directory. I don't think this can ever lead to
    deadlocks, since we never hold i_data_sem for two inodes in any other
    place. The fix is simple: move the unlock of i_data_sem for the symlink
    inode up; we don't need it for anything when linking the symlink inode
    to the directory.

    Reported-by: Christoph Hellwig <hch@infradead.org>
    Signed-off-by: Jan Kara <jack@suse.cz>
| * jbd: Revise KERN_EMERG error messages (Jan Kara, 2013-12-04, 2 files, -7/+5)

    Some of the KERN_EMERG printk messages do not really deserve this log
    level, and the one in log_wait_commit() is rather useless (the journal has
    been aborted earlier, and *that* is where we should have complained). So
    demote some messages to KERN_ERR and remove the useless one.

    Signed-off-by: Jan Kara <jack@suse.cz>
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse (Linus Torvalds, 2014-01-23, 5 files, -32/+71)

    Pull fuse update from Miklos Szeredi:
     "This contains a fix for a potential use-after-module-unload bug noticed
      by Al and caching improvements for read-only fuse filesystems by Andrew
      Gallagher"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/fuse:
      fuse: support clients that don't implement 'open'
      fuse: don't invalidate attrs when not using atime
      fuse: fix SetPageUptodate() condition in STORE
      fuse: fix pipe_buf_operations
| * | fuse: support clients that don't implement 'open' (Andrew Gallagher, 2014-01-22, 2 files, -10/+30)

    open/release operations require userspace transitions to keep track of the
    open count and to perform any FS-specific setup. However, for some purely
    read-only filesystems that don't need any setup at open/release time, we
    can avoid the performance overhead of calling into userspace for those
    calls.

    This patch adds the necessary support to the fuse kernel module to keep
    open/release operations from going to userspace. When the client returns
    ENOSYS, we avoid sending the subsequent release to userspace, and also
    remember this so that future opens don't trigger a userspace operation
    either.

    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
| * | fuse: don't invalidate attrs when not using atime (Andrew Gallagher, 2014-01-22, 3 files, -4/+16)

    Various read operations (e.g. readlink, readdir) invalidate the cached
    attrs for atime changes. This patch adds a new function,
    'fuse_invalidate_atime', which checks for a read-only super block and
    avoids the attr invalidation in that case.

    Signed-off-by: Andrew Gallagher <andrewjcg@fb.com>
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
| * | fuse: fix SetPageUptodate() condition in STORE (Miklos Szeredi, 2014-01-22, 1 file, -1/+2)

    As noticed by Coverity, the "num != 0" condition never triggers. Instead,
    it should check for a complete page.

    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
| * | fuse: fix pipe_buf_operations (Miklos Szeredi, 2014-01-22, 2 files, -17/+23)

    Having this struct in module memory could Oops if the module is unloaded
    while the buffer still persists in a pipe.

    Since sock_pipe_buf_ops is essentially the same as fuse_dev_pipe_buf_steal,
    merge them into nosteal_pipe_buf_ops (this is the same as
    default_pipe_buf_ops except that stealing the page from the buffer is not
    allowed).

    Reported-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
    Cc: stable@vger.kernel.org
* | | Merge tag 'for-f2fs-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs (Linus Torvalds, 2014-01-23, 19 files, -800/+1739)

    Pull f2fs updates from Jaegeuk Kim:
     "In this round, a couple of sysfs entries were introduced to tune f2fs at
      runtime. In addition, f2fs starts to support inline_data and improves
      the read/write performance in some workloads by refactoring bio-related
      flows.

      This patch set includes the following major enhancement patches:
       - support inline_data
       - refactor bio operations such as merge operations and rw type assignment
       - enhance the direct IO path
       - enhance bio operations
       - truncate a node page when it becomes obsolete
       - add sysfs entries: small_discards, max_victim_search, and in-place-update
       - add a sysfs entry to control max_victim_search

      The other bug fixes are as follows:
       - fix a bug in truncate_partial_nodes
       - avoid warnings during sparse and build process
       - fix error handling flows
       - fix potential bit overflows

      And there are a bunch of cleanups"

    * tag 'for-f2fs-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (95 commits)
      f2fs: drop obsolete node page when it is truncated
      f2fs: introduce NODE_MAPPING for code consistency
      f2fs: remove the orphan block page array
      f2fs: add helper function META_MAPPING
      f2fs: move a branch for code readability
      f2fs: call mark_inode_dirty to flush dirty pages
      f2fs: clean checkpatch warnings
      f2fs: missing REQ_META and REQ_PRIO when sync_meta_pages(META_FLUSH)
      f2fs: avoid f2fs_balance_fs call during pageout
      f2fs: add delimiter to separate name and value in debug phrase
      f2fs: use spinlock rather than mutex for better speed
      f2fs: move alloc new orphan node out of lock protection region
      f2fs: move grabbing orphan pages out of protection region
      f2fs: remove the needless parameter of f2fs_wait_on_page_writeback
      f2fs: update documents and a MAINTAINERS entry
      f2fs: add a sysfs entry to control max_victim_search
      f2fs: improve write performance under frequent fsync calls
      f2fs: avoid reading inline data except the first page
      f2fs: avoid leaving uninitialized data in page when reading inline data
      f2fs: fix truncate_partial_nodes bug
      ...
| * | | f2fs: drop obsolete node page when it is truncated (Jaegeuk Kim, 2014-01-23, 1 file, -0/+4)

    If a node page is truncated, we'd better drop the page from the
    node_inode's page cache for a better memory footprint.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: introduce NODE_MAPPING for code consistency (Jaegeuk Kim, 2014-01-22, 4 files, -31/+30)

    This patch adds NODE_MAPPING, which is similar to the META_MAPPING
    introduced by Gu Zheng.

    Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: remove the orphan block page array (Gu Zheng, 2014-01-22, 1 file, -3/+4)

    Since orphan_blocks can be as large as 504, it is neither safe nor
    rigorous to store such a large array on the kernel stack, as Dan Carpenter
    pointed out. In fact, grab_meta_page has already locked the page in the
    page cache, so we can use find_get_page() to fetch the page safely
    downstream and remove the page array entirely.

    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: add helper function META_MAPPING (Gu Zheng, 2014-01-22, 5 files, -8/+13)

    Introduce the helper function META_MAPPING() to get the address space that
    caches meta blocks.

    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: move a branch for code readability (Jaegeuk Kim, 2014-01-22, 1 file, -5/+4)

    This patch moves a function in f2fs_delete_entry for code readability.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: call mark_inode_dirty to flush dirty pages (Jaegeuk Kim, 2014-01-22, 3 files, -5/+8)

    If a dentry page is updated, we should call mark_inode_dirty to add the
    inode to the dirty list, so that its dentry pages are flushed to disk.
    Otherwise, the inode could be evicted without being flushed.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: clean checkpatch warnings (Chris Fries, 2014-01-20, 11 files, -30/+40)

    Fixed a variety of trivial checkpatch warnings. The only delta should be
    some minor formatting on log strings that were split / too long.

    Signed-off-by: Chris Fries <cfries@motorola.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: missing REQ_META and REQ_PRIO when sync_meta_pages(META_FLUSH) (Changman Lee, 2014-01-16, 1 file, -1/+1)

    When doing sync_meta_pages with META_FLUSH at checkpoint time, we override
    rw with WRITE_FLUSH_FUA. At that point we should also set
    REQ_META|REQ_PRIO.

    Signed-off-by: Changman Lee <cm224.lee@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: avoid f2fs_balance_fs call during pageout (Jaegeuk Kim, 2014-01-16, 1 file, -1/+3)

    This patch resolves the following bug:

      =========================================================
      [ INFO: possible irq lock inversion dependency detected ]
      3.13.0-rc5.f2fs+ #6 Not tainted
      ---------------------------------------------------------
      kswapd0/41 just changed the state of lock:
       (&sbi->gc_mutex){+.+.-.}, at: [<ffffffffa030503e>] f2fs_balance_fs+0xae/0xd0 [f2fs]
      but this lock took another, RECLAIM_FS-READ-unsafe lock in the past:
       (&sbi->cp_rwsem){++++.?}
      and interrupts could create inverse lock ordering between them.

      other info that might help us debug this:
      Chain exists of: &sbi->gc_mutex --> &sbi->cp_mutex --> &sbi->cp_rwsem

      Possible interrupt unsafe locking scenario:

            CPU0                    CPU1
            ----                    ----
       lock(&sbi->cp_rwsem);
                                    local_irq_disable();
                                    lock(&sbi->gc_mutex);
                                    lock(&sbi->cp_mutex);
       <Interrupt>
         lock(&sbi->gc_mutex);

       *** DEADLOCK ***

    This bug is due to the f2fs_balance_fs call in f2fs_write_data_page. If
    f2fs_write_data_page is triggered by wbc->for_reclaim via kswapd, it
    should not call f2fs_balance_fs, which tries to take a mutex already held
    by the original syscall flow.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: add delimiter to separate name and value in debug phrase (Changman Lee, 2014-01-14, 1 file, -4/+4)

    Add support for f2fs-tools/tools/f2stat to monitor
    /sys/kernel/debug/f2fs/status.

    Signed-off-by: Changman Lee <cm224.lee@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: use spinlock rather than mutex for better speed (Gu Zheng, 2014-01-14, 2 files, -13/+13)

    With the two previous changes, all the long-running operations have been
    moved out of the protected region, so here we can use a spinlock rather
    than a mutex (orphan_inode_mutex) for lower overhead.

    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: move alloc new orphan node out of lock protection region (Gu Zheng, 2014-01-14, 1 file, -6/+9)

    Move the allocation of new orphan nodes out of the lock protection region.

    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: move grabbing orphan pages out of protection region (Gu Zheng, 2014-01-14, 1 file, -7/+9)

    Move grabbing the orphan block pages out of the protected region, and grab
    all the orphan block pages ahead of time.

    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Reviewed-by: Chao Yu <chao2.yu@samsung.com>
    [Jaegeuk Kim: remove unnecessary code pointed out by Chao Yu]
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: remove the needless parameter of f2fs_wait_on_page_writeback (Yuan Zhong, 2014-01-14, 6 files, -7/+7)

    The "bool sync" parameter is never referenced in
    f2fs_wait_on_page_writeback, so remove it.

    Signed-off-by: Yuan Zhong <yuan.mark.zhong@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: add a sysfs entry to control max_victim_search (Jaegeuk Kim, 2014-01-08, 4 files, -3/+12)

    Previously, during SSR and GC, the maximum number of retries to find a
    victim segment was hard-coded by MAX_VICTIM_SEARCH, 4096 by default. This
    number affects IO locality when SSR mode is activated, which results in
    performance fluctuation on some low-end devices.

    If max_victim_search = 4, the victim is searched as below ("D" represents
    a dirty segment, and "*" indicates a selected victim segment):

      D1 D2 D3 D4 D5 D6 D7 D8 D9
      [   *   ] [   *   ] [   *   ] [ ....]

    This patch adds a sysfs entry to control the number dynamically through:

      /sys/fs/f2fs/$dev/max_victim_search

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
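    For illustration, a minimal userspace sketch of tuning this knob; the
    device name "sdb1" and the value 64 are assumptions, not taken from the
    patch:

        /* Hedged sketch: write a new max_victim_search value through sysfs. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/sys/fs/f2fs/sdb1/max_victim_search"; /* device name assumed */
            const char *val = "64\n";                                 /* example value */
            int fd = open(path, O_WRONLY);

            if (fd < 0 || write(fd, val, strlen(val)) < 0) {
                perror(path);
                return 1;
            }
            close(fd);
            return 0;
        }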
| * | | f2fs: improve write performance under frequent fsync calls (Jaegeuk Kim, 2014-01-08, 4 files, -10/+11)

    When considering a bunch of data writes with very frequent fsync calls, we
    can see the following performance regression.

    N: Node IO, D: Data IO, IO scheduler: cfq

      Issue    pending IOs
               D1 D2 D3 D4
       D1         D2 D3 D4 N1
       D2            D3 D4 N1 N2
       N1               D3 D4 N2 D1
       --> N1 can be selected by cfq because N and D have the same priority.
           Then D3 and D4 would be delayed, resulting in performance
           degradation.

    So, when processing an fsync call, it's better to give higher priority to
    data IOs than to node IOs by assigning WRITE and WRITE_SYNC respectively.

    This patch improves random write performance with frequent fsync calls by
    up to 10%.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: avoid reading inline data except the first page (Chao Yu, 2014-01-06, 1 file, -0/+6)

    Here is a case in which inline page data could be read into a page other
    than the first one:

     1. write inline data
     2. lseek to offset 4096
     3. read 4096 bytes from offset 4096
        (read_inline_data reads the inline data page into a non-first page,
         and VFS has previously added this page to the page cache)
     4. ftruncate to offset 8192
     5. read 4096 bytes from offset 4096
        (we hit this updated page with inline data in the cache)

    So we should leave this page with initialized data and the uptodate flag
    set for this case.

    Change log from v1:
     o fix a deadlock bug

    Signed-off-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
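    For illustration, the five-step sequence above written out as a minimal
    userspace sketch; the mount point and file name are assumptions, and the
    behaviour is only relevant on an f2fs volume mounted with inline_data:

        /* Hedged sketch of the reproduction steps described in the commit message. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            char buf[4096];
            int fd = open("/mnt/f2fs/testfile", O_RDWR | O_CREAT | O_TRUNC, 0644); /* path assumed */

            if (fd < 0) { perror("open"); return 1; }
            write(fd, "inline", 6);             /* 1. write a small amount of (inline) data */
            lseek(fd, 4096, SEEK_SET);          /* 2. seek to offset 4096 */
            read(fd, buf, sizeof(buf));         /* 3. read 4096 bytes from offset 4096 */
            ftruncate(fd, 8192);                /* 4. extend the file to 8192 bytes */
            pread(fd, buf, sizeof(buf), 4096);  /* 5. read the same range again */
            close(fd);
            return 0;
        }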
| * | | f2fs: avoid leaving uninitialized data in page when reading inline data (Chao Yu, 2014-01-06, 1 file, -3/+2)

    We left uninitialized data in the tail of the page when reading an inline
    data page. So let's initialize the remaining part of the page, excluding
    the inline data region.

    Change log from v1:
     o reduce unneeded memset in __f2fs_convert_inline_data

    Signed-off-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: fix truncate_partial_nodes bug (shifei10.ge, 2014-01-06, 1 file, -6/+7)

    truncate_partial_nodes puts pages incorrectly in the following two cases.
    Note that the value of the 'depth' argument can only be 2 or 3; see
    truncate_inode_blocks() and truncate_partial_nodes().

    1) An error occurs in the first 'for' loop
       When an error occurs with depth = 2, pages[0] is invalid, so this page
       doesn't need to be put and there is no problem. However, when depth is
       3, the pages are not put correctly: pages[0] is valid while pages[1] is
       invalid. In this case depth is set to 2 (see the statement
       depth = i + 1) and we 'goto fail'. In the 'fail' label,
       for (i = depth - 3; i >= 0; i--) cannot meet its condition because
       i = -1, so pages[0] can't be put.

    2) An error occurs in the second 'for' loop
       Now we have pages[0] with depth = 2, or pages[0] and pages[1] with
       depth = 3. When an error is detected, we 'goto fail' to put these
       pages. When depth is 2, for (i = depth - 3; i >= 0; i--) can't meet its
       condition because i = -1, so pages[0] can't be put. When depth is 3,
       the loop in 'fail' can only put pages[0]; pages[1] still can't be put.

    Note that 'depth' has been changed before the first 'goto fail' (see the
    statement depth = i + 1), so passing this modified 'depth' to the
    tracepoint trace_f2fs_truncate_partial_nodes is also incorrect.

    Signed-off-by: Shifei Ge <shifei10.ge@samsung.com>
    [Jaegeuk Kim: modify the description and fix one bug]
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
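    For illustration, a standalone, hypothetical sketch of the loop-bound
    arithmetic described above (not the kernel code itself): with depth == 2
    the cleanup loop never reaches pages[0], and with depth == 3 it never
    reaches pages[1]:

        /* Hedged sketch: count how many pages "for (i = depth - 3; i >= 0; i--)"
         * would put for each possible depth value. */
        #include <stdio.h>

        static int pages_put_by_fail_loop(int depth)
        {
            int i, puts = 0;

            for (i = depth - 3; i >= 0; i--)   /* depth == 2 -> i starts at -1 */
                puts++;
            return puts;
        }

        int main(void)
        {
            printf("depth=2: %d page(s) put (pages[0] leaked)\n", pages_put_by_fail_loop(2));
            printf("depth=3: %d page(s) put (pages[1] leaked)\n", pages_put_by_fail_loop(3));
            return 0;
        }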
| * | | f2fs: handle errors correctly during f2fs_reserve_block (Jaegeuk Kim, 2014-01-06, 3 files, -16/+26)

    get_dnode_of_data nullifies the inode and node page when an error occurs.
    There are two cases that pass an inode page into get_dnode_of_data():

     1. make_empty_dir() ->
          get_new_data_page() ->
            f2fs_reserve_block(ipage) ->
              get_dnode_of_data()

     2. f2fs_convert_inline_data() ->
          __f2fs_convert_inline_data() ->
            f2fs_reserve_block(ipage) ->
              get_dnode_of_data()

    This patch adds correct error-handling code for when get_dnode_of_data()
    returns an error. First, f2fs_reserve_block() calls f2fs_put_dnode()
    whenever reserve_new_block returns an error, so the rule of
    f2fs_reserve_block() is to nullify the inode page on any internal error.
    Finally, the two callers of f2fs_reserve_block() should call
    f2fs_put_dnode() appropriately if they got an error after a successful
    f2fs_reserve_block().

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: add inline_data recovery routine (Jaegeuk Kim, 2014-01-06, 4 files, -3/+57)

    This patch adds an inline_data recovery routine with the following policy:

      [prev.] [next] of the inline_data flag
         o       o    -> recover inline_data
         o       x    -> remove inline_data, and then recover data blocks
         x       o    -> remove inline_data, and then recover inline_data
         x       x    -> recover data blocks

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: add the number of inline_data files to status info (Jaegeuk Kim, 2014-01-06, 4 files, -7/+26)

    This patch adds the number of inline_data files into the status
    information. Note that the number is reset whenever the filesystem is
    newly mounted.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: refactor f2fs_convert_inline_data (Jaegeuk Kim, 2014-01-06, 4 files, -41/+28)

    This patch refactors f2fs_convert_inline_data to check a couple of
    conditions internally when deciding whether it needs to convert
    inline_data or not.

    The new f2fs_convert_inline_data initially checks:
     1) f2fs_has_inline_data(), and
     2) the data size to be changed.

    If the inode has inline_data but the size to fill is less than
    MAX_INLINE_DATA, then we don't need to convert the inline_data by
    allocating a data block.

    Change log from v1:
     o handle NULL pointer of grab_cache_page_write_begin() pointed out by
       Chao Yu.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: call f2fs_put_page at the error case (Jaegeuk Kim, 2014-01-06, 1 file, -1/+3)

    In f2fs_write_begin(), if f2fs_convert_inline_data() returns an error like
    -ENOSPC, f2fs should call f2fs_put_page(). Otherwise, the page remains
    locked, resulting in the following bug:

      [<ffffffff8114657e>] sleep_on_page+0xe/0x20
      [<ffffffff81146567>] __lock_page+0x67/0x70
      [<ffffffff81157d08>] truncate_inode_pages_range+0x368/0x5d0
      [<ffffffff81157ff5>] truncate_inode_pages+0x15/0x20
      [<ffffffff8115804b>] truncate_pagecache+0x4b/0x70
      [<ffffffff81158082>] truncate_setsize+0x12/0x20
      [<ffffffffa02a1842>] f2fs_setattr+0x72/0x270 [f2fs]
      [<ffffffff811cdae3>] notify_change+0x213/0x400
      [<ffffffff811ab376>] do_truncate+0x66/0xa0
      [<ffffffff811ab541>] vfs_truncate+0x191/0x1b0
      [<ffffffff811ab5bc>] do_sys_truncate+0x5c/0xa0
      [<ffffffff811ab78e>] SyS_truncate+0xe/0x10
      [<ffffffff81756052>] system_call_fastpath+0x16/0x1b
      [<ffffffffffffffff>] 0xffffffffffffffff

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: convert inline_data for punch_hole (Jaegeuk Kim, 2014-01-06, 1 file, -19/+3)

    In punch_hole(), let's convert inline_data all the time, for simplicity
    and to avoid potential deadlock conditions. It is not a big deal to do
    this.

    Reviewed-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: don't need to get f2fs_lock_op for the inline_data test (Jaegeuk Kim, 2013-12-27, 1 file, -5/+4)

    This patch moves the inline_data check ahead of the f2fs_lock_op() call in
    truncate_blocks(), since taking the lock for it is unnecessary.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: handle inline data operations (Huajun Li, 2013-12-26, 2 files, -8/+86)

    Hook inline data read/write, truncate, fallocate, setattr, etc.

    Files need to meet two requirements to be inlined:
     1) the file size is not greater than MAX_INLINE_DATA;
     2) the file doesn't pre-allocate data blocks via fallocate().

    FI_INLINE_DATA will not be set while creating a new regular inode, because
    most files are bigger than ~3.4K. Setting FI_INLINE_DATA only when data is
    submitted to the block layer, rather than while creating a new inode, also
    avoids converting data from inline to a normal data block and vice versa.

    While writing inline data to the inode block, the first data block should
    be released if the file has a block indexed by i_addr[0].

    On the other hand, when a file operation is applied to a file with inline
    data, we need to test whether this file can remain inline after the
    operation; otherwise it should be converted into a normal file by
    reserving a new data block, copying the inline data to this new block, and
    clearing the FI_INLINE_DATA flag. Because reserving a new data block here
    makes use of i_addr[0], saving inline data in i_addr[0..872] would let the
    first 4 bytes be overwritten. This problem is avoided simply by not using
    i_addr[0] for inline data.

    Signed-off-by: Huajun Li <huajun.li@intel.com>
    Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
    Signed-off-by: Weihong Xu <weihong.xu@intel.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: key functions to handle inline data (Huajun Li, 2013-12-26, 3 files, -1/+185)

    Functions to implement inline data read/write, and to move inline data to
    a normal data block when the file size exceeds the inline data limit.

    Signed-off-by: Huajun Li <huajun.li@intel.com>
    Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
    Signed-off-by: Weihong Xu <weihong.xu@intel.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: convert max_orphans to a field of f2fs_sb_info (Gu Zheng, 2013-12-26, 2 files, -10/+11)

    Previously, we needed to calculate the maximum orphan count whenever we
    tried to acquire an orphan inode, but it is a stable value once the super
    block has been initialized. So converting it to a field of f2fs_sb_info
    and using it directly when needed seems the better choice.

    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: check the blocksize before calling generic_direct_IO path (Jaegeuk Kim, 2013-12-26, 1 file, -0/+22)

    f2fs supports a 4KB block size. If a user requests a direct write of less
    than 4KB of data, f2fs allocates a new 4KB data block; however, it doesn't
    write zeroes into the untouched area inside the newly allocated block.
    This causes a failure in xfstest #263 as follows:

      263 12s ... [failed, exit status 1] - output mismatch (see 263.out.bad)
          --- 263.out     2013-03-09 03:37:15.043967603 +0900
          +++ 263.out.bad 2013-12-27 04:20:39.230203114 +0900
          @@ -1,3 +1,976 @@
           QA output created by 263
           fsx -N 10000 -o 8192 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z
          -fsx -N 10000 -o 128000 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z
          +fsx -N 10000 -o 8192 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z
          +truncating to largest ever: 0x12a00
          +truncating to largest ever: 0x75400
          +fallocating to largest ever: 0x79cbf
          ...
          (Run 'diff -u 263.out 263.out.bad' to see the entire diff)
      Ran: 263
      Failures: 263
      Failed 1 of 1 tests

    It turns out that when the test tries to write 2KB of data with dio, the
    new dio path allocates a 4KB data block without zero-filling the remaining
    2KB area. As a result, the output file contains garbage data in that
    region.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
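    For illustration, a minimal userspace sketch of the triggering pattern, a
    2KB O_DIRECT write on a filesystem with 4KB blocks; the path and sizes are
    assumptions:

        /* Hedged sketch: issue a 2KB write with O_DIRECT, smaller than the 4KB block size. */
        #define _GNU_SOURCE             /* for O_DIRECT */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            void *buf;
            int fd = open("/mnt/f2fs/dio_test", O_WRONLY | O_CREAT | O_DIRECT, 0644); /* path assumed */

            if (fd < 0 || posix_memalign(&buf, 4096, 2048)) {   /* O_DIRECT needs aligned buffers */
                perror("setup");
                return 1;
            }
            memset(buf, 0xab, 2048);
            if (pwrite(fd, buf, 2048, 0) != 2048)               /* sub-block direct write */
                perror("pwrite");
            free(buf);
            close(fd);
            return 0;
        }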
| * | | f2fs: should put the dnode when NEW_ADDR is detected (Jaegeuk Kim, 2013-12-26, 1 file, -2/+7)

    When get_dnode_of_data() in get_data_block() returns a successful dnode,
    we should put the dnode. But previously, if its data block address was
    equal to NEW_ADDR, we didn't do that, resulting in a deadlock condition.
    So this patch splits the original error conditions from this case, and
    then calls f2fs_put_dnode before finishing the function.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: introduce F2FS_INODE macro to get f2fs_inode (Jaegeuk Kim, 2013-12-26, 5 files, -29/+28)

    This patch introduces F2FS_INODE, which returns a struct f2fs_inode * from
    the inode page. By using this macro, we can remove unnecessary casting
    code like the following:

      struct f2fs_inode *ri = &F2FS_NODE(inode_page)->i;
        ->
      struct f2fs_inode *ri = F2FS_INODE(inode_page);

    Reviewed-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: check filename length in recover_dentry (Chao Yu, 2013-12-26, 2 files, -3/+6)

    In the current flow, f2fs_find_entry returns NULL in recover_dentry when
    name.len is bigger than F2FS_NAME_LEN, yet we still add this inode into
    its dir entry. To avoid this situation, we must check the filename length
    before we use it.

    Another point is that we can remove the filename-length check in
    f2fs_find_entry, because f2fs_lookup is called beforehand to ensure the
    validity of the filename length.

    V2:
     o add WARN_ON() as Jaegeuk Kim suggested.

    Signed-off-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: avoid to set wrong pino of inode when rename dir (Chao Yu, 2013-12-23, 2 files, -3/+2)

    When we rename a directory to a new name that didn't previously exist, we
    set the pino of the parent inode to the ino of the child inode in
    f2fs_set_link. This destroys the consistency of pino and should be fixed.

    Thanks to Shu Tan for the previous work on this.

    Signed-off-by: Shu Tan <shu.tan@samsung.com>
    Signed-off-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: update several comments (Chao Yu, 2013-12-23, 5 files, -14/+22)

    Update several comments:
     1. use f2fs_{un}lock_op instead of mutex_{un}lock_op.
     2. update the comment of get_data_block().
     3. update the description of node offset.

    Signed-off-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: remove the rw_flag domain from f2fs_io_info (Gu Zheng, 2013-12-23, 5 files, -16/+7)

    When using f2fs_io_info in the lower level, we still need to merge rw and
    rw_flag, so use rw to hold all the IO flags directly and remove the
    rw_flag field.

    ps. It is based on the previous patch:
        f2fs: move all the bio initialization into __bio_alloc

    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: move all the bio initialization into __bio_alloc (Gu Zheng, 2013-12-23, 1 file, -50/+42)

    Move all the bio initialization into __bio_alloc, and some minor cleanups
    are also added.

    v3: Use 'bool' rather than 'int' as Kim suggested.
    v2: Use 'is_read' rather than 'rw' as Yu Chao suggested.
        Remove the needless initialization of bio->bi_private.

    Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: write dirty meta pages collectively (Jaegeuk Kim, 2013-12-23, 1 file, -3/+4)

    This patch enhances writing dirty meta pages collectively in the
    background. During file data writes, it's better to avoid writing small
    dirty meta pages frequently, so let's give them a chance to accumulate for
    a while.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: introduce a new direct_IO write path (Jaegeuk Kim, 2013-12-23, 3 files, -48/+129)

    Previously, f2fs didn't support direct IO with high performance: it pushed
    every write request through the buffered write path, resulting in severe
    performance degradation due to memory operations like copy_from_user.

    This patch introduces a new direct IO path in which every write request is
    processed by generic blockdev_direct_IO() with an enhanced get_block
    function.

    get_data_block() in f2fs handles:
     1. if the original data blocks are already allocated, give them to
        blockdev.
     2. otherwise,
        a. preallocate the requested block addresses
        b. do not use the extent cache, for better performance
        c. give the block addresses to blockdev

    This policy means that:
     - newly allocated data are written sequentially to the disk
     - updated data are written randomly to the disk
     - f2fs provides consistency for its file metadata, not file data

    Reviewed-by: Chao Yu <chao2.yu@samsung.com>
    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
| * | | f2fs: introduce sysfs entry to control in-place-update policy (Jaegeuk Kim, 2013-12-23, 4 files, -6/+47)

    This patch introduces new sysfs entries for users to control the policy of
    in-place-updates, namely IPU, in f2fs.

    Sometimes f2fs suffers from performance degradation due to its
    out-of-place update policy, which produces many additional node block
    writes. If the storage performance is very dependent on the amount of data
    written rather than on the IO patterns, we'd better drop this out-of-place
    update policy.

    This patch suggests 5 policies and their triggering conditions as follows:

      [sysfs entry name = ipu_policy]
       0: F2FS_IPU_FORCE      all the time
       1: F2FS_IPU_SSR        if SSR mode is activated
       2: F2FS_IPU_UTIL       if FS utilization is over the threshold
       3: F2FS_IPU_SSR_UTIL   if SSR mode is activated and FS utilization is
                              over the threshold
       4: F2FS_IPU_DISABLE    disable IPU (= default option)

      [sysfs entry name = min_ipu_util]
      This parameter controls the threshold to trigger in-place-updates. The
      number indicates a percentage of filesystem utilization and is used by
      the F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL policies.

    For more details, see need_inplace_update() in segment.h.

    Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
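    For illustration, a minimal userspace sketch of selecting one of the
    policies above; the device name "sdb1", policy value 2, and threshold 70
    are assumptions:

        /* Hedged sketch: pick F2FS_IPU_UTIL (2) and a 70% utilization threshold via sysfs. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static int write_knob(const char *path, const char *val)
        {
            int fd = open(path, O_WRONLY);

            if (fd < 0 || write(fd, val, strlen(val)) < 0) {
                perror(path);
                if (fd >= 0)
                    close(fd);
                return -1;
            }
            close(fd);
            return 0;
        }

        int main(void)
        {
            if (write_knob("/sys/fs/f2fs/sdb1/ipu_policy", "2\n"))      /* device name assumed */
                return 1;
            return write_knob("/sys/fs/f2fs/sdb1/min_ipu_util", "70\n") ? 1 : 0;
        }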