From 800d8c63b2e989c2e349632d1648119bf5862f01 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov"
Date: Tue, 26 Jul 2016 15:26:18 -0700
Subject: shmem: add huge pages support

Here's a basic implementation of huge pages support for shmem/tmpfs.

It's all pretty straightforward:

 - shmem_getpage() allocates a huge page if it can and tries to insert
   it into the radix tree with shmem_add_to_page_cache();

 - shmem_add_to_page_cache() puts the page onto the radix-tree if
   there's space for it;

 - shmem_undo_range() removes huge pages if they are fully within the
   range.  A partial truncate of a huge page zeroes out that part of
   the THP.

   This has a visible effect on fallocate(FALLOC_FL_PUNCH_HOLE)
   behaviour.  As we don't really create a hole in this case,
   lseek(SEEK_HOLE) may have inconsistent results depending on what
   pages happened to be allocated.

 - no need to change shmem_fault(): core-mm will map a compound page
   as huge if the VMA is suitable;

Link: http://lkml.kernel.org/r/1466021202-61880-30-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 include/linux/shmem_fs.h | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'include/linux/shmem_fs.h')

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index ff2de4bab61f..94eaaa2c6ad9 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -71,6 +71,9 @@ static inline struct page *shmem_read_mapping_page(
 					mapping_gfp_mask(mapping));
 }
 
+extern bool shmem_charge(struct inode *inode, long pages);
+extern void shmem_uncharge(struct inode *inode, long pages);
+
 #ifdef CONFIG_TMPFS
 
 extern int shmem_add_seals(struct file *file, unsigned int seals);
--
cgit v1.2.1
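
Editorial note, not part of the patch: the behaviour described above can be
exercised from userspace with a small sketch like the one below.  It assumes
a tmpfs instance already mounted with the huge=always option (for example
"mount -t tmpfs -o huge=always tmpfs /mnt/huge-tmpfs"); the mount point,
file name, and 2 MiB size are illustrative assumptions, not something taken
from the patch.

    /*
     * Hedged userspace sketch: map a 2 MiB file on a huge=always tmpfs
     * and touch it so shmem_getpage() gets a chance to allocate a huge
     * page.  The path below is an assumption for the example.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define HPAGE_SIZE	(2UL << 20)	/* PMD-sized page on x86-64 */

    int main(void)
    {
    	const char *path = "/mnt/huge-tmpfs/testfile";	/* assumed mount */
    	char *p;
    	int fd;

    	fd = open(path, O_RDWR | O_CREAT, 0600);
    	if (fd < 0) {
    		perror("open");
    		return EXIT_FAILURE;
    	}
    	if (ftruncate(fd, HPAGE_SIZE)) {
    		perror("ftruncate");
    		return EXIT_FAILURE;
    	}

    	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    	if (p == MAP_FAILED) {
    		perror("mmap");
    		return EXIT_FAILURE;
    	}

    	/*
    	 * Touching the mapping faults in the shmem pages.  With a
    	 * suitably aligned VMA, shmem_fault() needs no changes:
    	 * core-mm maps the compound page as huge.  The result can be
    	 * observed in /proc/meminfo (ShmemHugePages) or
    	 * /proc/<pid>/smaps (ShmemPmdMapped).
    	 */
    	memset(p, 0xa5, HPAGE_SIZE);

    	munmap(p, HPAGE_SIZE);
    	close(fd);
    	unlink(path);
    	return EXIT_SUCCESS;
    }

A follow-up caveat from the commit message applies here: punching a hole
smaller than a huge page with fallocate(FALLOC_FL_PUNCH_HOLE) only zeroes
that part of the THP rather than creating a real hole, so lseek(SEEK_HOLE)
results over such a file may vary with how pages happened to be allocated.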