path: root/lib
* memblock: remove _virt from APIs returning virtual address
  Mike Rapoport, 2018-10-31 (1 file, -1/+1)

  The conversion is done using

      sed -i 's@memblock_virt_alloc@memblock_alloc@g' \
          $(git grep -l memblock_virt_alloc)

  Link: http://lkml.kernel.org/r/1536927045-23536-8-git-send-email-rppt@linux.vnet.ibm.com
  Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
  Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@suse.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
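  Because the conversion is a pure textual rename, a call site changes only in
  spelling; the surrounding code below is a hypothetical early-boot caller, not
  taken from the patch:

      /* before */
      table = memblock_virt_alloc(tbl_size, PAGE_SIZE);
      /* after: same arguments, same virtual-address return */
      table = memblock_alloc(tbl_size, PAGE_SIZE);
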
* mm: remove CONFIG_HAVE_MEMBLOCK
  Mike Rapoport, 2018-10-31 (1 file, -2/+1)

  All architectures use memblock for early memory management. There is no need
  for the CONFIG_HAVE_MEMBLOCK configuration option.

  [rppt@linux.vnet.ibm.com: of/fdt: fixup #ifdefs]
    Link: http://lkml.kernel.org/r/20180919103457.GA20545@rapoport-lnx
  [rppt@linux.vnet.ibm.com: csky: fixups after bootmem removal]
    Link: http://lkml.kernel.org/r/20180926112744.GC4628@rapoport-lnx
  [rppt@linux.vnet.ibm.com: remove stale #else and the code it protects]
    Link: http://lkml.kernel.org/r/1538067825-24835-1-git-send-email-rppt@linux.vnet.ibm.com
  Link: http://lkml.kernel.org/r/1536927045-23536-4-git-send-email-rppt@linux.vnet.ibm.com
  Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
  Acked-by: Michal Hocko <mhocko@suse.com>
  Tested-by: Jonathan Cameron <jonathan.cameron@huawei.com>
  Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lib/lz4: update LZ4 decompressor module
  Gao Xiang, 2018-10-31 (2 files, -141/+349)

  Update the LZ4 compression module to LZ4 v1.8.3 so that the erofs file system
  can use the newest LZ4_decompress_safe_partial(), which can now decode exactly
  the number of bytes requested [1], replacing the open-coded hack in the erofs
  file system itself. Currently, apart from the erofs file system, no other
  users call LZ4_decompress_safe_partial, so there is no concern about the
  interface change.

  In addition, LZ4 v1.8.x decompresses faster than the current code, which is
  based on LZ4 v1.7.3, mainly due to a shortcut optimization for common
  LZ4 sequences [2].

  lzbench test data (tested on kirin710, 8 cores, 4 big cores at 2189 MHz,
  2 GB DDR RAM at 1622 MHz, with the enwik8 test data [3]):

      Compressor name      Compress.  Decompress.  Compr. size   Ratio  Filename
      memcpy               5004 MB/s  4924 MB/s    100000000    100.00  enwik8
      lz4hc 1.7.3 -9         12 MB/s   653 MB/s     42203253     42.20  enwik8
      lz4hc 1.8.0 -9         12 MB/s   908 MB/s     42203096     42.20  enwik8
      lz4hc 1.8.3 -9         11 MB/s   965 MB/s     42203094     42.20  enwik8

  [1] https://github.com/lz4/lz4/issues/566
      https://github.com/lz4/lz4/commit/08d347b5b217b011ff7487130b79480d8cfdaeb8
  [2] v1.8.1 perf: slightly faster compression and decompression speed
      https://github.com/lz4/lz4/commit/a31b7058cb97e4393da55e78a77a1c6f0c9ae038
      v1.8.2 perf: slightly faster HC compression and decompression speed
      https://github.com/lz4/lz4/commit/45f8603aae389d34c689d3ff7427b314071ccd2c
      https://github.com/lz4/lz4/commit/1a191b3f8d26b50a7c1d41590b529ec308d768cd
  [3] http://mattmahoney.net/dc/textdata.html
      http://mattmahoney.net/dc/enwik8.zip

  Link: http://lkml.kernel.org/r/1537181207-21932-1-git-send-email-gaoxiang25@huawei.com
  Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
  Tested-by: Guo Xuenan <guoxuenan@huawei.com>
  Cc: Colin Ian King <colin.king@canonical.com>
  Cc: Yann Collet <yann.collet.73@gmail.com>
  Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Cc: Fang Wei <fangwei1@huawei.com>
  Cc: Chao Yu <yuchao0@huawei.com>
  Cc: Miao Xie <miaoxie@huawei.com>
  Cc: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
  Cc: Kyungsik Lee <kyungsik.lee@lge.com>
  Cc: <weidu.du@huawei.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
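  A hedged sketch of the partial-decode call erofs relies on; the wrapper and
  variable names are hypothetical and the parameter naming follows upstream
  LZ4 v1.8.3, so treat the exact signature details as an assumption:

      /* Decode only out_len bytes of a compressed block. */
      static int decompress_partial(const char *src, int src_len,
                                    char *out, int out_len, int out_cap)
      {
              int ret = LZ4_decompress_safe_partial(src, out, src_len,
                                                    out_len, out_cap);

              return ret < 0 ? -EIO : ret;   /* ret = bytes actually produced */
      }
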
* lib/kstrtox.c: delete unnecessary casts
  Alexey Dobriyan, 2018-10-31 (1 file, -8/+8)

  Implicit casts to the same type are done by the language if necessary.

  Link: http://lkml.kernel.org/r/20181014223934.GA18107@avx2
  Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lib/sg_pool.c: remove unnecessary null check when freeing object
  zhong jiang, 2018-10-31 (1 file, -4/+3)

  mempool_destroy(NULL) and kmem_cache_destroy(NULL) are legal.

  Link: http://lkml.kernel.org/r/1533054107-35657-1-git-send-email-zhongjiang@huawei.com
  Signed-off-by: zhong jiang <zhongjiang@huawei.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
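  The general shape of such a cleanup looks roughly like the sketch below; the
  structure and field names are hypothetical, not the literal sg_pool hunk:

      /* before */
      if (sgp->pool)
              mempool_destroy(sgp->pool);

      /* after: both destructors simply return when passed NULL */
      mempool_destroy(sgp->pool);
      kmem_cache_destroy(sgp->slab);
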
* lib/zlib_inflate/inflate.c: remove fall through warnings
  Corentin Labbe, 2018-10-31 (1 file, -0/+12)

  This patch removes all of the following fall-through warnings by adding
  /* fall through */ markers. Note that we cannot add
  "__attribute__ ((fallthrough));" because it is GCC 7 only.

      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:384:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:391:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:393:16: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:430:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:556:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:595:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:602:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:627:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:646:25: warning: this statement may fall through [-Wimplicit-fallthrough=]
      arch/arm/boot/compressed/../../../../lib/zlib_inflate/inflate.c:696:25: warning: this statement may fall through [-Wimplicit-fallthrough=]

  It is easy to see that those fall-throughs are needed, since in each case
  state->mode is set to the case value just below.

  Link: http://lkml.kernel.org/r/1536215920-19955-1-git-send-email-clabbe@baylibre.com
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Cc: David Miller <davem@davemloft.net>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
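  The marker pattern is schematically the following (hypothetical state names,
  not the literal inflate.c code); GCC's -Wimplicit-fallthrough accepts such
  comments, whereas the attribute spelling would have required GCC 7:

      switch (state->mode) {
      case STATE_A:
              count = 0;
              state->mode = STATE_B;   /* next state is the very next case */
              /* fall through */
      case STATE_B:
              count++;
              break;
      }
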
* lib/parser.c: switch match_number() over to use match_strdup()
  Eric Biggers, 2018-10-31 (1 file, -4/+1)

  This simplifies the code. No change in behavior.

  Link: http://lkml.kernel.org/r/20180830194727.191555-1-ebiggers@kernel.org
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lib/parser.c: switch match_u64int() over to use match_strdup()
  Eric Biggers, 2018-10-31 (1 file, -4/+1)

  This simplifies the code. No change in behavior.

  Link: http://lkml.kernel.org/r/20180830194814.192880-1-ebiggers@kernel.org
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lib/parser.c: switch match_strdup() over to use kmemdup_nul()
  Eric Biggers, 2018-10-31 (1 file, -5/+1)

  This simplifies the code. No change in behavior.

  Link: http://lkml.kernel.org/r/20180830194436.188867-1-ebiggers@kernel.org
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
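  Given that a substring_t is just a [from, to) range, the simplified helper
  plausibly reduces to a one-liner along these lines (the exact body is an
  assumption, hedged from the subject line rather than quoted from the patch):

      char *match_strdup(const substring_t *s)
      {
              return kmemdup_nul(s->from, s->to - s->from, GFP_KERNEL);
      }
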
* lib/bitmap.c: simplify bitmap_print_to_pagebuf()
  Rasmus Villemoes, 2018-10-31 (1 file, -5/+2)

  len is guaranteed to lie in [1, PAGE_SIZE]. If scnprintf is called with a
  buffer size of 1, it is guaranteed to return 0. So in the extremely unlikely
  case of having just one byte remaining in the page, let's just call scnprintf
  anyway. The only difference is that this will write a '\0' to that final byte
  in the page, but that's an improvement: We now guarantee that after the call,
  buf is a properly terminated C string of length exactly the return value.

  Link: http://lkml.kernel.org/r/20180818131623.8755-8-linux@rasmusvillemoes.dk
  Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
  Cc: Yury Norov <ynorov@caviumnetworks.com>
  Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Cc: Sudeep Holla <sudeep.holla@arm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* lib/bitmap.c: fix remaining space computation in bitmap_print_to_pagebuf
  Rasmus Villemoes, 2018-10-31 (1 file, -4/+6)

  For various alignments of buf, the current expression computes

      4096  ok
      4095  ok
      8190
      8189
      ...
      4097

  i.e., if the caller has already written two bytes into the page buffer, len is
  8190 rather than 4094, because PTR_ALIGN aligns up to the next boundary. So if
  the printed version of the bitmap is huge, scnprintf() ends up writing beyond
  the page boundary.

  I don't think any current callers actually write anything before
  bitmap_print_to_pagebuf, but the API seems to be designed to allow it.

  [akpm@linux-foundation.org: use offset_in_page(), per Andy]
  [akpm@linux-foundation.org: include mm.h for offset_in_page()]
  Link: http://lkml.kernel.org/r/20180818131623.8755-7-linux@rasmusvillemoes.dk
  Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
  Cc: Yury Norov <ynorov@caviumnetworks.com>
  Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Cc: Sudeep Holla <sudeep.holla@arm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
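  Per the offset_in_page() note above, the corrected computation plausibly has
  the shape sketched below (variable names hypothetical); unlike a round-up, it
  never exceeds the end of the page containing buf:

      /* space remaining in the page that buf points into */
      len = PAGE_SIZE - offset_in_page(buf);   /* 4094 when 2 bytes are already used */
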
* lib/bitmap.c: remove wrong documentation
  Rasmus Villemoes, 2018-10-31 (1 file, -5/+0)

  This promise is violated in a number of places, e.g. already in the second
  function below this paragraph. Since I don't think anybody relies on this
  being true, and since actually honouring it would hurt performance and code
  size in various places, just remove the paragraph.

  Link: http://lkml.kernel.org/r/20180818131623.8755-2-linux@rasmusvillemoes.dk
  Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
  Cc: Yury Norov <ynorov@caviumnetworks.com>
  Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Cc: Sudeep Holla <sudeep.holla@arm.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge tag 'vla-v4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
  Linus Torvalds, 2018-10-28 (1 file, -0/+2)

  Pull VLA removal from Kees Cook:
   "Globally warn on VLA use. This turns on "-Wvla" globally now that the last
    few trees with their VLA removals have landed (crypto, block, net, and
    powerpc).

    Arnd mentioned that there may be a couple more VLAs hiding in hard-to-find
    randconfigs, but nothing big has shaken out in the last month or so in
    linux-next. We should be basically VLA-free now! Wheee. :)

    Summary:

     - Remove unused fallback for BUILD_BUG_ON (which technically contains a VLA)

     - Lift -Wvla to the top-level Makefile"

  * tag 'vla-v4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
      Makefile: Globally enable VLA warning
      compiler.h: give up __compiletime_assert_fallback()
| * Makefile: Globally enable VLA warning
    Kees Cook, 2018-10-11 (1 file, -0/+2)

    Now that Variable Length Arrays (VLAs) have been entirely removed[1] from
    the kernel, enable the VLA warning globally. The only exceptions to this
    are the KASan and UBSan tests, which are explicitly checking that VLAs
    trigger their respective tests.

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: David Airlie <airlied@linux.ie>
    Cc: linux-kbuild@vger.kernel.org
    Cc: intel-gfx@lists.freedesktop.org
    Cc: dri-devel@lists.freedesktop.org
    Signed-off-by: Kees Cook <keescook@chromium.org>
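    For reference, this is the kind of construct -Wvla now flags tree-wide; a
    contrived example, not code from this patch, and the exact warning wording
    depends on the compiler:

        static void demo(int n)
        {
                char buf[n];            /* VLA: flagged by -Wvla */

                memset(buf, 0, n);
        }
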
* | Merge branch 'xarray' of git://git.infradead.org/users/willy/linux-dax
    Linus Torvalds, 2018-10-28 (7 files, -942/+3578)

    Pull XArray conversion from Matthew Wilcox:
     "The XArray provides an improved interface to the radix tree data
      structure, providing locking as part of the API, specifying GFP flags at
      allocation time, eliminating preloading, less re-walking the tree, more
      efficient iterations and not exposing RCU-protected pointers to its users.

      This patch set

       1. Introduces the XArray implementation
       2. Converts the pagecache to use it
       3. Converts memremap to use it

      The page cache is the most complex and important user of the radix tree,
      so converting it was most important. Converting the memremap code removes
      the only other user of the multiorder code, which allows us to remove the
      radix tree code that supported it.

      I have 40+ followup patches to convert many other users of the radix tree
      over to the XArray, but I'd like to get this part in first. The other
      conversions haven't been in linux-next and aren't suitable for applying
      yet, but you can see them in the xarray-conv branch if you're interested"

    * 'xarray' of git://git.infradead.org/users/willy/linux-dax: (90 commits)
        radix tree: Remove multiorder support
        radix tree test: Convert multiorder tests to XArray
        radix tree tests: Convert item_delete_rcu to XArray
        radix tree tests: Convert item_kill_tree to XArray
        radix tree tests: Move item_insert_order
        radix tree test suite: Remove multiorder benchmarking
        radix tree test suite: Remove __item_insert
        memremap: Convert to XArray
        xarray: Add range store functionality
        xarray: Move multiorder_check to in-kernel tests
        xarray: Move multiorder_shrink to kernel tests
        xarray: Move multiorder account test in-kernel
        radix tree test suite: Convert iteration test to XArray
        radix tree test suite: Convert tag_tagged_items to XArray
        radix tree: Remove radix_tree_clear_tags
        radix tree: Remove radix_tree_maybe_preload_order
        radix tree: Remove split/join code
        radix tree: Remove radix_tree_update_node_t
        page cache: Finish XArray conversion
        dax: Convert page fault handlers to XArray
        ...
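    A minimal sketch of the interface described above, with locking handled by
    the API and GFP flags passed at store time; the xarray name and the stored
    type are hypothetical, not taken from any of the commits in this series:

        #include <linux/xarray.h>

        static DEFINE_XARRAY(pages);            /* lock lives inside the xarray */

        static int cache_page(unsigned long index, struct page *page)
        {
                /* xa_store() takes the xa_lock itself and allocates as needed. */
                return xa_err(xa_store(&pages, index, page, GFP_KERNEL));
        }

        static struct page *lookup_page(unsigned long index)
        {
                return xa_load(&pages, index);  /* RCU-safe, no caller locking */
        }
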
| * | radix tree: Remove multiorder support
      Matthew Wilcox, 2018-10-21 (2 files, -206/+13)

      All users have now been converted to the XArray. Removing the support
      reduces code size and ensures new users will use the XArray instead.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Add range store functionality
      Matthew Wilcox, 2018-10-21 (2 files, -2/+129)

      This version of xa_store_range() really only supports load and store. Our
      only user only needs basic load and store functionality, so there's no
      need to do the extra work to support marking and overlapping stores
      correctly yet.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Move multiorder_check to in-kernel tests
      Matthew Wilcox, 2018-10-21 (1 file, -0/+44)

      This version is a little less thorough in order to be a little quicker,
      but tests the important edge cases. Also test adding a multiorder entry
      at a non-canonical index, and erasing it.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Move multiorder_shrink to kernel tests
      Matthew Wilcox, 2018-10-21 (1 file, -0/+37)

      Test this functionality inside the kernel as well as in userspace. Also
      remove insert_bug() as there's no comparable thing to test in the XArray
      code.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Move multiorder account test in-kernel
      Matthew Wilcox, 2018-10-21 (1 file, -0/+32)

      Move this test to the in-kernel test suite, and enhance it to test
      several different orders.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | radix tree test suite: Convert tag_tagged_items to XArray
      Matthew Wilcox, 2018-10-21 (1 file, -12/+0)

      The tag_tagged_items() function is supposed to test the page-writeback
      tagging code. Since that has been converted to the XArray, there's not
      much point in testing the radix tree's tagging code. This requires using
      the pthread mutex embedded in the xarray instead of an external lock, so
      remove the pthread mutexes which protect xarrays/radix trees. Also remove
      radix_tree_iter_tag_set() as this was the last user.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | radix tree: Remove radix_tree_clear_tags
      Matthew Wilcox, 2018-10-21 (2 files, -13/+40)

      The page cache was the only user of this interface and it has now been
      converted to the XArray. Transform the test into a test of
      xas_init_marks().

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | radix tree: Remove radix_tree_maybe_preload_order
      Matthew Wilcox, 2018-10-21 (1 file, -74/+0)

      This function was only used by the page cache, which is now converted to
      the XArray.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | radix tree: Remove split/join code
      Matthew Wilcox, 2018-10-21 (1 file, -169/+2)

      radix_tree_split and radix_tree_join were never used upstream. Remove
      them; if they're needed in future they will be replaced by XArray
      equivalents.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | radix tree: Remove radix_tree_update_node_t
      Matthew Wilcox, 2018-10-21 (2 files, -35/+9)

      The only user of this functionality was the workingset code, and it's now
      been converted to the XArray. Remove __radix_tree_delete_node() entirely
      as it was also only used by the workingset code.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | shmem: Convert shmem_alloc_hugepage to XArray
      Matthew Wilcox, 2018-10-21 (1 file, -43/+1)

      xa_find() is a slightly easier API to use than
      radix_tree_gang_lookup_slot() because it contains its own RCU locking.
      This commit removes the last user of radix_tree_gang_lookup_slot(), so
      remove the function too.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | shmem: Convert find_swap_entry to XArray
      Matthew Wilcox, 2018-10-21 (1 file, -0/+56)

      This is a 1:1 conversion. The major part of this patch is converting the
      test framework from userspace to kernel space and mirroring the algorithm
      now used in find_swap_entry().

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | mm: Convert workingset to XArray
      Matthew Wilcox, 2018-10-21 (1 file, -0/+65)

      We construct an XA_STATE and use it to delete the node with xas_store()
      rather than adding a special function for this unique use case. Includes
      a test that simulates this usage for the test suite.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | page cache: Add and replace pages using the XArray
      Matthew Wilcox, 2018-10-21 (1 file, -3/+3)

      Use the XArray APIs to add and replace pages in the page cache. This
      removes two uses of the radix tree preload API and is significantly
      shorter code. It also removes the last user of __radix_tree_create()
      outside radix-tree.c itself, so make it static.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | ida: Convert to XArray
      Matthew Wilcox, 2018-10-21 (2 files, -247/+203)

      Use the XA_TRACK_FREE ability to track which entries have a free bit,
      similarly to how it uses the radix tree's IDR_FREE tag. This eliminates
      the per-cpu ida_bitmap preload, and fixes the memory consumption
      regression I introduced when making the IDR able to store any pointer.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Track free entries in an XArray
      Matthew Wilcox, 2018-10-21 (2 files, -4/+145)

      Add the optional ability to track which entries in an XArray are free and
      provide xa_alloc() to replace most of the functionality of the IDR.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Add xa_reserve and xa_release
      Matthew Wilcox, 2018-10-21 (2 files, -0/+87)

      This function reserves a slot in the XArray for users which need to
      acquire multiple locks before storing their entry in the tree and so
      cannot use a plain xa_store().

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Add xas_create_range
      Matthew Wilcox, 2018-10-21 (2 files, -0/+169)

      This hopefully temporary function is useful for users who have not yet
      been converted to multi-index entries.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Add xas_for_each_conflict
      Matthew Wilcox, 2018-10-21 (2 files, -0/+129)

      This iterator iterates over each entry that is stored in the index or
      indices specified by the xa_state. This is intended for use for a
      conditional store of a multiindex entry, or to allow entries which are
      about to be removed from the xarray to be disposed of properly.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Step through an XArray
      Matthew Wilcox, 2018-10-21 (2 files, -0/+189)

      The xas_next and xas_prev functions move the xas index by one position,
      and adjust the rest of the iterator state to match it. This is more
      efficient than calling xas_set() as it keeps the iterator at the leaves
      of the tree instead of walking the iterator from the root each time.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Destroy an XArray
      Matthew Wilcox, 2018-10-21 (2 files, -0/+62)

      This function frees all the internal memory allocated to the xarray and
      reinitialises it to be empty.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Extract entries from an XArray
      Matthew Wilcox, 2018-10-21 (1 file, -0/+80)

      The xa_extract function combines the functionality of
      radix_tree_gang_lookup() and radix_tree_gang_lookup_tagged(). It extracts
      entries matching the specified filter into a normal array.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Add XArray iterators
      Matthew Wilcox, 2018-10-21 (2 files, -0/+475)

      The xa_for_each iterator allows the user to efficiently walk a range of
      the array, executing the loop body once for each entry in that range that
      matches the filter. This commit also includes xa_find() and
      xa_find_after(), which are helper functions for xa_for_each() but may
      also be useful in their own right.

      In the xas family of functions, we have xas_for_each(), xas_find(),
      xas_next_entry(), xas_for_each_tagged(), xas_find_tagged(),
      xas_next_tagged() and xas_pause().

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Add XArray conditional store operations
      Matthew Wilcox, 2018-10-21 (2 files, -0/+91)

      Like cmpxchg(), xa_cmpxchg will only store to the index if the current
      entry matches the old entry. It returns the current entry, which is
      usually more useful than the errno returned by radix_tree_insert(). For
      the users who really only want the errno, the xa_insert() wrapper
      provides a more convenient calling convention.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
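      A hedged sketch of the conditional store: as with cmpxchg(), success is
      checked by comparing the returned entry against the expected old value.
      The function and variable names here are hypothetical:

          static int publish_once(struct xarray *xa, unsigned long index, void *item)
          {
                  void *curr;

                  /* Store only if the slot still holds NULL; returns what was there. */
                  curr = xa_cmpxchg(xa, index, NULL, item, GFP_KERNEL);
                  if (xa_is_err(curr))
                          return xa_err(curr);        /* e.g. allocation failure */

                  return curr == NULL ? 0 : -EBUSY;   /* someone else got there first */
          }

      Callers who only care about the errno can use the xa_insert() wrapper
      instead, which fails if the index is already occupied.
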
| * | xarray: Add XArray unconditional store operations
      Matthew Wilcox, 2018-10-21 (3 files, -6/+868)

      xa_store() differs from radix_tree_insert() in that it will overwrite an
      existing element in the array rather than returning an error. This is the
      behaviour which most users want, and those that want more complex
      behaviour generally want to use the xas family of routines anyway.

      For memory allocation, xa_store() will first attempt to request memory
      from the slab allocator; if memory is not immediately available, it will
      drop the xa_lock and allocate memory, keeping a pointer in the xa_state.
      It does not use the per-CPU cache, although those will continue to exist
      until all radix tree users are converted to the xarray.

      This patch also includes xa_erase() and __xa_erase() for a streamlined
      way to store NULL. Since there is no need to allocate memory in order to
      store a NULL in the XArray, we do not need to trouble the user with
      deciding what memory allocation flags to use.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
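      The two unconditional operations in sketch form (hypothetical xarray and
      entry names): storing over an existing entry succeeds and hands back the
      old entry, and storing NULL is spelled xa_erase() with no GFP argument:

          static int replace_item(struct xarray *xa, unsigned long index, void *item)
          {
                  void *old;

                  old = xa_store(xa, index, item, GFP_KERNEL);   /* overwrites silently */
                  if (xa_is_err(old))
                          return xa_err(old);                    /* e.g. -ENOMEM */

                  return 0;
          }

          /* Storing NULL never allocates, so no flags are needed: */
          xa_erase(xa, index);
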
| * | xarray: Add XArray marks
      Matthew Wilcox, 2018-10-21 (2 files, -2/+264)

      XArray marks are like the radix tree tags, only slightly more strongly
      typed. They are renamed in order to distinguish them from tagged
      pointers. This commit adds the basic get/set/clear operations.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Add XArray load operation
      Matthew Wilcox, 2018-10-21 (5 files, -43/+286)

      The xa_load function brings with it a lot of infrastructure; xa_empty(),
      xa_is_err(), and large chunks of the XArray advanced API that are used to
      implement xa_load.

      As the test-suite demonstrates, it is possible to use the XArray
      functions on a radix tree. The radix tree functions depend on the GFP
      flags being stored in the root of the tree, so it's not possible to use
      the radix tree functions on an XArray.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
| * | xarray: Define struct xa_node
      Matthew Wilcox, 2018-10-21 (1 file, -24/+24)

      This is a direct replacement for struct radix_tree_node. A couple of
      struct members have changed name, so convert those. Use a #define so that
      radix tree users continue to work without change.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
| * | xarray: Add definition of struct xarray
      Matthew Wilcox, 2018-10-21 (4 files, -41/+84)

      This is a direct replacement for struct radix_tree_root. Some of the
      struct members have changed name; convert those, and use a #define so
      that radix_tree users continue to work without change.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
| * | xarray: Change definition of sibling entries
      Matthew Wilcox, 2018-09-29 (2 files, -45/+26)

      Instead of storing a pointer to the slot containing the canonical entry,
      store the offset of the slot. Produces slightly more efficient code
      (~300 bytes) and simplifies the implementation.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
| * | xarray: Replace exceptional entries
      Matthew Wilcox, 2018-09-29 (2 files, -47/+34)

      Introduce xarray value entries and tagged pointers to replace radix tree
      exceptional entries. This is a slight change in encoding to allow the use
      of an extra bit (we can now store BITS_PER_LONG - 1 bits in a value
      entry). It is also a change in emphasis; exceptional entries are
      intimidating and different. As the comment explains, you can choose to
      store values or pointers in the xarray and they are both first-class
      citizens.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
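      A small hedged sketch of storing a value entry alongside ordinary
      pointers (the stored integer and names are hypothetical):

          /* Encode a small integer instead of a pointer (BITS_PER_LONG - 1 bits). */
          xa_store(xa, index, xa_mk_value(tag), GFP_KERNEL);

          static unsigned long tag_or_zero(struct xarray *xa, unsigned long index)
          {
                  void *entry = xa_load(xa, index);

                  /* A value entry decodes back to the integer; otherwise it's a pointer. */
                  return xa_is_value(entry) ? xa_to_value(entry) : 0;
          }
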
| * | idr: Permit any valid kernel pointer to be stored
      Matthew Wilcox, 2018-09-29 (2 files, -10/+15)

      An upcoming change to the encoding of internal entries will set the
      bottom two bits to 0b10. Unfortunately, m68k only aligns some data
      structures to 2 bytes, so the IDR will interpret them as internal entries
      and things will go badly wrong.

      Change the radix tree so that it stops either when the node indicates
      that it's the bottom of the tree (shift == 0) or when the entry is not an
      internal entry. This means we cannot insert an arbitrary kernel pointer
      as a multiorder entry, but the IDR does not permit multiorder entries.

      Annoyingly, this means the IDR can no longer take advantage of the radix
      tree's ability to store a single entry at offset 0 without allocating
      memory. A pointer which is 2-byte aligned cannot be stored directly in
      the root as it would be indistinguishable from a node, so we must
      allocate a node in order to store a 2-byte pointer at index 0. The
      idr_replace() function does not take a GFP flags argument, so cannot
      allocate memory. If a user inserts a 4-byte aligned pointer at index 0
      and then replaces it with a 2-byte aligned pointer, we must be able to
      store it.

      Arbitrary pointer values are still not permitted; pointers of the form
      2 + (i * 4) for values of i between 0 and 1023 are reserved for the
      implementation. These are not valid kernel pointers as they would point
      into the zero page.

      This change does cause a runtime memory consumption regression for the
      IDA. I will recover that later.

      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
* | | Merge branch 'akpm' (patches from Andrew)
      Linus Torvalds, 2018-10-26 (1 file, -0/+70)

      Merge updates from Andrew Morton:

       - a few misc things
       - ocfs2 updates
       - most of MM

      * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (132 commits)
          hugetlbfs: dirty pages as they are added to pagecache
          mm: export add_swap_extent()
          mm: split SWP_FILE into SWP_ACTIVATED and SWP_FS
          tools/testing/selftests/vm/map_fixed_noreplace.c: add test for MAP_FIXED_NOREPLACE
          mm: thp: relocate flush_cache_range() in migrate_misplaced_transhuge_page()
          mm: thp: fix mmu_notifier in migrate_misplaced_transhuge_page()
          mm: thp: fix MADV_DONTNEED vs migrate_misplaced_transhuge_page race condition
          mm/kasan/quarantine.c: make quarantine_lock a raw_spinlock_t
          mm/gup: cache dev_pagemap while pinning pages
          Revert "x86/e820: put !E820_TYPE_RAM regions into memblock.reserved"
          mm: return zero_resv_unavail optimization
          mm: zero remaining unavailable struct pages
          tools/testing/selftests/vm/gup_benchmark.c: add MAP_HUGETLB option
          tools/testing/selftests/vm/gup_benchmark.c: add MAP_SHARED option
          tools/testing/selftests/vm/gup_benchmark.c: allow user specified file
          tools/testing/selftests/vm/gup_benchmark.c: fix 'write' flag usage
          mm/gup_benchmark.c: add additional pinning methods
          mm/gup_benchmark.c: time put_page()
          mm: don't raise MEMCG_OOM event due to failed high-order allocation
          mm/page-writeback.c: fix range_cyclic writeback vs writepages deadlock
          ...
| * | | lib/test_kasan.c: add tests for several string/memory API functions
        Andrey Ryabinin, 2018-10-26 (1 file, -0/+70)

        Arch code may have asm implementations of string/memory API functions
        instead of using the generic ones from lib/string.c. KASAN doesn't see
        memory accesses in asm code and thus can miss many bugs.

        E.g. on ARM64, KASAN doesn't see bugs in memchr(), memcmp(),
        str[r]chr(), str[n]cmp(), str[n]len(). Add tests for these functions to
        be sure that we notice the problem on other architectures.

        Link: http://lkml.kernel.org/r/20180920135631.23833-3-aryabinin@virtuozzo.com
        Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
        Cc: Alexander Potapenko <glider@google.com>
        Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
        Cc: Dmitry Vyukov <dvyukov@google.com>
        Cc: Kyeongdon Kim <kyeongdon.kim@lge.com>
        Cc: Mark Rutland <mark.rutland@arm.com>
        Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
        Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
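        In the style of the existing lib/test_kasan.c cases, each new test
        performs a deliberately out-of-bounds call so that a KASAN report (or
        its absence for an asm implementation) becomes visible; a
        representative sketch, not the exact hunk from the patch:

            static noinline void __init kasan_memchr(void)
            {
                    char *ptr;
                    size_t size = 24;

                    pr_info("out-of-bounds in memchr\n");
                    ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
                    if (!ptr)
                            return;

                    memchr(ptr, '1', size + 1);   /* reads one byte past the allocation */
                    kfree(ptr);
            }
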
* | | | Merge tag 'devicetree-for-4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
        Linus Torvalds, 2018-10-26 (1 file, -1/+6)

        Pull Devicetree updates from Rob Herring:
         "A bit bigger than normal as I've been busy this cycle. There's a few
          things with dependencies and a few things subsystem maintainers didn't
          pick up, so I'm taking them thru my tree.

          The fixes from Johan didn't get into linux-next, but they've been
          waiting for some time now and they are what's left of what subsystem
          maintainers didn't pick up.

          Summary:

           - Sync dtc with upstream version v1.4.7-14-gc86da84d30e4

           - Work to get rid of direct accesses to struct device_node name and
             type pointers in preparation for removing them. New helpers for
             parsing DT cpu nodes and conversions to use the helpers. printk
             conversions to %pOFn for printing DT node names. Most went thru
             subsystem trees, so this is the remainder.

           - Fixes to DT child node lookups to actually be restricted to child
             nodes instead of treewide.

           - Refactoring of dtb targets out of arch code. This makes the support
             more uniform and enables building all dtbs on c6x, microblaze, and
             powerpc.

           - Various DT binding updates for Renesas r8a7744 SoC

           - Vendor prefixes for Facebook, OLPC

           - Restructuring of some ARM binding docs moving some peripheral
             bindings out of board/SoC binding files

           - New "secure-chosen" binding for secure world settings on ARM

           - Dual licensing of 2 DT IRQ binding headers"

        * tag 'devicetree-for-4.20' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (78 commits)
            ARM: dt: relicense two DT binding IRQ headers
            power: supply: twl4030-charger: fix OF sibling-node lookup
            NFC: nfcmrvl_uart: fix OF child-node lookup
            net: stmmac: dwmac-sun8i: fix OF child-node lookup
            net: bcmgenet: fix OF child-node lookup
            drm/msm: fix OF child-node lookup
            drm/mediatek: fix OF sibling-node lookup
            of: Add missing exports of node name compare functions
            dt-bindings: Add OLPC vendor prefix
            dt-bindings: misc: bk4: Add device tree binding for Liebherr's BK4 SPI bus
            dt-bindings: thermal: samsung: Add SPDX license identifier
            dt-bindings: clock: samsung: Add SPDX license identifiers
            dt-bindings: timer: ostm: Add R7S9210 support
            dt-bindings: phy: rcar-gen2: Add r8a7744 support
            dt-bindings: can: rcar_can: Add r8a7744 support
            dt-bindings: timer: renesas, cmt: Document r8a7744 CMT support
            dt-bindings: watchdog: renesas-wdt: Document r8a7744 support
            dt-bindings: thermal: rcar: Add device tree support for r8a7744
            Documentation: dt: Add binding for /secure-chosen/stdout-path
            dt-bindings: arm: zte: Move sysctrl bindings to their own doc
            ...