path: root/arch/arm/crypto
* crypto: arm/chacha20 - always use vrev for 16-bit rotates (Eric Biggers, 2018-08-03; 1 file, -6/+4)

  The 4-way ChaCha20 NEON code implements 16-bit rotates with vrev32.16, but the one-way code (used on remainder blocks) implements it with vshl + vsri, which is slower. Switch the one-way code to vrev32.16 too.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
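  The two idioms, as a sketch (register choice is illustrative, not taken from the patch); both rotate each 32-bit lane left by 16:

    vrev32.16  q0, q0          @ swap the 16-bit halves of each word = rotate by 16

    vshl.u32   q1, q0, #16     @ q1 = q0 << 16
    vsri.u32   q1, q0, #16     @ q1 |= q0 >> 16, completing the rotate (two instructions)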
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux (Herbert Xu, 2018-08-03; 1 file, -2/+4)

  Merge mainline to pick up c7513c2a2714 ("crypto/arm64: aes-ce-gcm - add missing kernel_neon_begin/end pair").
* crypto: arm/speck - fix building in Thumb2 mode (Eric Biggers, 2018-07-01; 1 file, -2/+4)

  Building the kernel with CONFIG_THUMB2_KERNEL=y and CONFIG_CRYPTO_SPECK_NEON set fails with the following errors:

    arch/arm/crypto/speck-neon-core.S: Assembler messages:
    arch/arm/crypto/speck-neon-core.S:419: Error: r13 not allowed here -- `bic sp,#0xf'
    arch/arm/crypto/speck-neon-core.S:423: Error: r13 not allowed here -- `bic sp,#0xf'
    arch/arm/crypto/speck-neon-core.S:427: Error: r13 not allowed here -- `bic sp,#0xf'
    arch/arm/crypto/speck-neon-core.S:431: Error: r13 not allowed here -- `bic sp,#0xf'

  The problem is that the 'bic' instruction can't operate on the 'sp' register in Thumb2 mode. Fix it by using a temporary register. This isn't in the main loop, so the performance difference is negligible. This also matches what aes-neonbs-core.S does.

  Reported-by: Stefan Agner <stefan@agner.ch>
  Fixes: ede9622162fa ("crypto: arm/speck - add NEON-accelerated implementation of Speck-XTS")
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Reviewed-by: Stefan Agner <stefan@agner.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
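  A sketch of the workaround (the scratch register shown is illustrative; the patch uses whichever register is free at that point, and the exact sequence may differ):

    @ ARM-only: the Thumb2 assembler rejects sp as the destination
    bic    sp, sp, #0xf

    @ Thumb2-compatible equivalent via a temporary register
    mov    r12, sp
    bic    r12, r12, #0xf
    mov    sp, r12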
* crypto: ahash - remove useless setting of cra_type (Eric Biggers, 2018-07-09; 1 file, -1/+0)

  Some ahash algorithms set .cra_type = &crypto_ahash_type. But this is redundant with the C structure type ('struct ahash_alg'), and crypto_register_ahash() already sets the .cra_type automatically. Apparently the useless assignment has just been copy+pasted around.

  So, remove the useless assignment from all the ahash algorithms.

  This patch shouldn't change any actual behavior.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ahash - remove useless setting of type flags (Eric Biggers, 2018-07-09; 1 file, -1/+1)

  Many ahash algorithms set .cra_flags = CRYPTO_ALG_TYPE_AHASH. But this is redundant with the C structure type ('struct ahash_alg'), and crypto_register_ahash() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around.

  So, remove the useless assignment from all the ahash algorithms.

  This patch shouldn't change any actual behavior.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - remove useless setting of type flags (Eric Biggers, 2018-07-09; 9 files, -14/+1)

  Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH. But this is redundant with the C structure type ('struct shash_alg'), and crypto_register_shash() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around.

  So, remove the useless assignment from all the shash algorithms.

  This patch shouldn't change any actual behavior.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
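  The removed pattern, shown in a representative shash definition (the surrounding fields are illustrative, not lines from the patch):

    	.base		= {
    		.cra_name	= "sha1",
    -		.cra_flags	= CRYPTO_ALG_TYPE_SHASH, /* redundant: crypto_register_shash() sets this */
    		.cra_blocksize	= SHA1_BLOCK_SIZE,
    		.cra_module	= THIS_MODULE,
    	},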
* crypto: clarify licensing of OpenSSL asm code (Adam Langley, 2018-05-31; 5 files, -8/+46)

  Several source files have been taken from OpenSSL. In some of them a comment that "permission to use under GPL terms is granted" was included below a contradictory license statement. In several cases, there was no indication that the license of the code was compatible with the GPLv2.

  This change clarifies the licensing for all of these files. I've confirmed with the author (Andy Polyakov) that a) he has licensed the files with the GPLv2 comment under that license and b) that he's also happy to license the other files under GPLv2 too. In one case, the file is already contained in his CRYPTOGAMS bundle, which has a GPLv2 option, and so no special measures are needed.

  In all cases, the license status of code has been clarified by making the GPLv2 license prominent.

  The .S files have been regenerated from the updated .pl files. This is a comment-only change. No code is changed.

  Signed-off-by: Adam Langley <agl@chromium.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* kbuild: mark $(targets) as .SECONDARY and remove .PRECIOUS markers (Masahiro Yamada, 2018-04-07; 1 file, -1/+1)

  GNU Make automatically deletes intermediate files that are updated in a chain of pattern rules.

    Example 1) %.dtb.o <- %.dtb.S <- %.dtb <- %.dts
    Example 2) %.o <- %.c <- %.c_shipped

  A couple of makefiles mark such targets as .PRECIOUS to prevent Make from deleting them, but the correct way is to use .SECONDARY.

    .SECONDARY
      Prerequisites of this special target are treated as intermediate files but are never automatically deleted.

    .PRECIOUS
      When make is interrupted during execution, it may delete the target file it is updating if the file was modified since make started. If you mark the file as precious, make will never delete the file if interrupted.

  Both can avoid deletion of intermediate files, but the difference is the behavior when Make is interrupted; .SECONDARY deletes the target, but .PRECIOUS does not. The use of .PRECIOUS is relatively rare since we do not want to keep partially constructed (possibly corrupted) targets.

  Another difference is that .PRECIOUS works with pattern rules whereas .SECONDARY does not: .PRECIOUS: $(obj)/%.lex.c works, but .SECONDARY: $(obj)/%.lex.c has no effect. However, for the reason above, I do not want to use .PRECIOUS, which could cause obscure build breakage.

  The targets specified as .SECONDARY must be explicit. $(targets) contains all targets that need to include .*.cmd files, so the intermediates you want to keep are mostly in there. Therefore, mark $(targets) as .SECONDARY. It means primary targets are also marked as .SECONDARY, but I do not see any drawback for this.

  I replaced some .SECONDARY / .PRECIOUS markers with 'targets'. This will make Kbuild search for non-existing .*.cmd files, but this is not a noticeable performance issue.

  Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  Acked-by: Frank Rowand <frowand.list@gmail.com>
  Acked-by: Ingo Molnar <mingo@kernel.org>
* crypto: arm,arm64 - Fix random regeneration of S_shipped (Leonard Crestez, 2018-03-23; 1 file, -0/+2)

  The decision to rebuild .S_shipped is made based on the relative timestamps of the .S_shipped and .pl files, but git makes this essentially random. This means that the perl script might run anyway (usually at most once per checkout), defeating the whole purpose of _shipped, and it can produce nasty occasional build failures downstream, for example for toolchains with broken perl.

  Fix by skipping the rule unless explicit make variables are provided: REGENERATE_ARM_CRYPTO or REGENERATE_ARM64_CRYPTO. The solution is minimally intrusive to make it easier to push into stable.

  Another report on a similar issue here: https://lkml.org/lkml/2018/3/8/1379

  Signed-off-by: Leonard Crestez <leonard.crestez@nxp.com>
  Cc: <stable@vger.kernel.org>
  Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/speck - add NEON-accelerated implementation of Speck-XTS (Eric Biggers, 2018-02-22; 4 files, -0/+728)

  Add an ARM NEON-accelerated implementation of Speck-XTS. It operates on 128-byte chunks at a time, i.e. 8 blocks for Speck128 or 16 blocks for Speck64. Each 128-byte chunk goes through XTS preprocessing, then is encrypted/decrypted (doing one cipher round for all the blocks, then the next round, etc.), then goes through XTS postprocessing.

  Performance depends on the processor, but the NEON code can be about 3 times faster than the generic code. For example, on an ARMv7 processor we observe the following performance with Speck128/256-XTS:

    xts-speck128-neon:      Encryption 107.9 MB/s, Decryption 108.1 MB/s
    xts(speck128-generic):  Encryption  32.1 MB/s, Decryption  36.6 MB/s

  In comparison to AES-256-XTS without the Cryptography Extensions:

    xts-aes-neonbs:         Encryption  41.2 MB/s, Decryption  36.7 MB/s
    xts(aes-asm):           Encryption  31.7 MB/s, Decryption  30.8 MB/s
    xts(aes-generic):       Encryption  21.2 MB/s, Decryption  20.9 MB/s

  Speck64/128-XTS is even faster:

    xts-speck64-neon:       Encryption 138.6 MB/s, Decryption 139.1 MB/s

  Note that as with the generic code, only the Speck128 and Speck64 variants are supported. Also, for now only the XTS mode of operation is supported, to target the disk and file encryption use cases. The NEON code also only handles the portion of the data that is evenly divisible into 128-byte chunks, with any remainder handled by a C fallback. Of course, other modes of operation could be added later if needed, and/or the NEON code could be updated to handle other buffer sizes.

  The XTS specification is only defined for AES, which has a 128-bit block size, so for the GF(2^64) math needed for Speck64-XTS we use the reducing polynomial 'x^64 + x^4 + x^3 + x + 1' given by the original XEX paper. Of course, when possible users should use Speck128-XTS, but even that may be too slow on some processors; Speck64-XTS can be faster.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
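  As a sketch of the GF(2^64) tweak math described above (an illustrative helper, not code from the patch): multiplying the 64-bit tweak by x modulo x^64 + x^4 + x^3 + x + 1 reduces to a left shift plus a conditional XOR with 0x1B, the low-order bits of the reducing polynomial (x^4 + x^3 + x + 1).

    /* Multiply a Speck64-XTS tweak by x in GF(2^64). */
    static u64 speck64_xts_mul_x(u64 tweak)
    {
    	u64 carry = tweak >> 63;	/* coefficient of x^63 */

    	return (tweak << 1) ^ (carry ? 0x1B : 0);
    }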
* crypto: arm/aes-cipher - move S-box to .rodata section (Jinbum Park, 2018-02-22; 1 file, -9/+10)

  Move the AES inverse S-box to the .rodata section where it is safe from abuse by speculation.

  Signed-off-by: Jinbum Park <jinb.park7@gmail.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - annotate algorithms taking optional key (Eric Biggers, 2018-01-12; 1 file, -0/+2)

  We need to consistently enforce that keyed hashes cannot be used without setting the key. To do this we need a reliable way to determine whether a given hash algorithm is keyed or not. AF_ALG currently does this by checking for the presence of a ->setkey() method. However, this is actually slightly broken because the CRC-32 algorithms implement ->setkey() but can also be used without a key. (The CRC-32 "key" is not actually a cryptographic key but rather represents the initial state. If not overridden, then a default initial state is used.)

  Prepare to fix this by introducing a flag CRYPTO_ALG_OPTIONAL_KEY which indicates that the algorithm has a ->setkey() method, but it is not required to be called. Then set it on all the CRC-32 algorithms.

  The same also applies to the Adler-32 implementation in Lustre.

  Also, the cryptd and mcryptd templates have to pass through the flag from their underlying algorithm.

  Cc: stable@vger.kernel.org
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
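  A sketch of the annotation on a CRC-32 style algorithm (the surrounding fields are illustrative; CRYPTO_ALG_OPTIONAL_KEY is the flag introduced here):

    	.base		= {
    		.cra_name	= "crc32",
    		.cra_flags	= CRYPTO_ALG_OPTIONAL_KEY, /* ->setkey() exists but is optional */
    		.cra_blocksize	= 1,
    		.cra_module	= THIS_MODULE,
    	},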
* crypto: arm/aes-neonbs - Use PTR_ERR_OR_ZERO() (Vasyl Gomonovych, 2017-12-11; 1 file, -6/+4)

  Fix ptr_ret.cocci warnings:

    arch/arm/crypto/aes-neonbs-glue.c:184:1-3: WARNING: PTR_ERR_OR_ZERO can be used
    arch/arm/crypto/aes-neonbs-glue.c:261:1-3: WARNING: PTR_ERR_OR_ZERO can be used

  Use PTR_ERR_OR_ZERO rather than if (IS_ERR(...)) + PTR_ERR.

  Generated by: scripts/coccinelle/api/ptr_ret.cocci

  Signed-off-by: Vasyl Gomonovych <gomonovych@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
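  The shape of the cleanup, as a generic sketch (not the exact hunks from aes-neonbs-glue.c):

    -	if (IS_ERR(tfm))
    -		return PTR_ERR(tfm);
    -	return 0;
    +	return PTR_ERR_OR_ZERO(tfm);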
* License cleanup: add SPDX GPL-2.0 license identifier to files with no license (Greg Kroah-Hartman, 2017-11-02; 5 files, -0/+5)

  Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license. By default all files without license information are under the default license of the kernel, which is GPL version 2.

  Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boilerplate text.

  This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

  How this work was done: patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:

  - file had no licensing information in it,
  - file was a */uapi/* one with no licensing information in it,
  - file was a */uapi/* one with existing licensing information.

  Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

  The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side-by-side results from the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files.

  The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file-by-file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

  Criteria used to select files for SPDX license identifier tagging was:

  - Files considered eligible had to be source code files.
  - Make and config files were included as candidates if they contained >5 lines of source.
  - File already had some variant of a license header in it (even if <5 lines).

  All documentation files were explicitly excluded.

  The following heuristics were used to determine which SPDX license identifiers to apply.

  - When both scanners couldn't find any license traces, the file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was:

      SPDX license identifier                              # files
      -------------------------------------------------------------
      GPL-2.0                                                11139

    and resulted in the first patch in this series.

    If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note", otherwise it was "GPL-2.0". Results of that was:

      SPDX license identifier                              # files
      -------------------------------------------------------------
      GPL-2.0 WITH Linux-syscall-note                          930

    and resulted in the second patch in this series.

  - If a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:

      SPDX license identifier                              # files
      -------------------------------------------------------------
      GPL-2.0 WITH Linux-syscall-note                          270
      GPL-2.0+ WITH Linux-syscall-note                         169
      ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
      ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
      LGPL-2.1+ WITH Linux-syscall-note                         15
      GPL-1.0+ WITH Linux-syscall-note                          14
      ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
      LGPL-2.0+ WITH Linux-syscall-note                          4
      LGPL-2.1 WITH Linux-syscall-note                           3
      ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
      ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1

    and that resulted in the third patch in this series.

  - When the two scanners agreed on the detected license(s), that became the concluded license(s).
  - When there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.
  - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).
  - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.
  - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

  In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation. Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there were new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related.

  Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with the SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

  In the initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

  Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:

  - a full scancode scan run, collecting the matched texts, detected license ids and scores,
  - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct,
  - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct.

  This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified. These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types). Finally Greg ran the script using the .csv files to generate the patches.

  Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
  Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
  Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
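  The identifier itself is a single comment at the top of each file; for a .c file it takes this form (header files use block-comment syntax instead):

    // SPDX-License-Identifier: GPL-2.0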
* crypto: arm/aes - avoid expanded lookup tables in the final round (Ard Biesheuvel, 2017-08-04; 1 file, -23/+65)

  For the final round, avoid the expanded and padded lookup tables exported by the generic AES driver. Instead, for encryption, we can perform byte loads from the same table we used for the inner rounds, which will still be hot in the caches. For decryption, use the inverse AES Sbox directly, which is 4x smaller than the inverse lookup table exported by the generic driver.

  This should significantly reduce the Dcache footprint of our code, which makes the code more robust against timing attacks. It does not introduce any additional module dependencies, given that we already rely on the core AES module for the shared key expansion routines.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/ghash - add NEON accelerated fallback for vmull.p64 (Ard Biesheuvel, 2017-08-04; 3 files, -48/+215)

  Implement a NEON fallback for systems that do support NEON but have no support for the optional 64x64->128 polynomial multiplication instruction that is part of the ARMv8 Crypto Extensions.

  It is based on the paper "Fast Software Polynomial Multiplication on ARM Processors Using the NEON Engine" by Danilo Camara, Conrado Gouvea, Julio Lopez and Ricardo Dahab (https://hal.inria.fr/hal-01506572).

  On a 32-bit guest executing under KVM on a Cortex-A57, the new code is not only 4x faster than the generic table based GHASH driver, it is also time invariant. (Note that the existing vmull.p64 code is 16x faster on this core.)

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: algapi - make crypto_xor() take separate dst and src arguments (Ard Biesheuvel, 2017-08-04; 2 files, -6/+3)

  There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
            memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

  or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

  where crypto_xor() is preceded or followed by a memcpy() invocation that is only there because crypto_xor() uses its output parameter as one of the inputs.

  To avoid having to add new instances of this pattern in the arm64 code, which will be refactored to implement non-SIMD fallbacks, add an alternative implementation called crypto_xor_cpy(), taking separate input and output arguments. This removes the need for the separate memcpy().

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
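  With the new helper, the second pattern above collapses to a single call (a sketch; crypto_xor_cpy() takes the destination first, then the two source buffers, then the length):

    crypto_xor_cpy(dst, keystream, src, nbytes);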
* crypto: arm/crc32 - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -0/+6)

  Make the module autoloadable by tying it to the CPU feature bits that describe whether the optional instructions it relies on are implemented by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
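  The mechanism behind this and the four commits below is the kernel's module_cpu_feature_match() helper, which emits a module alias keyed on the relevant HWCAP bit so the module is loaded when the CPU advertises the feature. A sketch (the init-function name is illustrative):

    module_cpu_feature_match(CRC32, crc32_pmull_mod_init);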
* crypto: arm/sha2-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -3/+2)

  Make the module autoloadable by tying it to the CPU feature bit that describes whether the optional instructions it relies on are implemented by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha1-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -3/+2)

  Make the module autoloadable by tying it to the CPU feature bit that describes whether the optional instructions it relies on are implemented by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/ghash-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -4/+2)

  Make the module autoloadable by tying it to the CPU feature bit that describes whether the optional instructions it relies on are implemented by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-ce - enable module autoloading based on CPU feature bits (Ard Biesheuvel, 2017-06-01; 1 file, -4/+2)

  Make the module autoloadable by tying it to the CPU feature bit that describes whether the optional instructions it relies on are implemented by the current CPU.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-neonbs - resolve fallback cipher at runtime (Ard Biesheuvel, 2017-03-09; 2 files, -16/+46)

  Currently, the bit sliced NEON AES code for ARM has a link time dependency on the scalar ARM asm implementation, which it uses as a fallback to perform CBC encryption and the encryption of the initial XTS tweak.

  The bit sliced NEON code is both fast and time invariant, which makes it a reasonable default on hardware that supports it. However, the ARM asm code it pulls in is not time invariant, and due to the way it is linked in, cannot be overridden by the new generic time invariant driver. In fact, it will not be used at all, given that the ARM asm code registers itself as a cipher with a priority that exceeds the priority of the fixed time cipher.

  So remove the link time dependency, and allocate the fallback cipher via the crypto API.

  Note that this requires this driver's module_init call to be replaced with late_initcall, so that the (possibly generic) fallback cipher is guaranteed to be available when the builtin test is performed at registration time.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
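  A sketch of resolving a fallback through the crypto API rather than a direct symbol reference (error handling abbreviated; the exact flags and call site in the driver may differ). Asking for "aes" by name lets the API pick the highest-priority implementation available at runtime:

    struct crypto_cipher *fallback = crypto_alloc_cipher("aes", 0, 0);

    if (IS_ERR(fallback))
            return PTR_ERR(fallback);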
* crypto: arm/crc32 - add build time test for CRC instruction support (Ard Biesheuvel, 2017-03-01; 1 file, -1/+11)

  The accelerated CRC32 module for ARM may use either the scalar CRC32 instructions, the NEON 64x64 to 128 bit polynomial multiplication (vmull.p64) instruction, or both, depending on what the current CPU supports.

  However, this also requires support in binutils, and as it turns out, versions of binutils exist that support the vmull.p64 instruction but not the crc32 instructions. So refactor the Makefile logic so that this module only gets built if binutils has support for both.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Jon Hunter <jonathanh@nvidia.com>
  Tested-by: Jon Hunter <jonathanh@nvidia.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/crc32 - fix build error with outdated binutils (Ard Biesheuvel, 2017-03-01; 1 file, -1/+1)

  Annotate a vmov instruction with an explicit element size of 32 bits. This is inferred by recent toolchains, but apparently, older versions need some help figuring this out.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - don't use IV buffer to return final keystream block (Ard Biesheuvel, 2017-02-03; 2 files, -11/+14)

  The ARM bit sliced AES core code uses the IV buffer to pass the final keystream block back to the glue code if the input is not a multiple of the block size, so that the asm code does not have to deal with anything except 16 byte blocks. This is done under the assumption that the outgoing IV is meaningless anyway in this case, given that chaining is no longer possible under these circumstances.

  However, as it turns out, the CCM driver does expect the IV to retain a value that is equal to the original IV except for the counter value, and even interprets byte zero as a length indicator, which may result in memory corruption if the IV is overwritten with something else.

  So use a separate buffer to return the final keystream block.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/chacha20 - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 1 file, -1/+0)

  Remove the unnecessary alignmask: it is much more efficient to deal with the misalignment in the core algorithm than relying on the crypto API to copy the data to a suitably aligned buffer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-ce - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 2 files, -52/+47)

  Remove the unnecessary alignmask: it is much more efficient to deal with the misalignment in the core algorithm than relying on the crypto API to copy the data to a suitably aligned buffer.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-neonbs - fix issue with v2.22 and older assembler (Ard Biesheuvel, 2017-01-23; 1 file, -4/+4)

  The GNU assembler for ARM version 2.22 or older fails to infer the element size from the vmov instructions, and aborts the build in the following way:

    .../aes-neonbs-core.S: Assembler messages:
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1h[1],r10'
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1h[0],r9'
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1l[1],r8'
    .../aes-neonbs-core.S:817: Error: bad type for scalar -- `vmov q1l[0],r7'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2h[1],r10'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2h[0],r9'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2l[1],r8'
    .../aes-neonbs-core.S:818: Error: bad type for scalar -- `vmov q2l[0],r7'

  Fix this by setting the element size explicitly, by replacing vmov with vmov.32.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - avoid reserved 'tt' mnemonic in asm code (Ard Biesheuvel, 2017-01-13; 1 file, -5/+5)

  The ARMv8-M architecture introduces 'tt' and 'ttt' instructions, which means we can no longer use 'tt' as a register alias on recent versions of binutils for ARM. So replace the alias with 'ttab'.

  Fixes: 81edb4262975 ("crypto: arm/aes - replace scalar AES cipher")
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - replace bit-sliced OpenSSL NEON code (Ard Biesheuvel, 2017-01-13; 9 files, -6499/+1429)

  This replaces the unwieldy generated implementation of bit-sliced AES in CBC/CTR/XTS modes that originated in the OpenSSL project with a new version that is heavily based on the OpenSSL implementation, but has a number of advantages over the old version:

  - it does not rely on the scalar AES cipher that also originated in the OpenSSL project and contains redundant lookup tables and key schedule generation routines (which we already have in crypto/aes_generic),
  - it uses the same expanded key schedule for encryption and decryption, reducing the size of the per-key data structure by 1696 bytes,
  - it adds an implementation of AES in ECB mode, which can be wrapped by other generic chaining mode implementations,
  - it moves the handling of corner cases that are non critical to performance to the glue layer written in C,
  - it was written directly in assembler rather than generated from a Perl script.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - replace scalar AES cipher (Ard Biesheuvel, 2017-01-13; 5 files, -119/+256)

  This replaces the scalar AES cipher that originates in the OpenSSL project with a new implementation that is ~15% (*) faster (on modern cores), and reuses the lookup tables and the key schedule generation routines from the generic C implementation (which is usually compiled in anyway due to networking and other subsystems depending on it).

  Note that the bit sliced NEON code for AES still depends on the scalar cipher that this patch replaces, so it is not removed entirely yet.

  (*) On Cortex-A57, performance improves from 17.0 cycles per byte to 14.9 cycles per byte for 128-bit keys.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/chacha20 - implement NEON version based on SSE3 code (Ard Biesheuvel, 2017-01-13; 4 files, -0/+659)

  This is a straight port to ARM/NEON of the x86 SSE3 implementation of the ChaCha20 stream cipher. It uses the new skcipher walksize attribute to process the input in strides of 4x the block size.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Revert "crypto: arm64/ARM: NEON accelerated ChaCha20"Herbert Xu2016-12-284-667/+0
| | | | | | | | | | | | | | This patch reverts the following commits: 8621caa0d45e731f2e9f5889ff5bb384fcd6e059 8096667273477e735b0072b11a6d617ccee45e5f I should not have applied them because they had already been obsoleted by a subsequent patch series. They also cause a build failure because of the subsequent commit 9ae433bc79f9. Fixes: 9ae433bc79f ("crypto: chacha20 - convert generic and...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/chacha20 - implement NEON version based on SSE3 code (Ard Biesheuvel, 2016-12-27; 4 files, -0/+667)

  This is a straight port to ARM/NEON of the x86 SSE3 implementation of the ChaCha20 stream cipher.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/crc32 - accelerated support based on x86 SSE implementation (Ard Biesheuvel, 2016-12-07; 4 files, -0/+555)

  This is a combination of the Intel algorithm implemented using SSE and PCLMULQDQ instructions from arch/x86/crypto/crc32-pclmul_asm.S, and the new CRC32 extensions introduced for both 32-bit and 64-bit ARM in version 8 of the architecture. Two versions of the above combo are provided, one for CRC32 and one for CRC32C.

  The PMULL/NEON algorithm is faster, but operates on blocks of at least 64 bytes, and on multiples of 16 bytes only. For the remaining input, or for all input on systems that lack the PMULL 64x64->128 instructions, the CRC32 instructions will be used.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/crct10dif - port x86 SSE implementation to ARM (Ard Biesheuvel, 2016-12-07; 4 files, -0/+535)

  This is a transliteration of the Intel algorithm implemented using SSE and PCLMULQDQ instructions that resides in the file arch/x86/crypto/crct10dif-pcl-asm_64.S, but simplified to only operate on buffers that are 16 byte aligned (but of any size).

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aes-ce - Make aes_simd_algs static (Herbert Xu, 2016-12-01; 1 file, -1/+1)

  The variable aes_simd_algs should be static. In fact, if it isn't, it causes build errors when multiple copies of aes-ce-glue.c are built into the kernel.

  Fixes: da40e7a4ba4d ("crypto: aes-ce - Convert to skcipher")
  Reported-by: kbuild test robot <fengguang.wu@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aesbs - fix brokenness after skcipher conversion (Ard Biesheuvel, 2016-11-30; 1 file, -1/+1)

  The CBC encryption routine should use the encryption round keys, not the decryption round keys.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - Add missing SIMD select for aesbs (Herbert Xu, 2016-11-30; 1 file, -3/+3)

  This patch adds one more missing SIMD select for AES_ARM_BS. It also changes selects on ALGAPI to BLKCIPHER.

  Fixes: 211f41af534a ("crypto: aesbs - Convert to skcipher")
  Reported-by: Horia Geantă <horia.geanta@nxp.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - Select SIMD in Kconfig (Herbert Xu, 2016-11-29; 1 file, -1/+1)

  The skcipher conversion for ARM missed the select on CRYPTO_SIMD, causing build failures if SIMD was not otherwise enabled.

  Fixes: da40e7a4ba4d ("crypto: aes-ce - Convert to skcipher")
  Fixes: 211f41af534a ("crypto: aesbs - Convert to skcipher")
  Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aesbs - Convert to skcipher (Herbert Xu, 2016-11-28; 1 file, -228/+152)

  This patch converts aesbs over to the skcipher interface.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aes-ce - Convert to skcipher (Herbert Xu, 2016-11-28; 1 file, -233/+157)

  This patch converts aes-ce over to the skcipher interface.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-ce - fix for big endian (Ard Biesheuvel, 2016-10-21; 1 file, -0/+5)

  The AES key schedule generation is mostly endian agnostic, with the exception of the rotation and the incorporation of the round constant at the start of each round. So implement a big endian specific version of that part to make the whole routine big endian compatible.

  Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Herbert Xu, 2016-10-10; 1 file, -1/+1)

  Merge the crypto tree to pull in the vmx ghash fix.
* crypto: arm/aes-ctr - fix NULL dereference in tail processing (Ard Biesheuvel, 2016-09-13; 1 file, -1/+1)

  The AES-CTR glue code avoids calling into the blkcipher API for the tail portion of the walk, by comparing the remainder of walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the tail processing block if they are equal. This tail processing block checks whether nbytes != 0, and does nothing otherwise.

  However, in case of an allocation failure in the blkcipher layer, we may enter this code with walk.nbytes == 0, while nbytes > 0. In this case, we should not dereference the source and destination pointers, since they may be NULL. So instead of checking for nbytes != 0, check for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in non-error conditions.

  Fixes: 86464859cc77 ("crypto: arm - AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions")
  Cc: stable@vger.kernel.org
  Reported-by: xiakaixu <xiakaixu@huawei.com>
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
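  The guard change, shown as a sketch (variable names follow the commit text, not the exact hunk):

    -	if (nbytes) {
    +	if (walk.nbytes % AES_BLOCK_SIZE) {
    		/* non-block-aligned tail: walk.nbytes != 0 here, so src/dst are valid */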
* crypto: arm/ghash - change internal cra_name to "__ghash" (Ard Biesheuvel, 2016-09-07; 1 file, -1/+1)

  The fact that the internal synchronous hash implementation is called "ghash", like the publicly visible one, is causing the testmgr code to misidentify it as an algorithm that requires testing at boot time. So rename it to "__ghash" to prevent this.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/ghash-ce - add missing async import/export (Ard Biesheuvel, 2016-09-07; 1 file, -0/+24)

  Since commit 8996eafdcbad ("crypto: ahash - ensure statesize is non-zero"), all ahash drivers are required to implement import()/export(), and must have a non-zero statesize. Fix this for the ARM Crypto Extensions GHASH implementation.

  Fixes: 8996eafdcbad ("crypto: ahash - ensure statesize is non-zero")
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/sha1-neon - add support for building in Thumb2 mode (Ard Biesheuvel, 2016-09-07; 1 file, -1/+0)

  The ARMv7 NEON module is explicitly built in ARM mode, which is not supported by the Thumb2 kernel. So remove the explicit override, and leave it up to the build environment to decide whether the core SHA1 routines are assembled as ARM or as Thumb2 code.

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ghash-ce - Fix cryptd reordering (Herbert Xu, 2016-06-23; 1 file, -23/+17)

  This patch fixes an old bug where requests can be reordered because some are processed by cryptd while others are processed directly in softirq context. The fix is to always postpone to cryptd if there are currently requests outstanding from the same tfm.

  This patch also removes the redundant use of cryptd in the async init function, as init never touches the FPU.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>