Signed-off-by: Cong Wang <amwang@redhat.com>
The report functions use NLA_PUT, so we need to ensure that NET
is enabled.
Reported-by: Luis Henriques <henrix@camandro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ensure kmap_atomic() usage is strictly nested
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
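For illustration (not part of the patch; kmap_atomic() is shown in its later argument-free form and the helper name is made up), strict nesting means atomic mappings must now be released in reverse (LIFO) order:

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_between_pages(struct page *dst, struct page *src, size_t len)
    {
        void *d = kmap_atomic(dst);     /* outer mapping */
        void *s = kmap_atomic(src);     /* inner mapping */

        memcpy(d, s, len);

        kunmap_atomic(s);               /* the inner mapping goes first ... */
        kunmap_atomic(d);               /* ... then the outer one; any other order is now invalid */
    }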
When an skcipher constructed through crypto_givcipher_default fails
its selftest, we'll loop forever trying to construct new skcipher
objects, only to fail because one with that name already exists.
The crux of the issue is that once a givcipher fails the selftest,
we'll ignore it on the next run through crypto_skcipher_lookup and
attempt to construct a new givcipher.
We should instead return an error to the caller if we find a
givcipher that has failed the test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When we get left-over bits from a slow walk, it means that the
underlying cipher has misbehaved. However, since we are handling
that case anyway, we should ensure that the caller terminates the walk.
This patch does so by setting walk->nbytes to zero.
Reported-by: Roel Kluin <roel.kluin@gmail.com>
Reported-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the default IV generators into their own modules
in order to break a dependency loop between cryptomgr, rng, and
blkcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For compatibility with dm-crypt initramfs setups, it is useful to merge
chainiv/seqiv into the crypto_blkcipher module. Since they're required
by most algorithms anyway, this is an acceptable trade-off.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
return algorithms that are capable of generating their own IVs through
givencrypt and givdecrypt. Each algorithm may specify its default IV
generator through the geniv field.
For algorithms that do not set the geniv field, the blkcipher layer will
pick a default. Currently it's chainiv for synchronous algorithms and
eseqiv for asynchronous algorithms. Note that if these wrappers do not
work on an algorithm then that algorithm must specify its own geniv or
it can't be used at all.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
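For illustration (a hedged sketch, not part of the patch; the function name is made up and error handling is trimmed), a caller now gets an IV-generating transform simply by asking for the plain mode name:

    #include <linux/crypto.h>
    #include <linux/err.h>
    #include <linux/kernel.h>

    static int alloc_default_geniv_example(void)
    {
        struct crypto_ablkcipher *tfm;

        /* Behind the scenes this instantiates something like
         * "eseqiv(cbc(aes))" (or "chainiv(...)" for a synchronous
         * implementation), so givencrypt can generate IVs itself. */
        tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        pr_info("ivsize: %u\n", crypto_ablkcipher_ivsize(tfm));
        crypto_free_ablkcipher(tfm);
        return 0;
    }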
This patch creates the infrastructure to help construct givcipher
templates that wrap around existing blkcipher/ablkcipher algorithms by
adding an IV generator to them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch introduces the geniv field, which indicates the default IV
generator for each algorithm. It should point to a string that is not
freed for as long as the algorithm is registered.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
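For illustration (a sketch with made-up names; my_setkey/my_encrypt/my_decrypt are assumed to exist elsewhere), an algorithm would declare its preferred generator like this, keeping in mind that the string must stay valid while the algorithm is registered:

    #include <linux/crypto.h>
    #include <linux/module.h>
    #include <crypto/algapi.h>

    /* Callbacks assumed to be implemented elsewhere. */
    static int my_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen);
    static int my_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                          struct scatterlist *src, unsigned int nbytes);
    static int my_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                          struct scatterlist *src, unsigned int nbytes);

    static struct crypto_alg my_cbc_alg = {
        .cra_name      = "cbc(myblock)",
        .cra_flags     = CRYPTO_ALG_TYPE_BLKCIPHER,
        .cra_blocksize = 16,
        .cra_type      = &crypto_blkcipher_type,
        .cra_module    = THIS_MODULE,
        .cra_u = {
            .blkcipher = {
                .geniv       = "chainiv",   /* default IV generator for this algorithm */
                .min_keysize = 16,
                .max_keysize = 32,
                .ivsize      = 16,
                .setkey      = my_setkey,
                .encrypt     = my_encrypt,
                .decrypt     = my_decrypt,
            },
        },
    };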
The scatterwalk infrastructure is used by algorithms, so it needs to
move out of crypto for future users that may live in drivers/crypto
or asm/*/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
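For illustration (a hedged sketch; peek_header is a made-up name), an out-of-crypto user now simply includes the relocated header and uses its helpers:

    #include <crypto/scatterwalk.h>

    /* Copy the first `len` bytes of a scatterlist into a flat buffer, e.g. so
     * a driver can inspect a header without walking the pages itself. */
    static void peek_header(struct scatterlist *sg, void *buf, unsigned int len)
    {
        scatterwalk_map_and_copy(buf, sg, 0, len, 0);   /* 0: copy scatterlist -> buf */
    }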
Up until now, ablkcipher algorithms have been identified as
type BLKCIPHER with the ASYNC bit set. This is suboptimal because
ablkcipher refers to two things. On the one hand it refers to the
top-level ablkcipher interface with requests. On the other hand it
refers to an algorithm type underneath.

As it is, you cannot request a synchronous block cipher algorithm
with the ablkcipher interface on top. This is a problem because
we want to be able to eventually phase out the blkcipher top-level
interface.

This patch fixes this by making ABLKCIPHER its own type, just as
we have distinct types for HASH and DIGEST. The type is associated
with the algorithm implementation only.

Which top-level interface is used for synchronous block ciphers is
then determined by the mask that's used. If it's a specific mask,
the old blkcipher interface is used; otherwise we go with the
new ablkcipher interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
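For illustration (a sketch, not from the patch; the function name is made up and error handling is minimal), the same algorithm name can now be driven through either interface depending on the mask:

    #include <linux/crypto.h>
    #include <linux/err.h>

    static int pick_interface_example(void)
    {
        struct crypto_blkcipher *sync_tfm;
        struct crypto_ablkcipher *async_tfm;

        /* Insist on a synchronous implementation; the old blkcipher
         * interface sits on top of it. */
        sync_tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
        if (!IS_ERR(sync_tfm))
            crypto_free_blkcipher(sync_tfm);

        /* Accept any implementation (sync or async) and use the new
         * request-based ablkcipher interface on top. */
        async_tfm = crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
        if (!IS_ERR(async_tfm))
            crypto_free_ablkcipher(async_tfm);

        return 0;
    }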
This patch adds the helper blkcipher_walk_virt_block which is similar to
blkcipher_walk_virt but uses a supplied block size instead of the block
size of the block cipher. This is useful for CTR where the block size is
1 but we still want to walk by the block size of the underlying cipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
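For illustration (a sketch of a CTR-style user, not the actual CTR code; the block size of 16 is an assumption), the walk is driven in units of the underlying cipher even though the mode's own block size is 1:

    #include <crypto/algapi.h>

    static int ctr_like_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                              struct scatterlist *src, unsigned int nbytes)
    {
        const unsigned int bsize = 16;  /* block size of the underlying cipher (assumed) */
        struct blkcipher_walk walk;
        int err;

        blkcipher_walk_init(&walk, dst, src, nbytes);
        /* Walk in bsize units even though crypto_blkcipher_blocksize() is 1 here. */
        err = blkcipher_walk_virt_block(desc, &walk, bsize);

        while (walk.nbytes) {
            /* ... generate keystream and XOR it over
             *     walk.src.virt.addr -> walk.dst.virt.addr ... */
            err = blkcipher_walk_done(desc, &walk, 0);  /* everything consumed */
        }
        return err;
    }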
Now that the block size is no longer a multiple of the alignment, we need to
increase the kmalloc amount in blkcipher_next_slow to use the aligned block
size.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
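For illustration (a sketch; slow_path_block_size is a made-up helper name), the slow path now has to round the block size up to the implementation's alignment before sizing its bounce buffer:

    #include <linux/crypto.h>
    #include <linux/kernel.h>

    static unsigned int slow_path_block_size(struct blkcipher_desc *desc)
    {
        unsigned int bsize     = crypto_blkcipher_blocksize(desc->tfm); /* may be 1, e.g. for CTR */
        unsigned int alignmask = crypto_blkcipher_alignmask(desc->tfm);

        /* The kmalloc()ed bounce buffer must cover this, not the raw bsize. */
        return ALIGN(bsize, alignmask + 1);
    }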
Previously we assumed, for convenience, that the block size is a multiple of
the algorithm's required alignment. With the pending addition of CTR this
will no longer be the case, as the block size will be 1 because CTR is a
stream cipher. However, the alignment requirement will be that of the
underlying implementation, which will most likely be greater than 1.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use max in blkcipher_get_spot() instead of open coding it.
Signed-off-by: Ingo Oeser <ioe-lkml@rameria.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch ensures that kernel.h and slab.h are included for
the setkey_unaligned function. It also breaks a couple of
long lines.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous patch had the conditional inverted. This patch fixes it
so that we return the original position if it does not straddle a page.
Thanks to Bob Gilligan for spotting this.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function blkcipher_get_spot tries to return a buffer of
the specified length that does not straddle a page. It has
an off-by-one bug, so it may advance a page unnecessarily.
What's worse, one of its callers doesn't provide a buffer
that's sufficiently long for this operation.
This patch fixes both problems. Thanks to Bob Gilligan for
diagnosing the problem and providing a fix.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
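For reference, a hedged reconstruction of the helper's post-fix shape (not the literal diff; the "Use max" entry above later tidies the comparison into max()):

    #include <linux/types.h>
    #include <linux/mm.h>

    /* Return `start` if a buffer of `len` bytes starting there stays within one
     * page; otherwise return the start of the page holding the last byte, so the
     * buffer can be placed there without straddling (the caller must have
     * reserved enough room for that). */
    static inline u8 *blkcipher_get_spot(u8 *start, unsigned int len)
    {
        u8 *end_page = (u8 *)(((unsigned long)(start + len - 1)) & PAGE_MASK);

        return start > end_page ? start : end_page;
    }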
setkey_unaligned() committed in ca7c39385ce1a7b44894a4b225a4608624e90730
overwrites unallocated memory in the following memset() because
I used the wrong buffer length.
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
setkey() in {cipher,blkcipher,ablkcipher,hash}.c does not respect the
alignment requested by the algorithm. This patch fixes that. The extra
memory is allocated by kmalloc() with the GFP_ATOMIC flag.
Signed-off-by: Sebastian Siewior <linux-crypto@ml.breakpoint.cc>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
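For illustration, a hedged sketch of the technique (real_setkey() is a placeholder for the algorithm's own setkey operation; the memset() length already reflects the follow-up fix listed above):

    #include <linux/crypto.h>
    #include <linux/kernel.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    static int real_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen);

    static int setkey_unaligned(struct crypto_tfm *tfm, const u8 *key,
                                unsigned int keylen)
    {
        unsigned long alignmask = crypto_tfm_alg_alignmask(tfm);
        u8 *buffer, *alignbuffer;
        int ret;

        /* Over-allocate so the key can be copied to a properly aligned offset. */
        buffer = kmalloc(keylen + alignmask, GFP_ATOMIC);
        if (!buffer)
            return -ENOMEM;

        alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
        memcpy(alignbuffer, key, keylen);

        ret = real_setkey(tfm, alignbuffer, keylen);

        memset(alignbuffer, 0, keylen);   /* wipe only the copied key bytes */
        kfree(buffer);
        return ret;
    }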
This patch adds the frontend interface for asynchronous block ciphers.
In addition to the usual block cipher parameters, there is a callback
function pointer and a data pointer. The callback will be invoked only
if the encrypt/decrypt handlers return -EINPROGRESS. In other words,
if the return value is zero, the completion handler (or the equivalent
code) needs to be invoked by the caller.
The request structure is allocated and freed by the caller. Its size
is determined by calling crypto_ablkcipher_reqsize(). The helpers
ablkcipher_request_alloc/ablkcipher_request_free can be used to manage
the memory for a request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
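For illustration, a hedged sketch of a caller (my_done, encrypt_example, and the completion plumbing are made up, and error propagation is simplified):

    #include <linux/completion.h>
    #include <linux/crypto.h>
    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    static void my_done(struct crypto_async_request *req, int err)
    {
        complete(req->data);    /* req->data is the completion passed below */
    }

    static int encrypt_example(struct crypto_ablkcipher *tfm,
                               struct scatterlist *src, struct scatterlist *dst,
                               unsigned int nbytes, u8 *iv)
    {
        DECLARE_COMPLETION_ONSTACK(done);
        struct ablkcipher_request *req;
        int ret;

        /* The helper sizes the request via crypto_ablkcipher_reqsize(). */
        req = ablkcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req)
            return -ENOMEM;

        ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
                                        my_done, &done);
        ablkcipher_request_set_crypt(req, src, dst, nbytes, iv);

        ret = crypto_ablkcipher_encrypt(req);
        if (ret == -EINPROGRESS) {
            /* Only in this case will my_done() be called; wait for it.
             * (A fuller version would propagate the error my_done() sees.) */
            wait_for_completion(&done);
            ret = 0;
        }
        /* A plain 0 means it completed synchronously: no callback fires,
         * so the caller acts on the result directly. */

        ablkcipher_request_free(req);
        return ret;
    }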
The proc functions were incorrectly marked as used rather than unused.
They may be unused if proc is disabled.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds support for multiple frontend types for each backend
algorithm by passing the type and mask through to the backend type
init function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Using blkcipher/hash crypto operations in hard IRQ context can lead
to random memory corruption due to the reuse of kmap_atomic slots.
Since crypto operations were never meant to be used in hard IRQ
contexts, this patch checks for such usage and returns an error
before kmap_atomic is performed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
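For illustration, the guard amounts to roughly the following, placed before the first kmap_atomic() in the walk setup (a sketch; the surrounding function is made up):

    #include <linux/bug.h>
    #include <linux/errno.h>
    #include <linux/hardirq.h>

    static int walk_setup_check(void)
    {
        if (WARN_ON_ONCE(in_irq()))
            return -EDEADLK;    /* crypto operations may not run in hard IRQ context */

        /* ... proceed with the normal walk setup ... */
        return 0;
    }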
Remove useless includes of linux/io.h, and don't even try to build iomap_copy
on uml (it doesn't have readb() et al., so...).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch adds a new type of block cipher. Unlike the current cipher
algorithms, which operate on a single block at a time, block ciphers
operate on an arbitrarily long linear area of data. As it is block-based,
it will skip any data remaining at the end which cannot form a block.

The new type has one major difference compared to the existing block
cipher implementation: the sg walking is now performed by the
algorithm rather than by the cipher mid-layer. This is needed for drivers
that directly support sg lists. It also improves performance for all
algorithms, as it reduces the total number of indirect calls by one.

In future, the existing cipher algorithm will be converted to a
single-block interface only. This will be done after all existing users
have switched over to the new block cipher type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
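For illustration (a generic sketch, not code from the patch; my_process_chunk and the block size of 16 are assumptions), an algorithm-side encrypt now drives the sg walk itself along these lines:

    #include <crypto/algapi.h>
    #include <linux/scatterlist.h>

    /* my_process_chunk() stands in for the actual per-block cipher work. */
    static void my_process_chunk(u8 *dst, const u8 *src, unsigned int bytes);

    static int my_blk_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                              struct scatterlist *src, unsigned int nbytes)
    {
        const unsigned int bsize = 16;  /* this algorithm's block size (assumed) */
        struct blkcipher_walk walk;
        int err;

        blkcipher_walk_init(&walk, dst, src, nbytes);
        err = blkcipher_walk_virt(desc, &walk);

        while (walk.nbytes) {
            /* Handle the whole blocks in this mapped chunk ... */
            my_process_chunk(walk.dst.virt.addr, walk.src.virt.addr,
                             walk.nbytes - (walk.nbytes % bsize));
            /* ... and report how many bytes were left untouched. */
            err = blkcipher_walk_done(desc, &walk, walk.nbytes % bsize);
        }
        return err;
    }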