path: root/include/crypto
Commit message | Author | Age | Files | Lines
* net: remove mm.h inclusion from netdevice.h | Alexey Dobriyan | 2011-06-21 | 1 | -0/+1
    Remove linux/mm.h inclusion from netdevice.h -- it's unused (I've checked manually). To prevent mm.h inclusion via other channels, also extract the "enum dma_data_direction" definition into a separate header. This tiny piece is what glues netdevice.h to mm.h via "netdevice.h => dmaengine.h => dma-mapping.h => scatterlist.h => mm.h". Removal of mm.h from scatterlist.h was tried and found not feasible on most archs, so the link was cut off earlier. Hope people are OK with the tiny include file. Note that mm_types.h is still dragged in, but that is a separate story.
    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
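    For reference, the extracted header (include/linux/dma-direction.h in the upstream tree; treat this as a sketch from memory) carries little more than the enum itself:

        enum dma_data_direction {
            DMA_BIDIRECTIONAL = 0,  /* device both reads and writes the buffer */
            DMA_TO_DEVICE = 1,      /* memory -> device */
            DMA_FROM_DEVICE = 2,    /* device -> memory */
            DMA_NONE = 3,
        };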
* crypto: padlock - Move padlock.h into include/crypto | Herbert Xu | 2011-01-07 | 1 | -0/+29
    This patch moves padlock.h from drivers/crypto into include/crypto so that it may be used by the via-rng driver.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: scatterwalk - Add scatterwalk_crypto_chain helper | Steffen Klassert | 2010-12-02 | 1 | -0/+15
    A lot of crypto algorithms implement their own chaining function. So add a generic one that can be used from all the algorithms that need scatterlist chaining.
    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: af_alg - User-space interface for Crypto API | Herbert Xu | 2010-11-19 | 1 | -0/+92
    This patch creates the backbone of the user-space interface for the Crypto API, through a new socket family AF_ALG. Each session corresponds to one or more connections obtained from that socket. The number depends on the number of inputs/outputs of that particular type of operation. For most types there will be a single connection/file descriptor that is used for both input and output. AEAD is one of the few that require two inputs. Each algorithm type will provide its own implementation that plugs into af_alg. They're keyed using a string such as "skcipher" or "hash". IOW this patch only contains the boring bits that are required to hold everything together.
    Thanks to Miloslav Trmac for reviewing this and contributing fixes and improvements.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Acked-by: David S. Miller <davem@davemloft.net>
    Tested-by: Martin Willi <martin@strongswan.org>
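    Once the per-type front ends from the follow-up patches are in place, user space talks to the API roughly like this -- a minimal sketch hashing "abc" with SHA-1, error handling omitted:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <linux/if_alg.h>

        int main(void)
        {
            struct sockaddr_alg sa = {
                .salg_family = AF_ALG,
                .salg_type   = "hash",   /* the algorithm type key */
                .salg_name   = "sha1",   /* the algorithm name */
            };
            unsigned char digest[20];
            int tfmfd, opfd, i;

            tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);   /* transform socket */
            bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
            opfd = accept(tfmfd, NULL, 0);               /* per-session operation fd */

            write(opfd, "abc", 3);                       /* feed input data */
            read(opfd, digest, sizeof(digest));          /* collect the digest */

            for (i = 0; i < 20; i++)
                printf("%02x", digest[i]);
            printf("\n");

            close(opfd);
            close(tfmfd);
            return 0;
        }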
* Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial | Linus Torvalds | 2010-10-24 | 1 | -2/+2
    * 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (39 commits)
      Update broken web addresses in arch directory.
      Update broken web addresses in the kernel.
      Revert "drivers/usb: Remove unnecessary return's from void functions" for musb gadget
      Revert "Fix typo: configuation => configuration" partially
      ida: document IDA_BITMAP_LONGS calculation
      ext2: fix a typo on comment in ext2/inode.c
      drivers/scsi: Remove unnecessary casts of private_data
      drivers/s390: Remove unnecessary casts of private_data
      net/sunrpc/rpc_pipe.c: Remove unnecessary casts of private_data
      drivers/infiniband: Remove unnecessary casts of private_data
      drivers/gpu/drm: Remove unnecessary casts of private_data
      kernel/pm_qos_params.c: Remove unnecessary casts of private_data
      fs/ecryptfs: Remove unnecessary casts of private_data
      fs/seq_file.c: Remove unnecessary casts of private_data
      arm: uengine.c: remove C99 comments
      arm: scoop.c: remove C99 comments
      Fix typo configue => configure in comments
      Fix typo: configuation => configuration
      Fix typo interrest[ing|ed] => interest[ing|ed]
      Fix various typos of valid in comments
      ...
    Fix up trivial conflicts in:
      drivers/char/ipmi/ipmi_si_intf.c
      drivers/usb/gadget/rndis.c
      net/irda/irnet/irnet_ppp.c
| * Update broken web addresses in the kernel. | Justin P. Mattock | 2010-10-18 | 1 | -2/+2
    The patch below updates broken web addresses in the kernel.
    Signed-off-by: Justin P. Mattock <justinmattock@gmail.com>
    Cc: Maciej W. Rozycki <macro@linux-mips.org>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Finn Thain <fthain@telegraphics.com.au>
    Cc: Randy Dunlap <rdunlap@xenotime.net>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Dimitry Torokhov <dmitry.torokhov@gmail.com>
    Cc: Mike Frysinger <vapier.adi@gmail.com>
    Acked-by: Ben Pfaff <blp@cs.stanford.edu>
    Acked-by: Hans J. Koch <hjk@linutronix.de>
    Reviewed-by: Finn Thain <fthain@telegraphics.com.au>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | crypto: cryptd - Adding the AEAD interface type support to cryptd | Adrian Hoban | 2010-09-20 | 1 | -0/+24
    This patch adds AEAD support into the cryptd framework. Having AEAD support in cryptd enables crypto drivers that use the AEAD interface type (such as the patch for an AEAD based RFC4106 AES-GCM implementation using Intel New Instructions) to leverage cryptd for asynchronous processing.
    Signed-off-by: Adrian Hoban <adrian.hoban@intel.com>
    Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
    Signed-off-by: Gabriele Paoloni <gabriele.paoloni@intel.com>
    Signed-off-by: Aidan O'Mahony <aidan.o.mahony@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - Add ablkcipher_walk interfaces | David S. Miller | 2010-05-19 | 1 | -0/+40
    These are akin to the blkcipher_walk helpers. The main differences in the async variant are:
    1) Only physical walking is supported. We can't hold on to kmap mappings across the async operation to support virtual ablkcipher_walk operations anyway.
    2) Bounce buffers used for async mode need to be persistent and freed at a later point in time when the async op completes. Therefore we maintain a list of writeback buffers and require that the ablkcipher_walk user call the 'complete' operation so we can copy the bounce buffers out to the real buffers and free up the bounce buffer chunks.
    These interfaces will be used by the new Niagara2 crypto driver.
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
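    A rough sketch of how a driver might drive the new walk (helper names per this commit; the per-chunk handling and completion timing are assumptions here, and a real driver keeps the walk alive until the hardware finishes):

        #include <crypto/algapi.h>

        static int my_async_cipher(struct ablkcipher_request *req)
        {
            struct ablkcipher_walk walk;   /* would normally live in the request ctx */
            int err;

            ablkcipher_walk_init(&walk, req->dst, req->src, req->nbytes);
            err = ablkcipher_walk_phys(req, &walk);   /* physical pages only */

            while (!err && walk.nbytes) {
                /* ... queue walk.nbytes worth of work on the hardware ... */
                err = ablkcipher_walk_done(req, &walk, 0);
            }

            /* Once the async operation has completed, the user must call
             * ablkcipher_walk_complete() so bounce buffers are copied back
             * to the real buffers and freed. */
            ablkcipher_walk_complete(&walk, err);
            return err;
        }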
* crypto: md5 - Add export support | Max Vozeler | 2010-01-17 | 1 | -0/+17
    This patch adds export/import support to md5. The exported type is defined by struct md5_state. This is modeled after the equivalent change to sha1_generic.
    Signed-off-by: Max Vozeler <max@hinterhof.net>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
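    The exported type lives in include/crypto/md5.h and looks roughly like this (field order recalled from memory, so treat it as a sketch):

        struct md5_state {
            u32 hash[MD5_HASH_WORDS];    /* 4 words of intermediate hash */
            u32 block[MD5_BLOCK_WORDS];  /* 16 words of buffered input */
            u64 byte_count;              /* total bytes hashed so far */
        };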
* crypto: pcrypt - Add pcrypt crypto parallelization wrapper | Steffen Klassert | 2010-01-07 | 1 | -0/+51
    This patch adds a parallel crypto template that takes a crypto algorithm and converts it to process the crypto transforms in parallel. For the moment only aead algorithms are supported.
    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
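    Being a template, pcrypt is used by wrapping an existing AEAD name; a hedged sketch (the inner algorithm string is only an example):

        #include <linux/crypto.h>
        #include <linux/err.h>

        static struct crypto_aead *alloc_parallel_aead(void)
        {
            /* Instantiates the pcrypt template around a normal AEAD. */
            struct crypto_aead *tfm =
                crypto_alloc_aead("pcrypt(authenc(hmac(sha1),cbc(aes)))", 0, 0);

            return IS_ERR(tfm) ? NULL : tfm;
        }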
* crypto: hash - Remove legacy hash/digest code | Benjamin Gilbert | 2009-10-19 | 1 | -1/+0
    6941c3a0 disabled compilation of the legacy digest code but didn't actually remove it. Rectify this. Also, remove the crypto_hash_type extern declaration from algapi.h now that the struct is gone.
    Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ghash - Add PCLMULQDQ accelerated implementation | Huang Ying | 2009-10-19 | 1 | -0/+1
    PCLMULQDQ is used to accelerate the most time-consuming part of GHASH, carry-less multiplication. More information about PCLMULQDQ can be found at: http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/
    Because PCLMULQDQ changes XMM state, its usage must be enclosed with kernel_fpu_begin/end, which can be used only in process context. The acceleration is therefore implemented as a crypto_ahash; that is, requests in soft-IRQ context will be deferred to the cryptd kernel thread.
    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds | 2009-09-11 | 6 | -63/+366
    * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
      crypto: sha-s390 - Fix warnings in import function
      crypto: vmac - New hash algorithm for intel_txt support
      crypto: api - Do not displace newly registered algorithms
      crypto: ansi_cprng - Fix module initialization
      crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
      crypto: fips - Depend on ansi_cprng
      crypto: blkcipher - Do not use eseqiv on stream ciphers
      crypto: ctr - Use chainiv on raw counter mode
      Revert crypto: fips - Select CPRNG
      crypto: rng - Fix typo
      crypto: talitos - add support for 36 bit addressing
      crypto: talitos - align locks on cache lines
      crypto: talitos - simplify hmac data size calculation
      crypto: mv_cesa - Add support for Orion5X crypto engine
      crypto: cryptd - Add support to access underlaying shash
      crypto: gcm - Use GHASH digest algorithm
      crypto: ghash - Add GHASH digest algorithm for GCM
      crypto: authenc - Convert to ahash
      crypto: api - Fix aligned ctx helper
      crypto: hmac - Prehash ipad/opad
      ...
| * crypto: vmac - New hash algorithm for intel_txt support | Shane Wang | 2009-09-02 | 1 | -0/+61
    This patch adds VMAC (a fast MAC) support into the crypto framework.
    Signed-off-by: Shane Wang <shane.wang@intel.com>
    Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: cryptd - Add support to access underlaying shash | Huang Ying | 2009-08-06 | 1 | -0/+17
    cryptd_alloc_ahash() will allocate a cryptd-ed ahash for the specified algorithm name. The newly allocated tfm is guaranteed to be a cryptd-ed ahash, so the underlying shash can be obtained via cryptd_ahash_child().
    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
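    A hedged sketch of the intended usage; the flags, the algorithm name and the exact child type are assumptions based on the description above:

        #include <crypto/cryptd.h>
        #include <linux/err.h>

        static int cryptd_ahash_example(void)
        {
            struct cryptd_ahash *async_tfm;
            struct crypto_shash *child;

            async_tfm = cryptd_alloc_ahash("sha1", 0, 0);  /* wraps it as "cryptd(sha1)" */
            if (IS_ERR(async_tfm))
                return PTR_ERR(async_tfm);

            child = cryptd_ahash_child(async_tfm);  /* the underlying shash */
            /* ... submit async ahash requests, or use the child synchronously ... */

            cryptd_free_ahash(async_tfm);
            return 0;
        }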
| * crypto: api - Fix aligned ctx helper | Herbert Xu | 2009-07-24 | 1 | -6/+2
    The aligned ctx helper was using a bogus alignment value that was one off from the correct value. Fortunately the current users do not require anything beyond the natural alignment of the platform, so this hasn't caused a problem. This patch fixes that and also removes the unnecessary minimum check, since if the alignment is less than the natural alignment the subsequent ALIGN operation should be a no-op.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
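    To illustrate the off-by-one (a sketch, not the patch itself): an alignmask of 15 means 16-byte alignment, so the value handed to the alignment macro must be mask + 1.

        #include <linux/kernel.h>
        #include <linux/crypto.h>

        static void *example_aligned_ctx(struct crypto_tfm *tfm)
        {
            unsigned int mask = crypto_tfm_alg_alignmask(tfm);  /* e.g. 15 */

            /* Aligning to "mask" rather than "mask + 1" is the bug being fixed. */
            return PTR_ALIGN(crypto_tfm_ctx(tfm), mask + 1);
        }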
| * crypto: sha512_generic - Use 64-bit counters | Herbert Xu | 2009-07-22 | 1 | -3/+3
    This patch replaces the 32-bit counters in sha512_generic with 64-bit counters. It also switches the bit count to the simpler byte count.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: sha512 - Export struct sha512_state | Herbert Xu | 2009-07-22 | 1 | -0/+6
    This patch renames struct sha512_ctx and exports it as struct sha512_state so that other sha512 implementations can use it as the reference structure for exporting their state.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
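    Together with the 64-bit counter change in the entry above, the shared state ends up looking roughly like this (field order recalled from include/crypto/sha.h, so treat as a sketch):

        struct sha512_state {
            u64 state[SHA512_DIGEST_SIZE / 8];  /* 8 x 64-bit chaining values */
            u64 count[2];                       /* 128-bit byte count */
            u8 buf[SHA512_BLOCK_SIZE];          /* partial input block */
        };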
| * crypto: shash - Fix digest size offset | Herbert Xu | 2009-07-15 | 1 | -1/+2
    When an shash algorithm is exported as ahash, ahash will access its digest size through hash_alg_common. That's why the shash layout needs to match hash_alg_common. This wasn't the case because the alignments weren't identical. This patch fixes the problem.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: ahash - Add unaligned handling and default operations | Herbert Xu | 2009-07-15 | 2 | -15/+14
    This patch exports the finup operation where available and adds a default finup operation for ahash. The operations final, finup and digest will now also deal with unaligned result pointers by copying the result. Finally, the export/import operations will now be exported too.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: ahash - Remove old_ahash_alg | Herbert Xu | 2009-07-14 | 2 | -8/+1
    Now that all ahash implementations have been converted to the new ahash type, we can remove old_ahash_alg and its associated support.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: crypto4xx - Switch to new style ahash | Herbert Xu | 2009-07-14 | 1 | -0/+6
    This patch changes crypto4xx to use the new style ahash type. In particular, we now use ahash_alg to define ahash algorithms instead of crypto_alg. This is achieved by introducing a union that encapsulates the new type and the existing crypto_alg structure. They're told apart through a u32 field containing the type value.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: cryptd - Switch to template create API | Herbert Xu | 2009-07-14 | 1 | -0/+3
    This patch changes cryptd to use the template->create function instead of alloc, in anticipation of the switch to new style ahash algorithms.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: hash - Add helpers to free spawns | Herbert Xu | 2009-07-14 | 1 | -0/+10
    This patch adds the helpers crypto_drop_ahash and crypto_drop_shash so that these spawns can be dropped without ugly casts.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: ahash - Add instance/spawn support | Herbert Xu | 2009-07-14 | 1 | -0/+51
    This patch adds support for creating ahash instances and using ahash as spawns.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: ahash - Convert to new style algorithms | Herbert Xu | 2009-07-14 | 2 | -34/+86
    This patch converts crypto_ahash to the new style. The old ahash algorithm type is retained until the existing ahash implementations are also converted. All ahash users will automatically get the new crypto_ahash type.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Remove frontend argument from extsize/init_tfm | Herbert Xu | 2009-07-14 | 1 | -4/+2
    As the extsize and init_tfm functions belong to the frontend, the frontend argument is superfluous.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: ahash - Add crypto_ahash_set_reqsize | Herbert Xu | 2009-07-14 | 1 | -0/+6
    This patch adds the helper crypto_ahash_set_reqsize so that implementations do not directly access the crypto_ahash structure.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
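    A hedged sketch of a driver declaring its per-request context size at tfm-init time (the init hook shape and the context struct are assumptions, not code from this patch):

        #include <crypto/internal/hash.h>

        struct my_ahash_req_ctx {      /* hypothetical per-request state */
            u8 buffer[64];
        };

        static int my_ahash_cra_init(struct crypto_tfm *tfm)
        {
            crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
                                     sizeof(struct my_ahash_req_ctx));
            return 0;
        }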
| * crypto: shash - Export async functions | Herbert Xu | 2009-07-14 | 1 | -0/+3
    This patch exports the async functions so that they can be reused by cryptd when it switches over to using shash.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: shash - Make descsize a run-time attribute | Herbert Xu | 2009-07-14 | 2 | -2/+3
    This patch changes descsize to a run-time attribute so that implementations can change it in their init functions.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: async - Use kzfree for requests | Herbert Xu | 2009-07-12 | 1 | -1/+1
    This patch changes the kfree call to kzfree for async requests. As the request may contain sensitive data it needs to be zeroed before it can be reallocated by others.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: sha256_generic - Add export/import support | Herbert Xu | 2009-07-11 | 1 | -0/+6
    This patch adds export/import support to sha256_generic. The exported type is defined by struct sha256_state, which is basically the entire descriptor state of sha256_generic.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: sha1_generic - Add export/import support | Herbert Xu | 2009-07-11 | 1 | -0/+8
    This patch adds export/import support to sha1_generic. The exported type is defined by struct sha1_state, which is basically the entire descriptor state of sha1_generic.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
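    For both this and the sha256 change above, the exported type is just the partial-hash state; for SHA-1 it looks roughly like this (recalled from include/crypto/sha.h, so treat as a sketch):

        struct sha1_state {
            u64 count;                        /* bytes hashed so far */
            u32 state[SHA1_DIGEST_SIZE / 4];  /* 5 x 32-bit chaining values */
            u8 buffer[SHA1_BLOCK_SIZE];       /* partial input block */
        };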
| * crypto: shash - Export/import hash state only | Herbert Xu | 2009-07-11 | 1 | -4/+14
    This patch replaces the full descriptor export with an export of the partial hash state. This allows the use of a consistent export format across all implementations of a given algorithm. This is useful because a number of cases require the use of the partial hash state, e.g., PadLock can use the SHA1 hash state to get around the fact that it can only hash contiguous data chunks.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
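    A minimal sketch of what the consistent format buys: a digest started on one SHA-1 implementation can be resumed on another (descriptor setup omitted; function and parameter names here are illustrative):

        #include <crypto/hash.h>
        #include <crypto/sha.h>

        static int resume_on_other_impl(struct shash_desc *src, struct shash_desc *dst,
                                        const u8 *rest, unsigned int len, u8 *out)
        {
            struct sha1_state state;
            int err;

            err = crypto_shash_export(src, &state);   /* partial hash state out */
            if (err)
                return err;

            err = crypto_shash_import(dst, &state);   /* load it into another tfm's desc */
            if (err)
                return err;

            return crypto_shash_finup(dst, rest, len, out);  /* finish there */
        }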
| * crypto: shash - Add shash_instance_ctx | Herbert Xu | 2009-07-09 | 1 | -0/+5
    This patch adds the helper shash_instance_ctx which is the shash analogue of crypto_instance_ctx.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: shash - Add __crypto_shash_cast | Herbert Xu | 2009-07-08 | 1 | -0/+5
    This patch adds __crypto_shash_cast which turns a crypto_tfm into crypto_shash. It's analogous to the other __crypto_*_cast functions. It hasn't been needed until now since no existing shash algorithms have had an init function.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: shash - Add crypto_shash_ctx_aligned | Herbert Xu | 2009-07-08 | 1 | -0/+5
    This patch adds crypto_shash_ctx_aligned which will be needed by hmac after its conversion to shash.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: shash - Add shash_register_instance | Herbert Xu | 2009-07-08 | 1 | -1/+2
    This patch adds shash_register_instance so that shash instances can be registered without bypassing the shash checks applied to normal algorithms.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: shash - Add shash_attr_alg2 helper | Herbert Xu | 2009-07-08 | 1 | -0/+2
    This patch adds the helper shash_attr_alg2 which locates a shash algorithm based on the information in the given attribute.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add crypto_attr_alg2 helper | Herbert Xu | 2009-07-08 | 1 | -1/+10
    This patch adds the helper crypto_attr_alg2 which is similar to crypto_attr_alg but takes an extra frontend argument. This is intended to be used by new style algorithm types such as shash.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: shash - Add spawn support | Herbert Xu | 2009-07-08 | 1 | -0/+14
    This patch adds the functions needed to create and use shash spawns, i.e., to use shash algorithms in a template.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add new style spawn support | Herbert Xu | 2009-07-08 | 1 | -0/+6
    This patch modifies the spawn infrastructure to support new style algorithms like shash. In particular, this means storing the frontend type in the spawn and using crypto_create_tfm to allocate the tfm.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: shash - Add shash_instance | Herbert Xu | 2009-07-08 | 1 | -0/+26
    This patch adds shash_instance and the associated alloc/free functions. This is meant to be an instance with a shash algorithm under it. Note that the instance itself doesn't have to be shash.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add crypto_alloc_instance2 | Herbert Xu | 2009-07-07 | 1 | -0/+2
    This patch adds a new argument to crypto_alloc_instance which sets aside some space before the instance for use by algorithms such as shash that place type-specific data before crypto_alg. For compatibility the function has been renamed so that existing users aren't affected.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add new template create function | Herbert Xu | 2009-07-07 | 1 | -0/+1
    This patch introduces the template->create function intended to replace the existing alloc function. The intention is for create to handle the registration directly, whereas currently the caller of alloc has to handle the registration. This allows type-specific code to be run prior to registration.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: skcipher - Fix skcipher_dequeue_givcrypt NULL test | Herbert Xu | 2009-08-29 | 2 | -2/+3
    As struct skcipher_givcrypt_request includes struct crypto_request at a non-zero offset, testing for NULL after converting the pointer returned by crypto_dequeue_request does not work. This can result in IPsec crashes when the queue is depleted. This patch fixes it by doing the pointer conversion only when the return value is non-NULL. In particular, we create a new function __crypto_dequeue_request that does the pointer conversion.
    Reported-by: Brad Bosch <bradbosch@comcast.net>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
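    The bug class, illustrated with a sketch of the check-before-convert pattern the fix enforces (not the patch itself; the member path is an assumption):

        #include <crypto/algapi.h>
        #include <crypto/skcipher.h>

        static struct skcipher_givcrypt_request *dequeue_givcrypt(struct crypto_queue *queue)
        {
            struct crypto_async_request *req = crypto_dequeue_request(queue);

            /* container_of() on NULL yields a bogus non-NULL pointer when the
             * embedded member sits at a non-zero offset, so test req first. */
            return req ? container_of(req, struct skcipher_givcrypt_request,
                                      creq.base) : NULL;
        }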
* crypto: zlib - New zlib crypto module, using pcomp | Geert Uytterhoeven | 2009-03-04 | 1 | -0/+20
    Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
    Cc: James Morris <jmorris@namei.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: compress - Add pcomp interface | Geert Uytterhoeven | 2009-03-04 | 2 | -0/+153
    The current "comp" crypto interface supports one-shot (de)compression only, i.e. the whole data buffer to be (de)compressed must be passed at once, and the whole (de)compressed data buffer will be received at once. In several use-cases (e.g. compressed file systems that store files in big compressed blocks), this workflow is not suitable. Furthermore, the "comp" type doesn't provide for the configuration of (de)compression parameters, and always allocates workspace memory for both compression and decompression, which may waste memory.
    To solve this, add a "pcomp" partial (de)compression interface that provides the following operations:
      - crypto_compress_{init,update,final}() for compression,
      - crypto_decompress_{init,update,final}() for decompression,
      - crypto_{,de}compress_setup(), to configure (de)compression parameters (incl. allocating workspace memory).
    The (de)compression methods take a struct comp_request, which was modeled after the z_stream object in zlib and contains buffer pointer and length pairs for input and output. The setup methods take an opaque parameter pointer and length pair. Parameters are supposed to be encoded using netlink attributes, whose meanings depend on the actual (name of the) (de)compression algorithm.
    Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
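    A hedged sketch of the shape of the new interface; the comp_request field names, return-value conventions and the netlink parameter encoding are assumptions glossed over here, and a real user must first pass algorithm parameters via crypto_compress_setup():

        #include <crypto/compress.h>
        #include <linux/err.h>

        static int pcomp_shape_example(const void *in, unsigned int inlen,
                                       void *out, unsigned int outlen)
        {
            struct crypto_pcomp *tfm;
            struct comp_request req;
            int err;

            tfm = crypto_alloc_pcomp("zlib", 0, 0);
            if (IS_ERR(tfm))
                return PTR_ERR(tfm);

            /* crypto_compress_setup(tfm, params, paramlen) would go here; the
             * netlink-encoded parameters are omitted from this sketch. */

            err = crypto_compress_init(tfm);
            if (err < 0)
                goto out;

            req.next_in   = in;
            req.avail_in  = inlen;
            req.next_out  = out;
            req.avail_out = outlen;

            err = crypto_compress_update(tfm, &req);  /* may be called repeatedly */
            if (err < 0)
                goto out;

            err = crypto_compress_final(tfm, &req);
        out:
            crypto_free_pcomp(tfm);
            return err < 0 ? err : 0;
        }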
* crypto: api - Use dedicated workqueue for crypto subsystem | Huang Ying | 2009-02-19 | 1 | -0/+7
    A dedicated workqueue named kcrypto_wq is created to be used by the crypto subsystem. The system-shared keventd_wq is not suitable for encryption/decryption because of a potential starvation problem.
    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - Add crypto_shash_blocksize | Herbert Xu | 2009-02-18 | 1 | -0/+5
    This function is needed by algorithms that don't know their own block size, e.g., in s390 where the code is common between multiple versions of SHA.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>