cipher-aes-0.2.11/0000755000000000000000000000000012541525177012012 5ustar0000000000000000cipher-aes-0.2.11/cipher-aes.cabal0000644000000000000000000000620212541525177015016 0ustar0000000000000000Name: cipher-aes Version: 0.2.11 Description: Fast AES cipher implementation with advanced modes of operation. . The modes of operation available are ECB (Electronic Code Book), CBC (Cipher Block Chaining), CTR (Counter), XTS (XEX with ciphertext stealing), GCM (Galois Counter Mode). . The AES implementation uses AES-NI when available (on x86 and x86-64 architectures), but falls back gracefully to a software C implementation. . The software implementation uses S-boxes, which might suffer from cache-timing issues. However, do note that most other known software implementations, including very popular ones (OpenSSL, GnuTLS), use similar implementations. If it matters for your case, you should make sure you have AES-NI available, or you'll need to use a different implementation. . License: BSD3 License-file: LICENSE Copyright: Vincent Hanquez Author: Vincent Hanquez Maintainer: Vincent Hanquez Synopsis: Fast AES cipher implementation with advanced modes of operation Category: Cryptography Build-Type: Simple Homepage: https://github.com/vincenthz/hs-cipher-aes Cabal-Version: >=1.8 Extra-Source-Files: Tests/*.hs cbits/*.h cbits/aes_x86ni_impl.c Flag support_aesni Description: allow compilation with AESNI on systems and architectures that support it Default: True Library Build-Depends: base >= 4 && < 5 , bytestring , byteable , securemem >= 0.1.2 , crypto-cipher-types >= 0.0.6 && < 0.1 Exposed-modules: Crypto.Cipher.AES ghc-options: -Wall -optc-O3 -fno-cse -fwarn-tabs C-sources: cbits/aes_generic.c cbits/aes.c cbits/gf.c cbits/cpu.c if flag(support_aesni) && (os(linux) || os(freebsd)) && (arch(i386) || arch(x86_64)) CC-options: -mssse3 -maes -mpclmul -DWITH_AESNI C-sources: cbits/aes_x86ni.c Test-Suite test-cipher-aes type: exitcode-stdio-1.0 hs-source-dirs: Tests Main-Is: 
Tests.hs Build-depends: base >= 4 && < 5 , cipher-aes , crypto-cipher-types >= 0.0.6 , crypto-cipher-tests >= 0.0.8 , bytestring , byteable , QuickCheck >= 2 , test-framework >= 0.3.3 , test-framework-quickcheck2 >= 0.2.9 Benchmark bench-cipher-aes hs-source-dirs: Benchmarks Main-Is: Benchmarks.hs type: exitcode-stdio-1.0 Build-depends: base >= 4 && < 5 , bytestring , cipher-aes , crypto-cipher-types >= 0.0.6 , crypto-cipher-benchmarks >= 0.0.4 , criterion , mtl source-repository head type: git location: https://github.com/vincenthz/hs-cipher-aes cipher-aes-0.2.11/LICENSE0000644000000000000000000000273112541525177013022 0ustar0000000000000000Copyright (c) 2008-2013 Vincent Hanquez All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the author nor the names of his contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. cipher-aes-0.2.11/Setup.hs0000644000000000000000000000005612541525177013447 0ustar0000000000000000import Distribution.Simple main = defaultMain cipher-aes-0.2.11/Benchmarks/0000755000000000000000000000000012541525177014067 5ustar0000000000000000cipher-aes-0.2.11/Benchmarks/Benchmarks.hs0000644000000000000000000000033712541525177016503 0ustar0000000000000000import Crypto.Cipher.Benchmarks import Crypto.Cipher.AES (AES128, AES192, AES256) main = defaultMain [GBlockCipher (undefined :: AES128) ,GBlockCipher (undefined :: AES192) ,GBlockCipher (undefined :: AES256)] cipher-aes-0.2.11/cbits/0000755000000000000000000000000012541525177013116 5ustar0000000000000000cipher-aes-0.2.11/cbits/aes.c0000644000000000000000000006020312541525177014033 0ustar0000000000000000/* * Copyright (c) 2012 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. 
Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "cpu.h" #include "aes.h" #include "aes_generic.h" #include "bitfn.h" #include <stdint.h> #include <string.h> #include "gf.h" #include "aes_x86ni.h" void aes_generic_encrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks); void aes_generic_decrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks); void aes_generic_encrypt_cbc(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks); void aes_generic_decrypt_cbc(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks); void aes_generic_encrypt_ctr(uint8_t *output, aes_key *key, aes_block *iv, uint8_t *input, uint32_t length); void aes_generic_encrypt_xts(aes_block *output, aes_key *k1, aes_key *k2, aes_block *dataunit, uint32_t spoint, aes_block *input, uint32_t nb_blocks); void aes_generic_decrypt_xts(aes_block *output, aes_key *k1, aes_key *k2, aes_block *dataunit, uint32_t spoint, aes_block *input, uint32_t nb_blocks); void aes_generic_gcm_encrypt(uint8_t *output, aes_gcm *gcm, aes_key 
*key, uint8_t *input, uint32_t length); void aes_generic_gcm_decrypt(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length); void aes_generic_ocb_encrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length); void aes_generic_ocb_decrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length); enum { /* init */ INIT_128, INIT_192, INIT_256, /* single block */ ENCRYPT_BLOCK_128, ENCRYPT_BLOCK_192, ENCRYPT_BLOCK_256, DECRYPT_BLOCK_128, DECRYPT_BLOCK_192, DECRYPT_BLOCK_256, /* ecb */ ENCRYPT_ECB_128, ENCRYPT_ECB_192, ENCRYPT_ECB_256, DECRYPT_ECB_128, DECRYPT_ECB_192, DECRYPT_ECB_256, /* cbc */ ENCRYPT_CBC_128, ENCRYPT_CBC_192, ENCRYPT_CBC_256, DECRYPT_CBC_128, DECRYPT_CBC_192, DECRYPT_CBC_256, /* ctr */ ENCRYPT_CTR_128, ENCRYPT_CTR_192, ENCRYPT_CTR_256, /* xts */ ENCRYPT_XTS_128, ENCRYPT_XTS_192, ENCRYPT_XTS_256, DECRYPT_XTS_128, DECRYPT_XTS_192, DECRYPT_XTS_256, /* gcm */ ENCRYPT_GCM_128, ENCRYPT_GCM_192, ENCRYPT_GCM_256, DECRYPT_GCM_128, DECRYPT_GCM_192, DECRYPT_GCM_256, /* ocb */ ENCRYPT_OCB_128, ENCRYPT_OCB_192, ENCRYPT_OCB_256, DECRYPT_OCB_128, DECRYPT_OCB_192, DECRYPT_OCB_256, }; void *branch_table[] = { /* INIT */ [INIT_128] = aes_generic_init, [INIT_192] = aes_generic_init, [INIT_256] = aes_generic_init, /* BLOCK */ [ENCRYPT_BLOCK_128] = aes_generic_encrypt_block, [ENCRYPT_BLOCK_192] = aes_generic_encrypt_block, [ENCRYPT_BLOCK_256] = aes_generic_encrypt_block, [DECRYPT_BLOCK_128] = aes_generic_decrypt_block, [DECRYPT_BLOCK_192] = aes_generic_decrypt_block, [DECRYPT_BLOCK_256] = aes_generic_decrypt_block, /* ECB */ [ENCRYPT_ECB_128] = aes_generic_encrypt_ecb, [ENCRYPT_ECB_192] = aes_generic_encrypt_ecb, [ENCRYPT_ECB_256] = aes_generic_encrypt_ecb, [DECRYPT_ECB_128] = aes_generic_decrypt_ecb, [DECRYPT_ECB_192] = aes_generic_decrypt_ecb, [DECRYPT_ECB_256] = aes_generic_decrypt_ecb, /* CBC */ [ENCRYPT_CBC_128] = aes_generic_encrypt_cbc, [ENCRYPT_CBC_192] = aes_generic_encrypt_cbc, 
[ENCRYPT_CBC_256] = aes_generic_encrypt_cbc, [DECRYPT_CBC_128] = aes_generic_decrypt_cbc, [DECRYPT_CBC_192] = aes_generic_decrypt_cbc, [DECRYPT_CBC_256] = aes_generic_decrypt_cbc, /* CTR */ [ENCRYPT_CTR_128] = aes_generic_encrypt_ctr, [ENCRYPT_CTR_192] = aes_generic_encrypt_ctr, [ENCRYPT_CTR_256] = aes_generic_encrypt_ctr, /* XTS */ [ENCRYPT_XTS_128] = aes_generic_encrypt_xts, [ENCRYPT_XTS_192] = aes_generic_encrypt_xts, [ENCRYPT_XTS_256] = aes_generic_encrypt_xts, [DECRYPT_XTS_128] = aes_generic_decrypt_xts, [DECRYPT_XTS_192] = aes_generic_decrypt_xts, [DECRYPT_XTS_256] = aes_generic_decrypt_xts, /* GCM */ [ENCRYPT_GCM_128] = aes_generic_gcm_encrypt, [ENCRYPT_GCM_192] = aes_generic_gcm_encrypt, [ENCRYPT_GCM_256] = aes_generic_gcm_encrypt, [DECRYPT_GCM_128] = aes_generic_gcm_decrypt, [DECRYPT_GCM_192] = aes_generic_gcm_decrypt, [DECRYPT_GCM_256] = aes_generic_gcm_decrypt, /* OCB */ [ENCRYPT_OCB_128] = aes_generic_ocb_encrypt, [ENCRYPT_OCB_192] = aes_generic_ocb_encrypt, [ENCRYPT_OCB_256] = aes_generic_ocb_encrypt, [DECRYPT_OCB_128] = aes_generic_ocb_decrypt, [DECRYPT_OCB_192] = aes_generic_ocb_decrypt, [DECRYPT_OCB_256] = aes_generic_ocb_decrypt, }; typedef void (*init_f)(aes_key *, uint8_t *, uint8_t); typedef void (*ecb_f)(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks); typedef void (*cbc_f)(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks); typedef void (*ctr_f)(uint8_t *output, aes_key *key, aes_block *iv, uint8_t *input, uint32_t length); typedef void (*xts_f)(aes_block *output, aes_key *k1, aes_key *k2, aes_block *dataunit, uint32_t spoint, aes_block *input, uint32_t nb_blocks); typedef void (*gcm_crypt_f)(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length); typedef void (*ocb_crypt_f)(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length); typedef void (*block_f)(aes_block *output, aes_key *key, aes_block *input); #ifdef WITH_AESNI #define 
GET_INIT(strength) \ ((init_f) (branch_table[INIT_128 + strength])) #define GET_ECB_ENCRYPT(strength) \ ((ecb_f) (branch_table[ENCRYPT_ECB_128 + strength])) #define GET_ECB_DECRYPT(strength) \ ((ecb_f) (branch_table[DECRYPT_ECB_128 + strength])) #define GET_CBC_ENCRYPT(strength) \ ((cbc_f) (branch_table[ENCRYPT_CBC_128 + strength])) #define GET_CBC_DECRYPT(strength) \ ((cbc_f) (branch_table[DECRYPT_CBC_128 + strength])) #define GET_CTR_ENCRYPT(strength) \ ((ctr_f) (branch_table[ENCRYPT_CTR_128 + strength])) #define GET_XTS_ENCRYPT(strength) \ ((xts_f) (branch_table[ENCRYPT_XTS_128 + strength])) #define GET_XTS_DECRYPT(strength) \ ((xts_f) (branch_table[DECRYPT_XTS_128 + strength])) #define GET_GCM_ENCRYPT(strength) \ ((gcm_crypt_f) (branch_table[ENCRYPT_GCM_128 + strength])) #define GET_GCM_DECRYPT(strength) \ ((gcm_crypt_f) (branch_table[DECRYPT_GCM_128 + strength])) #define GET_OCB_ENCRYPT(strength) \ ((ocb_crypt_f) (branch_table[ENCRYPT_OCB_128 + strength])) #define GET_OCB_DECRYPT(strength) \ ((ocb_crypt_f) (branch_table[DECRYPT_OCB_128 + strength])) #define aes_encrypt_block(o,k,i) \ (((block_f) (branch_table[ENCRYPT_BLOCK_128 + k->strength]))(o,k,i)) #define aes_decrypt_block(o,k,i) \ (((block_f) (branch_table[DECRYPT_BLOCK_128 + k->strength]))(o,k,i)) #else #define GET_INIT(strength) aes_generic_init #define GET_ECB_ENCRYPT(strength) aes_generic_encrypt_ecb #define GET_ECB_DECRYPT(strength) aes_generic_decrypt_ecb #define GET_CBC_ENCRYPT(strength) aes_generic_encrypt_cbc #define GET_CBC_DECRYPT(strength) aes_generic_decrypt_cbc #define GET_CTR_ENCRYPT(strength) aes_generic_encrypt_ctr #define GET_XTS_ENCRYPT(strength) aes_generic_encrypt_xts #define GET_XTS_DECRYPT(strength) aes_generic_decrypt_xts #define GET_GCM_ENCRYPT(strength) aes_generic_gcm_encrypt #define GET_GCM_DECRYPT(strength) aes_generic_gcm_decrypt #define GET_OCB_ENCRYPT(strength) aes_generic_ocb_encrypt #define GET_OCB_DECRYPT(strength) aes_generic_ocb_decrypt #define aes_encrypt_block(o,k,i) 
aes_generic_encrypt_block(o,k,i) #define aes_decrypt_block(o,k,i) aes_generic_decrypt_block(o,k,i) #endif #if defined(ARCH_X86) && defined(WITH_AESNI) void initialize_table_ni(int aesni, int pclmul) { if (!aesni) return; branch_table[INIT_128] = aes_ni_init; branch_table[INIT_256] = aes_ni_init; branch_table[ENCRYPT_BLOCK_128] = aes_ni_encrypt_block128; branch_table[DECRYPT_BLOCK_128] = aes_ni_decrypt_block128; branch_table[ENCRYPT_BLOCK_256] = aes_ni_encrypt_block256; branch_table[DECRYPT_BLOCK_256] = aes_ni_decrypt_block256; /* ECB */ branch_table[ENCRYPT_ECB_128] = aes_ni_encrypt_ecb128; branch_table[DECRYPT_ECB_128] = aes_ni_decrypt_ecb128; branch_table[ENCRYPT_ECB_256] = aes_ni_encrypt_ecb256; branch_table[DECRYPT_ECB_256] = aes_ni_decrypt_ecb256; /* CBC */ branch_table[ENCRYPT_CBC_128] = aes_ni_encrypt_cbc128; branch_table[DECRYPT_CBC_128] = aes_ni_decrypt_cbc128; branch_table[ENCRYPT_CBC_256] = aes_ni_encrypt_cbc256; branch_table[DECRYPT_CBC_256] = aes_ni_decrypt_cbc256; /* CTR */ branch_table[ENCRYPT_CTR_128] = aes_ni_encrypt_ctr128; branch_table[ENCRYPT_CTR_256] = aes_ni_encrypt_ctr256; /* XTS */ branch_table[ENCRYPT_XTS_128] = aes_ni_encrypt_xts128; branch_table[ENCRYPT_XTS_256] = aes_ni_encrypt_xts256; /* GCM */ branch_table[ENCRYPT_GCM_128] = aes_ni_gcm_encrypt128; branch_table[ENCRYPT_GCM_256] = aes_ni_gcm_encrypt256; /* OCB */ /* branch_table[ENCRYPT_OCB_128] = aes_ni_ocb_encrypt128; branch_table[ENCRYPT_OCB_256] = aes_ni_ocb_encrypt256; */ } #endif void aes_initkey(aes_key *key, uint8_t *origkey, uint8_t size) { switch (size) { case 16: key->nbr = 10; key->strength = 0; break; case 24: key->nbr = 12; key->strength = 1; break; case 32: key->nbr = 14; key->strength = 2; break; } #if defined(ARCH_X86) && defined(WITH_AESNI) initialize_hw(initialize_table_ni); #endif init_f _init = GET_INIT(key->strength); _init(key, origkey, size); } void aes_encrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks) { ecb_f e = 
GET_ECB_ENCRYPT(key->strength); e(output, key, input, nb_blocks); } void aes_decrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks) { ecb_f d = GET_ECB_DECRYPT(key->strength); d(output, key, input, nb_blocks); } void aes_encrypt_cbc(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks) { cbc_f e = GET_CBC_ENCRYPT(key->strength); e(output, key, iv, input, nb_blocks); } void aes_decrypt_cbc(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks) { cbc_f d = GET_CBC_DECRYPT(key->strength); d(output, key, iv, input, nb_blocks); } void aes_gen_ctr(aes_block *output, aes_key *key, const aes_block *iv, uint32_t nb_blocks) { aes_block block; /* preload IV in block */ block128_copy(&block, iv); for ( ; nb_blocks-- > 0; output++, block128_inc_be(&block)) { aes_encrypt_block(output, key, &block); } } void aes_gen_ctr_cont(aes_block *output, aes_key *key, aes_block *iv, uint32_t nb_blocks) { aes_block block; /* preload IV in block */ block128_copy(&block, iv); for ( ; nb_blocks-- > 0; output++, block128_inc_be(&block)) { aes_encrypt_block(output, key, &block); } /* copy back the IV */ block128_copy(iv, &block); } void aes_encrypt_ctr(uint8_t *output, aes_key *key, aes_block *iv, uint8_t *input, uint32_t len) { ctr_f e = GET_CTR_ENCRYPT(key->strength); e(output, key, iv, input, len); } void aes_encrypt_xts(aes_block *output, aes_key *k1, aes_key *k2, aes_block *dataunit, uint32_t spoint, aes_block *input, uint32_t nb_blocks) { xts_f e = GET_XTS_ENCRYPT(k1->strength); e(output, k1, k2, dataunit, spoint, input, nb_blocks); } void aes_decrypt_xts(aes_block *output, aes_key *k1, aes_key *k2, aes_block *dataunit, uint32_t spoint, aes_block *input, uint32_t nb_blocks) { aes_generic_decrypt_xts(output, k1, k2, dataunit, spoint, input, nb_blocks); } void aes_gcm_encrypt(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length) { gcm_crypt_f e = GET_GCM_ENCRYPT(key->strength); 
e(output, gcm, key, input, length); } void aes_gcm_decrypt(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length) { gcm_crypt_f d = GET_GCM_DECRYPT(key->strength); d(output, gcm, key, input, length); } void aes_ocb_encrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length) { ocb_crypt_f e = GET_OCB_ENCRYPT(key->strength); e(output, ocb, key, input, length); } void aes_ocb_decrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length) { ocb_crypt_f d = GET_OCB_DECRYPT(key->strength); d(output, ocb, key, input, length); } static void gcm_ghash_add(aes_gcm *gcm, block128 *b) { block128_xor(&gcm->tag, b); gf_mul(&gcm->tag, &gcm->h); } void aes_gcm_init(aes_gcm *gcm, aes_key *key, uint8_t *iv, uint32_t len) { gcm->length_aad = 0; gcm->length_input = 0; block128_zero(&gcm->h); block128_zero(&gcm->tag); block128_zero(&gcm->iv); /* prepare H : encrypt_K(0^128) */ aes_encrypt_block(&gcm->h, key, &gcm->h); if (len == 12) { block128_copy_bytes(&gcm->iv, iv, 12); gcm->iv.b[15] = 0x01; } else { uint32_t origlen = len << 3; int i; for (; len >= 16; len -= 16, iv += 16) { block128_xor(&gcm->iv, (block128 *) iv); gf_mul(&gcm->iv, &gcm->h); } if (len > 0) { block128_xor_bytes(&gcm->iv, iv, len); gf_mul(&gcm->iv, &gcm->h); } for (i = 15; origlen; --i, origlen >>= 8) gcm->iv.b[i] ^= (uint8_t) origlen; gf_mul(&gcm->iv, &gcm->h); } block128_copy(&gcm->civ, &gcm->iv); } void aes_gcm_aad(aes_gcm *gcm, uint8_t *input, uint32_t length) { gcm->length_aad += length; for (; length >= 16; input += 16, length -= 16) { gcm_ghash_add(gcm, (block128 *) input); } if (length > 0) { aes_block tmp; block128_zero(&tmp); block128_copy_bytes(&tmp, input, length); gcm_ghash_add(gcm, &tmp); } } void aes_gcm_finish(uint8_t *tag, aes_gcm *gcm, aes_key *key) { aes_block lblock; int i; /* tag = (tag-1 xor (lenbits(a) | lenbits(c)) ) . 
H */ lblock.q[0] = cpu_to_be64(gcm->length_aad << 3); lblock.q[1] = cpu_to_be64(gcm->length_input << 3); gcm_ghash_add(gcm, &lblock); aes_encrypt_block(&lblock, key, &gcm->iv); block128_xor(&gcm->tag, &lblock); for (i = 0; i < 16; i++) { tag[i] = gcm->tag.b[i]; } } static inline void ocb_block_double(block128 *d, block128 *s) { unsigned int i; uint8_t tmp = s->b[0]; for (i=0; i<15; i++) d->b[i] = (s->b[i] << 1) | (s->b[i+1] >> 7); d->b[15] = (s->b[15] << 1) ^ ((tmp >> 7) * 0x87); } static void ocb_get_L_i(block128 *l, block128 *lis, unsigned int i) { #define L_CACHED 4 i = bitfn_ntz(i); if (i < L_CACHED) { block128_copy(l, &lis[i]); } else { i -= (L_CACHED - 1); block128_copy(l, &lis[L_CACHED - 1]); while (i--) { ocb_block_double(l, l); } } #undef L_CACHED } void aes_ocb_init(aes_ocb *ocb, aes_key *key, uint8_t *iv, uint32_t len) { block128 tmp, nonce, ktop; unsigned char stretch[24]; unsigned bottom, byteshift, bitshift, i; /* we don't accept more than 15 bytes; any extra bytes are ignored. 
*/ if (len > 15) { len = 15; } /* create L*, and L$,L0,L1,L2,L3 */ block128_zero(&tmp); aes_encrypt_block(&ocb->lstar, key, &tmp); ocb_block_double(&ocb->ldollar, &ocb->lstar); ocb_block_double(&ocb->li[0], &ocb->ldollar); ocb_block_double(&ocb->li[1], &ocb->li[0]); ocb_block_double(&ocb->li[2], &ocb->li[1]); ocb_block_double(&ocb->li[3], &ocb->li[2]); /* create stretch from the nonce */ block128_zero(&nonce); memcpy(nonce.b + 4, iv, 12); nonce.b[0] = (unsigned char)(((16 * 8) % 128) << 1); nonce.b[16-12-1] |= 0x01; bottom = nonce.b[15] & 0x3F; nonce.b[15] &= 0xC0; aes_encrypt_block(&ktop, key, &nonce); memcpy(stretch, ktop.b, 16); memcpy(tmp.b, ktop.b + 1, 8); block128_xor(&tmp, &ktop); memcpy(stretch + 16, tmp.b, 8); /* initialize the encryption offset from stretch */ byteshift = bottom / 8; bitshift = bottom % 8; if (bitshift != 0) for (i = 0; i < 16; i++) ocb->offset_enc.b[i] = (stretch[i+byteshift] << bitshift) | (stretch[i+byteshift+1] >> (8-bitshift)); else for (i = 0; i < 16; i++) ocb->offset_enc.b[i] = stretch[i+byteshift]; /* initialize checksum for aad and encryption, and the aad offset */ block128_zero(&ocb->sum_aad); block128_zero(&ocb->sum_enc); block128_zero(&ocb->offset_aad); } void aes_ocb_aad(aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length) { block128 tmp; unsigned int i; for (i=1; i<= length/16; i++, input=input+16) { ocb_get_L_i(&tmp, ocb->li, i); block128_xor(&ocb->offset_aad, &tmp); block128_vxor(&tmp, &ocb->offset_aad, (block128 *) input); aes_encrypt_block(&tmp, key, &tmp); block128_xor(&ocb->sum_aad, &tmp); } length = length % 16; /* Bytes in final block */ if (length > 0) { block128_xor(&ocb->offset_aad, &ocb->lstar); block128_zero(&tmp); block128_copy_bytes(&tmp, input, length); tmp.b[length] = 0x80; block128_xor(&tmp, &ocb->offset_aad); aes_encrypt_block(&tmp, key, &tmp); block128_xor(&ocb->sum_aad, &tmp); } } void aes_ocb_finish(uint8_t *tag, aes_ocb *ocb, aes_key *key) { block128 tmp; block128_vxor(&tmp, &ocb->sum_enc, 
&ocb->offset_enc); block128_xor(&tmp, &ocb->ldollar); aes_encrypt_block((block128 *) tag, key, &tmp); block128_xor((block128 *) tag, &ocb->sum_aad); } void aes_generic_encrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks) { for ( ; nb_blocks-- > 0; input++, output++) { aes_generic_encrypt_block(output, key, input); } } void aes_generic_decrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks) { for ( ; nb_blocks-- > 0; input++, output++) { aes_generic_decrypt_block(output, key, input); } } void aes_generic_encrypt_cbc(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks) { aes_block block; /* preload IV in block */ block128_copy(&block, iv); for ( ; nb_blocks-- > 0; input++, output++) { block128_xor(&block, (block128 *) input); aes_generic_encrypt_block(&block, key, &block); block128_copy((block128 *) output, &block); } } void aes_generic_decrypt_cbc(aes_block *output, aes_key *key, aes_block *ivini, aes_block *input, uint32_t nb_blocks) { aes_block block, blocko; aes_block iv; /* preload IV in block */ block128_copy(&iv, ivini); for ( ; nb_blocks-- > 0; input++, output++) { block128_copy(&block, (block128 *) input); aes_generic_decrypt_block(&blocko, key, &block); block128_vxor((block128 *) output, &blocko, &iv); block128_copy(&iv, &block); } } void aes_generic_encrypt_ctr(uint8_t *output, aes_key *key, aes_block *iv, uint8_t *input, uint32_t len) { aes_block block, o; uint32_t nb_blocks = len / 16; int i; /* preload IV in block */ block128_copy(&block, iv); for ( ; nb_blocks-- > 0; block128_inc_be(&block), output += 16, input += 16) { aes_encrypt_block(&o, key, &block); block128_vxor((block128 *) output, &o, (block128 *) input); } if ((len % 16) != 0) { aes_encrypt_block(&o, key, &block); for (i = 0; i < (len % 16); i++) { *output = ((uint8_t *) &o)[i] ^ *input; output++; input++; } } } void aes_generic_encrypt_xts(aes_block *output, aes_key *k1, aes_key *k2, aes_block 
*dataunit, uint32_t spoint, aes_block *input, uint32_t nb_blocks) { aes_block block, tweak; /* load IV and encrypt it using k2 as the tweak */ block128_copy(&tweak, dataunit); aes_encrypt_block(&tweak, k2, &tweak); /* TO OPTIMISE: this is a really inefficient way to do this */ while (spoint-- > 0) gf_mulx(&tweak); for ( ; nb_blocks-- > 0; input++, output++, gf_mulx(&tweak)) { block128_vxor(&block, input, &tweak); aes_encrypt_block(&block, k1, &block); block128_vxor(output, &block, &tweak); } } void aes_generic_decrypt_xts(aes_block *output, aes_key *k1, aes_key *k2, aes_block *dataunit, uint32_t spoint, aes_block *input, uint32_t nb_blocks) { aes_block block, tweak; /* load IV and encrypt it using k2 as the tweak */ block128_copy(&tweak, dataunit); aes_encrypt_block(&tweak, k2, &tweak); /* TO OPTIMISE: this is a really inefficient way to do this */ while (spoint-- > 0) gf_mulx(&tweak); for ( ; nb_blocks-- > 0; input++, output++, gf_mulx(&tweak)) { block128_vxor(&block, input, &tweak); aes_decrypt_block(&block, k1, &block); block128_vxor(output, &block, &tweak); } } void aes_generic_gcm_encrypt(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length) { aes_block out; gcm->length_input += length; for (; length >= 16; input += 16, output += 16, length -= 16) { block128_inc_be(&gcm->civ); aes_encrypt_block(&out, key, &gcm->civ); block128_xor(&out, (block128 *) input); gcm_ghash_add(gcm, &out); block128_copy((block128 *) output, &out); } if (length > 0) { aes_block tmp; int i; block128_inc_be(&gcm->civ); /* create e(civ) in out */ aes_encrypt_block(&out, key, &gcm->civ); /* initialize tmp with the input and xor it with e(civ) */ block128_zero(&tmp); block128_copy_bytes(&tmp, input, length); block128_xor_bytes(&tmp, out.b, length); gcm_ghash_add(gcm, &tmp); for (i = 0; i < length; i++) { output[i] = tmp.b[i]; } } } void aes_generic_gcm_decrypt(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length) { aes_block out; gcm->length_input += 
length; for (; length >= 16; input += 16, output += 16, length -= 16) { block128_inc_be(&gcm->civ); aes_encrypt_block(&out, key, &gcm->civ); gcm_ghash_add(gcm, (block128 *) input); block128_xor(&out, (block128 *) input); block128_copy((block128 *) output, &out); } if (length > 0) { aes_block tmp; int i; block128_inc_be(&gcm->civ); block128_zero(&tmp); block128_copy_bytes(&tmp, input, length); gcm_ghash_add(gcm, &tmp); aes_encrypt_block(&out, key, &gcm->civ); block128_xor_bytes(&tmp, out.b, length); for (i = 0; i < length; i++) { output[i] = tmp.b[i]; } } } static void ocb_generic_crypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length, int encrypt) { block128 tmp, pad; unsigned int i; for (i = 1; i <= length/16; i++, input += 16, output += 16) { /* Offset_i = Offset_{i-1} xor L_{ntz(i)} */ ocb_get_L_i(&tmp, ocb->li, i); block128_xor(&ocb->offset_enc, &tmp); block128_vxor(&tmp, &ocb->offset_enc, (block128 *) input); if (encrypt) { aes_encrypt_block(&tmp, key, &tmp); block128_vxor((block128 *) output, &ocb->offset_enc, &tmp); block128_xor(&ocb->sum_enc, (block128 *) input); } else { aes_decrypt_block(&tmp, key, &tmp); block128_vxor((block128 *) output, &ocb->offset_enc, &tmp); block128_xor(&ocb->sum_enc, (block128 *) output); } } /* process the last partial block if any */ length = length % 16; if (length > 0) { block128_xor(&ocb->offset_enc, &ocb->lstar); aes_encrypt_block(&pad, key, &ocb->offset_enc); if (encrypt) { block128_zero(&tmp); block128_copy_bytes(&tmp, input, length); tmp.b[length] = 0x80; block128_xor(&ocb->sum_enc, &tmp); block128_xor(&pad, &tmp); memcpy(output, pad.b, length); output += length; } else { block128_copy(&tmp, &pad); block128_copy_bytes(&tmp, input, length); block128_xor(&tmp, &pad); tmp.b[length] = 0x80; memcpy(output, tmp.b, length); block128_xor(&ocb->sum_enc, &tmp); input += length; } } } void aes_generic_ocb_encrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length) { 
ocb_generic_crypt(output, ocb, key, input, length, 1); } void aes_generic_ocb_decrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length) { ocb_generic_crypt(output, ocb, key, input, length, 0); } cipher-aes-0.2.11/cbits/aes.h0000644000000000000000000001016712541525177014044 0ustar0000000000000000/* * Copyright (C) 2008 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
* * AES implementation */ #ifndef AES_H #define AES_H #include <stdint.h> #include "block128.h" typedef block128 aes_block; /* size = 456 */ typedef struct { uint8_t nbr; /* number of rounds: 10 (128), 12 (192), 14 (256) */ uint8_t strength; /* 128 = 0, 192 = 1, 256 = 2 */ uint8_t _padding[6]; uint8_t data[16*14*2]; } aes_key; /* size = 4*16+2*8= 80 */ typedef struct { aes_block tag; aes_block h; aes_block iv; aes_block civ; uint64_t length_aad; uint64_t length_input; } aes_gcm; typedef struct { block128 offset_aad; block128 offset_enc; block128 sum_aad; block128 sum_enc; block128 lstar; block128 ldollar; block128 li[4]; } aes_ocb; /* in bytes: either 16,24,32 */ void aes_initkey(aes_key *ctx, uint8_t *key, uint8_t size); void aes_encrypt(aes_block *output, aes_key *key, aes_block *input); void aes_decrypt(aes_block *output, aes_key *key, aes_block *input); void aes_encrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks); void aes_decrypt_ecb(aes_block *output, aes_key *key, aes_block *input, uint32_t nb_blocks); void aes_encrypt_cbc(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks); void aes_decrypt_cbc(aes_block *output, aes_key *key, aes_block *iv, aes_block *input, uint32_t nb_blocks); void aes_gen_ctr(aes_block *output, aes_key *key, const aes_block *iv, uint32_t nb_blocks); void aes_gen_ctr_cont(aes_block *output, aes_key *key, aes_block *iv, uint32_t nb_blocks); void aes_encrypt_xts(aes_block *output, aes_key *key, aes_key *key2, aes_block *sector, uint32_t spoint, aes_block *input, uint32_t nb_blocks); void aes_decrypt_xts(aes_block *output, aes_key *key, aes_key *key2, aes_block *sector, uint32_t spoint, aes_block *input, uint32_t nb_blocks); void aes_gcm_init(aes_gcm *gcm, aes_key *key, uint8_t *iv, uint32_t len); void aes_gcm_aad(aes_gcm *gcm, uint8_t *input, uint32_t length); void aes_gcm_encrypt(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length); void aes_gcm_decrypt(uint8_t 
*output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length); void aes_gcm_finish(uint8_t *tag, aes_gcm *gcm, aes_key *key); void aes_ocb_init(aes_ocb *ocb, aes_key *key, uint8_t *iv, uint32_t len); void aes_ocb_aad(aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length); void aes_ocb_encrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length); void aes_ocb_decrypt(uint8_t *output, aes_ocb *ocb, aes_key *key, uint8_t *input, uint32_t length); void aes_ocb_finish(uint8_t *tag, aes_ocb *ocb, aes_key *key); #endif cipher-aes-0.2.11/cbits/aes_generic.c0000644000000000000000000005003512541525177015531 0ustar0000000000000000/* * Copyright (C) 2008 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * AES implementation */ #include #include #include "aes.h" #include "bitfn.h" static uint8_t sbox[256] = { 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, 0xe1, 0xf8, 0x98, 0x11, 
0x69, 0xd9, 0x8e, 0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16 }; static uint8_t rsbox[256] = { 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87, 0x34, 0x8e, 0x43, 0x44, 0xc4, 0xde, 0xe9, 0xcb, 0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, 0xee, 0x4c, 0x95, 0x0b, 0x42, 0xfa, 0xc3, 0x4e, 0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, 0x76, 0x5b, 0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25, 0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xd4, 0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92, 0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda, 0x5e, 0x15, 0x46, 0x57, 0xa7, 0x8d, 0x9d, 0x84, 0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, 0xf7, 0xe4, 0x58, 0x05, 0xb8, 0xb3, 0x45, 0x06, 0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, 0xc1, 0xaf, 0xbd, 0x03, 0x01, 0x13, 0x8a, 0x6b, 0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, 0x97, 0xf2, 0xcf, 0xce, 0xf0, 0xb4, 0xe6, 0x73, 0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, 0xe2, 0xf9, 0x37, 0xe8, 0x1c, 0x75, 0xdf, 0x6e, 0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, 0x6f, 0xb7, 0x62, 0x0e, 0xaa, 0x18, 0xbe, 0x1b, 0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, 0x9a, 0xdb, 0xc0, 0xfe, 0x78, 0xcd, 0x5a, 0xf4, 0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, 0xb1, 0x12, 0x10, 0x59, 0x27, 0x80, 0xec, 0x5f, 0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f, 0x93, 0xc9, 0x9c, 0xef, 0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83, 0x53, 0x99, 0x61, 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d }; static uint8_t Rcon[] = { 0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 
0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, }; #define G(a,b,c,d,e,f) { a,b,c,d,e,f } static uint8_t gmtab[256][6] = { G(0x00, 0x00, 0x00, 0x00, 0x00, 0x00), G(0x02, 0x03, 0x09, 0x0b, 0x0d, 0x0e), G(0x04, 0x06, 0x12, 0x16, 0x1a, 0x1c), G(0x06, 0x05, 0x1b, 0x1d, 0x17, 0x12), G(0x08, 0x0c, 0x24, 0x2c, 0x34, 0x38), G(0x0a, 0x0f, 0x2d, 0x27, 0x39, 0x36), G(0x0c, 0x0a, 0x36, 0x3a, 0x2e, 0x24), G(0x0e, 0x09, 0x3f, 0x31, 0x23, 0x2a), G(0x10, 0x18, 0x48, 0x58, 0x68, 0x70), G(0x12, 0x1b, 0x41, 0x53, 0x65, 0x7e), G(0x14, 0x1e, 0x5a, 0x4e, 0x72, 0x6c), G(0x16, 0x1d, 0x53, 0x45, 0x7f, 0x62), G(0x18, 0x14, 0x6c, 0x74, 0x5c, 0x48), G(0x1a, 0x17, 0x65, 0x7f, 0x51, 0x46), G(0x1c, 0x12, 0x7e, 0x62, 0x46, 0x54), G(0x1e, 0x11, 0x77, 0x69, 0x4b, 0x5a), G(0x20, 0x30, 0x90, 0xb0, 0xd0, 0xe0), G(0x22, 0x33, 0x99, 0xbb, 0xdd, 0xee), G(0x24, 0x36, 0x82, 0xa6, 0xca, 0xfc), G(0x26, 0x35, 0x8b, 0xad, 0xc7, 0xf2), G(0x28, 0x3c, 0xb4, 0x9c, 0xe4, 0xd8), G(0x2a, 0x3f, 0xbd, 0x97, 0xe9, 0xd6), G(0x2c, 0x3a, 0xa6, 0x8a, 0xfe, 0xc4), G(0x2e, 0x39, 0xaf, 0x81, 0xf3, 0xca), G(0x30, 0x28, 0xd8, 0xe8, 0xb8, 0x90), G(0x32, 0x2b, 0xd1, 0xe3, 0xb5, 0x9e), G(0x34, 0x2e, 0xca, 0xfe, 0xa2, 0x8c), G(0x36, 0x2d, 0xc3, 0xf5, 0xaf, 0x82), G(0x38, 0x24, 0xfc, 0xc4, 0x8c, 0xa8), G(0x3a, 0x27, 0xf5, 0xcf, 0x81, 0xa6), G(0x3c, 0x22, 0xee, 0xd2, 0x96, 0xb4), G(0x3e, 0x21, 0xe7, 0xd9, 0x9b, 0xba), G(0x40, 0x60, 0x3b, 0x7b, 0xbb, 0xdb), G(0x42, 0x63, 0x32, 0x70, 0xb6, 0xd5), G(0x44, 0x66, 0x29, 0x6d, 0xa1, 0xc7), G(0x46, 0x65, 0x20, 0x66, 0xac, 0xc9), G(0x48, 0x6c, 0x1f, 0x57, 0x8f, 0xe3), G(0x4a, 0x6f, 0x16, 0x5c, 0x82, 0xed), G(0x4c, 0x6a, 0x0d, 0x41, 0x95, 0xff), G(0x4e, 0x69, 0x04, 0x4a, 0x98, 0xf1), G(0x50, 0x78, 0x73, 0x23, 0xd3, 0xab), G(0x52, 0x7b, 0x7a, 0x28, 0xde, 0xa5), G(0x54, 0x7e, 0x61, 0x35, 0xc9, 0xb7), G(0x56, 0x7d, 0x68, 0x3e, 0xc4, 0xb9), G(0x58, 0x74, 0x57, 0x0f, 0xe7, 0x93), G(0x5a, 0x77, 0x5e, 0x04, 0xea, 0x9d), G(0x5c, 0x72, 0x45, 0x19, 0xfd, 0x8f), G(0x5e, 0x71, 0x4c, 
0x12, 0xf0, 0x81), G(0x60, 0x50, 0xab, 0xcb, 0x6b, 0x3b), G(0x62, 0x53, 0xa2, 0xc0, 0x66, 0x35), G(0x64, 0x56, 0xb9, 0xdd, 0x71, 0x27), G(0x66, 0x55, 0xb0, 0xd6, 0x7c, 0x29), G(0x68, 0x5c, 0x8f, 0xe7, 0x5f, 0x03), G(0x6a, 0x5f, 0x86, 0xec, 0x52, 0x0d), G(0x6c, 0x5a, 0x9d, 0xf1, 0x45, 0x1f), G(0x6e, 0x59, 0x94, 0xfa, 0x48, 0x11), G(0x70, 0x48, 0xe3, 0x93, 0x03, 0x4b), G(0x72, 0x4b, 0xea, 0x98, 0x0e, 0x45), G(0x74, 0x4e, 0xf1, 0x85, 0x19, 0x57), G(0x76, 0x4d, 0xf8, 0x8e, 0x14, 0x59), G(0x78, 0x44, 0xc7, 0xbf, 0x37, 0x73), G(0x7a, 0x47, 0xce, 0xb4, 0x3a, 0x7d), G(0x7c, 0x42, 0xd5, 0xa9, 0x2d, 0x6f), G(0x7e, 0x41, 0xdc, 0xa2, 0x20, 0x61), G(0x80, 0xc0, 0x76, 0xf6, 0x6d, 0xad), G(0x82, 0xc3, 0x7f, 0xfd, 0x60, 0xa3), G(0x84, 0xc6, 0x64, 0xe0, 0x77, 0xb1), G(0x86, 0xc5, 0x6d, 0xeb, 0x7a, 0xbf), G(0x88, 0xcc, 0x52, 0xda, 0x59, 0x95), G(0x8a, 0xcf, 0x5b, 0xd1, 0x54, 0x9b), G(0x8c, 0xca, 0x40, 0xcc, 0x43, 0x89), G(0x8e, 0xc9, 0x49, 0xc7, 0x4e, 0x87), G(0x90, 0xd8, 0x3e, 0xae, 0x05, 0xdd), G(0x92, 0xdb, 0x37, 0xa5, 0x08, 0xd3), G(0x94, 0xde, 0x2c, 0xb8, 0x1f, 0xc1), G(0x96, 0xdd, 0x25, 0xb3, 0x12, 0xcf), G(0x98, 0xd4, 0x1a, 0x82, 0x31, 0xe5), G(0x9a, 0xd7, 0x13, 0x89, 0x3c, 0xeb), G(0x9c, 0xd2, 0x08, 0x94, 0x2b, 0xf9), G(0x9e, 0xd1, 0x01, 0x9f, 0x26, 0xf7), G(0xa0, 0xf0, 0xe6, 0x46, 0xbd, 0x4d), G(0xa2, 0xf3, 0xef, 0x4d, 0xb0, 0x43), G(0xa4, 0xf6, 0xf4, 0x50, 0xa7, 0x51), G(0xa6, 0xf5, 0xfd, 0x5b, 0xaa, 0x5f), G(0xa8, 0xfc, 0xc2, 0x6a, 0x89, 0x75), G(0xaa, 0xff, 0xcb, 0x61, 0x84, 0x7b), G(0xac, 0xfa, 0xd0, 0x7c, 0x93, 0x69), G(0xae, 0xf9, 0xd9, 0x77, 0x9e, 0x67), G(0xb0, 0xe8, 0xae, 0x1e, 0xd5, 0x3d), G(0xb2, 0xeb, 0xa7, 0x15, 0xd8, 0x33), G(0xb4, 0xee, 0xbc, 0x08, 0xcf, 0x21), G(0xb6, 0xed, 0xb5, 0x03, 0xc2, 0x2f), G(0xb8, 0xe4, 0x8a, 0x32, 0xe1, 0x05), G(0xba, 0xe7, 0x83, 0x39, 0xec, 0x0b), G(0xbc, 0xe2, 0x98, 0x24, 0xfb, 0x19), G(0xbe, 0xe1, 0x91, 0x2f, 0xf6, 0x17), G(0xc0, 0xa0, 0x4d, 0x8d, 0xd6, 0x76), G(0xc2, 0xa3, 0x44, 0x86, 0xdb, 0x78), G(0xc4, 0xa6, 0x5f, 0x9b, 
0xcc, 0x6a), G(0xc6, 0xa5, 0x56, 0x90, 0xc1, 0x64), G(0xc8, 0xac, 0x69, 0xa1, 0xe2, 0x4e), G(0xca, 0xaf, 0x60, 0xaa, 0xef, 0x40), G(0xcc, 0xaa, 0x7b, 0xb7, 0xf8, 0x52), G(0xce, 0xa9, 0x72, 0xbc, 0xf5, 0x5c), G(0xd0, 0xb8, 0x05, 0xd5, 0xbe, 0x06), G(0xd2, 0xbb, 0x0c, 0xde, 0xb3, 0x08), G(0xd4, 0xbe, 0x17, 0xc3, 0xa4, 0x1a), G(0xd6, 0xbd, 0x1e, 0xc8, 0xa9, 0x14), G(0xd8, 0xb4, 0x21, 0xf9, 0x8a, 0x3e), G(0xda, 0xb7, 0x28, 0xf2, 0x87, 0x30), G(0xdc, 0xb2, 0x33, 0xef, 0x90, 0x22), G(0xde, 0xb1, 0x3a, 0xe4, 0x9d, 0x2c), G(0xe0, 0x90, 0xdd, 0x3d, 0x06, 0x96), G(0xe2, 0x93, 0xd4, 0x36, 0x0b, 0x98), G(0xe4, 0x96, 0xcf, 0x2b, 0x1c, 0x8a), G(0xe6, 0x95, 0xc6, 0x20, 0x11, 0x84), G(0xe8, 0x9c, 0xf9, 0x11, 0x32, 0xae), G(0xea, 0x9f, 0xf0, 0x1a, 0x3f, 0xa0), G(0xec, 0x9a, 0xeb, 0x07, 0x28, 0xb2), G(0xee, 0x99, 0xe2, 0x0c, 0x25, 0xbc), G(0xf0, 0x88, 0x95, 0x65, 0x6e, 0xe6), G(0xf2, 0x8b, 0x9c, 0x6e, 0x63, 0xe8), G(0xf4, 0x8e, 0x87, 0x73, 0x74, 0xfa), G(0xf6, 0x8d, 0x8e, 0x78, 0x79, 0xf4), G(0xf8, 0x84, 0xb1, 0x49, 0x5a, 0xde), G(0xfa, 0x87, 0xb8, 0x42, 0x57, 0xd0), G(0xfc, 0x82, 0xa3, 0x5f, 0x40, 0xc2), G(0xfe, 0x81, 0xaa, 0x54, 0x4d, 0xcc), G(0x1b, 0x9b, 0xec, 0xf7, 0xda, 0x41), G(0x19, 0x98, 0xe5, 0xfc, 0xd7, 0x4f), G(0x1f, 0x9d, 0xfe, 0xe1, 0xc0, 0x5d), G(0x1d, 0x9e, 0xf7, 0xea, 0xcd, 0x53), G(0x13, 0x97, 0xc8, 0xdb, 0xee, 0x79), G(0x11, 0x94, 0xc1, 0xd0, 0xe3, 0x77), G(0x17, 0x91, 0xda, 0xcd, 0xf4, 0x65), G(0x15, 0x92, 0xd3, 0xc6, 0xf9, 0x6b), G(0x0b, 0x83, 0xa4, 0xaf, 0xb2, 0x31), G(0x09, 0x80, 0xad, 0xa4, 0xbf, 0x3f), G(0x0f, 0x85, 0xb6, 0xb9, 0xa8, 0x2d), G(0x0d, 0x86, 0xbf, 0xb2, 0xa5, 0x23), G(0x03, 0x8f, 0x80, 0x83, 0x86, 0x09), G(0x01, 0x8c, 0x89, 0x88, 0x8b, 0x07), G(0x07, 0x89, 0x92, 0x95, 0x9c, 0x15), G(0x05, 0x8a, 0x9b, 0x9e, 0x91, 0x1b), G(0x3b, 0xab, 0x7c, 0x47, 0x0a, 0xa1), G(0x39, 0xa8, 0x75, 0x4c, 0x07, 0xaf), G(0x3f, 0xad, 0x6e, 0x51, 0x10, 0xbd), G(0x3d, 0xae, 0x67, 0x5a, 0x1d, 0xb3), G(0x33, 0xa7, 0x58, 0x6b, 0x3e, 0x99), G(0x31, 0xa4, 0x51, 0x60, 0x33, 
0x97), G(0x37, 0xa1, 0x4a, 0x7d, 0x24, 0x85), G(0x35, 0xa2, 0x43, 0x76, 0x29, 0x8b), G(0x2b, 0xb3, 0x34, 0x1f, 0x62, 0xd1), G(0x29, 0xb0, 0x3d, 0x14, 0x6f, 0xdf), G(0x2f, 0xb5, 0x26, 0x09, 0x78, 0xcd), G(0x2d, 0xb6, 0x2f, 0x02, 0x75, 0xc3), G(0x23, 0xbf, 0x10, 0x33, 0x56, 0xe9), G(0x21, 0xbc, 0x19, 0x38, 0x5b, 0xe7), G(0x27, 0xb9, 0x02, 0x25, 0x4c, 0xf5), G(0x25, 0xba, 0x0b, 0x2e, 0x41, 0xfb), G(0x5b, 0xfb, 0xd7, 0x8c, 0x61, 0x9a), G(0x59, 0xf8, 0xde, 0x87, 0x6c, 0x94), G(0x5f, 0xfd, 0xc5, 0x9a, 0x7b, 0x86), G(0x5d, 0xfe, 0xcc, 0x91, 0x76, 0x88), G(0x53, 0xf7, 0xf3, 0xa0, 0x55, 0xa2), G(0x51, 0xf4, 0xfa, 0xab, 0x58, 0xac), G(0x57, 0xf1, 0xe1, 0xb6, 0x4f, 0xbe), G(0x55, 0xf2, 0xe8, 0xbd, 0x42, 0xb0), G(0x4b, 0xe3, 0x9f, 0xd4, 0x09, 0xea), G(0x49, 0xe0, 0x96, 0xdf, 0x04, 0xe4), G(0x4f, 0xe5, 0x8d, 0xc2, 0x13, 0xf6), G(0x4d, 0xe6, 0x84, 0xc9, 0x1e, 0xf8), G(0x43, 0xef, 0xbb, 0xf8, 0x3d, 0xd2), G(0x41, 0xec, 0xb2, 0xf3, 0x30, 0xdc), G(0x47, 0xe9, 0xa9, 0xee, 0x27, 0xce), G(0x45, 0xea, 0xa0, 0xe5, 0x2a, 0xc0), G(0x7b, 0xcb, 0x47, 0x3c, 0xb1, 0x7a), G(0x79, 0xc8, 0x4e, 0x37, 0xbc, 0x74), G(0x7f, 0xcd, 0x55, 0x2a, 0xab, 0x66), G(0x7d, 0xce, 0x5c, 0x21, 0xa6, 0x68), G(0x73, 0xc7, 0x63, 0x10, 0x85, 0x42), G(0x71, 0xc4, 0x6a, 0x1b, 0x88, 0x4c), G(0x77, 0xc1, 0x71, 0x06, 0x9f, 0x5e), G(0x75, 0xc2, 0x78, 0x0d, 0x92, 0x50), G(0x6b, 0xd3, 0x0f, 0x64, 0xd9, 0x0a), G(0x69, 0xd0, 0x06, 0x6f, 0xd4, 0x04), G(0x6f, 0xd5, 0x1d, 0x72, 0xc3, 0x16), G(0x6d, 0xd6, 0x14, 0x79, 0xce, 0x18), G(0x63, 0xdf, 0x2b, 0x48, 0xed, 0x32), G(0x61, 0xdc, 0x22, 0x43, 0xe0, 0x3c), G(0x67, 0xd9, 0x39, 0x5e, 0xf7, 0x2e), G(0x65, 0xda, 0x30, 0x55, 0xfa, 0x20), G(0x9b, 0x5b, 0x9a, 0x01, 0xb7, 0xec), G(0x99, 0x58, 0x93, 0x0a, 0xba, 0xe2), G(0x9f, 0x5d, 0x88, 0x17, 0xad, 0xf0), G(0x9d, 0x5e, 0x81, 0x1c, 0xa0, 0xfe), G(0x93, 0x57, 0xbe, 0x2d, 0x83, 0xd4), G(0x91, 0x54, 0xb7, 0x26, 0x8e, 0xda), G(0x97, 0x51, 0xac, 0x3b, 0x99, 0xc8), G(0x95, 0x52, 0xa5, 0x30, 0x94, 0xc6), G(0x8b, 0x43, 0xd2, 0x59, 0xdf, 0x9c), 
G(0x89, 0x40, 0xdb, 0x52, 0xd2, 0x92), G(0x8f, 0x45, 0xc0, 0x4f, 0xc5, 0x80), G(0x8d, 0x46, 0xc9, 0x44, 0xc8, 0x8e), G(0x83, 0x4f, 0xf6, 0x75, 0xeb, 0xa4), G(0x81, 0x4c, 0xff, 0x7e, 0xe6, 0xaa), G(0x87, 0x49, 0xe4, 0x63, 0xf1, 0xb8), G(0x85, 0x4a, 0xed, 0x68, 0xfc, 0xb6), G(0xbb, 0x6b, 0x0a, 0xb1, 0x67, 0x0c), G(0xb9, 0x68, 0x03, 0xba, 0x6a, 0x02), G(0xbf, 0x6d, 0x18, 0xa7, 0x7d, 0x10), G(0xbd, 0x6e, 0x11, 0xac, 0x70, 0x1e), G(0xb3, 0x67, 0x2e, 0x9d, 0x53, 0x34), G(0xb1, 0x64, 0x27, 0x96, 0x5e, 0x3a), G(0xb7, 0x61, 0x3c, 0x8b, 0x49, 0x28), G(0xb5, 0x62, 0x35, 0x80, 0x44, 0x26), G(0xab, 0x73, 0x42, 0xe9, 0x0f, 0x7c), G(0xa9, 0x70, 0x4b, 0xe2, 0x02, 0x72), G(0xaf, 0x75, 0x50, 0xff, 0x15, 0x60), G(0xad, 0x76, 0x59, 0xf4, 0x18, 0x6e), G(0xa3, 0x7f, 0x66, 0xc5, 0x3b, 0x44), G(0xa1, 0x7c, 0x6f, 0xce, 0x36, 0x4a), G(0xa7, 0x79, 0x74, 0xd3, 0x21, 0x58), G(0xa5, 0x7a, 0x7d, 0xd8, 0x2c, 0x56), G(0xdb, 0x3b, 0xa1, 0x7a, 0x0c, 0x37), G(0xd9, 0x38, 0xa8, 0x71, 0x01, 0x39), G(0xdf, 0x3d, 0xb3, 0x6c, 0x16, 0x2b), G(0xdd, 0x3e, 0xba, 0x67, 0x1b, 0x25), G(0xd3, 0x37, 0x85, 0x56, 0x38, 0x0f), G(0xd1, 0x34, 0x8c, 0x5d, 0x35, 0x01), G(0xd7, 0x31, 0x97, 0x40, 0x22, 0x13), G(0xd5, 0x32, 0x9e, 0x4b, 0x2f, 0x1d), G(0xcb, 0x23, 0xe9, 0x22, 0x64, 0x47), G(0xc9, 0x20, 0xe0, 0x29, 0x69, 0x49), G(0xcf, 0x25, 0xfb, 0x34, 0x7e, 0x5b), G(0xcd, 0x26, 0xf2, 0x3f, 0x73, 0x55), G(0xc3, 0x2f, 0xcd, 0x0e, 0x50, 0x7f), G(0xc1, 0x2c, 0xc4, 0x05, 0x5d, 0x71), G(0xc7, 0x29, 0xdf, 0x18, 0x4a, 0x63), G(0xc5, 0x2a, 0xd6, 0x13, 0x47, 0x6d), G(0xfb, 0x0b, 0x31, 0xca, 0xdc, 0xd7), G(0xf9, 0x08, 0x38, 0xc1, 0xd1, 0xd9), G(0xff, 0x0d, 0x23, 0xdc, 0xc6, 0xcb), G(0xfd, 0x0e, 0x2a, 0xd7, 0xcb, 0xc5), G(0xf3, 0x07, 0x15, 0xe6, 0xe8, 0xef), G(0xf1, 0x04, 0x1c, 0xed, 0xe5, 0xe1), G(0xf7, 0x01, 0x07, 0xf0, 0xf2, 0xf3), G(0xf5, 0x02, 0x0e, 0xfb, 0xff, 0xfd), G(0xeb, 0x13, 0x79, 0x92, 0xb4, 0xa7), G(0xe9, 0x10, 0x70, 0x99, 0xb9, 0xa9), G(0xef, 0x15, 0x6b, 0x84, 0xae, 0xbb), G(0xed, 0x16, 0x62, 0x8f, 0xa3, 0xb5), G(0xe3, 
0x1f, 0x5d, 0xbe, 0x80, 0x9f), G(0xe1, 0x1c, 0x54, 0xb5, 0x8d, 0x91),
	G(0xe7, 0x19, 0x4f, 0xa8, 0x9a, 0x83), G(0xe5, 0x1a, 0x46, 0xa3, 0x97, 0x8d),
};
#undef G

static void expand_key(uint8_t *expandedKey, uint8_t *key, int size, size_t expandedKeySize)
{
	int csz;
	int i;
	uint8_t t[4] = { 0 };

	for (i = 0; i < size; i++)
		expandedKey[i] = key[i];
	csz = size;
	i = 1;

	while (csz < expandedKeySize) {
		t[0] = expandedKey[(csz - 4) + 0];
		t[1] = expandedKey[(csz - 4) + 1];
		t[2] = expandedKey[(csz - 4) + 2];
		t[3] = expandedKey[(csz - 4) + 3];

		if (csz % size == 0) {
			uint8_t tmp;
			tmp  = t[0];
			t[0] = sbox[t[1]] ^ Rcon[i++ % sizeof(Rcon)];
			t[1] = sbox[t[2]];
			t[2] = sbox[t[3]];
			t[3] = sbox[tmp];
		}
		if (size == 32 && ((csz % size) == 16)) {
			t[0] = sbox[t[0]];
			t[1] = sbox[t[1]];
			t[2] = sbox[t[2]];
			t[3] = sbox[t[3]];
		}

		expandedKey[csz] = expandedKey[csz - size] ^ t[0]; csz++;
		expandedKey[csz] = expandedKey[csz - size] ^ t[1]; csz++;
		expandedKey[csz] = expandedKey[csz - size] ^ t[2]; csz++;
		expandedKey[csz] = expandedKey[csz - size] ^ t[3]; csz++;
	}
}

static void shift_rows(uint8_t *state)
{
	uint32_t *s32;
	int i;

	for (i = 0; i < 16; i++)
		state[i] = sbox[state[i]];
	s32 = (uint32_t *) state;
	s32[1] = rol32_be(s32[1], 8);
	s32[2] = rol32_be(s32[2], 16);
	s32[3] = rol32_be(s32[3], 24);
}

static void add_round_key(uint8_t *state, uint8_t *rk)
{
	uint32_t *s32, *r32;

	s32 = (uint32_t *) state;
	r32 = (uint32_t *) rk;
	s32[0] ^= r32[0];
	s32[1] ^= r32[1];
	s32[2] ^= r32[2];
	s32[3] ^= r32[3];
}

#define gm1(a)  (a)
#define gm2(a)  gmtab[a][0]
#define gm3(a)  gmtab[a][1]
#define gm9(a)  gmtab[a][2]
#define gm11(a) gmtab[a][3]
#define gm13(a) gmtab[a][4]
#define gm14(a) gmtab[a][5]

static void mix_columns(uint8_t *state)
{
	int i;
	uint8_t cpy[4];

	for (i = 0; i < 4; i++) {
		cpy[0] = state[0 * 4 + i];
		cpy[1] = state[1 * 4 + i];
		cpy[2] = state[2 * 4 + i];
		cpy[3] = state[3 * 4 + i];
		state[i]    = gm2(cpy[0]) ^ gm1(cpy[3]) ^ gm1(cpy[2]) ^ gm3(cpy[1]);
		state[4+i]  = gm2(cpy[1]) ^ gm1(cpy[0]) ^ gm1(cpy[3]) ^ gm3(cpy[2]);
		state[8+i]  = gm2(cpy[2]) ^ gm1(cpy[1]) ^ gm1(cpy[0]) ^ gm3(cpy[3]);
		state[12+i] = gm2(cpy[3]) ^ gm1(cpy[2]) ^ gm1(cpy[1]) ^ gm3(cpy[0]);
	}
}

static void create_round_key(uint8_t *expandedKey, uint8_t *rk)
{
	int i, j;

	for (i = 0; i < 4; i++)
		for (j = 0; j < 4; j++)
			rk[i + j * 4] = expandedKey[i * 4 + j];
}

static void aes_main(aes_key *key, uint8_t *state)
{
	int i = 0;
	uint8_t rk[16];

	create_round_key(key->data, rk);
	add_round_key(state, rk);

	for (i = 1; i < key->nbr; i++) {
		create_round_key(key->data + 16 * i, rk);
		shift_rows(state);
		mix_columns(state);
		add_round_key(state, rk);
	}

	create_round_key(key->data + 16 * key->nbr, rk);
	shift_rows(state);
	add_round_key(state, rk);
}

static void shift_rows_inv(uint8_t *state)
{
	uint32_t *s32;
	int i;

	s32 = (uint32_t *) state;
	s32[1] = ror32_be(s32[1], 8);
	s32[2] = ror32_be(s32[2], 16);
	s32[3] = ror32_be(s32[3], 24);
	for (i = 0; i < 16; i++)
		state[i] = rsbox[state[i]];
}

static void mix_columns_inv(uint8_t *state)
{
	int i;
	uint8_t cpy[4];

	for (i = 0; i < 4; i++) {
		cpy[0] = state[0 * 4 + i];
		cpy[1] = state[1 * 4 + i];
		cpy[2] = state[2 * 4 + i];
		cpy[3] = state[3 * 4 + i];
		state[i]    = gm14(cpy[0]) ^ gm9(cpy[3]) ^ gm13(cpy[2]) ^ gm11(cpy[1]);
		state[4+i]  = gm14(cpy[1]) ^ gm9(cpy[0]) ^ gm13(cpy[3]) ^ gm11(cpy[2]);
		state[8+i]  = gm14(cpy[2]) ^ gm9(cpy[1]) ^ gm13(cpy[0]) ^ gm11(cpy[3]);
		state[12+i] = gm14(cpy[3]) ^ gm9(cpy[2]) ^ gm13(cpy[1]) ^ gm11(cpy[0]);
	}
}

static void aes_main_inv(aes_key *key, uint8_t *state)
{
	int i = 0;
	uint8_t rk[16];

	create_round_key(key->data + 16 * key->nbr, rk);
	add_round_key(state, rk);

	for (i = key->nbr - 1; i > 0; i--) {
		create_round_key(key->data + 16 * i, rk);
		shift_rows_inv(state);
		add_round_key(state, rk);
		mix_columns_inv(state);
	}

	create_round_key(key->data, rk);
	shift_rows_inv(state);
	add_round_key(state, rk);
}

/* Set the block values, for the block:
 * a0,0 a0,1 a0,2 a0,3
 * a1,0 a1,1 a1,2 a1,3    -> a0,0 a1,0 a2,0 a3,0 a0,1 a1,1 ... a2,3 a3,3
 * a2,0 a2,1 a2,2 a2,3
 * a3,0 a3,1 a3,2 a3,3
 */
#define swap_block(t, f) \
	t[0] = f[0]; t[4] = f[1]; t[8] = f[2]; t[12] = f[3]; \
	t[1] = f[4]; t[5] = f[5]; t[9] = f[6]; t[13] = f[7]; \
	t[2] = f[8]; t[6] = f[9]; t[10] = f[10]; t[14] = f[11]; \
	t[3] = f[12]; t[7] = f[13]; t[11] = f[14]; t[15] = f[15]

void aes_generic_encrypt_block(aes_block *output, aes_key *key, aes_block *input)
{
	uint8_t block[16];
	uint8_t *iptr, *optr;

	iptr = (uint8_t *) input;
	optr = (uint8_t *) output;
	swap_block(block, iptr);
	aes_main(key, block);
	swap_block(optr, block);
}

void aes_generic_decrypt_block(aes_block *output, aes_key *key, aes_block *input)
{
	uint8_t block[16];
	uint8_t *iptr, *optr;

	iptr = (uint8_t *) input;
	optr = (uint8_t *) output;
	swap_block(block, iptr);
	aes_main_inv(key, block);
	swap_block(optr, block);
}

void aes_generic_init(aes_key *key, uint8_t *origkey, uint8_t size)
{
	int esz;

	switch (size) {
	case 16: key->nbr = 10; esz = 176; break;
	case 24: key->nbr = 12; esz = 208; break;
	case 32: key->nbr = 14; esz = 240; break;
	default: return;
	}
	expand_key(key->data, origkey, size, esz);
	return;
}
cipher-aes-0.2.11/cbits/aes_generic.h0000644000000000000000000000344612541525177015542 0ustar0000000000000000/*
 * Copyright (c) 2012 Vincent Hanquez
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the author nor the names of his contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
* * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include "aes.h" void aes_generic_encrypt_block(aes_block *output, aes_key *key, aes_block *input); void aes_generic_decrypt_block(aes_block *output, aes_key *key, aes_block *input); void aes_generic_init(aes_key *key, uint8_t *origkey, uint8_t size); cipher-aes-0.2.11/cbits/aes_x86ni.c0000644000000000000000000002572712541525177015103 0ustar0000000000000000/* * Copyright (c) 2012-2013 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. 
 *
 * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
 * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */
#ifdef WITH_AESNI

#include <wmmintrin.h>
#include <tmmintrin.h>
#include <string.h>
#include "aes.h"
#include "aes_x86ni.h"
#include "block128.h"
#include "cpu.h"

#ifdef ARCH_X86

#define ALIGN_UP(addr, size) (((addr) + ((size) - 1)) & (~((size) - 1)))
#define ALIGNMENT(n) __attribute__((aligned(n)))

/* Older GCC versions don't cope with the shuffle parameter, which can take 2 values (0xff and 0xaa)
 * in our case, passed as an argument despite being an immediate 8-bit constant anyway.
* un-factorise aes_128_key_expansion into 2 version that have the shuffle parameter explicitly set */ static __m128i aes_128_key_expansion_ff(__m128i key, __m128i keygened) { keygened = _mm_shuffle_epi32(keygened, 0xff); key = _mm_xor_si128(key, _mm_slli_si128(key, 4)); key = _mm_xor_si128(key, _mm_slli_si128(key, 4)); key = _mm_xor_si128(key, _mm_slli_si128(key, 4)); return _mm_xor_si128(key, keygened); } static __m128i aes_128_key_expansion_aa(__m128i key, __m128i keygened) { keygened = _mm_shuffle_epi32(keygened, 0xaa); key = _mm_xor_si128(key, _mm_slli_si128(key, 4)); key = _mm_xor_si128(key, _mm_slli_si128(key, 4)); key = _mm_xor_si128(key, _mm_slli_si128(key, 4)); return _mm_xor_si128(key, keygened); } void aes_ni_init(aes_key *key, uint8_t *ikey, uint8_t size) { __m128i k[28]; uint64_t *out = (uint64_t *) key->data; int i; switch (size) { case 16: k[0] = _mm_loadu_si128((const __m128i*) ikey); #define AES_128_key_exp(K, RCON) aes_128_key_expansion_ff(K, _mm_aeskeygenassist_si128(K, RCON)) k[1] = AES_128_key_exp(k[0], 0x01); k[2] = AES_128_key_exp(k[1], 0x02); k[3] = AES_128_key_exp(k[2], 0x04); k[4] = AES_128_key_exp(k[3], 0x08); k[5] = AES_128_key_exp(k[4], 0x10); k[6] = AES_128_key_exp(k[5], 0x20); k[7] = AES_128_key_exp(k[6], 0x40); k[8] = AES_128_key_exp(k[7], 0x80); k[9] = AES_128_key_exp(k[8], 0x1B); k[10] = AES_128_key_exp(k[9], 0x36); /* generate decryption keys in reverse order. 
* k[10] is shared by last encryption and first decryption rounds * k[20] is shared by first encryption round (and is the original user key) */ k[11] = _mm_aesimc_si128(k[9]); k[12] = _mm_aesimc_si128(k[8]); k[13] = _mm_aesimc_si128(k[7]); k[14] = _mm_aesimc_si128(k[6]); k[15] = _mm_aesimc_si128(k[5]); k[16] = _mm_aesimc_si128(k[4]); k[17] = _mm_aesimc_si128(k[3]); k[18] = _mm_aesimc_si128(k[2]); k[19] = _mm_aesimc_si128(k[1]); for (i = 0; i < 20; i++) _mm_storeu_si128(((__m128i *) out) + i, k[i]); break; case 32: #define AES_256_key_exp_1(K1, K2, RCON) aes_128_key_expansion_ff(K1, _mm_aeskeygenassist_si128(K2, RCON)) #define AES_256_key_exp_2(K1, K2) aes_128_key_expansion_aa(K1, _mm_aeskeygenassist_si128(K2, 0x00)) k[0] = _mm_loadu_si128((const __m128i*) ikey); k[1] = _mm_loadu_si128((const __m128i*) (ikey+16)); k[2] = AES_256_key_exp_1(k[0], k[1], 0x01); k[3] = AES_256_key_exp_2(k[1], k[2]); k[4] = AES_256_key_exp_1(k[2], k[3], 0x02); k[5] = AES_256_key_exp_2(k[3], k[4]); k[6] = AES_256_key_exp_1(k[4], k[5], 0x04); k[7] = AES_256_key_exp_2(k[5], k[6]); k[8] = AES_256_key_exp_1(k[6], k[7], 0x08); k[9] = AES_256_key_exp_2(k[7], k[8]); k[10] = AES_256_key_exp_1(k[8], k[9], 0x10); k[11] = AES_256_key_exp_2(k[9], k[10]); k[12] = AES_256_key_exp_1(k[10], k[11], 0x20); k[13] = AES_256_key_exp_2(k[11], k[12]); k[14] = AES_256_key_exp_1(k[12], k[13], 0x40); k[15] = _mm_aesimc_si128(k[13]); k[16] = _mm_aesimc_si128(k[12]); k[17] = _mm_aesimc_si128(k[11]); k[18] = _mm_aesimc_si128(k[10]); k[19] = _mm_aesimc_si128(k[9]); k[20] = _mm_aesimc_si128(k[8]); k[21] = _mm_aesimc_si128(k[7]); k[22] = _mm_aesimc_si128(k[6]); k[23] = _mm_aesimc_si128(k[5]); k[24] = _mm_aesimc_si128(k[4]); k[25] = _mm_aesimc_si128(k[3]); k[26] = _mm_aesimc_si128(k[2]); k[27] = _mm_aesimc_si128(k[1]); for (i = 0; i < 28; i++) _mm_storeu_si128(((__m128i *) out) + i, k[i]); break; default: break; } } /* TO OPTIMISE: use pcmulqdq... or some faster code. 
 * this is the lamest way of doing it, but i'm out of time.
 * this is basically a copy of gf_mulx in gf.c */
static __m128i gfmulx(__m128i v)
{
	uint64_t v_[2] ALIGNMENT(16);
	const uint64_t gf_mask = 0x8000000000000000;

	_mm_store_si128((__m128i *) v_, v);
	uint64_t r = ((v_[1] & gf_mask) ? 0x87 : 0);
	v_[1] = (v_[1] << 1) | (v_[0] & gf_mask ? 1 : 0);
	v_[0] = (v_[0] << 1) ^ r;
	v = _mm_load_si128((__m128i *) v_);
	return v;
}

static void unopt_gf_mul(block128 *a, block128 *b)
{
	uint64_t a0, a1, v0, v1;
	int i, j;

	a0 = a1 = 0;
	v0 = cpu_to_be64(a->q[0]);
	v1 = cpu_to_be64(a->q[1]);

	for (i = 0; i < 16; i++)
		for (j = 0x80; j != 0; j >>= 1) {
			uint8_t x = b->b[i] & j;
			a0 ^= x ? v0 : 0;
			a1 ^= x ? v1 : 0;
			x = (uint8_t) v1 & 1;
			v1 = (v1 >> 1) | (v0 << 63);
			v0 = (v0 >> 1) ^ (x ? (0xe1ULL << 56) : 0);
		}
	a->q[0] = cpu_to_be64(a0);
	a->q[1] = cpu_to_be64(a1);
}

static __m128i ghash_add(__m128i tag, __m128i h, __m128i m)
{
	aes_block _t, _h;

	tag = _mm_xor_si128(tag, m);
	_mm_store_si128((__m128i *) &_t, tag);
	_mm_store_si128((__m128i *) &_h, h);
	unopt_gf_mul(&_t, &_h);
	tag = _mm_load_si128((__m128i *) &_t);
	return tag;
}

#define PRELOAD_ENC_KEYS128(k) \
	__m128i K0 = _mm_loadu_si128(((__m128i *) k)+0); \
	__m128i K1 = _mm_loadu_si128(((__m128i *) k)+1); \
	__m128i K2 = _mm_loadu_si128(((__m128i *) k)+2); \
	__m128i K3 = _mm_loadu_si128(((__m128i *) k)+3); \
	__m128i K4 = _mm_loadu_si128(((__m128i *) k)+4); \
	__m128i K5 = _mm_loadu_si128(((__m128i *) k)+5); \
	__m128i K6 = _mm_loadu_si128(((__m128i *) k)+6); \
	__m128i K7 = _mm_loadu_si128(((__m128i *) k)+7); \
	__m128i K8 = _mm_loadu_si128(((__m128i *) k)+8); \
	__m128i K9 = _mm_loadu_si128(((__m128i *) k)+9); \
	__m128i K10 = _mm_loadu_si128(((__m128i *) k)+10);

#define PRELOAD_ENC_KEYS256(k) \
	PRELOAD_ENC_KEYS128(k) \
	__m128i K11 = _mm_loadu_si128(((__m128i *) k)+11); \
	__m128i K12 = _mm_loadu_si128(((__m128i *) k)+12); \
	__m128i K13 = _mm_loadu_si128(((__m128i *) k)+13); \
	__m128i K14 = _mm_loadu_si128(((__m128i *) k)+14);

#define 
DO_ENC_BLOCK128(m) \ m = _mm_xor_si128(m, K0); \ m = _mm_aesenc_si128(m, K1); \ m = _mm_aesenc_si128(m, K2); \ m = _mm_aesenc_si128(m, K3); \ m = _mm_aesenc_si128(m, K4); \ m = _mm_aesenc_si128(m, K5); \ m = _mm_aesenc_si128(m, K6); \ m = _mm_aesenc_si128(m, K7); \ m = _mm_aesenc_si128(m, K8); \ m = _mm_aesenc_si128(m, K9); \ m = _mm_aesenclast_si128(m, K10); #define DO_ENC_BLOCK256(m) \ m = _mm_xor_si128(m, K0); \ m = _mm_aesenc_si128(m, K1); \ m = _mm_aesenc_si128(m, K2); \ m = _mm_aesenc_si128(m, K3); \ m = _mm_aesenc_si128(m, K4); \ m = _mm_aesenc_si128(m, K5); \ m = _mm_aesenc_si128(m, K6); \ m = _mm_aesenc_si128(m, K7); \ m = _mm_aesenc_si128(m, K8); \ m = _mm_aesenc_si128(m, K9); \ m = _mm_aesenc_si128(m, K10); \ m = _mm_aesenc_si128(m, K11); \ m = _mm_aesenc_si128(m, K12); \ m = _mm_aesenc_si128(m, K13); \ m = _mm_aesenclast_si128(m, K14); /* load K0 at K9 from index 'at' */ #define PRELOAD_DEC_KEYS_AT(k, at) \ __m128i K0 = _mm_loadu_si128(((__m128i *) k)+at+0); \ __m128i K1 = _mm_loadu_si128(((__m128i *) k)+at+1); \ __m128i K2 = _mm_loadu_si128(((__m128i *) k)+at+2); \ __m128i K3 = _mm_loadu_si128(((__m128i *) k)+at+3); \ __m128i K4 = _mm_loadu_si128(((__m128i *) k)+at+4); \ __m128i K5 = _mm_loadu_si128(((__m128i *) k)+at+5); \ __m128i K6 = _mm_loadu_si128(((__m128i *) k)+at+6); \ __m128i K7 = _mm_loadu_si128(((__m128i *) k)+at+7); \ __m128i K8 = _mm_loadu_si128(((__m128i *) k)+at+8); \ __m128i K9 = _mm_loadu_si128(((__m128i *) k)+at+9); \ #define PRELOAD_DEC_KEYS128(k) \ PRELOAD_DEC_KEYS_AT(k, 10) \ __m128i K10 = _mm_loadu_si128(((__m128i *) k)+0); #define PRELOAD_DEC_KEYS256(k) \ PRELOAD_DEC_KEYS_AT(k, 14) \ __m128i K10 = _mm_loadu_si128(((__m128i *) k)+14+10); \ __m128i K11 = _mm_loadu_si128(((__m128i *) k)+14+11); \ __m128i K12 = _mm_loadu_si128(((__m128i *) k)+14+12); \ __m128i K13 = _mm_loadu_si128(((__m128i *) k)+14+13); \ __m128i K14 = _mm_loadu_si128(((__m128i *) k)+0); #define DO_DEC_BLOCK128(m) \ m = _mm_xor_si128(m, K0); \ m = 
_mm_aesdec_si128(m, K1); \ m = _mm_aesdec_si128(m, K2); \ m = _mm_aesdec_si128(m, K3); \ m = _mm_aesdec_si128(m, K4); \ m = _mm_aesdec_si128(m, K5); \ m = _mm_aesdec_si128(m, K6); \ m = _mm_aesdec_si128(m, K7); \ m = _mm_aesdec_si128(m, K8); \ m = _mm_aesdec_si128(m, K9); \ m = _mm_aesdeclast_si128(m, K10); #define DO_DEC_BLOCK256(m) \ m = _mm_xor_si128(m, K0); \ m = _mm_aesdec_si128(m, K1); \ m = _mm_aesdec_si128(m, K2); \ m = _mm_aesdec_si128(m, K3); \ m = _mm_aesdec_si128(m, K4); \ m = _mm_aesdec_si128(m, K5); \ m = _mm_aesdec_si128(m, K6); \ m = _mm_aesdec_si128(m, K7); \ m = _mm_aesdec_si128(m, K8); \ m = _mm_aesdec_si128(m, K9); \ m = _mm_aesdec_si128(m, K10); \ m = _mm_aesdec_si128(m, K11); \ m = _mm_aesdec_si128(m, K12); \ m = _mm_aesdec_si128(m, K13); \ m = _mm_aesdeclast_si128(m, K14); #define SIZE 128 #define SIZED(m) m##128 #define PRELOAD_ENC PRELOAD_ENC_KEYS128 #define DO_ENC_BLOCK DO_ENC_BLOCK128 #define PRELOAD_DEC PRELOAD_DEC_KEYS128 #define DO_DEC_BLOCK DO_DEC_BLOCK128 #include "aes_x86ni_impl.c" #undef SIZE #undef SIZED #undef PRELOAD_ENC #undef PRELOAD_DEC #undef DO_ENC_BLOCK #undef DO_DEC_BLOCK #define SIZED(m) m##256 #define SIZE 256 #define PRELOAD_ENC PRELOAD_ENC_KEYS256 #define DO_ENC_BLOCK DO_ENC_BLOCK256 #define PRELOAD_DEC PRELOAD_DEC_KEYS256 #define DO_DEC_BLOCK DO_DEC_BLOCK256 #include "aes_x86ni_impl.c" #undef SIZE #undef SIZED #undef PRELOAD_ENC #undef PRELOAD_DEC #undef DO_ENC_BLOCK #undef DO_DEC_BLOCK #endif #endif cipher-aes-0.2.11/cbits/aes_x86ni.h0000644000000000000000000000747112541525177015104 0ustar0000000000000000/* * Copyright (c) 2012 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/
#ifndef AES_X86NI_H
#define AES_X86NI_H

#ifdef WITH_AESNI

#if defined(__i386__) || defined(__x86_64__)

#include <wmmintrin.h>
#include <tmmintrin.h>
#include "aes.h"
#include "block128.h"

#ifdef IMPL_DEBUG
static void block128_sse_print(__m128i m)
{
	block128 b;
	_mm_storeu_si128((__m128i *) &b.b, m);
	block128_print(&b);
}
#endif

void aes_ni_init(aes_key *key, uint8_t *origkey, uint8_t size);

void aes_ni_encrypt_block128(aes_block *out, aes_key *key, aes_block *in);
void aes_ni_encrypt_block256(aes_block *out, aes_key *key, aes_block *in);
void aes_ni_decrypt_block128(aes_block *out, aes_key *key, aes_block *in);
void aes_ni_decrypt_block256(aes_block *out, aes_key *key, aes_block *in);

void aes_ni_encrypt_ecb128(aes_block *out, aes_key *key, aes_block *in, uint32_t blocks);
void aes_ni_encrypt_ecb256(aes_block *out, aes_key *key, aes_block *in, uint32_t blocks);
void aes_ni_decrypt_ecb128(aes_block *out, aes_key *key, aes_block *in, uint32_t blocks);
void aes_ni_decrypt_ecb256(aes_block *out, aes_key *key, aes_block *in, uint32_t blocks);

void aes_ni_encrypt_cbc128(aes_block *out, aes_key *key, aes_block *_iv, aes_block *in, uint32_t blocks);
void aes_ni_encrypt_cbc256(aes_block *out, aes_key *key, aes_block *_iv, aes_block *in, uint32_t blocks);
void aes_ni_decrypt_cbc128(aes_block *out, aes_key *key, aes_block *_iv, aes_block *in, uint32_t blocks);
void aes_ni_decrypt_cbc256(aes_block *out, aes_key *key, aes_block *_iv, aes_block *in, uint32_t blocks);

void aes_ni_encrypt_ctr128(uint8_t *out, aes_key *key, aes_block *_iv, uint8_t *in, uint32_t length);
void aes_ni_encrypt_ctr256(uint8_t *out, aes_key *key, aes_block *_iv, uint8_t *in, uint32_t length);

void aes_ni_encrypt_xts128(aes_block *out, aes_key *key1, aes_key *key2,
                           aes_block *_tweak, uint32_t spoint, aes_block *in, uint32_t blocks);
void aes_ni_encrypt_xts256(aes_block *out, aes_key *key1, aes_key *key2,
                           aes_block *_tweak, uint32_t spoint, aes_block *in, uint32_t blocks);

void aes_ni_gcm_encrypt128(uint8_t *out, aes_gcm *gcm, 
aes_key *key, uint8_t *in, uint32_t length); void aes_ni_gcm_encrypt256(uint8_t *out, aes_gcm *gcm, aes_key *key, uint8_t *in, uint32_t length); void gf_mul_x86ni(block128 *res, block128 *a_, block128 *b_); #endif #endif #endif cipher-aes-0.2.11/cbits/aes_x86ni_impl.c0000644000000000000000000002134112541525177016110 0ustar0000000000000000/* * Copyright (c) 2012-2013 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ void SIZED(aes_ni_encrypt_block)(aes_block *out, aes_key *key, aes_block *in) { __m128i *k = (__m128i *) key->data; PRELOAD_ENC(k); __m128i m = _mm_loadu_si128((__m128i *) in); DO_ENC_BLOCK(m); _mm_storeu_si128((__m128i *) out, m); } void SIZED(aes_ni_decrypt_block)(aes_block *out, aes_key *key, aes_block *in) { __m128i *k = (__m128i *) key->data; PRELOAD_DEC(k); __m128i m = _mm_loadu_si128((__m128i *) in); DO_DEC_BLOCK(m); _mm_storeu_si128((__m128i *) out, m); } void SIZED(aes_ni_encrypt_ecb)(aes_block *out, aes_key *key, aes_block *in, uint32_t blocks) { __m128i *k = (__m128i *) key->data; PRELOAD_ENC(k); for (; blocks-- > 0; in += 1, out += 1) { __m128i m = _mm_loadu_si128((__m128i *) in); DO_ENC_BLOCK(m); _mm_storeu_si128((__m128i *) out, m); } } void SIZED(aes_ni_decrypt_ecb)(aes_block *out, aes_key *key, aes_block *in, uint32_t blocks) { __m128i *k = (__m128i *) key->data; PRELOAD_DEC(k); for (; blocks-- > 0; in += 1, out += 1) { __m128i m = _mm_loadu_si128((__m128i *) in); DO_DEC_BLOCK(m); _mm_storeu_si128((__m128i *) out, m); } } void SIZED(aes_ni_encrypt_cbc)(aes_block *out, aes_key *key, aes_block *_iv, aes_block *in, uint32_t blocks) { __m128i *k = (__m128i *) key->data; __m128i iv = _mm_loadu_si128((__m128i *) _iv); PRELOAD_ENC(k); for (; blocks-- > 0; in += 1, out += 1) { __m128i m = _mm_loadu_si128((__m128i *) in); m = _mm_xor_si128(m, iv); DO_ENC_BLOCK(m); iv = m; _mm_storeu_si128((__m128i *) out, m); } } void SIZED(aes_ni_decrypt_cbc)(aes_block *out, aes_key *key, aes_block *_iv, aes_block *in, uint32_t blocks) { __m128i *k = (__m128i *) key->data; __m128i iv = _mm_loadu_si128((__m128i *) _iv); PRELOAD_DEC(k); for (; blocks-- > 0; in += 1, out += 1) { __m128i m = _mm_loadu_si128((__m128i *) in); __m128i ivnext = m; DO_DEC_BLOCK(m); m = _mm_xor_si128(m, iv); _mm_storeu_si128((__m128i *) out, m); iv = ivnext; } } void SIZED(aes_ni_encrypt_ctr)(uint8_t *output, aes_key *key, aes_block *_iv, uint8_t *input, uint32_t len) { __m128i *k = (__m128i *) 
key->data;
	__m128i bswap_mask = _mm_setr_epi8(7,6,5,4,3,2,1,0,15,14,13,12,11,10,9,8);
	__m128i one = _mm_set_epi32(0,1,0,0);
	uint32_t nb_blocks = len / 16;
	uint32_t part_block_len = len % 16;

	/* get the IV in little endian format */
	__m128i iv = _mm_loadu_si128((__m128i *) _iv);
	iv = _mm_shuffle_epi8(iv, bswap_mask);

	PRELOAD_ENC(k);

	for (; nb_blocks-- > 0; output += 16, input += 16) {
		/* put the IV back in big endian mode,
		 * encrypt it, and xor it with the input block */
		__m128i tmp = _mm_shuffle_epi8(iv, bswap_mask);
		DO_ENC_BLOCK(tmp);

		__m128i m = _mm_loadu_si128((__m128i *) input);
		m = _mm_xor_si128(m, tmp);
		_mm_storeu_si128((__m128i *) output, m);

		/* iv += 1 */
		iv = _mm_add_epi64(iv, one);
	}

	if (part_block_len != 0) {
		aes_block block;
		memset(&block.b, 0, 16);
		memcpy(&block.b, input, part_block_len);

		__m128i m = _mm_loadu_si128((__m128i *) &block);
		__m128i tmp = _mm_shuffle_epi8(iv, bswap_mask);
		DO_ENC_BLOCK(tmp);
		m = _mm_xor_si128(m, tmp);

		_mm_storeu_si128((__m128i *) &block.b, m);
		memcpy(output, &block.b, part_block_len);
	}
	return ;
}

void SIZED(aes_ni_encrypt_xts)(aes_block *out, aes_key *key1, aes_key *key2,
                               aes_block *_tweak, uint32_t spoint, aes_block *in, uint32_t blocks)
{
	__m128i tweak = _mm_loadu_si128((__m128i *) _tweak);

	do {
		__m128i *k2 = (__m128i *) key2->data;
		PRELOAD_ENC(k2);
		DO_ENC_BLOCK(tweak);

		while (spoint-- > 0)
			tweak = gfmulx(tweak);
	} while (0) ;

	do {
		__m128i *k1 = (__m128i *) key1->data;
		PRELOAD_ENC(k1);

		for ( ; blocks-- > 0; in += 1, out += 1, tweak = gfmulx(tweak)) {
			__m128i m = _mm_loadu_si128((__m128i *) in);

			m = _mm_xor_si128(m, tweak);
			DO_ENC_BLOCK(m);
			m = _mm_xor_si128(m, tweak);

			_mm_storeu_si128((__m128i *) out, m);
		}
	} while (0);
}

void SIZED(aes_ni_gcm_encrypt)(uint8_t *output, aes_gcm *gcm, aes_key *key, uint8_t *input, uint32_t length)
{
	__m128i *k = (__m128i *) key->data;
	__m128i bswap_mask = _mm_setr_epi8(7,6,5,4,3,2,1,0,15,14,13,12,11,10,9,8);
	__m128i one = _mm_set_epi32(0,1,0,0);
	uint32_t nb_blocks = length / 16;
	uint32_t 
part_block_len = length % 16; gcm->length_input += length; __m128i h = _mm_loadu_si128((__m128i *) &gcm->h); __m128i tag = _mm_loadu_si128((__m128i *) &gcm->tag); __m128i iv = _mm_loadu_si128((__m128i *) &gcm->civ); iv = _mm_shuffle_epi8(iv, bswap_mask); PRELOAD_ENC(k); for (; nb_blocks-- > 0; output += 16, input += 16) { /* iv += 1 */ iv = _mm_add_epi64(iv, one); /* put back iv in big endian, encrypt it, * and xor it to input */ __m128i tmp = _mm_shuffle_epi8(iv, bswap_mask); DO_ENC_BLOCK(tmp); __m128i m = _mm_loadu_si128((__m128i *) input); m = _mm_xor_si128(m, tmp); tag = ghash_add(tag, h, m); /* store it out */ _mm_storeu_si128((__m128i *) output, m); } if (part_block_len > 0) { __m128i mask; aes_block block; /* FIXME could do something a bit more clever (slli & sub & and maybe) ... */ switch (part_block_len) { case 1: mask = _mm_setr_epi8(0,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 2: mask = _mm_setr_epi8(0,1,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 3: mask = _mm_setr_epi8(0,1,2,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 4: mask = _mm_setr_epi8(0,1,2,3,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 5: mask = _mm_setr_epi8(0,1,2,3,4,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 6: mask = _mm_setr_epi8(0,1,2,3,4,5,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 7: mask = _mm_setr_epi8(0,1,2,3,4,5,6,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 8: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,0x80,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 9: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,0x80,0x80,0x80,0x80,0x80,0x80,0x80); break; case 10: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,0x80,0x80,0x80,0x80,0x80,0x80); break; case 11: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,0x80,0x80,0x80,0x80,0x80); break; case 12: mask = 
_mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,11,0x80,0x80,0x80,0x80); break; case 13: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,0x80,0x80,0x80); break; case 14: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,13,0x80,0x80); break; case 15: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,0x80); break; default: mask = _mm_setr_epi8(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15); break; } block128_zero(&block); block128_copy_bytes(&block, input, part_block_len); /* iv += 1 */ iv = _mm_add_epi64(iv, one); /* put back iv in big endian mode, encrypt it and xor it with input */ __m128i tmp = _mm_shuffle_epi8(iv, bswap_mask); DO_ENC_BLOCK(tmp); __m128i m = _mm_loadu_si128((__m128i *) &block); m = _mm_xor_si128(m, tmp); m = _mm_shuffle_epi8(m, mask); tag = ghash_add(tag, h, m); /* make output */ _mm_storeu_si128((__m128i *) &block.b, m); memcpy(output, &block.b, part_block_len); } /* store back IV & tag */ __m128i tmp = _mm_shuffle_epi8(iv, bswap_mask); _mm_storeu_si128((__m128i *) &gcm->civ, tmp); _mm_storeu_si128((__m128i *) &gcm->tag, tag); } cipher-aes-0.2.11/cbits/bitfn.h0000644000000000000000000001540412541525177014375 0ustar0000000000000000/* * Copyright (C) 2006-2009 Vincent Hanquez * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
#ifndef BITFN_H
#define BITFN_H

#include <stdint.h>

#ifndef NO_INLINE_ASM
/**********************************************************/
# if (defined(__i386__))
#  define ARCH_HAS_SWAP32
static inline uint32_t bitfn_swap32(uint32_t a)
{
	asm ("bswap %0" : "=r" (a) : "0" (a));
	return a;
}
/**********************************************************/
# elif (defined(__arm__))
#  define ARCH_HAS_SWAP32
static inline uint32_t bitfn_swap32(uint32_t a)
{
	uint32_t tmp = a;
	asm volatile ("eor %1, %0, %0, ror #16\n"
	              "bic %1, %1, #0xff0000\n"
	              "mov %0, %0, ror #8\n"
	              "eor %0, %0, %1, lsr #8\n"
	              : "=r" (a), "=r" (tmp) : "0" (a), "1" (tmp));
	return a;
}
/**********************************************************/
# elif defined(__x86_64__)
#  define ARCH_HAS_SWAP32
#  define ARCH_HAS_SWAP64
static inline uint32_t bitfn_swap32(uint32_t a)
{
	asm ("bswap %0" : "=r" (a) : "0" (a));
	return a;
}

static inline uint64_t bitfn_swap64(uint64_t a)
{
	asm ("bswap %0" : "=r" (a) : "0" (a));
	return a;
}
# endif
#endif /* NO_INLINE_ASM */
/**********************************************************/

#ifndef ARCH_HAS_ROL32
static inline uint32_t rol32(uint32_t word, uint32_t shift)
{
	return (word << shift) | (word >> (32 - shift));
}
#endif

#ifndef ARCH_HAS_ROR32
static inline uint32_t ror32(uint32_t word, uint32_t shift)
{
	return (word >> shift) | (word << (32 - shift));
}
#endif

#ifndef ARCH_HAS_ROL64
static inline uint64_t rol64(uint64_t word, uint32_t shift)
{
	return (word << shift) | (word >> (64 - shift));
}
#endif

#ifndef ARCH_HAS_ROR64
static inline uint64_t ror64(uint64_t word, uint32_t shift)
{
	return (word >> shift) | (word << (64 - shift));
}
#endif

#ifndef ARCH_HAS_SWAP32
static inline uint32_t bitfn_swap32(uint32_t a)
{
	return (a << 24) | ((a & 0xff00) << 8) | ((a >> 8) & 0xff00) | (a >> 24);
}
#endif

#ifndef ARCH_HAS_ARRAY_SWAP32
static inline void array_swap32(uint32_t *d, uint32_t *s, uint32_t nb)
{
	while (nb--)
		*d++ = bitfn_swap32(*s++);
}
#endif

#ifndef ARCH_HAS_SWAP64
static inline uint64_t bitfn_swap64(uint64_t a)
{
	return ((uint64_t) bitfn_swap32((uint32_t) (a >> 32))) |
	       (((uint64_t) bitfn_swap32((uint32_t) a)) << 32);
}
#endif

#ifndef ARCH_HAS_ARRAY_SWAP64
static inline void array_swap64(uint64_t *d, uint64_t *s, uint32_t nb)
{
	while (nb--)
		*d++ = bitfn_swap64(*s++);
}
#endif

#ifndef ARCH_HAS_MEMORY_ZERO
static inline void memory_zero(void *ptr, uint32_t len)
{
	uint32_t *ptr32 = ptr;
	uint8_t *ptr8;
	int i;

	for (i = 0; i < len / 4; i++)
		*ptr32++ = 0;
	if (len % 4) {
		ptr8 = (uint8_t *) ptr32;
		/* zero the remaining len % 4 trailing bytes */
		for (i = len % 4 - 1; i >= 0; i--)
			ptr8[i] = 0;
	}
}
#endif

#ifndef ARCH_HAS_ARRAY_COPY32
static inline void array_copy32(uint32_t *d, uint32_t *s, uint32_t nb)
{
	while (nb--)
		*d++ = *s++;
}
#endif

#ifndef ARCH_HAS_ARRAY_COPY64
static inline void array_copy64(uint64_t *d, uint64_t *s, uint32_t nb)
{
	while (nb--)
		*d++ = *s++;
}
#endif

#ifdef __GNUC__
#define bitfn_ntz(n) __builtin_ctz(n)
#else
#error "define ntz for your platform"
#endif

#ifdef __MINGW32__
# define LITTLE_ENDIAN 1234
# define BYTE_ORDER LITTLE_ENDIAN
#elif defined(__FreeBSD__) || defined(__DragonFly__) || defined(__NetBSD__)
# include <sys/endian.h>
#elif defined(__OpenBSD__) || defined(__SVR4)
# include <sys/types.h>
#elif defined(__APPLE__)
# include <machine/endian.h>
#elif defined( BSD ) && ( BSD >= 199103 )
# include <machine/endian.h>
#elif defined( __QNXNTO__ ) && defined( __LITTLEENDIAN__ )
# define LITTLE_ENDIAN 1234
# define BYTE_ORDER LITTLE_ENDIAN
#elif defined( __QNXNTO__ ) && defined( __BIGENDIAN__ )
# define BIG_ENDIAN 4321
# define BYTE_ORDER BIG_ENDIAN
#else
# include <endian.h>
#endif

/* big endian to cpu */
#if LITTLE_ENDIAN == BYTE_ORDER
# define be32_to_cpu(a) bitfn_swap32(a)
# define cpu_to_be32(a) bitfn_swap32(a)
# define le32_to_cpu(a) (a)
# define cpu_to_le32(a) (a)
# define be64_to_cpu(a) bitfn_swap64(a)
# define cpu_to_be64(a) bitfn_swap64(a)
# define le64_to_cpu(a) (a)
# define cpu_to_le64(a) (a)

# define cpu_to_le32_array(d, s, l) array_copy32(d, s, l)
# define le32_to_cpu_array(d, s, l) array_copy32(d, s, l)
# define cpu_to_be32_array(d, s, l) array_swap32(d, s, l)
# define be32_to_cpu_array(d, s, l) array_swap32(d, s, l)

# define cpu_to_le64_array(d, s, l) array_copy64(d, s, l)
# define le64_to_cpu_array(d, s, l) array_copy64(d, s, l)
# define cpu_to_be64_array(d, s, l) array_swap64(d, s, l)
# define be64_to_cpu_array(d, s, l) array_swap64(d, s, l)

# define ror32_be(a, s) rol32(a, s)
# define rol32_be(a, s) ror32(a, s)

# define ARCH_IS_LITTLE_ENDIAN

#elif BIG_ENDIAN == BYTE_ORDER
# define be32_to_cpu(a) (a)
# define cpu_to_be32(a) (a)
# define be64_to_cpu(a) (a)
# define cpu_to_be64(a) (a)
# define le64_to_cpu(a) bitfn_swap64(a)
# define cpu_to_le64(a) bitfn_swap64(a)
# define le32_to_cpu(a) bitfn_swap32(a)
# define cpu_to_le32(a) bitfn_swap32(a)

# define cpu_to_le32_array(d, s, l) array_swap32(d, s, l)
# define le32_to_cpu_array(d, s, l) array_swap32(d, s, l)
# define cpu_to_be32_array(d, s, l) array_copy32(d, s, l)
# define be32_to_cpu_array(d, s, l) array_copy32(d, s, l)

# define cpu_to_le64_array(d, s, l) array_swap64(d, s, l)
# define le64_to_cpu_array(d, s, l) array_swap64(d, s, l)
# define cpu_to_be64_array(d, s, l) array_copy64(d, s, l)
# define be64_to_cpu_array(d, s, l) array_copy64(d, s, l)

# define ror32_be(a, s) ror32(a, s)
# define rol32_be(a, s) rol32(a, s)

# define ARCH_IS_BIG_ENDIAN

#else
# error "endian not supported"
#endif

#endif /* !BITFN_H */

cipher-aes-0.2.11/cbits/block128.h0000644000000000000000000000552012541525177014616 0ustar0000000000000000
/*
 * Copyright (c) 2012 Vincent Hanquez
 *
* All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #ifndef BLOCK128_H #define BLOCK128_H #include "bitfn.h" typedef union { uint64_t q[2]; uint32_t d[4]; uint16_t w[8]; uint8_t b[16]; } block128; static inline void block128_copy_bytes(block128 *block, uint8_t *src, uint32_t len) { int i; for (i = 0; i < len; i++) block->b[i] = src[i]; } static inline void block128_copy(block128 *d, const block128 *s) { d->q[0] = s->q[0]; d->q[1] = s->q[1]; } static inline void block128_zero(block128 *d) { d->q[0] = 0; d->q[1] = 0; } static inline void block128_xor(block128 *d, const block128 *s) { d->q[0] ^= s->q[0]; d->q[1] ^= s->q[1]; } static inline void block128_vxor(block128 *d, const block128 *s1, const block128 *s2) { d->q[0] = s1->q[0] ^ s2->q[0]; d->q[1] = s1->q[1] ^ s2->q[1]; } static inline void block128_xor_bytes(block128 *block, uint8_t *src, uint32_t len) { int i; for (i = 0; i < len; i++) block->b[i] ^= src[i]; } static inline void block128_inc_be(block128 *b) { uint64_t v = be64_to_cpu(b->q[1]); if (++v == 0) { b->q[0] = cpu_to_be64(be64_to_cpu(b->q[0]) + 1); b->q[1] = 0; } else b->q[1] = cpu_to_be64(v); } #ifdef IMPL_DEBUG #include static inline void block128_print(block128 *b) { int i; for (i = 0; i < 16; i++) { printf("%02x ", b->b[i]); } printf("\n"); } #endif #endif cipher-aes-0.2.11/cbits/cpu.c0000644000000000000000000000452612541525177014060 0ustar0000000000000000/* * Copyright (C) 2012 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. 
Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #include "cpu.h" #include #ifdef ARCH_X86 static void cpuid(uint32_t info, uint32_t *eax, uint32_t *ebx, uint32_t *ecx, uint32_t *edx) { *eax = info; asm volatile ( #ifdef __x86_64__ "mov %%rbx, %%rdi;" #else "mov %%ebx, %%edi;" #endif "cpuid;" "mov %%ebx, %%esi;" #ifdef __x86_64__ "mov %%rdi, %%rbx;" #else "mov %%edi, %%ebx;" #endif :"+a" (*eax), "=S" (*ebx), "=c" (*ecx), "=d" (*edx) : :"edi"); } #ifdef USE_AESNI void initialize_hw(void (*init_table)(int, int)) { static int inited = 0; if (inited == 0) { uint32_t eax, ebx, ecx, edx; int aesni, pclmul; inited = 1; cpuid(1, &eax, &ebx, &ecx, &edx); aesni = (ecx & 0x02000000); pclmul = (ecx & 0x00000001); init_table(aesni, pclmul); } } #else #define initialize_hw(init_table) (0) #endif #endif cipher-aes-0.2.11/cbits/cpu.h0000644000000000000000000000343612541525177014064 0ustar0000000000000000/* * Copyright (C) 2012 Vincent Hanquez * * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * */ #ifndef CPU_H #define CPU_H #if defined(__i386__) || defined(__x86_64__) #define ARCH_X86 #define USE_AESNI #endif #ifdef USE_AESNI void initialize_hw(void (*init_table)(int, int)); #else #define initialize_hw(init_table) (0) #endif #endif cipher-aes-0.2.11/cbits/gf.c0000644000000000000000000000510512541525177013657 0ustar0000000000000000/* * Copyright (c) 2012 Vincent Hanquez * * All rights reserved. 
* * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #include #include #include "cpu.h" #include "gf.h" #include "aes_x86ni.h" /* this is a really inefficient way to GF multiply. * the alternative without hw accel is building small tables * to speed up the multiplication. * TODO: optimise with tables */ void gf_mul(block128 *a, block128 *b) { uint64_t a0, a1, v0, v1; int i, j; a0 = a1 = 0; v0 = cpu_to_be64(a->q[0]); v1 = cpu_to_be64(a->q[1]); for (i = 0; i < 16; i++) for (j = 0x80; j != 0; j >>= 1) { uint8_t x = b->b[i] & j; a0 ^= x ? v0 : 0; a1 ^= x ? 
v1 : 0; x = (uint8_t) v1 & 1; v1 = (v1 >> 1) | (v0 << 63); v0 = (v0 >> 1) ^ (x ? (0xe1ULL << 56) : 0); } a->q[0] = cpu_to_be64(a0); a->q[1] = cpu_to_be64(a1); } /* inplace GFMUL for xts mode */ void gf_mulx(block128 *a) { const uint64_t gf_mask = cpu_to_le64(0x8000000000000000ULL); uint64_t r = ((a->q[1] & gf_mask) ? cpu_to_le64(0x87) : 0); a->q[1] = cpu_to_le64((le64_to_cpu(a->q[1]) << 1) | (a->q[0] & gf_mask ? 1 : 0)); a->q[0] = cpu_to_le64(le64_to_cpu(a->q[0]) << 1) ^ r; } cipher-aes-0.2.11/cbits/gf.h0000644000000000000000000000333112541525177013663 0ustar0000000000000000/* * Copyright (c) 2012 Vincent Hanquez * * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the author nor the names of his contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. */ #ifndef _GF128MUL_H #define _GF128MUL_H #include "block128.h" void gf_mul(block128 *a, block128 *b); void gf_mulx(block128 *a); #endif cipher-aes-0.2.11/Crypto/0000755000000000000000000000000012541525177013272 5ustar0000000000000000cipher-aes-0.2.11/Crypto/Cipher/0000755000000000000000000000000012541525177014504 5ustar0000000000000000cipher-aes-0.2.11/Crypto/Cipher/AES.hs0000644000000000000000000006160712541525177015462 0ustar0000000000000000{-# LANGUAGE ForeignFunctionInterface #-} {-# LANGUAGE ViewPatterns #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE BangPatterns #-} {-# LANGUAGE CPP #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} -- | -- Module : Crypto.Cipher.AES -- License : BSD-style -- Maintainer : Vincent Hanquez -- Stability : stable -- Portability : good -- module Crypto.Cipher.AES ( -- * block cipher data types AES , AES128 , AES192 , AES256 -- * IV , AESIV , aesIV_ -- * Authenticated encryption block cipher types , AESGCM -- * creation , initAES , initKey -- * misc , genCTR , genCounter -- * encryption , encryptECB , encryptCBC , encryptCTR , encryptXTS , encryptGCM , encryptOCB -- * decryption , decryptECB , decryptCBC , decryptCTR , decryptXTS , decryptGCM , decryptOCB ) where import Data.Word import Foreign.Ptr import Foreign.ForeignPtr import Foreign.C.Types import Foreign.C.String import Data.ByteString.Internal import Data.ByteString.Unsafe import Data.Byteable import qualified Data.ByteString as B import qualified 
Data.ByteString.Internal as B (ByteString(PS), mallocByteString, memcpy) import System.IO.Unsafe (unsafePerformIO) import Crypto.Cipher.Types import Data.SecureMem -- | AES Context (pre-processed key) newtype AES = AES SecureMem -- | AES with 128 bit key newtype AES128 = AES128 AES -- | AES with 192 bit key newtype AES192 = AES192 AES -- | AES with 256 bit key newtype AES256 = AES256 AES -- | AES IV is always 16 bytes newtype AESIV = AESIV ByteString deriving (Show,Eq,Byteable) -- | convert a bytestring to an AESIV aesIV_ :: ByteString -> AESIV aesIV_ iv | B.length iv /= 16 = error $ "AES error: IV length must be block size (16). Its length is: " ++ (show $ B.length iv) | otherwise = AESIV iv instance Cipher AES where cipherName _ = "AES" cipherKeySize _ = KeySizeEnum [16,24,32] cipherInit k = initAES k instance Cipher AES128 where cipherName _ = "AES128" cipherKeySize _ = KeySizeFixed 16 cipherInit k = AES128 $ initAES k instance Cipher AES192 where cipherName _ = "AES192" cipherKeySize _ = KeySizeFixed 24 cipherInit k = AES192 $ initAES k instance Cipher AES256 where cipherName _ = "AES256" cipherKeySize _ = KeySizeFixed 32 cipherInit k = AES256 $ initAES k instance BlockCipher AES where blockSize _ = 16 ecbEncrypt = encryptECB ecbDecrypt = decryptECB cbcEncrypt = encryptCBC cbcDecrypt = decryptCBC ctrCombine = encryptCTR xtsEncrypt = encryptXTS xtsDecrypt = decryptXTS aeadInit AEAD_GCM aes iv = Just $ AEAD aes $ AEADState $ gcmInit aes iv aeadInit AEAD_OCB aes iv = Just $ AEAD aes $ AEADState $ ocbInit aes iv aeadInit _ _ _ = Nothing instance AEADModeImpl AES AESGCM where aeadStateAppendHeader _ = gcmAppendAAD aeadStateEncrypt = gcmAppendEncrypt aeadStateDecrypt = gcmAppendDecrypt aeadStateFinalize = gcmFinish instance AEADModeImpl AES AESOCB where aeadStateAppendHeader = ocbAppendAAD aeadStateEncrypt = ocbAppendEncrypt aeadStateDecrypt = ocbAppendDecrypt aeadStateFinalize = ocbFinish #define INSTANCE_BLOCKCIPHER(CSTR) \ instance BlockCipher CSTR where \ { 
blockSize _ = 16 \ ; ecbEncrypt (CSTR aes) = encryptECB aes \ ; ecbDecrypt (CSTR aes) = decryptECB aes \ ; cbcEncrypt (CSTR aes) = encryptCBC aes \ ; cbcDecrypt (CSTR aes) = decryptCBC aes \ ; ctrCombine (CSTR aes) = encryptCTR aes \ ; xtsEncrypt (CSTR aes1, CSTR aes2) = encryptXTS (aes1,aes2) \ ; xtsDecrypt (CSTR aes1, CSTR aes2) = decryptXTS (aes1,aes2) \ ; aeadInit AEAD_GCM cipher@(CSTR aes) iv = Just $ AEAD cipher $ AEADState $ gcmInit aes iv \ ; aeadInit AEAD_OCB cipher@(CSTR aes) iv = Just $ AEAD cipher $ AEADState $ ocbInit aes iv \ ; aeadInit _ _ _ = Nothing \ }; \ \ instance AEADModeImpl CSTR AESGCM where \ { aeadStateAppendHeader (CSTR _) gcmState bs = gcmAppendAAD gcmState bs \ ; aeadStateEncrypt (CSTR aes) gcmState input = gcmAppendEncrypt aes gcmState input \ ; aeadStateDecrypt (CSTR aes) gcmState input = gcmAppendDecrypt aes gcmState input \ ; aeadStateFinalize (CSTR aes) gcmState len = gcmFinish aes gcmState len \ }; \ \ instance AEADModeImpl CSTR AESOCB where \ { aeadStateAppendHeader (CSTR aes) ocbState bs = ocbAppendAAD aes ocbState bs \ ; aeadStateEncrypt (CSTR aes) ocbState input = ocbAppendEncrypt aes ocbState input \ ; aeadStateDecrypt (CSTR aes) ocbState input = ocbAppendDecrypt aes ocbState input \ ; aeadStateFinalize (CSTR aes) ocbState len = ocbFinish aes ocbState len \ } INSTANCE_BLOCKCIPHER(AES128) INSTANCE_BLOCKCIPHER(AES192) INSTANCE_BLOCKCIPHER(AES256) -- | AESGCM State newtype AESGCM = AESGCM SecureMem -- | AESOCB State newtype AESOCB = AESOCB SecureMem sizeGCM :: Int sizeGCM = 80 sizeOCB :: Int sizeOCB = 160 keyToPtr :: AES -> (Ptr AES -> IO a) -> IO a keyToPtr (AES b) f = withSecureMemPtr b (f . castPtr) ivToPtr :: Byteable iv => iv -> (Ptr Word8 -> IO a) -> IO a ivToPtr iv f = withBytePtr iv (f . castPtr) ivCopyPtr :: AESIV -> (Ptr Word8 -> IO ()) -> IO AESIV ivCopyPtr (AESIV iv) f = do newIV <- create 16 $ \newPtr -> do withBytePtr iv $ \ivPtr -> B.memcpy newPtr ivPtr 16 withBytePtr newIV $ f return $! 
AESIV newIV

withKeyAndIV :: Byteable iv => AES -> iv -> (Ptr AES -> Ptr Word8 -> IO a) -> IO a
withKeyAndIV ctx iv f = keyToPtr ctx $ \kptr -> ivToPtr iv $ \ivp -> f kptr ivp

withKey2AndIV :: Byteable iv => AES -> AES -> iv -> (Ptr AES -> Ptr AES -> Ptr Word8 -> IO a) -> IO a
withKey2AndIV key1 key2 iv f =
    keyToPtr key1 $ \kptr1 -> keyToPtr key2 $ \kptr2 -> ivToPtr iv $ \ivp -> f kptr1 kptr2 ivp

withGCMKeyAndCopySt :: AES -> AESGCM -> (Ptr AESGCM -> Ptr AES -> IO a) -> IO (a, AESGCM)
withGCMKeyAndCopySt aes (AESGCM gcmSt) f =
    keyToPtr aes $ \aesPtr -> do
        newSt <- secureMemCopy gcmSt
        a     <- withSecureMemPtr newSt $ \gcmStPtr -> f (castPtr gcmStPtr) aesPtr
        return (a, AESGCM newSt)

withNewGCMSt :: AESGCM -> (Ptr AESGCM -> IO ()) -> IO AESGCM
withNewGCMSt (AESGCM gcmSt) f = withSecureMemCopy gcmSt (f . castPtr) >>= \sm2 -> return (AESGCM sm2)

withOCBKeyAndCopySt :: AES -> AESOCB -> (Ptr AESOCB -> Ptr AES -> IO a) -> IO (a, AESOCB)
withOCBKeyAndCopySt aes (AESOCB ocbSt) f =
    keyToPtr aes $ \aesPtr -> do
        newSt <- secureMemCopy ocbSt
        a     <- withSecureMemPtr newSt $ \ocbStPtr -> f (castPtr ocbStPtr) aesPtr
        return (a, AESOCB newSt)

-- | Initialize a new context with a key
--
-- The key must be 16, 24 or 32 bytes long; any other length raises an error.
initAES :: Byteable b => b -> AES
initAES k
    | len == 16 = initWithRounds 10
    | len == 24 = initWithRounds 12
    | len == 32 = initWithRounds 14
    | otherwise = error "AES: not a valid key length (valid=16,24,32)"
  where len = byteableLength k
        initWithRounds nbR = AES $ unsafeCreateSecureMem (16+2*2*16*nbR) aesInit
        aesInit ptr = withBytePtr k $ \ikey -> c_aes_init (castPtr ptr) (castPtr ikey) (fromIntegral len)

{-# DEPRECATED initKey "use initAES" #-}
initKey :: Byteable b => b -> AES
initKey = initAES

-- | encrypt using Electronic Code Book (ECB)
{-# NOINLINE encryptECB #-}
encryptECB :: AES -> ByteString -> ByteString
encryptECB = doECB c_aes_encrypt_ecb

-- | encrypt using Cipher Block Chaining (CBC)
{-# NOINLINE encryptCBC #-}
encryptCBC :: Byteable iv => AES        -- ^ AES Context
                          -> iv         -- ^ Initial vector of AES block size
                          -> ByteString -- ^ plaintext
                          -> ByteString -- ^ ciphertext
encryptCBC = doCBC c_aes_encrypt_cbc

-- | Generate a counter mode pad. This is generally XOR-ed with an input
-- to implement the standard counter mode block operation.
--
-- If the length requested is not a multiple of the block cipher size,
-- more data will be returned, so that the returned bytestring is
-- a multiple of the block cipher size.
{-# NOINLINE genCTR #-}
genCTR :: Byteable iv => AES -- ^ Cipher Key.
                      -> iv  -- ^ usually a 128 bit integer.
                      -> Int -- ^ length of bytes required.
                      -> ByteString
genCTR ctx iv len
    | len <= 0                = B.empty
    | byteableLength iv /= 16 = error $ "AES error: IV length must be block size (16). Its length is: " ++ (show $ byteableLength iv)
    | otherwise               = unsafeCreate (nbBlocks * 16) generate
  where generate o = withKeyAndIV ctx iv $ \k i -> c_aes_gen_ctr (castPtr o) k i (fromIntegral nbBlocks)
        (nbBlocks',r) = len `quotRem` 16
        nbBlocks = if r == 0 then nbBlocks' else nbBlocks' + 1

-- | Generate a counter mode pad. This is generally XOR-ed with an input
-- to implement the standard counter mode block operation.
--
-- If the length requested is not a multiple of the block cipher size,
-- more data will be returned, so that the returned bytestring is
-- a multiple of the block cipher size.
--
-- Similar to 'genCTR' but also returns the next IV for continuation
{-# NOINLINE genCounter #-}
genCounter :: AES -> AESIV -> Int -> (ByteString, AESIV)
genCounter ctx iv len
    | len <= 0  = (B.empty, iv)
    | otherwise = unsafePerformIO $ do
        fptr  <- B.mallocByteString outputLength
        newIv <- withForeignPtr fptr $ \o -> keyToPtr ctx $ \k -> ivCopyPtr iv $ \i -> do
                    c_aes_gen_ctr_cont (castPtr o) k i (fromIntegral nbBlocks)
        let !out = B.PS fptr 0 outputLength
        return $! (out `seq` newIv `seq` (out, newIv))
  where (nbBlocks',r) = len `quotRem` 16
        nbBlocks      = if r == 0 then nbBlocks' else nbBlocks' + 1
        outputLength  = nbBlocks * 16

{- TODO: when genCTR has same AESIV requirements for IV, add the following rules:
   RULES "snd . genCounter" forall ctx iv len . snd (genCounter ctx iv len) = genCTR ctx iv len
 -}

-- | encrypt using Counter mode (CTR)
--
-- In CTR mode encryption and decryption are the same operation.
{-# NOINLINE encryptCTR #-}
encryptCTR :: Byteable iv => AES        -- ^ AES Context
                          -> iv         -- ^ initial vector of AES block size (usually representing a 128 bit integer)
                          -> ByteString -- ^ plaintext input
                          -> ByteString -- ^ ciphertext output
encryptCTR ctx iv input
    | len <= 0                = B.empty
    | byteableLength iv /= 16 = error $ "AES error: IV length must be block size (16). Its length is: " ++ (show $ byteableLength iv)
    | otherwise               = unsafeCreate len doEncrypt
  where doEncrypt o = withKeyAndIV ctx iv $ \k v -> unsafeUseAsCString input $ \i ->
            c_aes_encrypt_ctr (castPtr o) k v i (fromIntegral len)
        len = B.length input

-- | encrypt using Galois counter mode (GCM);
-- returns the encrypted bytestring and the associated tag
--
-- note: encrypted data is identical to CTR mode in GCM, however
-- a tag is also computed.
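The length rounding performed by `genCTR` and `genCounter` above can be isolated as a tiny pure sketch. `ctrPadLength` is a hypothetical name for illustration; it is not part of this module:

```haskell
-- Minimal sketch of the rounding used by genCTR/genCounter above: the
-- requested byte count is rounded up to a whole number of 16-byte blocks,
-- so the generated pad may be longer than what was asked for.
ctrPadLength :: Int -> Int
ctrPadLength len = nbBlocks * 16
  where (nbBlocks', r) = len `quotRem` 16
        nbBlocks       = if r == 0 then nbBlocks' else nbBlocks' + 1
```

Asking for 17 bytes of pad therefore yields 32, which is the "more data will be returned" behaviour the comments above describe.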
{-# NOINLINE encryptGCM #-}
encryptGCM :: Byteable iv => AES          -- ^ AES Context
                          -> iv           -- ^ IV initial vector of any size
                          -> ByteString   -- ^ data to authenticate (AAD)
                          -> ByteString   -- ^ data to encrypt
                          -> (ByteString, AuthTag) -- ^ ciphertext and tag
encryptGCM = doGCM gcmAppendEncrypt

-- | encrypt using OCB v3;
-- returns the encrypted bytestring and the associated tag
{-# NOINLINE encryptOCB #-}
encryptOCB :: Byteable iv => AES          -- ^ AES Context
                          -> iv           -- ^ IV initial vector of any size
                          -> ByteString   -- ^ data to authenticate (AAD)
                          -> ByteString   -- ^ data to encrypt
                          -> (ByteString, AuthTag) -- ^ ciphertext and tag
encryptOCB = doOCB ocbAppendEncrypt

-- | encrypt using XTS
--
-- the first key is the normal block encryption key
-- the second key is used for the initial block tweak
{-# NOINLINE encryptXTS #-}
encryptXTS :: Byteable iv => (AES,AES)  -- ^ AES cipher and tweak context
                          -> iv         -- ^ a 128-bit IV, typically a sector or a block offset in XTS
                          -> Word32     -- ^ number of rounds to skip, also seen as a 16-byte offset in the sector or block.
                          -> ByteString -- ^ input to encrypt
                          -> ByteString -- ^ output encrypted
encryptXTS = doXTS c_aes_encrypt_xts

-- | decrypt using Electronic Code Book (ECB)
{-# NOINLINE decryptECB #-}
decryptECB :: AES -> ByteString -> ByteString
decryptECB = doECB c_aes_decrypt_ecb

-- | decrypt using Cipher Block Chaining (CBC)
{-# NOINLINE decryptCBC #-}
decryptCBC :: Byteable iv => AES -> iv -> ByteString -> ByteString
decryptCBC = doCBC c_aes_decrypt_cbc

-- | decrypt using Counter mode (CTR).
--
-- In CTR mode encryption and decryption are the same operation.
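The claim above that CTR decryption is the same operation as CTR encryption comes down to XOR being an involution: the mode only XORs a key-derived keystream with the input. A self-contained sketch over plain `Word8` lists, with no AES involved and hypothetical helper names:

```haskell
import Data.Bits (xor)
import Data.Word (Word8)

-- CTR combine reduced to its essence: XOR the keystream into the input.
-- (Assumes keystream and input have the same length, as in real CTR use.)
ctrCombineWith :: [Word8] -> [Word8] -> [Word8]
ctrCombineWith = zipWith xor

-- Applying the same operation twice with the same keystream restores the
-- input, which is why decryptCTR can simply be defined as encryptCTR.
roundTrips :: [Word8] -> [Word8] -> Bool
roundTrips ks msg = ctrCombineWith ks (ctrCombineWith ks msg) == msg
```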
decryptCTR :: Byteable iv => AES        -- ^ AES Context
                          -> iv         -- ^ initial vector, usually representing a 128 bit integer
                          -> ByteString -- ^ ciphertext input
                          -> ByteString -- ^ plaintext output
decryptCTR = encryptCTR

-- | decrypt using XTS
{-# NOINLINE decryptXTS #-}
decryptXTS :: Byteable iv => (AES,AES)  -- ^ AES cipher and tweak context
                          -> iv         -- ^ a 128-bit IV, typically a sector or a block offset in XTS
                          -> Word32     -- ^ number of rounds to skip, also seen as a 16-byte offset in the sector or block.
                          -> ByteString -- ^ input to decrypt
                          -> ByteString -- ^ output decrypted
decryptXTS = doXTS c_aes_decrypt_xts

-- | decrypt using Galois Counter Mode (GCM)
{-# NOINLINE decryptGCM #-}
decryptGCM :: Byteable iv => AES          -- ^ Key
                          -> iv           -- ^ IV initial vector of any size
                          -> ByteString   -- ^ data to authenticate (AAD)
                          -> ByteString   -- ^ data to decrypt
                          -> (ByteString, AuthTag) -- ^ plaintext and tag
decryptGCM = doGCM gcmAppendDecrypt

-- | decrypt using Offset Codebook Mode (OCB)
{-# NOINLINE decryptOCB #-}
decryptOCB :: Byteable iv => AES          -- ^ Key
                          -> iv           -- ^ IV initial vector of any size
                          -> ByteString   -- ^ data to authenticate (AAD)
                          -> ByteString   -- ^ data to decrypt
                          -> (ByteString, AuthTag) -- ^ plaintext and tag
decryptOCB = doOCB ocbAppendDecrypt

{-# INLINE doECB #-}
doECB :: (Ptr b -> Ptr AES -> CString -> CUInt -> IO ()) -> AES -> ByteString -> ByteString
doECB f ctx input
    | r /= 0    = error $ "Encryption error: input length must be a multiple of block size (16). Its length is: " ++ (show len)
    | otherwise = unsafeCreate len $ \o -> keyToPtr ctx $ \k -> unsafeUseAsCString input $ \i ->
        f (castPtr o) k i (fromIntegral nbBlocks)
  where (nbBlocks, r) = len `quotRem` 16
        len           = B.length input

{-# INLINE doCBC #-}
doCBC :: Byteable iv => (Ptr b -> Ptr AES -> Ptr Word8 -> CString -> CUInt -> IO ()) -> AES -> iv -> ByteString -> ByteString
doCBC f ctx iv input
    | len == 0                = B.empty
    | byteableLength iv /= 16 = error $ "AES error: IV length must be block size (16). Its length is: " ++ (show $ byteableLength iv)
    | r /= 0                  = error $ "Encryption error: input length must be a multiple of block size (16). Its length is: " ++ (show len)
    | otherwise               = unsafeCreate len $ \o -> withKeyAndIV ctx iv $ \k v -> unsafeUseAsCString input $ \i ->
        f (castPtr o) k v i (fromIntegral nbBlocks)
  where (nbBlocks, r) = len `quotRem` 16
        len           = B.length input

{-# INLINE doXTS #-}
doXTS :: Byteable iv => (Ptr b -> Ptr AES -> Ptr AES -> Ptr Word8 -> CUInt -> CString -> CUInt -> IO ())
      -> (AES, AES) -> iv -> Word32 -> ByteString -> ByteString
doXTS f (key1,key2) iv spoint input
    | len == 0  = B.empty
    | r /= 0    = error $ "Encryption error: input length must be a multiple of block size (16) for now. Its length is: " ++ (show len)
    | otherwise = unsafeCreate len $ \o -> withKey2AndIV key1 key2 iv $ \k1 k2 v -> unsafeUseAsCString input $ \i ->
        f (castPtr o) k1 k2 v (fromIntegral spoint) i (fromIntegral nbBlocks)
  where (nbBlocks, r) = len `quotRem` 16
        len           = B.length input

------------------------------------------------------------------------
-- GCM
------------------------------------------------------------------------
{-# INLINE doGCM #-}
doGCM :: Byteable iv => (AES -> AESGCM -> ByteString -> (ByteString, AESGCM)) -> AES -> iv -> ByteString -> ByteString -> (ByteString, AuthTag)
doGCM f ctx iv aad input = (output, tag)
  where tag             = gcmFinish ctx after 16
        (output, after) = f ctx afterAAD input
        afterAAD        = gcmAppendAAD ini aad
        ini             = gcmInit ctx iv

-- | initialize a GCM context
{-# NOINLINE gcmInit #-}
gcmInit :: Byteable iv => AES -> iv -> AESGCM
gcmInit ctx iv = unsafePerformIO $ do
    sm <- createSecureMem sizeGCM $ \gcmStPtr ->
            withKeyAndIV ctx iv $ \k v ->
                c_aes_gcm_init (castPtr gcmStPtr) k v (fromIntegral $ byteableLength iv)
    return $ AESGCM sm

-- | append data which is only going to be authenticated to the GCM context.
--
-- This needs to happen after initialization and before appending encryption/decryption data.
{-# NOINLINE gcmAppendAAD #-}
gcmAppendAAD :: AESGCM -> ByteString -> AESGCM
gcmAppendAAD gcmSt input = unsafePerformIO doAppend
  where doAppend = withNewGCMSt gcmSt $ \gcmStPtr ->
            unsafeUseAsCString input $ \i ->
                c_aes_gcm_aad gcmStPtr i (fromIntegral $ B.length input)

-- | append data to encrypt to the GCM context
--
-- The bytestring needs to be a multiple of the AES block size, unless it's the last call to this function.
-- This needs to happen after appending AAD, or after initialization if there is no AAD.
{-# NOINLINE gcmAppendEncrypt #-}
gcmAppendEncrypt :: AES -> AESGCM -> ByteString -> (ByteString, AESGCM)
gcmAppendEncrypt ctx gcm input = unsafePerformIO $ withGCMKeyAndCopySt ctx gcm doEnc
  where len = B.length input
        doEnc gcmStPtr aesPtr = create len $ \o ->
            unsafeUseAsCString input $ \i ->
                c_aes_gcm_encrypt (castPtr o) gcmStPtr aesPtr i (fromIntegral len)

-- | append data to decrypt to the GCM context
--
-- The bytestring needs to be a multiple of the AES block size, unless it's the last call to this function.
-- This needs to happen after appending AAD, or after initialization if there is no AAD.
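The ordering constraints stated in the comments above (initialize, then append all AAD, then the payload, then produce the tag) are exactly what the internal `doGCM` driver does. A non-executable sketch using this module's internal functions; `gcmSketch` is a hypothetical name and this is illustrative only, not an exported API:

```haskell
-- Illustrative only: required call order for the incremental GCM state.
gcmSketch :: AES -> AESIV -> ByteString -> ByteString -> (ByteString, AuthTag)
gcmSketch ctx iv aad plaintext = (ciphertext, tag)
  where st0               = gcmInit ctx iv                      -- 1. initialize with key and IV
        st1               = gcmAppendAAD st0 aad                -- 2. append all AAD first
        (ciphertext, st2) = gcmAppendEncrypt ctx st1 plaintext  -- 3. then the payload
        tag               = gcmFinish ctx st2 16                -- 4. finally produce the 16-byte tag
```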
{-# NOINLINE gcmAppendDecrypt #-}
gcmAppendDecrypt :: AES -> AESGCM -> ByteString -> (ByteString, AESGCM)
gcmAppendDecrypt ctx gcm input = unsafePerformIO $ withGCMKeyAndCopySt ctx gcm doDec
  where len = B.length input
        doDec gcmStPtr aesPtr = create len $ \o ->
            unsafeUseAsCString input $ \i ->
                c_aes_gcm_decrypt (castPtr o) gcmStPtr aesPtr i (fromIntegral len)

-- | Generate the Tag from the GCM context
{-# NOINLINE gcmFinish #-}
gcmFinish :: AES -> AESGCM -> Int -> AuthTag
gcmFinish ctx gcm taglen = AuthTag $ B.take taglen computeTag
  where computeTag = unsafeCreate 16 $ \t ->
            withGCMKeyAndCopySt ctx gcm (c_aes_gcm_finish (castPtr t)) >> return ()

------------------------------------------------------------------------
-- OCB v3
------------------------------------------------------------------------
{-# INLINE doOCB #-}
doOCB :: Byteable iv => (AES -> AESOCB -> ByteString -> (ByteString, AESOCB)) -> AES -> iv -> ByteString -> ByteString -> (ByteString, AuthTag)
doOCB f ctx iv aad input = (output, tag)
  where tag             = ocbFinish ctx after 16
        (output, after) = f ctx afterAAD input
        afterAAD        = ocbAppendAAD ctx ini aad
        ini             = ocbInit ctx iv

-- | initialize an OCB context
{-# NOINLINE ocbInit #-}
ocbInit :: Byteable iv => AES -> iv -> AESOCB
ocbInit ctx iv = unsafePerformIO $ do
    sm <- createSecureMem sizeOCB $ \ocbStPtr ->
            withKeyAndIV ctx iv $ \k v ->
                c_aes_ocb_init (castPtr ocbStPtr) k v (fromIntegral $ byteableLength iv)
    return $ AESOCB sm

-- | append data which is only going to be authenticated to the OCB context.
--
-- This needs to happen after initialization and before appending encryption/decryption data.
{-# NOINLINE ocbAppendAAD #-}
ocbAppendAAD :: AES -> AESOCB -> ByteString -> AESOCB
ocbAppendAAD ctx ocb input = unsafePerformIO (snd `fmap` withOCBKeyAndCopySt ctx ocb doAppend)
  where doAppend ocbStPtr aesPtr =
            unsafeUseAsCString input $ \i ->
                c_aes_ocb_aad ocbStPtr aesPtr i (fromIntegral $ B.length input)

-- | append data to encrypt to the OCB context
--
-- The bytestring needs to be a multiple of the AES block size, unless it's the last call to this function.
-- This needs to happen after appending AAD, or after initialization if there is no AAD.
{-# NOINLINE ocbAppendEncrypt #-}
ocbAppendEncrypt :: AES -> AESOCB -> ByteString -> (ByteString, AESOCB)
ocbAppendEncrypt ctx ocb input = unsafePerformIO $ withOCBKeyAndCopySt ctx ocb doEnc
  where len = B.length input
        doEnc ocbStPtr aesPtr = create len $ \o ->
            unsafeUseAsCString input $ \i ->
                c_aes_ocb_encrypt (castPtr o) ocbStPtr aesPtr i (fromIntegral len)

-- | append data to decrypt to the OCB context
--
-- The bytestring needs to be a multiple of the AES block size, unless it's the last call to this function.
-- This needs to happen after appending AAD, or after initialization if there is no AAD.
{-# NOINLINE ocbAppendDecrypt #-} ocbAppendDecrypt :: AES -> AESOCB -> ByteString -> (ByteString, AESOCB) ocbAppendDecrypt ctx ocb input = unsafePerformIO $ withOCBKeyAndCopySt ctx ocb doDec where len = B.length input doDec ocbStPtr aesPtr = create len $ \o -> unsafeUseAsCString input $ \i -> c_aes_ocb_decrypt (castPtr o) ocbStPtr aesPtr i (fromIntegral len) -- | Generate the Tag from OCB context {-# NOINLINE ocbFinish #-} ocbFinish :: AES -> AESOCB -> Int -> AuthTag ocbFinish ctx ocb taglen = AuthTag $ B.take taglen computeTag where computeTag = unsafeCreate 16 $ \t -> withOCBKeyAndCopySt ctx ocb (c_aes_ocb_finish (castPtr t)) >> return () ------------------------------------------------------------------------ foreign import ccall "aes.h aes_initkey" c_aes_init :: Ptr AES -> CString -> CUInt -> IO () ------------------------------------------------------------------------ foreign import ccall "aes.h aes_encrypt_ecb" c_aes_encrypt_ecb :: CString -> Ptr AES -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_decrypt_ecb" c_aes_decrypt_ecb :: CString -> Ptr AES -> CString -> CUInt -> IO () ------------------------------------------------------------------------ foreign import ccall "aes.h aes_encrypt_cbc" c_aes_encrypt_cbc :: CString -> Ptr AES -> Ptr Word8 -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_decrypt_cbc" c_aes_decrypt_cbc :: CString -> Ptr AES -> Ptr Word8 -> CString -> CUInt -> IO () ------------------------------------------------------------------------ foreign import ccall "aes.h aes_encrypt_xts" c_aes_encrypt_xts :: CString -> Ptr AES -> Ptr AES -> Ptr Word8 -> CUInt -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_decrypt_xts" c_aes_decrypt_xts :: CString -> Ptr AES -> Ptr AES -> Ptr Word8 -> CUInt -> CString -> CUInt -> IO () ------------------------------------------------------------------------ foreign import ccall "aes.h aes_gen_ctr" c_aes_gen_ctr :: CString -> Ptr AES -> Ptr Word8 -> CUInt -> IO () foreign 
import ccall unsafe "aes.h aes_gen_ctr_cont" c_aes_gen_ctr_cont :: CString -> Ptr AES -> Ptr Word8 -> CUInt -> IO () foreign import ccall "aes.h aes_encrypt_ctr" c_aes_encrypt_ctr :: CString -> Ptr AES -> Ptr Word8 -> CString -> CUInt -> IO () ------------------------------------------------------------------------ foreign import ccall "aes.h aes_gcm_init" c_aes_gcm_init :: Ptr AESGCM -> Ptr AES -> Ptr Word8 -> CUInt -> IO () foreign import ccall "aes.h aes_gcm_aad" c_aes_gcm_aad :: Ptr AESGCM -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_gcm_encrypt" c_aes_gcm_encrypt :: CString -> Ptr AESGCM -> Ptr AES -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_gcm_decrypt" c_aes_gcm_decrypt :: CString -> Ptr AESGCM -> Ptr AES -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_gcm_finish" c_aes_gcm_finish :: CString -> Ptr AESGCM -> Ptr AES -> IO () ------------------------------------------------------------------------ foreign import ccall "aes.h aes_ocb_init" c_aes_ocb_init :: Ptr AESOCB -> Ptr AES -> Ptr Word8 -> CUInt -> IO () foreign import ccall "aes.h aes_ocb_aad" c_aes_ocb_aad :: Ptr AESOCB -> Ptr AES -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_ocb_encrypt" c_aes_ocb_encrypt :: CString -> Ptr AESOCB -> Ptr AES -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_ocb_decrypt" c_aes_ocb_decrypt :: CString -> Ptr AESOCB -> Ptr AES -> CString -> CUInt -> IO () foreign import ccall "aes.h aes_ocb_finish" c_aes_ocb_finish :: CString -> Ptr AESOCB -> Ptr AES -> IO () cipher-aes-0.2.11/Tests/0000755000000000000000000000000012541525177013114 5ustar0000000000000000cipher-aes-0.2.11/Tests/KATCBC.hs0000644000000000000000000005501512541525177014405 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} module KATCBC where import qualified Data.ByteString as B import Data.ByteString.Char8 () type KATCBC = (B.ByteString, B.ByteString, B.ByteString, B.ByteString) vectors_aes128_enc, vectors_aes128_dec , 
vectors_aes192_enc, vectors_aes192_dec , vectors_aes256_enc, vectors_aes256_dec :: [KATCBC] vectors_aes128_enc = [ ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x66\xe9\x4b\xd4\xef\x8a\x2c\x3b\x88\x4c\xfa\x59\xca\x34\x2b\x2e") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xb6\xae\xaf\xfa\x75\x2d\xc0\x8b\x51\x63\x97\x31\x76\x1a\xed\x00") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xcb\x64\xcf\x3f\x42\x2a\xe8\x4b\xb9\x0e\x3a\xb4\xdb\xa7\xbd\x86") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xe5\xb5\x07\x7f\x93\x46\x46\x2c\x62\xa0\x75\xc0\xc7\x08\xee\x96") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xe1\x4d\x5d\x0e\xe2\x77\x15\xdf\x08\xb4\x15\x2b\xa2\x3d\xa8\xe0") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x5e\x77\xe5\x9f\x8f\x85\x94\x34\x89\xa2\x41\x49\xc7\x5f\x4e\xc9") , 
("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x8f\x42\xc2\x4b\xee\x6e\x63\x47\x2b\x16\x5a\xa9\x41\x31\x2f\x7c") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xb0\xea\x4a\xc0\xd2\x5c\xcd\x7c\x82\xcb\x8a\x30\x68\xc6\xfe\x2e") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\xe1\x4d\x5d\x0e\xe2\x77\x15\xdf\x08\xb4\x15\x2b\xa2\x3d\xa8\xe0") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x17\xd6\x14\xf3\x79\xa9\x35\x90\x77\xe9\x55\x77\xfd\x31\xc2\x0a") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x8f\x42\xc2\x4b\xee\x6e\x63\x47\x2b\x16\x5a\xa9\x41\x31\x2f\x7c") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\xe5\xb5\x07\x7f\x93\x46\x46\x2c\x62\xa0\x75\xc0\xc7\x08\xee\x96") ] vectors_aes192_enc = [ ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xaa\xe0\x69\x92\xac\xbf\x52\xa3\xe8\xf4\xa9\x6e\xc9\x30\x0b\xd7") , 
("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x5f\x88\xef\x3f\xbd\xeb\xf2\xe4\xe2\x66\x65\x12\xd3\xbc\xb7\x0f") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xdb\x42\xf5\x1c\xd2\x0e\xca\xd2\x9e\xb0\x13\x2b\x0f\xaa\x4b\x85") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xda\xb4\x01\x5f\x98\x70\x25\xeb\xb8\xa8\x5f\x3c\x7f\x73\x70\x19") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xcf\x1e\xce\x3c\x44\xb0\x78\xfb\x27\xcb\x0a\x3e\x07\x1b\x08\x20") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x98\xb8\x95\xa1\x45\xca\x4e\x0b\xf8\x3e\x69\x32\x81\xc1\xa0\x97") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xf2\xf0\xae\xd8\xcd\xc9\x21\xca\x4b\x55\x84\x5d\xa4\x15\x21\xc2") , 
("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x5e\xea\x4b\x13\xdd\xd9\x17\x12\xb0\x14\xe2\x82\x2d\x18\x76\xfb") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\xcf\x1e\xce\x3c\x44\xb0\x78\xfb\x27\xcb\x0a\x3e\x07\x1b\x08\x20") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\xeb\x8c\x17\x30\x90\xc7\x5b\x77\xd6\x72\xb4\x57\xa7\x78\xd9\xd0") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\xf2\xf0\xae\xd8\xcd\xc9\x21\xca\x4b\x55\x84\x5d\xa4\x15\x21\xc2") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\xda\xb4\x01\x5f\x98\x70\x25\xeb\xb8\xa8\x5f\x3c\x7f\x73\x70\x19") ] vectors_aes256_enc = [ ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xdc\x95\xc0\x78\xa2\x40\x89\x89\xad\x48\xa2\x14\x92\x84\x20\x87") , 
("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x72\x98\xca\xa5\x65\x03\x1e\xad\xc6\xce\x23\xd2\x3e\xa6\x63\x78") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xf4\x35\xa1\x11\xa3\xe4\xa1\x94\x49\x19\xf9\x12\xc5\xa2\x41\xde") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x91\xc0\x87\x62\x87\x6d\xcc\xf9\xba\x20\x4a\x33\x76\x8f\xa5\xfe") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x7b\xc3\x02\x6c\xd7\x37\x10\x3e\x62\x90\x2b\xcd\x18\xfb\x01\x63") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x9c\xac\x94\xc6\xb4\x85\x61\xf8\xff\xaa\xa7\x86\x16\xba\x48\x92") , 
("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\xf9\xc7\x44\x4b\xb0\xcc\x80\x6c\x7c\x39\xee\x22\x11\xf1\x46") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x6d\xed\xd0\xa3\xe6\x94\xa0\xde\x65\x1d\x68\xa6\xb5\x5a\x64\xa2") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x7b\xc3\x02\x6c\xd7\x37\x10\x3e\x62\x90\x2b\xcd\x18\xfb\x01\x63") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x62\xae\x12\xf3\x24\xbf\xea\x08\xd5\xf6\x75\xb5\x13\x02\x6b\xbf") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x00\xf9\xc7\x44\x4b\xb0\xcc\x80\x6c\x7c\x39\xee\x22\x11\xf1\x46") , 
("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x91\xc0\x87\x62\x87\x6d\xcc\xf9\xba\x20\x4a\x33\x76\x8f\xa5\xfe") ] vectors_aes128_dec = [ ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x14\x0f\x0f\x10\x11\xb5\x22\x3d\x79\x58\x77\x17\xff\xd9\xec\x3a") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x33\x08\x32\x40\xd6\x5c\xbc\x72\xaa\x0b\x44\xf3\xe1\x9e\xa9\x5a") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x65\x0a\x42\xa0\x3c\x4b\x93\xa4\xb7\x43\xdc\x9e\x9c\xf4\xc0\x9b") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x80\xcd\x20\xe1\xbd\x89\x3c\x5e\xe4\x20\x76\x85\xb0\x9a\x0e\x3e") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x15\x0e\x0e\x11\x10\xb4\x23\x3c\x78\x59\x76\x16\xfe\xd8\xed\x3b") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x32\x09\x33\x41\xd7\x5d\xbd\x73\xab\x0a\x45\xf2\xe0\x9f\xa8\x5b") , 
("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x64\x0b\x43\xa1\x3d\x4a\x92\xa5\xb6\x42\xdd\x9f\x9d\xf5\xc1\x9a") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x81\xcc\x21\xe0\xbc\x88\x3d\x5f\xe5\x21\x77\x84\xb1\x9b\x0f\x3f") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\xf5\x06\x41\x7e\x6a\x8f\xbc\x32\xdd\xa5\x52\x73\xbf\x9f\x4d\x5c") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\xbf\x6d\x28\xac\x20\xc9\x1d\x65\xa9\xd4\xb0\x96\xc2\xd5\xa5\x09") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x5f\x2a\x46\xab\x8d\xb9\x5b\x22\x15\xfe\x1a\xa4\xdd\x69\x59\x26") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x71\x9b\x21\xb5\x39\x7c\x2f\x16\x7c\x8b\x45\x22\xb5\x20\xec\x2e") ] vectors_aes192_dec = [ ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x13\x46\x0e\x87\xa8\xfc\x02\x3e\xf2\x50\x1a\xfe\x7f\xf5\x1c\x51") , 
("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x91\x75\x27\xfc\xd4\xa0\x6f\x32\x27\x29\x90\x14\xca\xde\xd4\x1a") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x29\x64\x80\xb6\xa5\xd6\xcf\xb3\x78\x3f\x21\x6b\x80\x31\x3d\xb3") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xbc\xa5\x06\x07\xd0\x67\x30\x85\x2d\x3a\x50\x4b\x68\x0a\x19\xcc") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x12\x47\x0f\x86\xa9\xfd\x03\x3f\xf3\x51\x1b\xff\x7e\xf4\x1d\x50") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x90\x74\x26\xfd\xd5\xa1\x6e\x33\x26\x28\x91\x15\xcb\xdf\xd5\x1b") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x28\x65\x81\xb7\xa4\xd7\xce\xb2\x79\x3e\x20\x6a\x81\x30\x3c\xb2") , 
("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xbd\xa4\x07\x06\xd1\x66\x31\x84\x2c\x3b\x51\x4a\x69\x0b\x18\xcd") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x38\xf9\xf9\xd1\x7e\x2c\x82\xaf\xdc\xed\x68\x03\xb6\x31\x46\x3e") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x35\x4e\xc1\x01\x0f\x17\x50\x5e\x63\x37\x40\x4b\x9a\xf2\xc0\x5c") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\xa7\x7c\xc9\xd1\x4f\x44\xf7\xf7\xcc\x45\x80\x83\x19\xb7\xa4\x71") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\xf9\x1d\xb1\x13\x0b\xd1\xc0\x66\x9f\xfa\xc2\x0e\xbe\xdd\xcb\xca") ] vectors_aes256_dec = [ ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x67\x67\x1c\xe1\xfa\x91\xdd\xeb\x0f\x8f\xbb\xb3\x66\xb5\x31\xb4") , 
("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x7b\xd3\xfb\x90\x65\x56\x9f\x39\x8b\x09\xcb\x93\x4b\x1e\x01\x23") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xda\xa8\xbf\x5c\xde\x2e\x52\x45\x5f\xa3\xb3\xfe\x33\x32\x47\xca") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x83\x24\xdc\xb4\x30\x12\x73\x6c\xed\x58\xab\x8f\x4b\x05\xca\x0b") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x66\x66\x1d\xe0\xfb\x90\xdc\xea\x0e\x8e\xba\xb2\x67\xb4\x30\xb5") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x7a\xd2\xfa\x91\x64\x57\x9e\x38\x8a\x08\xca\x92\x4a\x1f\x00\x22") , 
("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\xdb\xa9\xbe\x5d\xdf\x2f\x53\x44\x5e\xa2\xb2\xff\x32\x33\x46\xcb") , ("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x82\x25\xdd\xb5\x31\x13\x72\x6d\xec\x59\xaa\x8e\x4a\x04\xcb\x0a") , ("\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x68\xe9\x07\x16\xe3\x66\x1b\x1d\xb1\x89\x74\xb0\x9c\x46\x47\xe4") , ("\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01","\x7f\xb9\xeb\xa4\xd3\x5f\x70\x40\xab\x52\xec\xd2\x3b\x48\xb7\x6e") , ("\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02\x02","\x6c\x58\x0f\x41\x82\x36\xbc\xff\x64\x1d\xac\xa7\x3e\x34\x11\x18") , 
("\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03\x03","\x3f\x62\xd6\x8c\xb1\xf7\x62\x28\xa4\xc3\x82\x4f\x8b\x24\xe7\x4b") ] vectors_encrypt = [ ("AES 128 Enc", vectors_aes128_enc) , ("AES 192 Enc", vectors_aes192_enc) , ("AES 256 Enc", vectors_aes256_enc) ] vectors_decrypt = [ ("AES 128 Dec", vectors_aes128_dec) , ("AES 192 Dec", vectors_aes192_dec) , ("AES 256 Dec", vectors_aes256_dec) ] cipher-aes-0.2.11/Tests/KATECB.hs0000644000000000000000000001162312541525177014404 0ustar0000000000000000module KATECB where import qualified Data.ByteString as B vectors_aes128_enc = [ ( B.pack [0x10, 0xa5, 0x88, 0x69, 0xd7, 0x4b, 0xe5, 0xa3,0x74,0xcf,0x86,0x7c,0xfb,0x47,0x38,0x59] , B.replicate 16 0 , B.pack [0x6d,0x25,0x1e,0x69,0x44,0xb0,0x51,0xe0,0x4e,0xaa,0x6f,0xb4,0xdb,0xf7,0x84,0x65] ) , ( B.replicate 16 0 , B.replicate 16 0 , B.pack [0x66,0xe9,0x4b,0xd4,0xef,0x8a,0x2c,0x3b,0x88,0x4c,0xfa,0x59,0xca,0x34,0x2b,0x2e] ) , ( B.replicate 16 0 , B.replicate 16 1 , B.pack [0xe1,0x4d,0x5d,0x0e,0xe2,0x77,0x15,0xdf,0x08,0xb4,0x15,0x2b,0xa2,0x3d,0xa8,0xe0] ) , ( B.replicate 16 1 , B.replicate 16 2 , B.pack [0x17,0xd6,0x14,0xf3,0x79,0xa9,0x35,0x90,0x77,0xe9,0x55,0x77,0xfd,0x31,0xc2,0x0a] ) , ( B.replicate 16 2 , B.replicate 16 1 , B.pack [0x8f,0x42,0xc2,0x4b,0xee,0x6e,0x63,0x47,0x2b,0x16,0x5a,0xa9,0x41,0x31,0x2f,0x7c] ) , ( B.replicate 16 3 , B.replicate 16 2 , B.pack [0x90,0x98,0x85,0xe4,0x77,0xbc,0x20,0xf5,0x8a,0x66,0x97,0x1d,0xa0,0xbc,0x75,0xe3] ) ] vectors_aes192_enc = [ ( B.replicate 24 0 , B.replicate 16 0 , B.pack [0xaa,0xe0,0x69,0x92,0xac,0xbf,0x52,0xa3,0xe8,0xf4,0xa9,0x6e,0xc9,0x30,0x0b,0xd7] ) , ( B.replicate 24 0 , B.replicate 16 1 , B.pack [0xcf,0x1e,0xce,0x3c,0x44,0xb0,0x78,0xfb,0x27,0xcb,0x0a,0x3e,0x07,0x1b,0x08,0x20] ) , ( B.replicate 24 1 , B.replicate 16 2 , 
B.pack [0xeb,0x8c,0x17,0x30,0x90,0xc7,0x5b,0x77,0xd6,0x72,0xb4,0x57,0xa7,0x78,0xd9,0xd0] ) , ( B.replicate 24 2 , B.replicate 16 1 , B.pack [0xf2,0xf0,0xae,0xd8,0xcd,0xc9,0x21,0xca,0x4b,0x55,0x84,0x5d,0xa4,0x15,0x21,0xc2] ) , ( B.replicate 24 3 , B.replicate 16 2 , B.pack [0xca,0xcc,0x30,0x79,0xe4,0xb7,0x95,0x27,0x63,0xd2,0x55,0xd6,0x34,0x10,0x46,0x14] ) ] vectors_aes256_enc = [ ( B.replicate 32 0 , B.replicate 16 0 , B.pack [0xdc,0x95,0xc0,0x78,0xa2,0x40,0x89,0x89,0xad,0x48,0xa2,0x14,0x92,0x84,0x20,0x87] ) , ( B.replicate 32 0 , B.replicate 16 1 , B.pack [0x7b,0xc3,0x02,0x6c,0xd7,0x37,0x10,0x3e,0x62,0x90,0x2b,0xcd,0x18,0xfb,0x01,0x63] ) , ( B.replicate 32 1 , B.replicate 16 2 , B.pack [0x62,0xae,0x12,0xf3,0x24,0xbf,0xea,0x08,0xd5,0xf6,0x75,0xb5,0x13,0x02,0x6b,0xbf] ) , ( B.replicate 32 2 , B.replicate 16 1 , B.pack [0x00,0xf9,0xc7,0x44,0x4b,0xb0,0xcc,0x80,0x6c,0x7c,0x39,0xee,0x22,0x11,0xf1,0x46] ) , ( B.replicate 32 3 , B.replicate 16 2 , B.pack [0xb4,0x05,0x87,0x3e,0xa0,0x76,0x1b,0x9c,0xa9,0x9f,0x70,0xb0,0x16,0x16,0xce,0xb1] ) ] vectors_aes128_dec = [ ( B.replicate 16 0 , B.replicate 16 0 , B.pack [0x14,0x0f,0x0f,0x10,0x11,0xb5,0x22,0x3d,0x79,0x58,0x77,0x17,0xff,0xd9,0xec,0x3a] ) , ( B.replicate 16 0 , B.replicate 16 1 , B.pack [0x15,0x6d,0x0f,0x85,0x75,0xd5,0x33,0x07,0x52,0xf8,0x4a,0xf2,0x72,0xff,0x30,0x50] ) , ( B.replicate 16 1 , B.replicate 16 2 , B.pack [0x34,0x37,0xd6,0xe2,0x31,0xd7,0x02,0x41,0x9b,0x51,0xb4,0x94,0x72,0x71,0xb6,0x11] ) , ( B.replicate 16 2 , B.replicate 16 1 , B.pack [0xe3,0xcd,0xe2,0x37,0xc8,0xf2,0xd9,0x7b,0x8d,0x79,0xf9,0x17,0x1d,0x4b,0xda,0xc1] ) , ( B.replicate 16 3 , B.replicate 16 2 , B.pack [0x5b,0x94,0xaa,0xed,0xd7,0x83,0x99,0x8c,0xd5,0x15,0x35,0x35,0x18,0xcc,0x45,0xe2] ) ] vectors_aes192_dec = [ ( B.replicate 24 0 , B.replicate 16 0 , B.pack [0x13,0x46,0x0e,0x87,0xa8,0xfc,0x02,0x3e,0xf2,0x50,0x1a,0xfe,0x7f,0xf5,0x1c,0x51] ) , ( B.replicate 24 0 , B.replicate 16 1 , B.pack 
[0x92,0x17,0x07,0xc3,0x3d,0x1c,0xc5,0x96,0x7d,0xa5,0x1d,0xbb,0xb0,0x66,0xb2,0x6c] ) , ( B.replicate 24 1 , B.replicate 16 2 , B.pack [0xee,0x92,0x97,0xc6,0xba,0xe8,0x26,0x4d,0xff,0x08,0x0e,0xbb,0x1e,0x74,0x11,0xc1] ) , ( B.replicate 24 2 , B.replicate 16 1 , B.pack [0x49,0x67,0xdf,0x70,0xd2,0x9e,0x9a,0x7f,0x5d,0x7c,0xb9,0xc1,0x20,0xc3,0x8a,0x71] ) , ( B.replicate 24 3 , B.replicate 16 2 , B.pack [0x74,0x38,0x62,0x42,0x6b,0x56,0x7f,0xd5,0xf0,0x1d,0x1b,0x59,0x56,0x01,0x26,0x29] ) ] vectors_aes256_dec = [ ( B.replicate 32 0 , B.replicate 16 0 , B.pack [0x67,0x67,0x1c,0xe1,0xfa,0x91,0xdd,0xeb,0x0f,0x8f,0xbb,0xb3,0x66,0xb5,0x31,0xb4] ) , ( B.replicate 32 0 , B.replicate 16 1 , B.pack [0xcc,0x09,0x21,0xa3,0xc5,0xca,0x17,0xf7,0x48,0xb7,0xc2,0x7b,0x73,0xba,0x87,0xa2] ) , ( B.replicate 32 1 , B.replicate 16 2 , B.pack [0xc0,0x4b,0x27,0x90,0x1a,0x50,0xcf,0xfa,0xf1,0xbb,0x88,0x9f,0xc0,0x92,0x5e,0x14] ) , ( B.replicate 32 2 , B.replicate 16 1 , B.pack [0x24,0x61,0x53,0x5d,0x16,0x1c,0x15,0x39,0x88,0x32,0x77,0x29,0xc5,0x8c,0xc0,0x3a] ) , ( B.replicate 32 3 , B.replicate 16 2 , B.pack [0x30,0xc9,0x1c,0xce,0xfe,0x89,0x30,0xcf,0xff,0x31,0xdb,0xcc,0xfc,0x11,0xc5,0x23] ) ] vectors_encrypt = [ ("AES 128 Enc", vectors_aes128_enc) , ("AES 192 Enc", vectors_aes192_enc) , ("AES 256 Enc", vectors_aes256_enc) ] vectors_decrypt = [ ("AES 128 Dec", vectors_aes128_dec) , ("AES 192 Dec", vectors_aes192_dec) , ("AES 256 Dec", vectors_aes256_dec) ] cipher-aes-0.2.11/Tests/KATGCM.hs0000644000000000000000000001335312541525177014423 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} module KATGCM where import qualified Data.ByteString as B import Data.ByteString.Char8 () -- (key, iv, aad, input, out, taglen, tag) type KATGCM = (B.ByteString, B.ByteString, B.ByteString, B.ByteString, B.ByteString, Int, B.ByteString) vectors_aes128_enc :: [KATGCM] vectors_aes128_enc = [ -- vectors 0 ( {-key = -}"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-iv = 
-}"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-aad = -}"" , {-input = -}"" , {-out = -}"" , {-taglen = -}16 , {-tag = -}"\x58\xe2\xfc\xce\xfa\x7e\x30\x61\x36\x7f\x1d\x57\xa4\xe7\x45\x5a") -- vectors 1 , ( {-key = -}"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-iv = -}"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-aad = -}"\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01" , {-input = -}"\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a" , {-out = -}"\x09\x82\xd0\xc4\x6a\xbc\xa9\x98\xf9\x22\xc8\xb3\x7b\xb8\xf4\x72\xfd\x9f\xa0\xa1\x43\x41\x53\x29\xfd\xf7\x83\xf5\x9e\x81\xcb\xea" , {-taglen = -}16 , {-tag = -}"\x28\x50\x64\x2f\xa8\x8b\xab\x21\x2a\x67\x1a\x97\x48\x69\xa5\x6c") -- vectors 2 , ( {-key = -}"\x01\x02\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-iv = -}"\xff\xfe\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-aad = -}"\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01" , {-input = -}"\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a" , {-out = -}"\x1c\xa3\xb5\x41\x39\x6f\x19\x7a\x91\x2d\x27\x15\x70\xd1\xf5\x76\xde\xf1\xbe\x84\x42\x2a\xbb\xbe\x0b\x2d\x91\x21\x82\xbf\x7f\x17" , {-taglen = -}16 , {-tag = -}"\x15\x2a\x05\xbb\x7e\x13\x5d\xbe\x93\x7f\xa0\x54\x7a\x8e\x74\xb6") -- vectors 3 , ( {-key = -}"\x01\x02\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-iv = -}"\xff\xfe\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-aad = -}"\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01" , {-input = -}"\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a" , {-out = -}"\xda\x35\xf6\x0a\x65\xc2\xa4\x6c\xb6\x6e\xb6\xf8\x1f\x0b\x9c\x74\x53\x4c\x97\x70\x36\xf7\xdf\x05\x6d\x00\xfe\xbf\xb4\xcb\xf5\x27" , {-taglen = -}16 , 
{-tag = -}"\xb7\x76\x7c\x3b\x9e\xf1\xe2\xcb\xc9\x11\xf1\x9a\xdc\xfa\x35\x0d") , ( {-key = -}"\x01\x02\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-iv = -}"\xff\xfe\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-aad = -}"\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76" , {-input = -}"\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b" , {-out = -}"\xe4\x42\xf8\xc4\xc6\x67\x84\x86\x4a\x5a\x6e\xc7\xe0\xca\x68\xac\x16\xbc\x5b\xbf\xf7\xd5\xf3\xfa\xf3\xb2\xcb\xb0\xa2\x14\xa1\x81" , {-taglen = -}16 , {-tag = -}"\x5f\x63\xb8\xeb\x1d\x6f\xa8\x7a\xeb\x39\xa5\xf6\xd7\xed\xc3\x13") , ( {-key = -}"\x01\x02\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-iv = -}"\xff\xfe\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , {-aad = -}"\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76\x76" , {-input = -}"\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b\x0b" , {-out = -}"\xe4\x42\xf8\xc4\xc6\x67\x84\x86\x4a\x5a\x6e\xc7\xe0\xca\x68\xac\x16\xbc\x5b\xbf\xf7\xd5\xf3\xfa\xf3\xb2\xcb\xb0\xa2\x14\xa1" , {-taglen = -}16 , {-tag = -}"\x94\xd1\x47\xc3\xa2\xca\x93\xe9\x66\x93\x1e\x3b\xb3\xbb\x67\x01") ] vectors_aes256_enc :: [KATGCM] vectors_aes256_enc = [ ( "\xb5\x2c\x50\x5a\x37\xd7\x8e\xda\x5d\xd3\x4f\x20\xc2\x25\x40\xea\x1b\x58\x96\x3c\xf8\xe5\xbf\x8f\xfa\x85\xf9\xf2\x49\x25\x05\xb4" , "\x51\x6c\x33\x92\x9d\xf5\xa3\x28\x4f\xf4\x63\xd7" , "" , "" , "" , 16 , "\xbd\xc1\xac\x88\x4d\x33\x24\x57\xa1\xd2\x66\x4f\x16\x8c\x76\xf0") , ( "\x78\xdc\x4e\x0a\xaf\x52\xd9\x35\xc3\xc0\x1e\xea\x57\x42\x8f\x00\xca\x1f\xd4\x75\xf5\xda\x86\xa4\x9c\x8d\xd7\x3d\x68\xc8\xe2\x23" , "\xd7\x9c\xf2\x2d\x50\x4c\xc7\x93\xc3\xfb\x6c\x8a" , "\xb9\x6b\xaa\x8c\x1c\x75\xa6\x71\xbf\xb2\xd0\x8d\x06\xbe\x5f\x36" , "" , "" , 16 , 
"\x3e\x5d\x48\x6a\xa2\xe3\x0b\x22\xe0\x40\xb8\x57\x23\xa0\x6e\x76") , ( "\xc3\xf1\x05\x86\xf2\x46\xaa\xca\xdc\xce\x37\x01\x44\x17\x70\xc0\x3c\xfe\xc9\x40\xaf\xe1\x90\x8c\x4c\x53\x7d\xf4\xe0\x1c\x50\xa0" , "\x4f\x52\xfa\xa1\xfa\x67\xa0\xe5\xf4\x19\x64\x52" , "\x46\xf9\xa2\x2b\x4e\x52\xe1\x52\x65\x13\xa9\x52\xdb\xee\x3b\x91\xf6\x95\x95\x50\x1e\x01\x77\xd5\x0f\xf3\x64\x63\x85\x88\xc0\x8d\x92\xfa\xb8\xc5\x8a\x96\x9b\xdc\xc8\x4c\x46\x8d\x84\x98\xc4\xf0\x63\x92\xb9\x9e\xd5\xe0\xc4\x84\x50\x7f\xc4\x8d\xc1\x8d\x87\xc4\x0e\x2e\xd8\x48\xb4\x31\x50\xbe\x9d\x36\xf1\x4c\xf2\xce\xf1\x31\x0b\xa4\xa7\x45\xad\xcc\x7b\xdc\x41\xf6" , "\x79\xd9\x7e\xa3\xa2\xed\xd6\x50\x45\x82\x1e\xa7\x45\xa4\x47\x42" , "\x56\x0c\xf7\x16\xe5\x61\x90\xe9\x39\x7c\x2f\x10\x36\x29\xeb\x1f" , 16 , "\xff\x7c\x91\x24\x87\x96\x44\xe8\x05\x55\x68\x7d\x27\x3c\x55\xd8" ) ] vectors_encrypt = [ ("AES128 Enc", vectors_aes128_enc) , ("AES256 Enc", vectors_aes256_enc) ] vectors_decrypt = [] cipher-aes-0.2.11/Tests/KATOCB3.hs0000644000000000000000000000330712541525177014501 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} module KATOCB3 where import qualified Data.ByteString as B import Data.ByteString.Char8 () -- (key, iv, aad, input, out, taglen, tag) type KATOCB3 = (B.ByteString, B.ByteString, B.ByteString, B.ByteString, B.ByteString, Int, B.ByteString) key1 = "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" nonce1 = "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b" vectors_aes128_enc :: [KATOCB3] vectors_aes128_enc = [ ( {-key = -} key1 , {-iv = -} nonce1 , {-aad = -}"" , {-input = -}"" , {-out = -}"" , {-taglen = -} 16 , {-tag = -} "\x19\x7b\x9c\x3c\x44\x1d\x3c\x83\xea\xfb\x2b\xef\x63\x3b\x91\x82") , ( key1, nonce1 , "\x00\x01\x02\x03\x04\x05\x06\x07" , "\x00\x01\x02\x03\x04\x05\x06\x07" , "\x92\xb6\x57\x13\x0a\x74\xb8\x5a" , 16 , "\x16\xdc\x76\xa4\x6d\x47\xe1\xea\xd5\x37\x20\x9e\x8a\x96\xd1\x4e") , ( key1, nonce1 , "\x00\x01\x02\x03\x04\x05\x06\x07" , "" , "" , 16 , 
"\x98\xb9\x15\x52\xc8\xc0\x09\x18\x50\x44\xe3\x0a\x6e\xb2\xfe\x21") , ( key1, nonce1 , "" , "\x00\x01\x02\x03\x04\x05\x06\x07" , "\x92\xb6\x57\x13\x0a\x74\xb8\x5a" , 16 , "\x97\x1e\xff\xca\xe1\x9a\xd4\x71\x6f\x88\xe8\x7b\x87\x1f\xbe\xed") , ( key1, nonce1 , "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" , "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" , "\xbe\xa5\xe8\x79\x8d\xbe\x71\x10\x03\x1c\x14\x4d\xa0\xb2\x61\x22" , 16 , "\x77\x6c\x99\x24\xd6\x72\x3a\x1f\xc4\x52\x45\x32\xac\x3e\x5b\xeb") ] vectors_encrypt = [ ("AES128 Enc", vectors_aes128_enc) ] vectors_decrypt = [] cipher-aes-0.2.11/Tests/KATXTS.hs0000644000000000000000000002641412541525177014475 0ustar0000000000000000{-# LANGUAGE OverloadedStrings #-} module KATXTS where import qualified Data.ByteString as B import Data.ByteString.Char8 () type KATXTS = (B.ByteString, B.ByteString, B.ByteString, B.ByteString, B.ByteString, B.ByteString) vectors_aes128_enc, vectors_aes128_dec, vectors_aes256_enc, vectors_aes256_dec :: [KATXTS] vectors_aes128_enc = [ ( "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , "\x66\xe9\x4b\xd4\xef\x8a\x2c\x3b\x88\x4c\xfa\x59\xca\x34\x2b\x2e\xcc\xd2\x97\xa8\xdf\x15\x59\x76\x10\x99\xf4\xb3\x94\x69\x56\x5c" , "\x91\x7c\xf6\x9e\xbd\x68\xb2\xec\x9b\x9f\xe9\xa3\xea\xdd\xa6\x92\xcd\x43\xd2\xf5\x95\x98\xed\x85\x8c\x02\xc2\x65\x2f\xbf\x92\x2e" ) , ( "\x11\x11\x11\x11\x11\x11\x11\x11\x11\x11\x11\x11\x11\x11\x11\x11" , "\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22" , "\x33\x33\x33\x33\x33\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , 
"\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44" , "\x3f\x80\x3b\xcd\x0d\x7f\xd2\xb3\x75\x58\x41\x9f\x59\xd5\xcd\xa6\xf9\x00\x77\x9a\x1b\xfe\xa4\x67\xeb\xb0\x82\x3e\xb3\xaa\x9b\x4d" , "\xc4\x54\x18\x5e\x6a\x16\x93\x6e\x39\x33\x40\x38\xac\xef\x83\x8b\xfb\x18\x6f\xff\x74\x80\xad\xc4\x28\x93\x82\xec\xd6\xd3\x94\xf0" ) , ( "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0" , "\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22\x22" , "\x33\x33\x33\x33\x33\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , "\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44\x44" , "\x3f\x80\x3b\xcd\x0d\x7f\xd2\xb3\x75\x58\x41\x9f\x59\xd5\xcd\xa6\xf9\x00\x77\x9a\x1b\xfe\xa4\x67\xeb\xb0\x82\x3e\xb3\xaa\x9b\x4d" , "\xaf\x85\x33\x6b\x59\x7a\xfc\x1a\x90\x0b\x2e\xb2\x1e\xc9\x49\xd2\x92\xdf\x4c\x04\x7e\x0b\x21\x53\x21\x86\xa5\x97\x1a\x22\x7a\x89" ) , ( "\x27\x18\x28\x18\x28\x45\x90\x45\x23\x53\x60\x28\x74\x71\x35\x26" , "\x31\x41\x59\x26\x53\x58\x97\x93\x23\x84\x62\x64\x33\x83\x27\x95" , "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , 
"\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf
3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff" , "" , "\x27\xa7\x47\x9b\xef\xa1\xd4\x76\x48\x9f\x30\x8c\xd4\xcf\xa6\xe2\xa9\x6e\x4b\xbe\x32\x08\xff\x25\x28\x7d\xd3\x81\x96\x16\xe8\x9c\xc7\x8c\xf7\xf5\xe5\x43\x44\x5f\x83\x33\xd8\xfa\x7f\x56\x00\x00\x05\x27\x9f\xa5\xd8\xb5\xe4\xad\x40\xe7\x36\xdd\xb4\xd3\x54\x12\x32\x80\x63\xfd\x2a\xab\x53\xe5\xea\x1e\x0a\x9f\x33\x25\x00\xa5\xdf\x94\x87\xd0\x7a\x5c\x92\xcc\x51\x2c\x88\x66\xc7\xe8\x60\xce\x93\xfd\xf1\x66\xa2\x49\x12\xb4\x22\x97\x61\x46\xae\x20\xce\x84\x6b\xb7\xdc\x9b\xa9\x4a\x76\x7a\xae\xf2\x0c\x0d\x61\xad\x02\x65\x5e\xa9\x2d\xc4\xc4\xe4\x1a\x89\x52\xc6\x51\xd3\x31\x74\xbe\x51\xa1\x0c\x42\x11\x10\xe6\xd8\x15\x88\xed\xe8\x21\x03\xa2\x52\xd8\xa7\x50\xe8\x76\x8d\xef\xff\xed\x91\x22\x81\x0a\xae\xb9\x9f\x91\x72\xaf\x82\xb6\x04\xdc\x4b\x8e\x51\xbc\xb0\x82\x35\xa6\xf4\x34\x13\x32\xe4\xca\x60\x48\x2a\x4b\xa1\xa0\x3b\x3e\x65\x00\x8f\xc5\xda\x76\xb7\x0b\xf1\x69\x0d\xb4\xea\xe2\x9c\x5f\x1b\xad\xd0\x3c\x5c\xcf\x2a\x55\xd7\x05\xdd\xcd\x86\xd4\x49\x51\x1c\xeb\x7e\xc3\x0b\xf1\x2b\x1f\xa3\x5b\x91\x3f\x9f\x74\x7a\x8a\xfd\x1b\x13\x0e\x94\xbf\xf9\x4e\xff\xd0\x1a\x91\x73\x5c\xa1\x72\x6a\xcd\x0b\x19\x7c\x4e\x5b\x03\x39\x36\x97\xe1\x26\x82\x6f\xb6\xbb\xde\x8e\xcc\x1e\x08\x29\x85\x16\xe2\xc9\xed\x03\xff\x3c\x1b\x78\x60\xf6\xde\x76\xd4\xce\xcd\x94\xc8\x11\x98\x55\xef\x52\x97\xca\x67\xe9\xf3\xe7\xff\x72\xb1\xe9\x97\x85\xca\x0a\x7e\x77\x20\xc5\xb3\x6d\xc6\xd7\x2c\xac\x95\x74\xc8\xcb\xbc\x2f\x80\x1e\x23\xe5\x6f\xd3\x44\xb0\x7f\x22\x15\x4b\xeb\xa0\xf0\x8c\xe8\x89\x1e\x64\x3e\xd9\x95\xc9\x4d\x9a\x69\xc9\xf1\xb5\xf4\x99\x02\x7a\x78\x57\x2a\xee\xbd\x74\xd2\x0c\xc3\x98\x81\xc2\x13\xee\x77\x0b\x10\x10\xe4\xbe\xa7\x18\x84\x69\x77\xae\x11\x9f\x7a\x02\x3a\xb5\x8c\xca\x0a\xd7\x52\xaf\xe6\x56\xbb\x3c\x17\x25\x6a\x9f\x6e\x9b\xf1\x9f\xdd\x5a\x38\xfc\x82\xbb\xe8\x72\xc5\x53\x9e\xdb\x60\x9e\xf4\xf7\x9c\x20\x3e\xbb\x14\x0f\x2e\x58\x3c\xb2\xad\x15\xb4\xaa\x5b\x65\x50\x16\xa8\x44\x92\x77\xdb\xd4\x77\xef\x2c\x8d\x6c\x01\x7d\xb7\x38\xb1\x8d\xeb\x4a\x42\x7d\x19\
x23\xce\x3f\xf2\x62\x73\x57\x79\xa4\x18\xf2\x0a\x28\x2d\xf9\x20\x14\x7b\xea\xbe\x42\x1e\xe5\x31\x9d\x05\x68" ) ] vectors_aes128_dec = [] vectors_aes256_enc = [ ( "\x27\x18\x28\x18\x28\x45\x90\x45\x23\x53\x60\x28\x74\x71\x35\x26\x62\x49\x77\x57\x24\x70\x93\x69\x99\x59\x57\x49\x66\x96\x76\x27" , "\x31\x41\x59\x26\x53\x58\x97\x93\x23\x84\x62\x64\x33\x83\x27\x95\x02\x88\x41\x97\x16\x93\x99\x37\x51\x05\x82\x09\x74\x94\x45\x92" , "\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" , "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x
77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff" , "" , "\x1c\x3b\x3a\x10\x2f\x77\x03\x86\xe4\x83\x6c\x99\xe3\x70\xcf\x9b\xea\x00\x80\x3f\x5e\x48\x23\x57\xa4\xae\x12\xd4\x14\xa3\xe6\x3b\x5d\x31\xe2\x76\xf8\xfe\x4a\x8d\x66\xb3\x17\xf9\xac\x68\x3f\x44\x68\x0a\x86\xac\x35\xad\xfc\x33\x45\xbe\xfe\xcb\x4b\xb1\x88\xfd\x57\x76\x92\x6c\x49\xa3\x09\x5e\xb1\x08\xfd\x10\x98\xba\xec\x70\xaa\xa6\x69\x99\xa7\x2a\x82\xf2\x7d\x84\x8b\x21\xd4\xa7\x41\xb0\xc5\xcd\x4d\x5f\xff\x9d\xac\x89\xae\xba\x12\x29\x61\xd0\x3a\x75\x71\x23\xe9\x87\x0f\x8a\xcf\x10\x00\x02\x08\x87\x89\x14\x29\xca\x2a\x3e\x7a\x7d\x7d\xf7\xb1\x03\x55\x16\x5c\x8b\x9a\x6d\x0a\x7d\xe8\xb0\x62\xc4\x50\x0d\xc4\xcd\x12\x0c\x0f\x74\x18\xda\xe3\xd0\xb5\x78\x1c\x34\x80\x3f\xa7\x54\x21\xc7\x90\xdf\xe1\xde\x18\x34\xf2\x80\xd7\x66\x7b\x32\x7f\x6c\x8c\xd7\x55\x7e\x12\xac\x3a\x0f\x93\xec\x05\xc5\x2e\x04\x93\xef\x31\xa1\x2d\x3d\x92\x60\xf7\x9a\x28\x9d\x6a\x37\x9b\xc7\x0c\x50\x84\x14\x73\xd1\xa8\xcc\x81\xec\x58\x3e\x96\x45\xe0\x7b\x8d\x96\x70\x65\x5b\xa5\xbb\xcf\xec\xc6\xdc\x39\x66\x38\x0a\xd8\xfe\xcb\x17\xb6\xba\x02\x46\x9a\x02\x0a\x84\xe1\x8e\x8f\x84\x25\x20\x70\xc1\x3e\x9f\x1f\x28\x9b\xe5\x4f\xbc\x48\x14\x57\x77\x8f\x61\x60\x15\xe1\x32\x7a\x02\xb1\x40\xf1\x50\x5e\xb3\x09\x32\x6d\x68\x37\x8f\x83\x74\x59\x5c\x84\x9d\x84\xf4\xc3\x33\xec\x44\x23\x88\x51\x43\xcb\x47\xbd\x71\xc5\xed\xae\x9b\xe6\x9a\x2f\xfe\xce\xb1\xbe\xc9\xde\x24\x4f\xbe\x15\x99\x2b\x11\xb7\x7c\x04\x0f\x12\xbd\x8f\x6a\x97\x5a\x44\xa0\xf9\x0c\x29\xa9\xab\xc3\xd4\xd8\x93\x92\x72\x84
\xc5\x87\x54\xcc\xe2\x94\x52\x9f\x86\x14\xdc\xd2\xab\xa9\x91\x92\x5f\xed\xc4\xae\x74\xff\xac\x6e\x33\x3b\x93\xeb\x4a\xff\x04\x79\xda\x9a\x41\x0e\x44\x50\xe0\xdd\x7a\xe4\xc6\xe2\x91\x09\x00\x57\x5d\xa4\x01\xfc\x07\x05\x9f\x64\x5e\x8b\x7e\x9b\xfd\xef\x33\x94\x30\x54\xff\x84\x01\x14\x93\xc2\x7b\x34\x29\xea\xed\xb4\xed\x53\x76\x44\x1a\x77\xed\x43\x85\x1a\xd7\x7f\x16\xf5\x41\xdf\xd2\x69\xd5\x0d\x6a\x5f\x14\xfb\x0a\xab\x1c\xbb\x4c\x15\x50\xbe\x97\xf7\xab\x40\x66\x19\x3c\x4c\xaa\x77\x3d\xad\x38\x01\x4b\xd2\x09\x2f\xa7\x55\xc8\x24\xbb\x5e\x54\xc4\xf3\x6f\xfd\xa9\xfc\xea\x70\xb9\xc6\xe6\x93\xe1\x48\xc1\x51" ) ] vectors_aes256_dec = [] vectors_encrypt = [ ("AES 128 Enc", vectors_aes128_enc) , ("AES 256 Enc", vectors_aes256_enc) ] vectors_decrypt = [ ("AES 128 Dec", vectors_aes128_dec) , ("AES 256 Dec", vectors_aes256_dec) ] cipher-aes-0.2.11/Tests/Tests.hs0000644000000000000000000000615212541525177014556 0ustar0000000000000000{-# LANGUAGE ViewPatterns #-} {-# LANGUAGE OverloadedStrings #-} module Main where import Control.Applicative import Control.Monad import Test.Framework (Test, defaultMain, testGroup) import Test.Framework.Providers.QuickCheck2 (testProperty) import Test.QuickCheck import Test.QuickCheck.Test import Data.Byteable import qualified Data.ByteString as B import qualified Crypto.Cipher.AES as AES import Crypto.Cipher.Types import Crypto.Cipher.Tests import qualified KATECB import qualified KATCBC import qualified KATXTS import qualified KATGCM import qualified KATOCB3 instance Show AES.AES where show _ = "AES" instance Arbitrary AES.AESIV where arbitrary = AES.aesIV_ . B.pack <$> replicateM 16 arbitrary instance Arbitrary AES.AES where arbitrary = AES.initAES . 
B.pack <$> replicateM 16 arbitrary

toKatECB (k,p,c) = KAT_ECB { ecbKey = k, ecbPlaintext = p, ecbCiphertext = c }

toKatCBC (k,iv,p,c) = KAT_CBC { cbcKey = k, cbcIV = iv, cbcPlaintext = p, cbcCiphertext = c }

toKatXTS (k1,k2,iv,p,_,c) = KAT_XTS { xtsKey1 = k1, xtsKey2 = k2, xtsIV = iv, xtsPlaintext = p, xtsCiphertext = c }

toKatAEAD mode (k,iv,h,p,c,taglen,tag) =
    KAT_AEAD { aeadMode       = mode
             , aeadKey        = k
             , aeadIV         = iv
             , aeadHeader     = h
             , aeadPlaintext  = p
             , aeadCiphertext = c
             , aeadTaglen     = taglen
             , aeadTag        = AuthTag tag
             }

toKatGCM = toKatAEAD AEAD_GCM
toKatOCB = toKatAEAD AEAD_OCB

kats128 = defaultKATs
    { kat_ECB  = map toKatECB KATECB.vectors_aes128_enc
    , kat_CBC  = map toKatCBC KATCBC.vectors_aes128_enc
    , kat_CFB  = [ KAT_CFB { cfbKey        = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6\xab\xf7\x15\x88\x09\xcf\x4f\x3c"
                           , cfbIV         = "\xC8\xA6\x45\x37\xA0\xB3\xA9\x3F\xCD\xE3\xCD\xAD\x9F\x1C\xE5\x8B"
                           , cfbPlaintext  = "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
                           , cfbCiphertext = "\x26\x75\x1f\x67\xa3\xcb\xb1\x40\xb1\x80\x8c\xf1\x87\xa4\xf4\xdf"
                           }
                 ]
    , kat_XTS  = map toKatXTS KATXTS.vectors_aes128_enc
    , kat_AEAD = map toKatGCM KATGCM.vectors_aes128_enc
              ++ map toKatOCB KATOCB3.vectors_aes128_enc
    }

kats192 = defaultKATs
    { kat_ECB = map toKatECB KATECB.vectors_aes192_enc
    , kat_CBC = map toKatCBC KATCBC.vectors_aes192_enc
    }

kats256 = defaultKATs
    { kat_ECB  = map toKatECB KATECB.vectors_aes256_enc
    , kat_CBC  = map toKatCBC KATCBC.vectors_aes256_enc
    , kat_XTS  = map toKatXTS KATXTS.vectors_aes256_enc
    , kat_AEAD = map toKatGCM KATGCM.vectors_aes256_enc
    }

main = defaultMain
    [ testBlockCipher kats128 (undefined :: AES.AES128)
    , testBlockCipher kats192 (undefined :: AES.AES192)
    , testBlockCipher kats256 (undefined :: AES.AES256)
    , testProperty "genCtr" $ \(key, iv1) ->
        let (bs1, iv2)    = AES.genCounter key iv1 32
            (bs2, iv3)    = AES.genCounter key iv2 32
            (bsAll, iv3') = AES.genCounter key iv1 64
         in (B.concat [bs1,bs2] == bsAll && iv3 == iv3')
    ]
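The test driver above converts plain tuples of known-answer vectors into KAT records and runs each through the cipher. A minimal, self-contained sketch of that pattern is below; it uses a toy XOR "cipher" (an assumption for illustration — not real AES, and the `KatEcb`/`toyEncrypt` names are hypothetical) so it runs with only `base`:

```haskell
-- Sketch of the known-answer-test pattern used by Tests.hs:
-- tuples (key, plaintext, ciphertext) become records, and each record
-- is checked by encrypting the plaintext and comparing the result.
import Data.Bits (xor)
import Data.Word (Word8)

data KatEcb = KatEcb { ecbKey :: [Word8], ecbPlain :: [Word8], ecbCipher :: [Word8] }

toKatEcb :: ([Word8], [Word8], [Word8]) -> KatEcb
toKatEcb (k, p, c) = KatEcb k p c

-- Toy stand-in cipher: byte-wise XOR with a repeating key.
toyEncrypt :: [Word8] -> [Word8] -> [Word8]
toyEncrypt key = zipWith xor (cycle key)

runKat :: KatEcb -> Bool
runKat kat = toyEncrypt (ecbKey kat) (ecbPlain kat) == ecbCipher kat

vectors :: [([Word8], [Word8], [Word8])]
vectors = [ ([0x01], [0x0a, 0x0b], [0x0b, 0x0a])
          , ([0xff], [0x00, 0xff], [0xff, 0x00])
          ]

main :: IO ()
main = print (all (runKat . toKatEcb) vectors)  -- prints True
```

The real harness follows the same shape, except the records are the `KAT_ECB`/`KAT_CBC`/`KAT_XTS`/`KAT_AEAD` types from `crypto-cipher-tests` and the cipher is the package's AES implementation.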