Failed to read a PKCS7 signed file in S/MIME format (large size) - Linux

I want to verify a signed file in S/MIME format, and the PKCS7 file is 500 MB.
openssl smime -verify -in test.pk7 -inform DER
Error reading S/MIME message
715956256:error:07069041:memory buffer routines:BUF_MEM_grow_clean:malloc failure:buffer.c:152:
715956256:error:0D06B041:asn1 encoding routines:ASN1_D2I_READ_BIO:malloc failure:a_d2i_fp.c:229:
Is it possible to do this with limited memory usage, e.g. 200 MB?

Unfortunately, OpenSSL will load the whole file into memory.
If possible, switching to PKCS#7 detached signatures would significantly reduce the memory requirements. That means having the data and the signature as two separate files.

I had this problem with a 1.4 GB encrypted file: on a 32-bit host it failed on mallocs, on a 64-bit host it got through.

As Mathias mentions, you can stream process the data in OpenSSL if the signature is detached.
Now if your signature isn't detached, you should still be able to detach it yourself. The PKCS#7 format is well documented. asn1c can work in chunks, so you should be able to work with that.
Of course, the proper solution is to get a detached signature in the first place.
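For reference, here is a rough C sketch of the detached case, where PKCS7_verify() reads the large content through a BIO instead of it being pulled into the parsed ASN.1 structure. The file names are placeholders, certificate-store setup is assumed to happen elsewhere, and error handling is minimal:

#include <openssl/bio.h>
#include <openssl/err.h>
#include <openssl/pkcs7.h>
#include <openssl/x509.h>

/* Verify a detached PKCS#7 signature: the signature file is small, and the
 * large data file is streamed through a BIO rather than loaded whole. */
int verify_detached(X509_STORE *store)
{
    BIO *sig_bio  = BIO_new_file("sig.p7s", "rb");   /* placeholder name */
    BIO *data_bio = BIO_new_file("data.bin", "rb");  /* placeholder name */
    PKCS7 *p7 = sig_bio ? d2i_PKCS7_bio(sig_bio, NULL) : NULL;
    int ok = 0;

    if (p7 != NULL && data_bio != NULL)
        ok = PKCS7_verify(p7, NULL, store, data_bio, NULL, 0);

    if (ok != 1)
        ERR_print_errors_fp(stderr);

    PKCS7_free(p7);
    BIO_free(sig_bio);
    BIO_free(data_bio);
    return ok == 1;
}

With an attached signature, by contrast, the whole DER structure, content included, has to be parsed into memory, which is exactly the malloc failure shown above.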

I have used the NSS library, which supports chunk-based processing, and it worked perfectly.

Related

Why does ECC signature verification need random numbers (sometimes taking a long time) in OpenSSL 1.1?

I was working on a Linux boot-time (kinit) signature checker using ECC certificates, changing over
from raw RSA signatures to CMS-format ECC signatures. In doing so, I found the
CMS_Verify() function stalling until the kernel printed "crng init done", indicating it needed to wait for there to be enough system entropy for cryptographically secure random number generation. Since nothing else is going on in the system, this took about 90 seconds on a Beaglebone Black.
This surprised me; I would have expected secure random numbers to be needed for certificate generation or perhaps for signature generation, but there aren't any secrets to protect in public-key signature verification. So what gives?
(I figured it out but had not been able to find the solution elsewhere, so the answer is below for others).
Through a painstaking debug-by-printf process (my best option given it's a kinit), I found that a fundamental ECC operation uses random numbers as a defense against side-channel attacks. This is called "blinding" and helps prevent attackers from sussing out secrets based on how long computation takes, cache misses, power spikes, etc. by adding some indeterminacy.
From comments deep within the OpenSSL source:
/*-
* Computes the multiplicative inverse of a in GF(p), storing the result in r.
* If a is zero (or equivalent), you'll get a EC_R_CANNOT_INVERT error.
* Since we don't have a Mont structure here, SCA hardening is with blinding.
*/
int ec_GFp_simple_field_inv(const EC_GROUP *group, BIGNUM *r, const BIGNUM *a,
                            BN_CTX *ctx)
and that function goes on to call BN_priv_rand_range().
But in a public-key signature verification there are no secrets to protect. To solve the problem, in my kinit I just pre-seeded the OpenSSL random number generator with a fixed set of randomly-chosen data, as follows:
RAND_seed( "\xe5\xe3[...29 other characters...]\x9a", 32 );
DON'T DO THAT if your program works with secrets or generates any keys, signatures, or random numbers. In a signature-checking kinit it's OK. In a program that required more security I could have seeded with data from the on-chip RNG hardware (/dev/hw_random), or saved away some entropy in secure storage if I had any, or sucked it up and waited for crng init done.
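If you do take the /dev/hw_random route, the idea is simply to read some bytes from the hardware RNG and hand them to OpenSSL before doing any crypto. A minimal sketch, with an assumed 32-byte read and no retry logic; check that the read really succeeded before trusting the seed:

#include <openssl/rand.h>
#include <stdio.h>

/* Seed OpenSSL's RNG from the on-chip hardware RNG device. */
static int seed_from_hwrng(void)
{
    unsigned char buf[32];
    FILE *f = fopen("/dev/hw_random", "rb");
    size_t n = 0;

    if (f != NULL) {
        n = fread(buf, 1, sizeof(buf), f);
        fclose(f);
    }
    if (n != sizeof(buf))
        return 0;               /* device missing or short read: do not seed */
    RAND_seed(buf, (int)sizeof(buf));
    return 1;
}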

OpenSSL data transmission using AES

I want to use OpenSSL for data transmission between a server and a client. I want to do it using EVP with AES in CBC mode. But when I try to decrypt the second message on the client, EVP_EncryptFinal_ex returns 0.
My scheme is shown in the picture.
I think this behavior occurs because I call EVP_EncryptFinal_ex (and EVP_DecryptFinal_ex) twice on one EVP context. How do I do it correctly?
You cannot call EVP_EncryptUpdate() after calling EVP_EncryptFinal_ex() according to the EVP docs.
If padding is enabled (the default) then EVP_EncryptFinal_ex()
encrypts the "final" data, that is any data that remains in a partial
block. It uses standard block padding (aka PKCS padding) as described
in the NOTES section, below. The encrypted final data is written to
out which should have sufficient space for one cipher block. The
number of bytes written is placed in outl. After this function is
called the encryption operation is finished and no further calls to
EVP_EncryptUpdate() should be made.
Instead, you should set up the cipher ctx for encryption again by calling EVP_EncryptInit_ex(). Note that unlike EVP_EncryptInit(), EVP_EncryptInit_ex() lets you keep reusing an existing context without allocating and freeing it on each call.
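A minimal sketch of that reuse, assuming AES-256-CBC and leaving key/IV management (a fresh IV per message) to the caller:

#include <openssl/evp.h>

/* Encrypt one message; call again with the same ctx (and a fresh IV)
 * for the next message. The new EVP_EncryptInit_ex() resets the state
 * that the previous EVP_EncryptFinal_ex() finished. */
int encrypt_msg(EVP_CIPHER_CTX *ctx,
                const unsigned char *key, const unsigned char *iv,
                const unsigned char *in, int inlen,
                unsigned char *out, int *outlen)
{
    int len = 0, total = 0;

    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) != 1)
        return 0;
    if (EVP_EncryptUpdate(ctx, out, &len, in, inlen) != 1)
        return 0;
    total = len;
    if (EVP_EncryptFinal_ex(ctx, out + total, &len) != 1)
        return 0;
    *outlen = total + len;
    return 1;
}

The same pattern applies on the decrypt side with EVP_DecryptInit_ex()/EVP_DecryptUpdate()/EVP_DecryptFinal_ex().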

EVP_DecryptFinal in OpenSSL

I am working on an OpenSSL project, using the encryption and decryption functions under EVP. EVP_DecryptFinal is not reporting an error, but after every OP_SIZE there are 8 extra bytes of data in the decrypted file. I used the programs other users have posted on Stack Overflow, but the error was the same.
Please help :)
The extra 8 bytes of data may be the result of padding. A block cipher encrypts/decrypts a block of fixed size at a time; if a given block is smaller than the block size, it is padded.
It looks like you are using ECB or CBC mode.
You may be encrypting data that spans multiple blocks, so you should understand the different block cipher modes.
If you do not want padding, consider encrypting your data using CFB or CTR mode, which do not require it.
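For illustration, a rough sketch with AES-256 in CTR mode (assumes OpenSSL 1.1.0 or later for EVP_CIPHER_CTX_new(); key and IV are supplied by the caller), where the ciphertext is exactly as long as the plaintext:

#include <openssl/evp.h>

/* Encrypt with AES-256-CTR: no padding, output length equals input length. */
int encrypt_ctr(const unsigned char *key, const unsigned char *iv,
                const unsigned char *in, int inlen, unsigned char *out)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len = 0, total = -1;

    if (ctx != NULL
        && EVP_EncryptInit_ex(ctx, EVP_aes_256_ctr(), NULL, key, iv) == 1
        && EVP_EncryptUpdate(ctx, out, &len, in, inlen) == 1) {
        total = len;
        if (EVP_EncryptFinal_ex(ctx, out + total, &len) == 1)
            total += len;       /* len is 0 here: CTR adds no padding block */
        else
            total = -1;
    }
    EVP_CIPHER_CTX_free(ctx);
    return total;               /* equals inlen on success, -1 on error */
}

If you have to stay with CBC, EVP_CIPHER_CTX_set_padding(ctx, 0) disables padding, but then your input must already be an exact multiple of the block size.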

Which of these encryption methods is more secure? Why?

I am writing a program that takes a passphrase from the user and then writes some encrypted data to file. The method that I have come up with so far is as follows:
Generate a 128-bit IV from hashing the filename and the system time, and write this to the beginning of the file.
Generate a 256-bit key from the passphrase using SHA256.
Encrypt the data (beginning with a 32-bit static signature) with this key using AES in CBC mode, and write it to file.
When decrypting, the IV is read, the passphrase is used to generate the key in the same way, and the first 32 bits are compared against what the signature should be in order to tell whether the key is valid.
However I was looking at the AES example provided in PolarSSL (the library I am using to do the hashing and encryption), and they use a much more complex method:
Generate a 128-bit IV from hashing the filename and file size, and write this to the beginning of the file.
Generate a 256-bit key from hashing (SHA256) the passphrase and the IV together 8192 times.
Initialize the HMAC with this key.
Encrypt the data with this key using AES in CBC mode, and write it to file, while updating the HMAC with each encrypted block.
Write the HMAC to the end of the file.
I get the impression that the second method is more secure, but I don't have enough knowledge to back that up, other than that it looks more complicated.
If it is more secure, what are the reasons for this?
Is appending an HMAC to the end of the file more secure than having a signature at the beginning of the encrypted data?
Does hashing 8192 times increase the security?
Note: This is an open source project so whatever method I use, it will be freely available to anyone.
The second option is more secure.
Your method does not provide any message integrity. This means that an attacker can modify parts of the ciphertext and alter what the plaintext decrypts to. As long as they don't modify anything that alters your 32-bit static signature, you'll trust it. The HMAC in the second method provides message integrity.
Hashing the key 8192 times adds extra computational work for someone trying to brute-force the key. Assume a user will pick a dictionary-based password. With your method, an attacker must perform SHA256(someguess) and then try to decrypt. With the PolarSSL version, however, they will have to calculate SHA256(SHA256(SHA256(...(SHA256(someguess))...))) 8192 times. This will only slow an attacker down, but it might be enough (for now).
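To make the idea concrete, here is a rough sketch of that kind of iterated hashing using OpenSSL's SHA256(); it illustrates the principle rather than PolarSSL's exact construction, and for a real design a purpose-built KDF such as PBKDF2 or scrypt would be the better choice:

#include <openssl/sha.h>
#include <string.h>

/* Stretch a passphrase+IV into a 256-bit key by re-hashing 8192 times.
 * For this sketch, passlen + ivlen is assumed to fit in buf. */
void stretch_key(const unsigned char *pass, size_t passlen,
                 const unsigned char *iv, size_t ivlen,
                 unsigned char key[SHA256_DIGEST_LENGTH])
{
    unsigned char buf[1024];

    memcpy(buf, pass, passlen);
    memcpy(buf + passlen, iv, ivlen);
    SHA256(buf, passlen + ivlen, key);

    for (int i = 0; i < 8192; i++)
        SHA256(key, SHA256_DIGEST_LENGTH, key);
}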
For what it's worth, please use an existing library. Cryptography is hard and is prone to subtle mistakes.

m2crypto aes-256-cbc not working against encoded openssl files

$ echo 'this is text' > text.1
$ openssl enc -aes-256-cbc -a -k "thisisapassword" -in text.1 -out text.enc
$ openssl enc -d -aes-256-cbc -a -k "thisisapassword" -in text.enc -out text.2
$ cat text.2
this is text
I can do this with openssl. Now, how do I do the same in m2crypto? The documentation is lacking here. I looked at the svn test cases and still found nothing. I found one sample, http://passingcuriosity.com/2009/aes-encryption-in-python-with-m2crypto/ (changed to aes_256_cbc); it will encrypt/decrypt its own strings, but it cannot decrypt anything made with openssl, and anything it encrypts isn't decryptable with openssl.
I need to be able to encrypt/decrypt with aes-256-cbc, as we have many files already encrypted with it and many other systems in place that handle the aes-256-cbc output just fine.
We use passphrases only, with no IV, so setting the IV to \0 * 16 makes sense, but I'm not sure if this is also part of the problem.
Anyone have any working samples of doing AES 256 that is compatible with m2crypto?
I will also be trying some additional libraries and seeing if they work any better.
Part of the problem is that the openssl-created file starts with 16 bytes of prepended salt information, Salted__xxxxxxxx (the 8-byte literal Salted__ followed by 8 bytes of salt). This must be stripped off first; only then can decryption occur. The next problem is to take the original password, mix in the salt, and turn the generated hash into the key/IV pair for decryption. I have been able to produce the first round of the key hash, but since the key is 256-bit, it needs two rounds to be complete. The remaining problem is producing the second round of the hash.
It should also be mentioned that we are locked into Python 2.4, so some of the key-derivation routines introduced later do not work for us.
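For what it's worth, the derivation "openssl enc" performs under the hood is EVP_BytesToKey() with MD5 (the default digest in openssl releases of that era) and a single iteration; M2Crypto wraps the same OpenSSL primitives, so reproducing this derivation, rather than using a zero IV, is what makes the two sides compatible. A rough C sketch of the underlying call, with the header handling described above (the function name and buffer sizes are illustrative):

#include <openssl/evp.h>
#include <string.h>

/* salted_header points at the first 16 bytes of the encrypted file after
 * base64 decoding: the literal "Salted__" followed by 8 bytes of salt. */
int derive_key_iv(const unsigned char *salted_header, const char *password,
                  unsigned char key[32], unsigned char iv[16])
{
    if (memcmp(salted_header, "Salted__", 8) != 0)
        return 0;

    /* MD5 rounds 1 and 2 form the 32-byte key; round 3 forms the 16-byte IV,
     * which matches the "two rounds" observation above. */
    return EVP_BytesToKey(EVP_aes_256_cbc(), EVP_md5(), salted_header + 8,
                          (const unsigned char *)password, strlen(password),
                          1, key, iv) == 32;
}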
