Is there any method in OpenSSL which is an equivalent of Crypto++'s SecByteBlock?
Something that clears the memory before freeing it, among other measures for securing a memory block that holds sensitive information. Is there any way of securing the RSA struct in memory?
Is there any method in OpenSSL which is an equivalent of Crypto++'s SecByteBlock?
A SecByteBlock is a class that takes advantage of OOP by combining data with the operations that act on the data (lots of hand waving). OpenSSL is a C library, and it does not have most of the goodies related to OOP.
In OpenSSL, you would use OPENSSL_cleanse. Here are some one-liner uses of it in OpenSSL:
$ grep -R cleanse * | grep -v doc
...
apps/apps.c: OPENSSL_cleanse(buff, (unsigned int)bufsiz);
apps/apps.c: OPENSSL_cleanse(buf, (unsigned int)bufsiz);
apps/apps.c: OPENSSL_cleanse(buf, (unsigned int)bufsiz);
apps/ca.c: OPENSSL_cleanse(key, strlen(key));
apps/dgst.c: OPENSSL_cleanse(buf, BUFSIZE);
apps/enc.c: OPENSSL_cleanse(str, SIZE);
apps/enc.c: OPENSSL_cleanse(str, strlen(str));
...
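For illustration, here is a minimal sketch of the same pattern, assuming the secret lives in a heap buffer (the function name, buffer name, and 32-byte length are made up for the example):

#include <openssl/crypto.h>   /* OPENSSL_cleanse, OPENSSL_malloc, OPENSSL_free */

void use_secret(void)
{
    size_t len = 32;                           /* hypothetical key length */
    unsigned char *key = OPENSSL_malloc(len);

    if (key == NULL)
        return;

    /* ... fill key with the secret and use it ... */

    OPENSSL_cleanse(key, len);   /* zeroize before returning the memory */
    OPENSSL_free(key);
}

Newer OpenSSL releases (1.1.0 and later) also provide OPENSSL_clear_free(), which combines the cleanse and the free into one call.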
Is there any way of securing the RSA struct in memory?
RSA_free calls OPENSSL_cleanse internally, so the structure is zeroized when it's discarded. According to the OpenSSL man page on RSA_new and RSA_free:
RSA_free() frees the RSA structure and its components. The key is erased before the memory is returned to the system.
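So, as a rough sketch, the usual pattern is simply to let RSA_free() handle the wipe when the key is no longer needed:

#include <openssl/rsa.h>

RSA *rsa = RSA_new();
/* ... load or generate the key and use it ... */
RSA_free(rsa);   /* components are erased before the memory is returned */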
But you should probably define your requirements for "secure in memory." If your requirements include wrapping, then no, OpenSSL does not provide it. But neither does Crypto++.
Related
I was working on a Linux boot-time (kinit) signature checker using ECC certificates, changing over
from raw RSA signatures to CMS-format ECC signatures. In doing so, I found the
CMS_verify() function stalling until the kernel printed "crng init done", indicating it needed to wait for enough system entropy for cryptographically secure random number generation. Since nothing else is going on in the system, this took about 90 seconds on a BeagleBone Black.
This surprised me; I would have expected secure random numbers to be needed for certificate generation or maybe for signature generation, but there aren't any secrets to protect in public-key signature verification. So what gives?
(I figured it out but had not been able to find the solution elsewhere, so the answer is below for others).
Through a painstaking debug-by-printf process (my best option given it's a kinit), I found that a fundamental ECC operation uses random numbers as a defense against side-channel attacks. This is called "blinding" and helps prevent attackers from sussing out secrets based on how long computation takes, cache misses, power spikes, etc. by adding some indeterminacy.
From comments deep within the OpenSSL source:
/*-
* Computes the multiplicative inverse of a in GF(p), storing the result in r.
* If a is zero (or equivalent), you'll get a EC_R_CANNOT_INVERT error.
* Since we don't have a Mont structure here, SCA hardening is with blinding.
*/
int ec_GFp_simple_field_inv(const EC_GROUP *group, BIGNUM *r, const BIGNUM *a,
BN_CTX *ctx)
and that function goes on to call BN_priv_rand_range().
But in a public-key signature verification there are no secrets to protect. To solve the problem, in my kinit I just pre-seeded the OpenSSL random number generator with a fixed set of randomly-chosen data, as follows:
RAND_seed( "\xe5\xe3[...29 other characters...]\x9a", 32 );
DON'T DO THAT if your program works with secrets or generates any keys, signatures, or random numbers. In a signature-checking kinit it's OK. In a program that required more security I could have seeded with data from the on-chip RNG hardware (/dev/hw_random), or saved away some entropy in secure storage if I had any, or sucked it up and waited for crng init done.
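For what it's worth, a rough sketch of the safer alternative mentioned above, seeding from the on-chip hardware RNG instead of a constant (the 32-byte seed size, function name, and error handling are my own assumptions):

#include <stdio.h>
#include <openssl/rand.h>

/* Mix 32 bytes from the hardware RNG into OpenSSL's pool; returns 1 on success. */
static int seed_from_hwrng(void)
{
    unsigned char seed[32];
    FILE *fp = fopen("/dev/hw_random", "rb");

    if (fp == NULL)
        return 0;
    if (fread(seed, 1, sizeof seed, fp) != sizeof seed) {
        fclose(fp);
        return 0;
    }
    fclose(fp);

    RAND_seed(seed, sizeof seed);   /* feed the hardware entropy to OpenSSL's RNG */
    return 1;
}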
At the moment I am working on the kernel module of ccn-lite (http://www.ccn-lite.net/).
For that I need some security functionality (SHA-1 and public/private key authentication).
In user space I use the OpenSSL library, but I cannot use such a library in a kernel module.
It is also hard to pick individual functions out of OpenSSL and add them to the kernel module, because most of them have dependencies on libc.
Is there any security functionality in the Linux kernel that I could use?
Edit:
I can compute the hash of the data received over Ethernet:
/* needs <linux/crypto.h>, <linux/scatterlist.h>, <crypto/sha.h> */
struct scatterlist sg[1];
struct crypto_hash *tfm;
struct hash_desc desc;
unsigned char md[SHA1_DIGEST_SIZE];   /* 20-byte SHA-1 output */

/* allocate a SHA-1 transform from the kernel crypto API */
tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm))
    return PTR_ERR(tfm);

desc.tfm = tfm;
desc.flags = 0;

/* map the received buffer into a scatterlist and hash it in one shot;
 * crypto_hash_digest() performs init/update/final internally */
sg_init_table(sg, ARRAY_SIZE(sg));
sg_set_buf(&sg[0], input, length);
crypto_hash_digest(&desc, sg, length, md);

crypto_free_hash(tfm);
And now I want to verify the signature field of the data by using the function digsig_verify.
verified = digsig_verify(keyring, sig, sig_len, md, md_len);
As far as I can see, the second parameter is the signature, the third the length of the signature, the fourth the hash of the data, and the last the length of the hash.
The first parameter has the type struct key and should contain the public key that is needed to verify the signature?
How can I initialize this parameter, i.e. how can I get the system's public key?
Is there also a way to sign a char* in the linux kernel?
The Linux kernel comes with a bunch of crypto functions.
See: http://lxr.linux.no/#linux+v3.11/Documentation/crypto/
You could use IPC such as netlink to send the data from the kernel to user space, let user-space OpenSSL do the cryptographic work, and return the result to the kernel, as sketched below.
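A rough sketch of what the kernel side of that could look like, assuming a private netlink protocol number and leaving the user-space OpenSSL helper out entirely (the protocol number and all names here are hypothetical):

#include <linux/module.h>
#include <linux/string.h>
#include <linux/netlink.h>
#include <net/sock.h>
#include <net/netlink.h>

#define NETLINK_CCNL_SEC 31   /* hypothetical; must not clash with existing protocols */

static struct sock *nlsk;

/* Called when the user-space helper answers, e.g. with a verification result. */
static void ccnl_sec_nl_recv(struct sk_buff *skb)
{
    struct nlmsghdr *nlh = nlmsg_hdr(skb);

    pr_info("ccnl: got %d bytes back from pid %u\n",
            nlmsg_len(nlh), nlh->nlmsg_pid);
    /* nlmsg_data(nlh) points at the payload */
}

/* Push a buffer (e.g. data plus signature) up to the helper listening on portid. */
static int ccnl_sec_nl_send(u32 portid, const void *data, int len)
{
    struct sk_buff *skb = nlmsg_new(len, GFP_KERNEL);
    struct nlmsghdr *nlh;

    if (!skb)
        return -ENOMEM;
    nlh = nlmsg_put(skb, 0, 0, NLMSG_DONE, len, 0);
    memcpy(nlmsg_data(nlh), data, len);
    return nlmsg_unicast(nlsk, skb, portid);
}

static int __init ccnl_sec_init(void)
{
    struct netlink_kernel_cfg cfg = { .input = ccnl_sec_nl_recv };

    nlsk = netlink_kernel_create(&init_net, NETLINK_CCNL_SEC, &cfg);
    return nlsk ? 0 : -ENOMEM;
}

static void __exit ccnl_sec_exit(void)
{
    netlink_kernel_release(nlsk);
}

module_init(ccnl_sec_init);
module_exit(ccnl_sec_exit);
MODULE_LICENSE("GPL");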
I've started some work that requires quality random bytes, such as 32 at a time for an initialization vector for certain cryptographic applications. My issue is that this may be called upon multiple times simultaneously, and I cannot afford to have /dev/random block while it waits to collect more entropy.
I could use it to seed other algorithms, much as /dev/urandom may do. However, I do not trust what I cannot understand: I do not have any readily available resource on its method, nor do I know whether it stays the same across kernel versions. I would prefer a well-defined method of some sort.
Are you aware of any methods beyond standard PRNGs that would be suitable for (simultaneous) key generation and the like?
Would certain ciphers such as RC4 with a large seed be sufficient to generate random output? (I've seen a /dev/frandom implementation that uses this, but I am not entirely sure about it.)
If it matters, I am on a headless Debian server, which is the reason for the lack of entropy gathering.
The answer is simple: use /dev/urandom, not /dev/random. /dev/urandom is cryptographically secure and will not block. The "superiority" of /dev/random over /dev/urandom exists only in a specific theoretical setting which makes no sense if the random bytes are to be used with just about any "normal" cryptographic algorithm, such as encryption or signatures.
See this for more details.
(Trust me, I am a cryptographer.)
Consider using a hardware random number generator, for example the Entropy Key or Whirlygig. Using /dev/urandom instead will avoid blocking, but may (depending on your level of paranoia) degrade security: you'll output more random bits than you have input entropy, so in theory the output is predictable. This isn't a problem if you're just using it for IVs, however.
On a modern CPU with AES hardware acceleration, you can easily reach more than 1 GiB/s of random data by encrypting a string of zeros using a random password (from /dev/urandom), as shown by another answer on serverfault. Note that the random password is passed as a pipe, so that it doesn't show up in the process list.
On my machine, this approach is roughly 100 times faster than /dev/urandom:
$ openssl enc -aes-256-ctr -pass file:<(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64) -nosalt < /dev/zero | pv > /dev/null
11.2GiB 0:00:09 [1.23GiB/s] [ <=> ]
$
$ # Let's look at /dev/urandom for comparison:
$ pv < /dev/urandom > /dev/null
48MiB 0:00:04 [12.4MiB/s] [ <=> ]
If you put this in a shell script, you can easily pipe it into other processes:
$ cat ~/.bin/fast_random
#!/bin/bash
openssl enc -aes-256-ctr \
-pass file:<(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64) \
-nosalt < /dev/zero
I want to verify a signed file in S/MIME format, and the PKCS#7 file is 500 MB.
openssl smime -verify -in test.pk7 -inform DER
Error reading S/MIME message
715956256:error:07069041:memory buffer routines:BUF_MEM_grow_clean:malloc failure:buffer.c:152:
715956256:error:0D06B041:asn1 encoding routines:ASN1_D2I_READ_BIO:malloc failure:a_d2i_fp.c:229:
Is it possible with limited memory usage, e.g. 200 MB?
Unfortunately, OpenSSL will load the whole file into memory.
If possible, switching to PKCS#7 detached signatures would significantly reduce the memory requirements. That means having the data and the signature as two separate files.
I had this problem with a 1.4 GB encrypted file: on a 32-bit host it failed on mallocs; on a 64-bit host it got through.
As Mathias mentions, you can stream-process the data in OpenSSL if the signature is detached.
Now if your signature isn't detached, you should still be able to detach it yourself. The PKCS#7 format is well documented, and asn1c can work in chunks, so you should be able to work with that.
Of course, the proper solution is to get a detached signature in the first place.
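To illustrate the detached route with the C API, here is a rough sketch. It assumes the signature has been split out into sig.p7 (DER), the large payload sits in data.bin, and the trusted roots are in ca.pem (all file names are made up); PKCS7_verify() then pulls the payload through its digest BIOs in chunks instead of parsing it into one huge ASN.1 blob:

#include <stdio.h>
#include <openssl/pkcs7.h>
#include <openssl/x509.h>
#include <openssl/err.h>

int verify_detached(void)
{
    int ok = 0;
    BIO *sigbio = BIO_new_file("sig.p7", "rb");
    BIO *databio = BIO_new_file("data.bin", "rb");
    X509_STORE *store = X509_STORE_new();
    PKCS7 *p7 = NULL;

    if (!sigbio || !databio || !store)
        goto done;
    if (!X509_STORE_load_locations(store, "ca.pem", NULL))
        goto done;

    p7 = d2i_PKCS7_bio(sigbio, NULL);   /* only the signature is parsed into memory */
    if (!p7)
        goto done;

    /* the payload is read from databio chunk by chunk while digesting */
    ok = PKCS7_verify(p7, NULL, store, databio, NULL, 0);

done:
    if (!ok)
        ERR_print_errors_fp(stderr);
    PKCS7_free(p7);
    X509_STORE_free(store);
    BIO_free(databio);
    BIO_free(sigbio);
    return ok ? 0 : 1;
}

The same idea should work with the newer CMS API (d2i_CMS_bio() and CMS_verify()) if the signature is CMS rather than plain PKCS#7.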
I have used the NSS library, which supports chunk-based processing, and it worked perfectly.
The problem is not randomness itself (we have rand), but a cryptographically secure PRNG. What can be used on Linux, or ideally POSIX? Does NSS have anything useful?
Clarification: I know about /dev/random, but its entropy pool may run out. And I'm not sure whether /dev/urandom is guaranteed to be cryptographically secure.
Use /dev/random (requires user input, e.g. mouse movements) or /dev/urandom. The latter has an entropy pool and doesn't require any user input unless the pool is empty.
You can read from the pool like this:
#include <stdio.h>

char buf[100];
FILE *fp;

if ((fp = fopen("/dev/urandom", "rb")) != NULL) {
    size_t got = fread(buf, 1, sizeof buf, fp);   /* number of bytes actually read */
    fclose(fp);
    /* use the first `got` bytes of buf */
}
Or something like that.
From Wikipedia (my italics):
A counterpart to /dev/random is /dev/urandom ("unlocked" random source) which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. The intent is to serve as a cryptographically secure pseudorandom number generator. This may be used for less secure applications.
The /dev/random device is intended to be a source of cryptographically secure bits.