I recently had a bug where I didn't properly handle the case where the entropy on my Linux server got too low and a read of /dev/urandom returned fewer bytes than expected.
How can I recreate this with a test? Is there a way to lower the entropy on a system or to reliably empty /dev/urandom?
I'd like to be able to have a regression test that will verify my fix. I'm using Ubuntu 12.04.
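For reference, the kind of handling I'm trying to cover looks roughly like the sketch below (read_exact and FakeRandom are made-up names, not my actual code); the idea for the regression test would be to drive the retry loop with a stub that deliberately returns short reads rather than trying to starve the real /dev/urandom:

import os

def read_exact(source, n):
    # Keep reading until exactly n bytes have been collected, retrying on
    # short reads instead of silently returning fewer bytes.
    chunks, remaining = [], n
    while remaining > 0:
        chunk = source.read(remaining)
        if not chunk:
            raise IOError("random source returned no data")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

class FakeRandom:
    # Stub that returns at most 3 bytes per call, mimicking a short read.
    def read(self, n):
        return os.urandom(min(n, 3))

# Regression test: the stub forces short reads, yet the loop must deliver 32 bytes.
assert len(read_exact(FakeRandom(), 32)) == 32

# Real use, with an unbuffered handle so short reads are passed through as-is.
with open("/dev/urandom", "rb", buffering=0) as real:
    key = read_exact(real, 32)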
According to the random(4) man page,
a read from the /dev/urandom device will not block
You should read a lot of bytes from /dev/random (without the "u") if you want it to block. (How many depends on the hardware and the system.)
So you cannot "exhaust" /dev/urandom, since
A read from the /dev/urandom device will not block waiting for
more entropy. As a result, if there is not sufficient entropy in
the entropy pool, the returned values are theoretically vulnerable
to a cryptographic attack on the algorithms used by the driver.
I believe you should use /dev/random, which can indeed be exhausted: it blocks when the entropy estimate gets too low.
But you should not read more than about 256 bits from it.
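If you want to see that blocking behaviour without hanging your terminal, here is a small sketch (Python 3, and assuming a kernel of that era where /dev/random still blocks on a low entropy estimate): opened non-blocking, /dev/random fails with EAGAIN instead of waiting, while /dev/urandom always returns data.

import os

# Non-blocking read: instead of blocking, /dev/random raises EAGAIN
# (surfaced as BlockingIOError) when its entropy estimate is too low.
fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
try:
    data = os.read(fd, 4096)
    print("/dev/random returned", len(data), "bytes")
except BlockingIOError:
    print("/dev/random would block: entropy estimate too low")
finally:
    os.close(fd)

# /dev/urandom never blocks, however much you ask for.
print(len(os.urandom(4096)), "bytes from /dev/urandom, no blocking")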
I know that /dev/random blocks to wait for more entropy to come in, but why does /dev/urandom not block?
Because its purpose is not to block. That's literally what distinguishes it from /dev/random. So you don't have to worry about blocking, but the bits you get from it may not be quite as random as those from /dev/random.
According to the man page:
If there is not sufficient entropy in the entropy pool, the returned
values from /dev/urandom are theoretically vulnerable to a
cryptographic attack on the algorithms used by the driver.
The man page adds:
Knowledge of how to do this is not available in the current
unclassified literature, but it is theoretically possible that such an
attack may exist.
I tried to generate a very large amount (> 1 GB) of pseudo-random data using the crypto.randomBytes() method, but I could not trigger the exception for drained entropy sources to see how my application behaves in case of this possible exception.
From Node.JS docs:
Note: Will throw error or invoke callback with error, if there is not enough
accumulated entropy to generate cryptographically strong data.
My question is:
How can I drain all entropy sources so that crypto.randomBytes() produces an exception?
The short answer is: you can't.
The slightly longer answer is: it depends on the OS. I assume you use Linux. In theory, the entropy pool on Linux can easily be drained using the following script:
#!/bin/bash
while true; do
    # write how much entropy is left
    cat /proc/sys/kernel/random/entropy_avail
    # drain a little bit
    dd if=/dev/random of=/dev/null bs=1 count=1 2> /dev/null
done
Running this script will eventually block operations that use /dev/random, but not /dev/urandom. Urandom doesn't read directly from the entropy pool; it uses a PRNG and reseeds it (by default) every 60 seconds using /dev/random. So what happens when the entropy pool dries up? Nothing. The PRNG will not be reseeded, but it will still generate new numbers, just less cryptographically strong ones.
The only time this exception could be thrown is right after the system is booted for the first time. I guess that's rather unlikely... Of course, other operating systems may handle this matter differently, but as long as you use Linux, you shouldn't have to worry about it.
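To convince yourself, here is a small sketch (Linux, Python 3) that watches the kernel's entropy estimate while reading from /dev/urandom; the reads keep returning full buffers even when the estimate is low:

def entropy_avail():
    # The kernel's current entropy estimate, in bits.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read())

with open("/dev/urandom", "rb") as urandom:
    for _ in range(5):
        estimate = entropy_avail()
        data = urandom.read(1024 * 1024)  # 1 MiB; never blocks
        print("entropy_avail =", estimate, "- got", len(data), "bytes")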
I'm using /dev/urandom to generate random data for my programs. I learned that /dev/random can be empty because, unlike /dev/urandom, it doesn't use SHA when there are not enough bytes generated. /dev/random uses "the kernel entropy pool". Apparently it relies on keyboard timings, mouse movements, and IDE timings.
But how does this really work?
And wouldn't it be possible to "feed" the entropy pool making the /dev/random output predictable?
What you are saying is spot on: yes, theoretically it is possible to feed entropy into /dev/random, but you'd need to control a lot of the kernel's "noise" sources for it to be significant. You can look at the source of random.c to see where /dev/random picks up noise from. Basically, if you control a significant number of the noise sources, then you can guess what the others are contributing to the entropy pool.
Since /dev/urandom is a hash chain seeded from /dev/random, you could actually predict the next numbers if you knew the seed. If you have enough control over the entropy pool, then from the output of /dev/urandom you might be able to guess this seed, which would enable you to predict all the following numbers from /dev/urandom, but only if you keep /dev/random exhausted; otherwise /dev/urandom will be reseeded.
That being said, I haven't seen anyone actually do it, not even in a controlled environment. Of course this isn't a guarantee, but I wouldn't worry.
So I'd rather use /dev/urandom and guarantee that my program doesn't block while waiting for entropy, instead of using /dev/random and asking the user to do silly things like moving the mouse or banging on the keyboard.
I think you should read On entropy and randomness from LWN; hopefully it will calm your worries :-).
Should you still be worried, get yourself an HRNG (hardware random number generator).
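As for feeding data into the pool from user space: any user can write to /dev/random, which mixes the bytes in but does not credit any entropy; crediting requires root and the RNDADDENTROPY ioctl, which is roughly what entropy daemons such as rngd do. A rough sketch in Python (the ioctl constant and struct layout are my assumptions for Linux, so double-check them before relying on this):

import fcntl
import struct

# _IOW('R', 0x03, int[2]) on Linux; treat this value as an assumption.
RNDADDENTROPY = 0x40085203

def credit_entropy(data, bits):
    # struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; }
    request = struct.pack("ii", bits, len(data)) + data
    with open("/dev/random", "wb") as rnd:
        fcntl.ioctl(rnd, RNDADDENTROPY, request)  # needs CAP_SYS_ADMIN (root)

# Without the ioctl, a plain write only mixes the data in, with no credit:
with open("/dev/random", "wb") as rnd:
    rnd.write(b"some bytes from an external noise source")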
Edit
Here is a small note on entropy:
I think the concept of entropy is generally difficult to grasp. There is an article with more information on Wikipedia. But basically, in this case, you can read entropy as randomness.
The way I see it, you have a big bag of coloured balls: the higher the entropy in the bag, the harder it is to predict the next colour drawn from it.
In this context, your entropy pool is just a bunch of random bytes, where no byte can be derived from the previous one or from any of the others. That means you have high entropy.
I appreciate the depth of jbr's answer.
Adding a practical update for anyone currently staring at an ipsec pki command, or something similar, blocking on an empty entropy pool:
I just installed rng-tools in another window and my pki command completed.
apt-get install rng-tools
I am in the midst of reading a paper at factorable and made note of the section where it says:
"For library developers:
Default to the most secure configuration. Both OpenSSL
and Dropbear default to using /dev/urandom instead of
/dev/random, and Dropbear defaults to using a less secure
DSA signature randomness technique even though
a more secure technique is available as an option."
The authors address the tradeoff between an application hanging while it waits for entropy to build up in /dev/random, which gives better security, and getting a quick but less secure result from /dev/urandom.
Some additional info:
IRQF_SAMPLE_RANDOM: this interrupt flag specifies that interrupts generated by a device should contribute to the kernel entropy pool.
Interrupts are what devices such as the mouse and keyboard send asynchronously.
How are pipes implemented with regard to buffering? I might be creating many pipes but only ever sending/receiving a few bytes through them at a time, so I don't want to waste memory unnecessarily.
Edit: I understand what buffering is; I am asking how buffering is implemented in Linux pipes specifically, i.e. does the full 64K get allocated regardless of the high-water mark?
Buffers are used to even out the difference in speed between producer and consumer. If you didn't have a buffer, you would have to switch tasks after every byte produced, which would be very inefficient due to the cost of context switches, and data and code caches never becoming hot. If your consumer can consume data about as fast as the producer produces it, your buffer use will usually be low (but read on). If the producer is much faster than the consumer, the buffer will fill up completely and the producer will be forced to wait until more space becomes available. The reversed case of a slow producer and a fast consumer will use only a small part of the buffer most of the time.
The usage also depends on whether both of your processes actually run in parallel (e.g. on separate cores) or whether they share a core and are only fooled into thinking they are concurrent by the OS's process management. If you have real concurrency (a separate core/CPU), your buffer will usually be used less.
Anyway, if your applications are not producing much data and their speeds are similar, the buffer will not be very full most of the time. However, I wouldn't be surprised if, at the OS level, the full 64 kB were allocated anyway. But unless you are using an embedded device, 64 kB is not much, so even if the maximum size is always allocated, I wouldn't worry about it.
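If you want to see the default capacity for yourself, a quick experiment (Linux assumed, Python 3) is to fill a pipe with non-blocking writes and count how many bytes it accepts before the writer would block:

import os

r, w = os.pipe()
os.set_blocking(w, False)  # make writes fail with EAGAIN instead of blocking

total = 0
try:
    while True:
        total += os.write(w, b"\0" * 4096)
except BlockingIOError:
    pass

print("pipe accepted", total, "bytes before blocking")  # typically 65536
os.close(r)
os.close(w)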
By the way, it is not easy to modify the size of the pipe buffer; for example, in this discussion a number of tricks are suggested, but they are actually workarounds that modify the way data from the buffer is consumed rather than the actual buffer size. You could check ulimit -p, but I'm not 100% sure it will give you the control you need.
EDIT: Looking at fs/pipe.c and include/linux/pipe_fs_i.h in the Linux code, it looks like the buffers do change their size. The minimum size of the buffer is a full page, though, so if you only need a few bytes, there will be waste. I'm not sure at this point, but some code that uses PIPE_DEF_BUFFERS, which is 16 (giving 64 kB with 4 kB pages), makes me wonder whether the buffer can fall below 64 kB (the one-page minimum could be just an additional restriction).
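On more recent kernels (2.6.35 and later) the capacity can also be queried and changed from user space with fcntl. A sketch, assuming Python 3.10+ where the F_GETPIPE_SZ/F_SETPIPE_SZ constants are exposed (on older Pythons the raw values 1032 and 1031 should work):

import fcntl
import os

r, w = os.pipe()

print("default capacity:", fcntl.fcntl(r, fcntl.F_GETPIPE_SZ))  # usually 65536

# Ask for a smaller buffer; the kernel rounds up to at least one page.
fcntl.fcntl(r, fcntl.F_SETPIPE_SZ, 4096)
print("after shrinking:", fcntl.fcntl(r, fcntl.F_GETPIPE_SZ))

os.close(r)
os.close(w)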
I need a reasonable supply of high-quality random data for an application I'm writing. Linux provides the /dev/random file for this purpose which is ideal; however, because my server is a single-service virtual machine, it has very limited sources of entropy, meaning /dev/random quickly becomes exhausted.
I've noticed that if I read from /dev/random, I will only get 16 or so random bytes before the device blocks while it waits for more entropy:
[duke#poopz ~]# hexdump /dev/random
0000000 f4d3 8e1e 447a e0e3 d937 a595 1df9 d6c5
<process blocks...>
If I terminate this process, go away for an hour and repeat the command, again only 16 or so bytes of random data are produced.
However, if instead I leave the command running for the same amount of time, much, much more random data is collected. I assume from this that over the course of a given time period the system produces plenty of entropy, but Linux only utilises it if you are actually reading from /dev/random, and discards it if you are not. If this is the case, my question is:
Is it possible to configure Linux to buffer /dev/random so that reading from it yields much larger bursts of high-quality random data?
It wouldn't be difficult for me to buffer /dev/random as part of my program but I feel doing this at a system level would be more elegant. I also wonder if having Linux buffer its random data in memory would have security implications.
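For illustration, the in-program buffering I have in mind is something like the rough sketch below (the names _pool, _fill and random_bytes are placeholders): a background thread drains /dev/random whenever entropy is available and stashes the bytes in a bounded queue, so randomness gathered while the application is idle isn't left unread.

import queue
import threading

_pool = queue.Queue(maxsize=512)  # buffer at most 512 bytes of randomness

def _fill():
    with open("/dev/random", "rb", buffering=0) as rnd:
        while True:
            chunk = rnd.read(16)         # blocks until some entropy is available
            for i in range(len(chunk)):
                _pool.put(chunk[i:i+1])  # blocks while our buffer is full

threading.Thread(target=_fill, daemon=True).start()

def random_bytes(n):
    # Returns n buffered bytes, blocking only if the buffer runs dry.
    return b"".join(_pool.get() for _ in range(n))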
Sounds like you need an entropy daemon that feeds the entropy pool from other sources.
Use /dev/urandom.
A counterpart to /dev/random is /dev/urandom ("unlocked"/non-blocking random source), which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. While it is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys.
Have you got, or can you buy, a Linux-compatible hardware random number generator? That could be a solution to your underlying problem. See http://www.linuxcertified.com/hw_random.html