Why does /dev/urandom not block? - linux

I know that /dev/random blocks to wait for more entropy to come in, but why does /dev/urandom not block?

Because its purpose is not to block. That's literally what distinguishes it from /dev/random. So you don't have to worry about blocking, but the bits you get from it may not be quite as random as those from /dev/random.
According to the man page:
If there is not sufficient entropy in the entropy pool, the returned
values from /dev/urandom are theoretically vulnerable to a
cryptographic attack on the algorithms used by the driver.
The man page adds:
Knowledge of how to do this is not available in the current
unclassified literature, but it is theoretically possible that such an
attack may exist.
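
A quick way to see the difference on a Linux box (the sizes below are just illustrative; since kernel 5.6, /dev/random only blocks until the pool has been initialised once, so the second command may not stall on a modern system):

dd if=/dev/urandom of=/dev/null bs=1M count=10    # returns immediately
dd if=/dev/random of=/dev/null bs=1 count=512     # may block on older kernels until enough entropy arrives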

Related

Why bittorrent need chunk hash

In the torrent file, every chunk (or piece) has a SHA1 hash.
Sure, this hash is used for verification because the public network is unreliable.
In a private network, if all peers are reliable, can this hash be ignored, i.e. can the client skip chunk verification?
Are there other considerations for using the hash, e.g. network transfer errors or software bugs?
In a private network, if all peers are reliable
Hardware is never 100% reliable. At large scale you're going to see random bitflips everywhere. TCP and UDP only have weak checksums that will miss a bit flip happening in flight every now and then. Memory may not be protected by ECC. Storage might not even be protected by checksums.
So eventually some corruption will go uncaught if the data isn't verified.
Generic SHA1 software implementations are already quite fast and should be faster than most common network or storage systems. With the specialized SHA1 instructions in recent CPUs, the cost of checksumming becomes even lower, assuming the software makes use of them.
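If you want to check that on your own hardware, OpenSSL ships a built-in benchmark:

openssl speed sha1    # reports hashing throughput for several block sizes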
So generally speaking, the very small reduction in CPU load is not worth the risk of bitrot. There might be exceptional situations where that is not the case, but it would be up to the operator of that specific system to measure the impact and decide whether they can accept bitrot to save a few CPU cycles.
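To put the verification itself in perspective, here is a minimal sketch of what per-piece checking amounts to with standard tools; the file name and expected hash are hypothetical placeholders:

piece=piece_0042.bin                                 # hypothetical downloaded piece
expected=2fd4e1c67a2d28fced849ee1bb76e7391b93eb12    # placeholder hash taken from the .torrent
actual=$(sha1sum "$piece" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "piece OK"
else
    echo "piece corrupt, re-request it" >&2
fi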

crypto.randomBytes entropy sources draining

I tried to generate a very large amount (> 1 GB) of pseudo-random data using the crypto.randomBytes() method, but I could not trigger the drained-entropy exception to see how my application behaves in that case.
From the Node.js docs:
Note: Will throw error or invoke callback with error, if there is not enough
accumulated entropy to generate cryptographically strong data.
My question is:
How do I drain all entropy sources so that crypto.randomBytes() produces an exception?
The short answer is: you can't.
A slightly longer answer is: it depends on the OS. I assume you are using Linux. In theory, the entropy pool in Linux can easily be drained with the following script:
#!/bin/bash
while true; do
    # print how much entropy is left
    cat /proc/sys/kernel/random/entropy_avail
    # drain a little bit
    dd if=/dev/random of=/dev/null bs=1 count=1 2> /dev/null
done
Running this script will eventually block operations that use /dev/random, but not /dev/urandom. /dev/urandom doesn't read directly from the entropy pool; it uses a PRNG and reseeds it (by default) every 60 seconds using /dev/random. So what happens when the entropy pool dries up? Nothing. The PRNG will not be reseeded, but it will still generate new numbers, just less cryptographically strong ones.
The only time this exception could be thrown is right after the system has booted for the first time. I guess that's rather unlikely... Of course, other operating systems may handle this matter differently, but as long as you use Linux, you shouldn't have to worry about it.
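If you want to see this for yourself (assuming Node.js is installed on a Linux box): run the drain script above in one terminal, then generate a large amount of data in another. crypto.randomBytes keeps producing output without ever throwing an entropy-related error:

node -e '
  const crypto = require("crypto");
  for (let i = 0; i < 1024; i++) crypto.randomBytes(1024 * 1024);   // ~1 GiB total
  console.log("generated 1 GiB without an exception");
'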

How does the kernel entropy pool work?

I'm using /dev/urandom to generate random data for my programs. I learned that /dev/random can be empty because, unlike /dev/urandom, it doesn't use SHA when there are not enough bytes generated. /dev/random uses "the kernel entropy pool". Apparently it relies on keyboard timings, mouse movements, and IDE timings.
But how does this really work?
And wouldn't it be possible to "feed" the entropy pool making the /dev/random output predictable?
What you are saying is spot on: yes, it is theoretically possible to feed entropy into /dev/random, but you'd need to control a lot of the kernel's "noise" sources for it to be significant. You can look at the source of random.c to see where /dev/random picks up noise from. Basically, if you control a significant number of the noise sources, then you can guess what the others are contributing to the entropy pool.
Since /dev/urandom is a hash chain seeded from /dev/random, you could actually predict the next numbers if you knew the seed. If you have enough control over the entropy pool, then from the output of /dev/urandom you might be able to guess this seed, which would enable you to predict all the subsequent numbers from /dev/urandom, but only if you keep /dev/random exhausted; otherwise /dev/urandom will be reseeded.
That being said, I haven't seen anyone actually do it, not even in a controlled environment. Of course this isn't a guarantee, but I wouldn't worry.
So I'd rather use /dev/urandom and guarantee that my program doesn't block while waiting for entropy, instead of using /dev/random and asking the user to do silly things, like moving the mouse or banging on the keyboard.
I think you should read On entropy and randomness from LWN, hopefully it will calm your worries :-).
Should you still be worried, then get yourself a HRNG.
Edit
Here is a small note on entropy:
I think the concept of entropy is generally difficult to grasp. There is an article with more information on Wikipedia. But basically, in this case, you can read entropy as randomness.
The way I see it, you have a big bag of coloured balls: the higher the entropy in the bag, the harder it is to predict the next colour drawn from it.
In this context, your entropy pool is just a bunch of random bytes where no byte can be derived from the previous one, or from any of the others. That means you have high entropy.
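As a side note on "feeding" the pool: per random(4), anyone can write to /dev/random to mix data into the pool, but that does not increase the kernel's entropy estimate; crediting entropy requires the privileged RNDADDENTROPY ioctl. You can see this from the shell:

cat /proc/sys/kernel/random/entropy_avail    # note the current estimate
echo "definitely not random" > /dev/random   # mixed into the pool, but no entropy credited
cat /proc/sys/kernel/random/entropy_avail    # the estimate has not gone up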
I appreciate the depth of jbr's answer.
Adding a practical update for anyone currently staring at an ipsec pki command or something similar blocking on an empty entropy pool:
I just installed rng-tools in another window and my pki command completed.
apt-get install rng-tools
I am in the midst of reading a paper at factorable and made note of the section where it says:
"For library developers:
Default to the most secure configuration. Both OpenSSL
and Dropbear default to using /dev/urandom instead of
/dev/random, and Dropbear defaults to using a less secure
DSA signature randomness technique even though
a more secure technique is available as an option."
The authors address the tradeoff between an application hanging while waiting for entropy to build up in /dev/random, for better security, and getting a quick but less secure result from /dev/urandom.
Some additional info:
IRQF_SAMPLE_RANDOM: this interrupt flag specifies that interrupts generated by a device should contribute to the kernel entropy pool.
Interrupts are what devices like the mouse, keyboard, etc. send asynchronously.
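On older desktop kernels you can watch this happen: keep an eye on the pool estimate while wiggling the mouse or typing, and it climbs as input interrupts are mixed in.

watch -n1 cat /proc/sys/kernel/random/entropy_avail    # move the mouse and watch the number rise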

How can I exhaust /dev/urandom for testing?

I recently had a bug where I didn't properly handle the case where the entropy on my Linux server got too low and a read of /dev/urandom returned fewer bytes than expected.
How can I recreate this with a test? Is there a way to lower the entropy on a system or to reliably empty /dev/urandom?
I'd like to be able to have a regression test that will verify my fix. I'm using Ubuntu 12.04.
According to the random(4) man page,
a read from the /dev/urandom device will not block
You should read a lot of bytes from /dev/random (without the u) if you want it to block. (How many depends on the hardware and the system.)
So you cannot "exhaust" /dev/urandom, since
A read from the /dev/urandom device will not block waiting for
more entropy. As a result, if there is not sufficient entropy in
the entropy pool, the returned values are theoretically vulnerable
to a cryptographic attack on the algorithms used by the driver.
I believe you should use /dev/random instead, which indeed can be exhausted, in the sense that it blocks.
But you should not read more than about 256 bits from it.
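If the goal is just a regression test for your short-read handling, one workaround (assuming your program lets you override the device path; the option below is hypothetical) is to point it at a file that simply contains fewer bytes than it will ask for:

head -c 7 /dev/urandom > /tmp/short_random            # a 7-byte stand-in for /dev/urandom
./your_program --random-device=/tmp/short_random      # hypothetical option in your program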

Getting linux to buffer /dev/random

I need a reasonable supply of high-quality random data for an application I'm writing. Linux provides the /dev/random file for this purpose which is ideal; however, because my server is a single-service virtual machine, it has very limited sources of entropy, meaning /dev/random quickly becomes exhausted.
I've noticed that if I read from /dev/random, I will only get 16 or so random bytes before the device blocks while it waits for more entropy:
[duke#poopz ~]# hexdump /dev/random
0000000 f4d3 8e1e 447a e0e3 d937 a595 1df9 d6c5
<process blocks...>
If I terminate this process, go away for an hour and repeat the command, again only 16 or so bytes of random data are produced.
However - if instead I leave the command running for the same amount of time, much, much more random data are collected. I assume from this that over the course of a given timeperiod, the system produces plenty of entropy, but Linux only utilises it if you are actually reading from /dev/random, and discards it if you are not. If this is the case, my question is:
Is it possible to configure Linux to buffer /dev/random so that reading from it yields much larger bursts of high-quality random data?
It wouldn't be difficult for me to buffer /dev/random as part of my program but I feel doing this at a system level would be more elegant. I also wonder if having Linux buffer its random data in memory would have security implications.
Sounds like you need an entropy daemon that feeds the entropy pool from other sources.
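For example (package names are the usual Debian/Ubuntu ones and may differ on your distribution):

apt-get install haveged      # gathers entropy from CPU timing jitter
apt-get install rng-tools    # feeds the pool from a hardware RNG, if one is present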
Use /dev/urandom.
A counterpart to /dev/random is /dev/urandom ("unlocked"/non-blocking random source[4]) which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. While it is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys.
Have you got, or can you buy, a Linux-compatible hardware random number generator? That could be a solution to your underlying problem. See http://www.linuxcertified.com/hw_random.html
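If a hardware RNG is present (or the VM host exposes virtio-rng), recent kernels usually surface it as /dev/hwrng, and rngd from rng-tools will feed it into the pool; exact paths can vary by kernel version:

cat /sys/class/misc/hw_random/rng_available    # list detected hardware RNGs
ls -l /dev/hwrng                               # the device node, if one was found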
