
Re: /dev/random



(For Lisi and Bob and others ;-/)

On Fri, Aug 1, 2014 at 8:41 AM,  <pecondon@mesanetworks.net> wrote:
> As many of you know, /dev/random is a source of random bits that are
> suitable for use in cryptographic analysis.

Wikipedia, stackoverflow, and other places have useful entries on
random numbers, randomness, and the problem of mechanically generating
random numbers when the machines we build are cyclic (repetitive) in
nature. "pseudorandom" and "true random numbers" are useful search
terms. Also,

    man 3 rand

and following up on the references in the man page on Debian turns up
some useful background information.

Way oversimplifying: while a random number between 0 and 32,767 (the
smallest range the C standard allows rand() to cover) may seem hard
for a human to guess, a modern computer can count through all the
possibilities in something like a millisecond.

And rand() generally uses a standard, published formula (for
compatibility reasons), such that once you know one number it
generates, you can compute the next with a fairly simple calculation.
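
To make that concrete, here is a toy C sketch of the kind of linear
congruential generator many C libraries have used. The multiplier and
increment are the ones you see in the classic rand(3) example, but I
have simplified the output step so the whole state is visible; real
implementations differ, so treat it as illustrative only.

    /* toy LCG -- illustrative only, not any particular libc's rand() */
    #include <stdio.h>

    static unsigned long state = 1;       /* what srand() would seed */

    static unsigned long toy_rand(void)
    {
        /* classic constants; state kept below 2^31 */
        state = (state * 1103515245UL + 12345UL) % 2147483648UL;
        return state;                     /* output IS the state here */
    }

    int main(void)
    {
        unsigned long r = toy_rand();

        /* knowing one output is enough to predict the next one */
        unsigned long predicted =
            (r * 1103515245UL + 12345UL) % 2147483648UL;

        printf("got %lu, predicted next %lu, actual next %lu\n",
               r, predicted, toy_rand());
        return 0;
    }

Run it and the "predicted" and "actual" values match every time, which
is exactly why rand() is fine for shuffling a playlist and useless for
keys.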

Random numbers are used in key places in our security systems, to help
thwart attackers. Obviously, the random numbers used there should not
be numbers returned from rand().

Various approaches have been taken to resolve the potential problems,
and the size of the random number and the predictability of the
calculation are always issues. 32 bits is still too quickly countable
to be comfortable, and any static formula is inherently reproducible.

So we want some source of truly random numbers. The timing of events
that occur within the system tends to mask the machine's cyclic
behavior and the synchronizations we don't expect, so it looks like a
good source. But if an attacker can profile the hardware of a system,
he can guess the timings too often for such things to be used alone.

> The software supporting
> /dev/random collects random time data from monitoring events that are
> not generated by the functioning of the computer, but from something
> like the keystroke times of a human asking for help on this list.

Typing on the keyboard and moving the mouse are two things that are
really out of sync with what's going on in the computer, especially at
sub-millisecond timings.

Floppy disks could be useful, with some care, but of course those are
not generally available any more. USB drives, with no moving parts
except electrons, are generally too regular to be of much use,
although a USB insertion would provide a bit or two of "real" world
"randomness".

"Entropy" here is a mathematical term for referring to this kind of
"real randomness", by the way. (Oversimplifications, again, but you
have to start somewhere.)

Mechanical hard disks and network devices also provide a bit of access
to de-synchronized events, and can also be used with care to provide
some of this entropy. (Emphasis on "with care".)

The HAVEGE algorithm and the haveged daemon attempt to use CPU and
memory timings to provide more.

The CPU by itself is, even with out-of-order execution, too
predictable to be of use. But when you factor in cache hits and
misses, the unpredictable nature of whatever the CPU was working on
before, and the way the physical cache interacts with the CPU doing
the actual work, the timing variations can give usable entropy,
according to the HAVEGE project description. Again, you need to run
the resulting timings through some pseudorandom massaging to make
them actually usable.
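
I am not going to reproduce HAVEGE here, but a toy sketch like this
shows the raw material it works from: even two back-to-back clock
reads do not take exactly the same time twice, and the low bits of the
differences jitter around. This is only an illustration of the idea,
nowhere near a safe entropy source by itself.

    /* toy illustration of timing jitter -- NOT the HAVEGE algorithm */
    /* (compile with -lrt on older glibc) */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec a, b;

        for (int i = 0; i < 8; i++) {
            clock_gettime(CLOCK_MONOTONIC, &a);
            clock_gettime(CLOCK_MONOTONIC, &b);

            /* nanoseconds between two consecutive clock reads */
            long delta = (b.tv_sec - a.tv_sec) * 1000000000L
                       + (b.tv_nsec - a.tv_nsec);

            printf("read-to-read delta: %ld ns (low bit %ld)\n",
                   delta, delta & 1L);
        }
        return 0;
    }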

The HAVEGE approach has been criticized, particularly if haveged is
allowed to replace all the other sources of entropy.

/dev/random and /dev/urandom are two devices which modern Unix-like
systems provide, which give programs (and users) access to the
randomness of external events like keypresses and mouse motion,
heavily massaged and filtered with pseudorandom techniques. (Without
the pseudorandomness, those devices would give the attacker new ways
to probe a computer.)
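
For the curious, reading from them is no different from reading any
other file. A minimal sketch:

    /* minimal sketch: pull 16 bytes from /dev/urandom and print them */
    #include <stdio.h>

    int main(void)
    {
        unsigned char buf[16];
        FILE *f = fopen("/dev/urandom", "rb");

        if (f == NULL || fread(buf, 1, sizeof buf, f) != sizeof buf) {
            perror("/dev/urandom");
            return 1;
        }
        fclose(f);

        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x", buf[i]);
        putchar('\n');
        return 0;
    }

Swap in /dev/random and the same program may stall until the kernel's
pool refills, which is exactly the difference being discussed below.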

> It
> differs from /dev/urandom in that /dev/random blocks and does not give
> any bits if there have not been enough keystrokes since the last call
> to replenish the supply of entropy in its entropy store. In contrast,
> /dev/urandom gives the number of bits requested quickly, but with no
> guarantee as to the quality of their randomness.

That makes me wonder which of the two openssl defaults to using.

Okay, the openssl site says /dev/urandom since way back, and
/dev/random only if /dev/urandom is not available.

Hmm.

> Places where this
> distinction is discussed suggest that a user of /dev/random 'randomly'
> poke at the keys on his keyboard if he finds himself waiting for
> /dev/random to un-block and give the needed random bits. Some users of
> Debian are concerned about performing cryptographic analysis correctly
> and I wonder: Just how often do you have to poke at the keyboard? And
> when you do poke at it, about how many key presses do you make before
> you get the number of bits you requested? I'm wondering is this a
> event with which many Debianers are quite familiar, or is it more
> like something of a rare event that people know about, but most
> have never actually had it happen to them? Why do I ask?: Just wondering.
>
> Thanks for reading, and please reply with
> whatever experience you want to share.

Well, many years ago, on different OSes, I would be typing a
paragraph or two to get enough entropy to generate a key. Now, even
on a freshly booted Wheezy, there is no need for that.

That is, I would type in something like

    openssl req -x509 -newkey rsa:1024 -keyout dummykey.pem -out dummyreq.pem

and the machine would block. So I would type randomly, as prompted,
and the machine would, after a few lines of random typing, generate
the key and tell me it had finished.

Now, the random typing is not necessary.
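
If anyone wants to watch the pool itself, the kernel publishes its
current estimate (in bits) in /proc/sys/kernel/random/entropy_avail.
Here is a small sketch that just reads it:

    /* print the kernel's current entropy estimate for /dev/random */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
        int bits;

        if (f == NULL || fscanf(f, "%d", &bits) != 1) {
            perror("entropy_avail");
            return 1;
        }
        fclose(f);

        printf("entropy available: %d bits\n", bits);
        return 0;
    }

(A plain cat of that file shows the same number, of course.)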

-- 
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.

