Re: Bug#754513: ITP: libressl -- SSL library, forked from OpenSSL

On Sat, Jul 19, 2014 at 02:27:56AM +0200, Kurt Roeckx wrote:
> > Of course, the syscall numbers and interface details are not set into
> > stone until this gets merged into mainline.
> It doesn't say much about sizes you can request and what the
> result of that would be.  The getentropy() replacement seems to
> suggest 256 isn't something you want to do (when GRND_RANDOM is
> not set?).  random(4) says not to use > 256 bit (32 byte).

You can request tons of entropy; but in general that's a symptom of a
bug, either in the program or in the programmer.  (For example, the
NSS library was using fopen("/dev/urandom", "r"), so the first thing
it did was suck in 4k out of the urandom pool, thanks to stdio
buffering.  Sigh...)

I seriously thought of printk'ing a warning if a program tried
grabbing more than, say, 1024 bytes, but I decided that might be too
noisy.

Basically, if you request less than or equal to 256 bytes, with the
GRND_RANDOM flag not set, and assuming that the entropy pool has been
initialized, getrandom(2) will not block, and you will get all of the
bytes that you requested.

Under any other circumstances, the read() paradigm applies.  It can
return EAGAIN or EINTR, and it might not return all of the bytes you
requested.  There are a few cases where this might apply, such as
GnuPG getting enough bits to generate a long-term public key, but the
assumption is that programmers who are doing that sort of work will
know what they are doing.

Basically, OpenBSD's position is that all application programmers
are morons, even the ones who are implementing cryptographic code (or
perhaps especially those who are implementing cryptographic code).  So
they wanted to make a getentropy(2) system call that was completely
moron-proof.  Hence their getentropy(2) call will return EIO if you
try to fetch more than 256 bytes, and EFAULT if you give it an invalid
buffer, but other than that, will never, ever fail.  (Because
applications programmers are morons and won't check return codes, and
do the appropriate handling for short reads, etc.)

I take a somewhat different philosophical position, which is that it's
impossible to make something moron-proof, because morons are
incredibly ingenious :-), and there are legitimate times when you
might indeed want more than 256 bytes (for example, generating a 4096
bit RSA key pair).  So the design is a compromise.  For "normal"
users, who are just grabbing enough bytes to seed a userspace
cryptographic random number generator (a DRBG in NIST SP 800-90
speak), getrandom(crng_state, 64, 0) fetches 512 bits of seed
material, which is plenty.  While you _should_ be checking error
returns and watching for short reads, it shouldn't be necessary, and
even if the application programmer is a moron and doesn't check return
codes, it's unlikely they will get shot in the foot.

Realistically, if someone is moronic enough not to check return codes,
they probably shouldn't be allowed anywhere near crypto code, since
they will probably be making other, more fatal mistakes.  So in many
ways this is a very minor design point....

> Shouldn't it return a ssize_t instead of an int?  I see it's
> limited to INT_MAX, but it seems in the code to return a ssize_t
> but the manpage says int.

All Linux system calls return an int.  POSIX may specify the use of a
ssize_t, but look at syscall(2).

And for the values of buflen that we're talking about, it really
doesn't matter.  We cap requests larger than INT_MAX anyway, inside
the kernel.

							- Ted
