
Re: Is #285371 really an exim problem, or is it gnutls failing?



* Marc Haber:

> bug #285371 is about exim blocking for extended periods of time when
> re-generating some gnutls keys.

Disclaimer: I'm just an armchair cryptographer.

"some gnutls keys" refers to the keys that are used to support the
RSA_EXPORT mode in TLS (which is a downgrade to a sub-512 bits RSA key
for servers that have a longer RSA key, but whose certificate is
flagged as "international grade cryptography"), and the DH parameters
for the DH key exchange modes.
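
For illustration, here is a minimal sketch of how such parameters
could be generated with the GnuTLS 1.x API (roughly the step that
makes exim block at startup; error handling is omitted, and the exact
function names may differ between GnuTLS versions):

    #include <gnutls/gnutls.h>

    /* Sketch: generate the expensive temporary parameters.  This is
       the step that consumes entropy and CPU time (primality tests). */
    int generate_params(void)
    {
        gnutls_dh_params_t  dh_params;
        gnutls_rsa_params_t rsa_params;

        gnutls_global_init();

        /* DH parameters for the DHE key exchange modes. */
        gnutls_dh_params_init(&dh_params);
        gnutls_dh_params_generate2(dh_params, 1024);

        /* Temporary 512-bit RSA key for RSA_EXPORT. */
        gnutls_rsa_params_init(&rsa_params);
        gnutls_rsa_params_generate2(rsa_params, 512);

        return 0;
    }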

These keys are signed using the certified key and are only used to
establish the symmetric session key.  In theory, if you've got a
certificate for an RSA key, you could use that same key to do the
encryption, but this is generally not recommended.  In addition, you
can throw away the temporary keys after the connection has
terminated, thus achieving perfect forward secrecy: if an attacker
compromises the private key of your certificate, she is still unable
to decrypt past conversations which she has recorded earlier.

Note, however, that HTTP over TLS usually doesn't use DH and
therefore doesn't provide perfect forward secrecy.  Key generation
involves too much overhead (not just the PRNG part, but also the
primality checks etc.).

> The fix mentioned in the bug report is to regenerate the gnutls keys
> asynchronously and only to exchange them for the exim processes after
> they have been successfully generated, preventing exim from blocking.

I'm not sure if this is a good idea.  Certainly you haven't got
another option if the CPU is too loaded.  But if the PRNG is the only
problem, it's possible to use a slightly weaker PRNG.  At least I
think so.  I'll try to get a second opinion on this matter.
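
If one does take the asynchronous route, a common pattern is to
generate the new parameters in a child process, write them to a
temporary file, and rename() it into place only on success, so that
running exim processes only ever see a complete file.  A rough
sketch; the file name "gnutls-params" and the 1024-bit size are
placeholders, not necessarily what exim actually uses:

    #include <stdio.h>
    #include <unistd.h>
    #include <gnutls/gnutls.h>

    /* Sketch: regenerate DH parameters in a child process and
       install them atomically.  File name and bit size are
       placeholders. */
    static void regenerate_async(void)
    {
        pid_t pid = fork();
        if (pid != 0)
            return;             /* parent: carry on serving mail */

        gnutls_dh_params_t dh;
        unsigned char pem[8192];
        size_t pem_size = sizeof pem;

        gnutls_global_init();
        gnutls_dh_params_init(&dh);
        gnutls_dh_params_generate2(dh, 1024);  /* the slow part */
        gnutls_dh_params_export_pkcs3(dh, GNUTLS_X509_FMT_PEM,
                                      pem, &pem_size);

        FILE *f = fopen("gnutls-params.tmp", "w");
        fwrite(pem, 1, pem_size, f);
        fclose(f);

        /* Atomic swap: readers see either the old file or the
           new one, never a partial write. */
        rename("gnutls-params.tmp", "gnutls-params");
        _exit(0);
    }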

> I am, however, concerned about a tool potentially hanging on to
> /dev/random for extended periods of time, sucking in all available
> entropy, draining it from being available to other applications which
> might be in more dire need of entropy.
>
> Wouldn't it probably be a better idea to have gnutls read entropy from
> /dev/urandom instead?

/dev/urandom drains entropy in much the same way as /dev/random, but
it doesn't block when the entropy estimate indicates that no entropy
is left.

To make this concrete, let's assume that Linux estimates that there
are 512 bits of entropy in the pool.  If an application reads 64
bytes (512 bits) from /dev/urandom, the estimate drops to zero, and
another application which then reads from /dev/random will block,
because both devices draw from the same pool.
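
This is easy to observe on a Linux system, which exports its entropy
estimate in /proc/sys/kernel/random/entropy_avail.  A small sketch
(assuming that /proc interface) which reads 64 bytes from
/dev/urandom and prints the estimate before and after:

    #include <stdio.h>

    /* Read the kernel's current entropy estimate (in bits). */
    static long entropy_avail(void)
    {
        long n = -1;
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
        if (f) {
            fscanf(f, "%ld", &n);
            fclose(f);
        }
        return n;
    }

    int main(void)
    {
        unsigned char buf[64];
        FILE *f;

        printf("before: %ld bits\n", entropy_avail());

        /* Reading from /dev/urandom never blocks, but it debits
           the same pool that /dev/random draws from. */
        f = fopen("/dev/urandom", "r");
        fread(buf, 1, sizeof buf, f);
        fclose(f);

        printf("after:  %ld bits\n", entropy_avail());
        return 0;
    }

On an otherwise idle machine the second number should be roughly the
number of bits read lower than the first.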


