Re: Bug#516394: djbdns



* Russ Allbery:

> The remaining statement on this bug from the security team is:
>
> | djbdns should not be part of squeeze until it is properly hardened
> | against cache poisoning.  It is between 100 and 200 times easier than
> | with other DNS servers.
>
> I don't understand the basis of that comment just from the bug log.  The
> djbdns-specific attack I'm aware of is on SOA, but the bug discussion
> indicates that protecting against SOA isn't sufficient and any cache miss
> will do.  So apparently there's some hardening other than UDP port
> randomization (which djbdns has done for eons) that needs to be done here
> from the security team perspective?  It looks like the hardening that they
> want to implement is duplicate query merging?

Here's an attempt at a write-up of the maths involved, ready for
pasting into LaTeX.  Hopefully, it's not too embarrassing for me.
It's been a while since I did this kind of thing, probability theory
wasn't my forte, and I have no idea how to simplify the final
quotient.

Suppose the resolver chooses among $N$ distinct secrets (combinations
of source ports, IP addresses and transaction identifiers, etc.).  To
simplify things, we assume that subsequent choices are uniformly
distributed and independent.

If the resolver merges multiple queries for the same record, all we
can do is supply $m$ distinct guesses for its choice.  Each iteration
succeeds with probability $\frac m N$, and we have to process $m + 2$
packets per iteration ($m$ guesses, a triggering query, and its
response).  This means that we have to send or receive
\[(m + 2)\left(\frac N m + 1\right) \cong N\] packets on average until
we reach a successful attempt, assuming that $m$ is much larger than
$2$ and much smaller than $N$.
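
As a quick sanity check, here is a small Python snippet (using the
parameter values from the example further down) that evaluates this
expression directly:

    N = 2**30   # size of the secret space
    m = 10**4   # distinct guesses supplied per iteration

    # (m + 2) packets per iteration, about N/m iterations on average
    # (geometric distribution with success probability m/N).
    expected = (m + 2) * (N / m + 1)
    print(f"{expected:.3e} vs N = {N:.3e}")   # both come out near 1.07e9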

If the resolver does not merge multiple queries, but allows up to $n$
parallel queries, the most straightforward approach is to push up the
success probability by triggering $n$ parallel queries per attempt.  For
each of those queries, the resolver chooses a distinct secret.  We can
assume the attacker does the same for her $m$ guesses.  This
experiment results in one of
\[\left(N \atop n\right)\left(N \atop m\right)\]
outcomes.  An outcome is unsuccessful if the victim's and the
attacker's sets do not intersect.  This means that their union is one of
$\bigl({N\atop n+m}\bigr)$ sets.  Each of those can be distributed
among victim and attacker in
$\bigl({n+m\atop n}\bigr)=\bigl({n+m\atop m}\bigr)$ ways, resulting
in a total of
\[\left(N\atop n+m\right)\left(n+m\atop n\right)
=\left(N\atop n\right)\left(N-n\atop m\right)\]
unsuccessful outcomes.  Thus, the probability of failure is
\[\left(N-n\atop m\right)\left/\left(N\atop m\right)\right.
=\left(N-m\atop N-n-m\right)\left/\left(N\atop N-n\right)\right.
.\]
Each of these attempts requires processing of $m + 2n$ packets.
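
The binomial identities above are easy to verify numerically; here is
a small Python check with toy values (chosen arbitrarily, not the
attack parameters):

    from math import comb

    N, n, m = 50, 4, 7   # arbitrary toy values

    # C(N, n+m) * C(n+m, n) == C(N, n) * C(N-n, m)
    assert comb(N, n + m) * comb(n + m, n) == comb(N, n) * comb(N - n, m)

    # The two forms of the failure probability agree as well.
    p1 = comb(N - n, m) / comb(N, m)
    p2 = comb(N - m, N - n - m) / comb(N, N - n)
    assert abs(p1 - p2) < 1e-12
    print(p1)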

Putting $N = 2^{30}$, $n=200$, $m=10^4$ yields
$2^{30}\cong1.07\times10^9$ packets for the first approach, and
$5.59\times10^6$ for the second approach, which means that the second
approach is approximately $192$ times cheaper.
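
For these magnitudes the binomial coefficients themselves are
unwieldy, but the failure probability collapses to a short product, so
the whole comparison fits in a few lines of Python:

    N, n, m = 2**30, 200, 10**4   # parameters from the example above

    # C(N-n, m) / C(N, m) = (N-n)!(N-m)! / (N!(N-n-m)!)
    #                     = prod_{i=0}^{n-1} (N - m - i) / (N - i)
    p_fail = 1.0
    for i in range(n):
        p_fail *= (N - m - i) / (N - i)

    merged   = (m + 2) * (N / m + 1)          # first approach
    parallel = (m + 2 * n) / (1.0 - p_fail)   # geometric expectation
    print(f"{merged:.2e} vs {parallel:.2e}, "
          f"ratio {merged / parallel:.0f}")   # ~1.07e9, ~5.59e6, ~192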

> Except that my understanding of the attack is that it requires
> issuing DNS lookups for a (*very*) large number of RRs that are not
> in the local cache.  This is difficult to force a service to do.

Your MTA probably does DNS lookups with user-supplied domain names
(for EHLO and perhaps for MAIL FROM:, if you use things like SPF).
Your browser does as well, although there are some attempts at
limiting JavaScript-driven parallel requests.

The general problem with these attacks is that they are likely to take
out your local resolver, but that's a different issue.

