
Re[3]: What can make DNS lookups slow? [semi-solved]



Huge thanks to those who have helped with this, who may find the
outcome interesting, and to everyone else who has put up with the
bandwidth.  I put all this to the list because I increasingly find
that Linux/Debian documentation is either so out of date, so
incomplete, or so much written by experts for experts, that I end up
debugging things from searches of list archives.

My problem was that DNS lookups from and through my Debian firewall,
out through my ADSL router to my ISP's DNS servers, were slow,
sometimes cripplingly slow.  Lookups from my proxy ARP server in a
DMZ, which go through the same firewall, seemed to be fine.
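
For anyone chasing something similar, a quick way to see the latency
directly is to time individual lookups with dig; the hostname and
resolver name below are just placeholders for your own:

        # Query the ISP's resolver directly and note the reported time.
        # Replace ns.example-isp.net with your ISP's actual DNS server.
        dig @ns.example-isp.net debian.org | grep 'Query time'

A healthy lookup typically reports a few tens of milliseconds;
repeated times in the seconds range, or timeouts that the resolver
library silently retries, are the symptom described here.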

Explanation (in my terms; I would hugely appreciate anyone who
understands ARP and the way UDP communications happen improving on or
correcting this): my ISP's DNS servers are handing back replies from
servers with private IP addresses (10.10.11.11 and 10.10.11.31 come
up in my iptables reject logs).  They send them back to the MAC
address of the Ethernet interface that made the request.  That works
fine for the server in the DMZ, as proxy ARP on the firewall is
passing messages to and from it on the basis of its MAC address.
However, this scheme means that replies from those servers to the
firewall itself, or to machines masqueraded through it by DNAT, are
failing, as I don't have a rule set up to pass things back on the
basis of MAC addresses.

(I think the way that shorewall 1.2, the Debian stable packaged
version of shorewall, handles DNAT essentially depends on replies to
queries coming back from the IP address to which they were sent.  If
there's a way to set up rules to collect replies sent from port 53
but from private IP addresses, and to let the firewall machine work
out whether it sent the request being answered itself or forwarded it
from one of the machines inside the firewall, then I'd love to hear
of it.  I don't think I dare ask on the shorewall list, as 1.2 is
three stable releases out of date and about to be four behind!)
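
The nearest thing I can imagine to a rule for the first half of that
(letting the firewall itself accept such replies) would be something
like the following.  A sketch only: it assumes the external interface
is ppp0, and it deliberately covers only the 10.0.0.0/8 range that
shows up in my logs:

        # Accept UDP packets that look like DNS replies arriving from
        # private 10/8 source addresses on the external interface.
        iptables -A INPUT -i ppp0 -p udp --sport 53 -s 10.0.0.0/8 -j ACCEPT

The forwarded case is the hard one: connection tracking matches a
reply against the full address pair of the original query, which is
exactly what these misbehaving servers break, so a similar rule on
the FORWARD chain still wouldn't tell the kernel which inside machine
the reply belongs to.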

I have solved the problem in a very crude way by pointing machines
inside the firewall at the firewall itself for DNS queries.  I run
bind9 on the firewall but point it (I hate this, but it works) at the
DMZ server.  The crucial lines in /etc/bind/named.conf are:

acl "loc" {192.168.1.0/24; 217.34.100.194; 127.0.0.1; 217.34.100.197;};
# which sets up the machines allowed to query this bind service
        query-source address * port 53;
# which makes it use port 53 which makes firewall rules easier
        forwarders { 217.34.100.194;};
# means that it queries the dmz server for everything
        allow-query { "loc"; };
# makes use of that ACL above.

I then run bind9 on the DMZ server, allow only the firewall to query
it, and tell it to look up through the ISP's servers.
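
In case it saves anyone some digging, the matching lines on the DMZ
server look roughly like this.  The ISP resolver addresses below are
placeholders rather than my real ones, and I am assuming
217.34.100.197 is the firewall, as its presence in the ACL above
suggests:

        acl "fw" { 217.34.100.197; 127.0.0.1; };
        # only the firewall (and the server itself) may query

        options {
                query-source address * port 53;
                forwarders { 192.0.2.1; 192.0.2.2; };
                # the ISP's resolvers; placeholder addresses
                allow-query { "fw"; };
        };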

/etc/resolv.conf on both machines now contains only

        nameserver 127.0.0.1

so that each machine uses only its own bind9 server.

I think all this means that I get the advantage of caching of the DNS
requests, which I'd hope should cut total querying of the ISP servers
considerably, and it works.  However, it strikes me as a horrid
bodge, and I'd much prefer to run bind only on the firewall and have
the DMZ server look up through the firewall, not vice versa.  So if
someone can see a better way, or how I could achieve that, I'd love
to know.
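
One quick check that the caching really is paying off (the hostname
is just an example): run the same query twice against the local bind
and compare the reported times:

        dig @127.0.0.1 debian.org | grep 'Query time'
        # first run: forwarded out through the chain, so slow
        dig @127.0.0.1 debian.org | grep 'Query time'
        # second run: answered from the local cache, typically 0-1 msec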

Ugh.  I do love OSS, but we do need better documentation.  Bind looks
to me to be a brilliant piece of software, but the documentation
assumes a hell of a lot of the reader.

Cheers all,

Chris




