
Re: redundancy via DNS



It depends on how popular the sites hosted on the servers are. If you
set the TTLs too low, say 1 minute, then every time someone looks up the
DNS records, BLAM... your DNS servers get hit, because nothing is cached
anywhere.
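For reference, here's a minimal BIND-style zone fragment (example.com and
the address are placeholders, not anything real) showing where the TTL
lives. With $TTL 60, resolver caches expire after a minute and clients
come back to your servers that often:

```
; sketch of a zone file with a deliberately low 60-second TTL
$TTL 60
@    IN  SOA  ns1.example.com. hostmaster.example.com. (
         2001061701 ; serial
         3600       ; refresh
         900        ; retry
         604800     ; expire
         60 )       ; negative-caching TTL
     IN  NS   ns1.example.com.
     IN  NS   ns2.example.com.
www  IN  A    192.0.2.10
```

Bump $TTL to 3600 and caches hold the records for an hour, which is the
trade-off discussed below: less load on your servers, slower failover.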

So I would use something like an hour (we do). An hour is reasonable
unless you need total 100% uptime. If you need 100% uptime, you wouldn't
rely on DNS alone anyway; you'd want something more reliable like IP
takeover, dedicated hardware solutions, etc. It depends greatly on your
budget. The DNS servers are queried randomly, so if you have 4 DNS
servers listed, each of the 4 should, in theory, get approximately the
same amount of traffic. If one of them goes down, the client SHOULD try
the next available DNS server.
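The random-selection-with-fallback behaviour can be sketched like this
(the server names are hypothetical, and is_up() stands in for a real UDP
query that may time out; real resolvers are more sophisticated, but the
load-spreading and failover idea is the same):

```python
import random

# Hypothetical list of the name servers registered for a domain.
NAMESERVERS = ["ns1.example.com", "ns2.example.com",
               "ns3.example.com", "ns4.example.com"]

def resolve(query, is_up):
    """Pick name servers in random order; fall back to the next one
    if the chosen server doesn't respond."""
    servers = NAMESERVERS[:]
    random.shuffle(servers)        # spreads load roughly evenly over time
    for ns in servers:
        if is_up(ns):              # stand-in for sending the actual query
            return ns              # this server answers the lookup
    raise RuntimeError("all name servers down")

# Even with ns1 down, the client still gets an answer from another server:
print(resolve("www.example.com", lambda ns: ns != "ns1.example.com"))
```

The point is that listing several name servers on separate networks buys
you resolution redundancy for free, independent of the TTL question.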

You'd also want to colocate somewhere WAY outside your own network
neighbourhood. Interestingly, a few of our clients from the USA do this.
Since we are located in Hong Kong, our networks are totally separate from
anything you use in the USA, so when the California blackouts (is that
the right term?) hit, those clients were fine. If you really want to keep
everything in the USA, try to find totally separate networks... and I
mean totally (if you want to be really safe). UUNET and the big boys in
the USA tend to have a few core NOCs (even if they tell you everything is
distributed and safe, blah blah blah), and if any one of them is hit with
a blackout, earthquake, etc., then the whole network is affected. This
happened to UUNET in one Asian country (I won't mention which, just in
case UUNET is watching this) once... something happened to one of their
core international-link routers, and many countries were affected,
including the one our client was in. UUNET may deny it, but we... the
people who actually use them... know the true story ;-)

Anyway, if you're really serious about reliability, you might want to
colocate in Hong Kong. You can't get much more diversified, network-wise,
than that. Email me back if you're interested in working something out.
Otherwise, consider the above carefully regarding the US networks.

Sincerely,
Jason

----- Original Message -----
From: ":yegon" <yegon@yegon.sk>
To: <debian-isp@lists.debian.org>
Sent: Sunday, June 17, 2001 8:50 PM
Subject: redundancy via DNS


> we have several servers colocated with several ISPs
> i am trying to sort out some configuration that would ensure good
> uptime for customers
>
> i want to place the html documents of every customer on two separate
> servers connected to separate ISPs
> the dns servers will point to one server and the second one will be
> just a backup; in case the main server goes down we just change the DNS
> and point the affected domains to the backup server. when the main
> server is back up the dns changes back to normal
>
> and now my questions:
> 1. what should the times in zone files be set to to enable the dns
> change to be propagated very quickly, say 5 minutes max.
>    is it possible/wise to use TTL=0
>
> 2. if a domain has 2 name servers set during registration, are both of
> these servers used for lookups? Or is it so that just the primary is
> queried if it works, and the secondary is queried only if the primary
> is not responding?
>
> 3. is this whole idea worth consideration anyway or should I forget it?
>
>
> thanks for answers
>
> Martin Dragun
>
>


