On Wed, 2007-02-28 at 01:08 +0100, Andrew Miehs wrote:
> On 28/02/2007, at 12:27 AM, Jim Popovitch wrote:
>
> >> DNS-based round-robin "load balancing" is pretty useless for what you
> >> want. It doesn't give you redundancy; if one server goes down then
> >> half the requests will fail.
> >
> > True. But the client will figure out (in the case of websites) which IP
> > to use. So the client experience "just works", perhaps after a bit of
> > delay from hitting the down server first.
>
> The client doesn't figure anything out.

Most web browsers do. The OP stated his issue was with a website, for which browsers would be the de facto client.

> Either you are lucky and you get the server that is up - or you are
> unlucky and you get the one that is down.
>
> And seeing how wonderful browsers (and resolver libraries) are, they
> have probably cached the IP address - and you will need to restart your
> browser to try to get to the second, working web server.

Actually, most browsers (and certainly most resolver libs) cache as many IPs as are returned by the query. So the browser will have the full list of IPs for the website, and if one doesn't work it will try the next one.

> The only real way of doing this is, as previously suggested, having some
> form of IP address takeover system.

:-)

-Jim P.
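For the archives: the fallback behaviour described above (a client walking through every address the resolver returned until one accepts a connection) can be sketched in a few lines of Python. This is a minimal illustration, not browser code; `connect_with_fallback` is a made-up helper name:

```python
import socket

def connect_with_fallback(host, port, timeout=3.0):
    """Try each address the resolver returns for `host`, in order,
    the way a browser falls back through a round-robin A-record set."""
    last_err = None
    # getaddrinfo hands back every address the DNS query returned,
    # in the order the resolver supplied them.
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(sockaddr)        # first reachable address wins
            return s, sockaddr
        except OSError as err:         # down server: try the next IP
            s.close()
            last_err = err
    raise OSError("all addresses for %s failed" % host) from last_err
```

Whether a real client behaves this way depends on its resolver library and its own retry logic, which is exactly the point of contention in this thread.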