Re: replicating, balanced web-server with *write* access?
On Sat, 10 Nov 2001 18:47, Christian Hammers wrote:
> Much is written about High-Availability servers but I still didn't find a
> good solution how to build two load-balanced webservers _without_
> connecting them both to one RAID (single point of failure).
> The problem with balancing between two servers is that they might host
> web applications that write a file on system A and then read that file
> (a status file or whatever) on system B immediately, before e.g. rsync could
> transfer it. In the worst case the write and the read could happen on two
> different connections, so even connection-based balancing wouldn't work.
Given that you have already ruled out GFS (and presumably also NBD which is
as far from production-ready as GFS), there are only two real options:
1) Use rsync to transfer files, and for writes have some sort of database
push (EG use ssh to run a program on the primary server which does the
update). Then of course the data you read won't be as new as the data you've
just written.
2) Use a database for data that has to be written by the web server. But
that raises a whole new set of issues such as redundant databases.
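Option 2 could be sketched like this, with the web application keeping its
state in a shared database rather than the local filesystem, so either
balanced server sees the latest value; the host, database, and table names
are assumptions:

```shell
#!/bin/sh
# Sketch: status values live in a shared MySQL database instead of local
# files.  DBHOST, the webstate database, and the status table are all
# hypothetical.

DBHOST=db.example.com

# Write a status value; REPLACE assumes 'name' is the table's primary key.
set_status() {
    mysql -h "$DBHOST" webstate \
        -e "REPLACE INTO status (name, value) VALUES ('$1', '$2');"
}

# Read a status value (-N suppresses the column-name header row).
get_status() {
    mysql -h "$DBHOST" webstate -N \
        -e "SELECT value FROM status WHERE name = '$1';"
}
```

Of course this just moves the single point of failure into the database,
which is the redundant-database problem mentioned above.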
Some things aren't easily solved, and this one is virtually impossible with
today's technology. However that's OK. Some of the attempts I've seen to
solve such things give a net reduction in reliability!
When designing for high availability I aim for minimum loss of service (not
necessarily minimum downtime). So if something goes wrong and 10% of the
functionality isn't available for a few hours it's often not such a big deal.
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/ Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page