
How to design MySQL clusters for 30,000 clients?



Hello list,

I am expecting up to 30,000 HTTP clients visiting my website at the
same time. To meet the HA requirement, we use dual firewalls, dual
Layer-4 switches and multiple web servers in the backend. My problem is
that if we run a user-tracking system on Apache, PHP and MySQL, it will
surely generate a huge amount of database traffic. How can I balance the
MySQL load across multiple MySQL servers while still keeping the data
consistent among them? My ideas are:

1. Use 3 or more MySQL servers for writes/updates and 5 or more MySQL
servers for read-only queries, with native MySQL replication between
them. Among the write servers, use one-way circular replication
(A->B->C->A) to keep the data consistent. But I am afraid of losing
data; we cannot take that risk, especially since our billing system
relies on this database.
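For what it's worth, a circular A->B->C->A ring is usually set up with
distinct server-ids and, on MySQL versions that support it, staggered
auto-increment settings so rows inserted on different masters never
collide on the same key. A minimal my.cnf sketch for server A (hostnames
and ids are illustrative; B and C would use server-id 2 and 3 and
auto_increment_offset 2 and 3):

```ini
# /etc/mysql/my.cnf on server A
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 3   # one slot per master in the ring
auto_increment_offset    = 1   # this master's slot
```

Note that replication in the ring is asynchronous, so this prevents key
collisions but does not by itself protect against losing the last few
transactions if a master dies.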


2. Use a SAN device: put the physical data on SAN storage and let
multiple MySQL servers access the device at the same time. I am worried
about locking problems and about how well MySQL supports raw devices.


3. Put the MySQL data on a NetApp filer. I have no experience with this
and no benchmark reports. MySQL is known to be touchy over NFS, so even
though NetApp is well known for its stability, I still worry about
running MySQL on NFS.
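On the load-balancing side of idea 1, one common pattern is to split
reads from writes in the application layer: send writes to the master
pool, rotate reads across the replicas, and pin a session's reads to
the master it last wrote to, since asynchronous replication may lag.
A minimal sketch (hostnames and the `QueryRouter` class are my own
invention, not any existing library):

```python
import itertools

# Hypothetical host lists; substitute your own servers.
WRITERS = ["db-w1", "db-w2", "db-w3"]
READERS = ["db-r1", "db-r2", "db-r3", "db-r4", "db-r5"]

class QueryRouter:
    """Route SQL statements to a writer or a read-only replica.

    After a session has written, its reads are pinned to the same
    writer ("read your own writes"), because the replicas may not
    have replayed that write yet.
    """
    def __init__(self, writers, readers):
        self._writers = itertools.cycle(writers)
        self._readers = itertools.cycle(readers)
        self._session_writer = {}  # session id -> last writer used

    def route(self, session_id, sql):
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            host = next(self._writers)
            self._session_writer[session_id] = host
            return host
        # Read: prefer the writer this session last wrote to.
        if session_id in self._session_writer:
            return self._session_writer[session_id]
        return next(self._readers)

router = QueryRouter(WRITERS, READERS)
print(router.route("s1", "SELECT * FROM hits"))           # a replica
print(router.route("s1", "INSERT INTO hits VALUES (1)"))  # a writer
print(router.route("s1", "SELECT * FROM hits"))           # same writer
```

In a real PHP front end the same routing decision would sit in the
database wrapper, just before the connection is opened.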


Ideas?



-- 
Patrick Hsieh <pahud@pahud.net>
GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg


-- 
To UNSUBSCRIBE, email to debian-isp-request@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org


