Re: Mail clustering
On Tue, Mar 20, 2007 at 04:14:40PM +0100, Cherubini Enrico wrote:
> we have 2 mail servers, one with antispam/antivirus services and
> another one without them, both with postfix and another server with
> mysql db backend storing login/password. We are happy with this, but
> needing to increase performance, we are looking for load balancing
> solutions. The first way I was thinking about is to move the storage
> from the servers to a single NFS server, and use the mail/pop3 server
> just as "engine" that check login/password on the mysql db and access
> data in the NFS and load balance with dns round robin through the
> various servers I can use as frontend. The uplink is a 34Mbps so I
> don't think NFS is a problem for performance. Servers are different,
> the antivirus/antispam is only for receiving, while the other(s)
> is(are) for both receive and send.
> Do you think this would be a good solution ? Is there a better
> solution than round robin (i.e., perdition or iptables with DNAT based
> on statistical redirection) ?
i've had good performance in the past from a setup similar to that.
i built a bunch of MX receivers which accepted all incoming mail,
processed it with amavisd-new/spamassassin/clamav, and then forwarded it
on to the main mail server, which stored it in the users' mail spools.
we started off with 3 of these MX servers, but we could have scaled that
up to as many as we needed. each was a machine with a fair amount of RAM
and CPU power (for spamassassin - at the time, a P3 with 512M. today i'd
use amd64s with at least 2GB), and the smallest disk we could buy at the
time (today i'd probably use the Gigabyte I-RAM battery-backed ram-disk
PCI cards for /var/spool/postfix). these boxes also acted as outbound
mail relays, doing spam/virus filtering on both inbound and outbound mail.
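a minimal sketch of the postfix side of one of those MX/filter boxes
might look like this (hostnames, domain and ports here are examples,
not the actual config; the smtp-amavis transport and the 127.0.0.1:10025
reinjection listener live in master.cf and are omitted):

```
## /etc/postfix/main.cf on an MX/filter box (sketch - names are examples)
# hand accepted mail to amavisd-new for spam/virus scanning
content_filter = smtp-amavis:[127.0.0.1]:10024

# we relay for our domains, routed onward via the transport map
relay_domains = example.com
transport_maps = hash:/etc/postfix/transport

## /etc/postfix/transport - route filtered mail to the main server
# example.com    smtp:[mail.internal.example.com]
```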
the main server also ran pop and imap and acted as an outbound mail
relay for the handful of users who complained about their outbound mail
being spam/virus filtered. it also ran webmail. it had lots of CPU and
RAM (can't remember exactly but it was dual processor, fastest available
at the time, and 2GB of RAM - today i'd use multiple dual-core amd64s or
better and 8GB or 16GB of RAM) and lots of scsi disk in a raid-5 array
with hardware raid control (and battery backup of the cache). today i'd
probably use lots of medium-sized (~ 300GB) SATA drives on a decent
hardware-raid SATA controller (perhaps an adaptec 2820 or IBM ServeRaid
- both use the aacraid driver).
our main load problem at the time was anti-spam/anti-virus processing of
incoming mail, but the plan was to eventually add more servers to handle
the pop/imap/webmail connections and leave the main mail server to be
just NFS storage. i left the job before i got to implement that part of it.
i used LVS to load-balance the incoming mail so that i had control over
it, rather than leaving it up to the various resolver implementations
out there on the net... but MX round-robin would have worked as well.
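for comparison, plain MX round-robin is nothing more than equal-preference
MX records in the zone file; something like (names are examples):

```
; equal-preference MX records - resolvers pick amongst them
example.com.    IN  MX  10  mx1.example.com.
example.com.    IN  MX  10  mx2.example.com.
example.com.    IN  MX  10  mx3.example.com.
```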
i had two LVS load-balancers set up in failover mode with heartbeat (i
already had this set up for load-balancing our squid servers, so it was
pretty easy to just use it for mail as well). celeron boxes, i believe,
with minimal memory & disk.
the idea was to give better performance and redundancy - we still had
the main mail server as a single point of failure, but it was impossible
to eliminate that without spending a few hundred K on SAN storage...which
was way beyond our budget. we did have the raid drives in an external
drive box so that if the mail server died we could quickly swap in
another machine without having to pull all the disks out and install
them in a new box. it never died on us, but we estimated that total
changeover time would have been <10 minutes (plus whatever time it took
us to get to the server room). and yes, we did have a second machine
sitting there in the rack powered-off ready to be plugged in - the
advantage of using cheap clones rather than name-brand servers is that
you can afford to do that :-)
that arrangement also gave us an easy upgrade path for the main server
if we wanted to upgrade CPU or RAM on it - upgrade the spare and swap
it over, and then the old one becomes the new spare (no need to upgrade
that immediately - the longer you put off an upgrade, the more you get
for your money).
the MX boxes could be upgraded simply by adding more of them. if any of
them died (including being taken off-line for a motherboard upgrade),
LVS would notice and automatically remove it from the server array, and
adding new MX boxes to the array was a simple one-line change to the
ldirectord config file.
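the ldirectord side of that really is just one real= line per MX box.
a sketch of what ours looked like (all addresses here are made up):

```
## /etc/ha.d/ldirectord.cf (sketch - IPs are examples)
checktimeout=10
checkinterval=15

# the virtual SMTP service that the MX record points at
virtual=192.0.2.10:25
        real=192.0.2.21:25 masq
        real=192.0.2.22:25 masq
        real=192.0.2.23:25 masq
        service=smtp
        checktype=connect
        protocol=tcp
```

a dead box fails the connect check and gets pulled from the pool;
adding a new one is adding another real= line.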
all mail servers were running postfix, of course.
note: it is crucial that the MX receivers are able to verify that
recipients exist *before* they accept mail (and 5xx reject mail for
unknown recipients), otherwise they will become backscatter sources.
that should be pretty easy with all your user account info in mysql.
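with the accounts in mysql, that check is just a lookup table on each MX
box - a sketch along these lines (db host, table and column names are
made up, adjust to your schema):

```
## /etc/postfix/main.cf on the MX boxes (sketch)
relay_recipient_maps = mysql:/etc/postfix/mysql-recipients.cf

## /etc/postfix/mysql-recipients.cf (example values)
# hosts = dbserver.internal
# user = postfix
# password = secret
# dbname = mail
# query = SELECT 1 FROM users WHERE email = '%s'
```

anything not returned by that query gets 5xx rejected at smtp time
instead of bounced later.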
you may want to look into setting up a mysql cluster to replicate
your accounts database - share the load, and good for redundancy.
alternatively, move your account data into LDAP and run LDAP slaves on
each of the MX boxes so they have a local copy of the account data.
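the postfix side of the LDAP variant is much the same, pointed at the
local slave (base DN and attribute names below are examples only):

```
## /etc/postfix/main.cf (sketch)
relay_recipient_maps = ldap:/etc/postfix/ldap-recipients.cf

## /etc/postfix/ldap-recipients.cf (example values)
# server_host = 127.0.0.1
# search_base = ou=users,dc=example,dc=com
# query_filter = (mail=%s)
# result_attribute = mail
```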
oh yeah, of course use Maildir over NFS. not mbox. you can make mbox
work over NFS but it's not worth the trouble. easier to go with Maildir.
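on the postfix side, Maildir delivery is just the trailing slash (this
sketch is for local delivery to home directories; virtual mailboxes use
virtual_mailbox_base and friends instead):

```
## /etc/postfix/main.cf (sketch)
# the trailing slash tells postfix to deliver in Maildir format
home_mailbox = Maildir/
```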
using Maildir means using a filesystem that doesn't crap out with lots
of files in a directory. i.e. not ext2 or ext3. i like XFS as a good,
general purpose, robust file-system with many years of testing and
real-world deployment behind it. reiserfs is too experimental and has
had too many problems (and too many instances where upgrades weren't
backwards compatible with previous versions) to trust on production systems.
craig sanders <firstname.lastname@example.org>
BOFH excuse #215: High nuclear activity in your area.