
Re: recommendations for large mail system



On Mon, Feb 06, 2006 at 10:09:37AM +0200, Juha Kumpulainen wrote:

> I appreciate if you would like to share your experience to build large
> email system, especially comments on accounts-per-server, filesystem
> and test-methods are welcome!

In my experience, SpamAssassin is the largest resource hog. It is hungry
for both CPU and RAM. IIRC, it's roughly 25 MB of RES per daemon, and we
need to run with --max-children=20.
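
For concreteness, a minimal sketch of such an invocation, assuming the
stock SpamAssassin spamd flags (the user name and child count below are
just our numbers -- adjust to taste):

    # SpamAssassin's spamd, capped at 20 children so resident memory
    # stays bounded (user and count are from our setup; adjust for yours)
    spamd -d --max-children=20 -u spamd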

ClamAV is probably the next heaviest app.  Recently, I've switched to
using clamcour, which is a C++ Courier-MTA filter, in the hope that it
is more efficient.  I haven't benchmarked it yet.

The more messages you can cut off at SMTP time, the better.

We put a spamd (OpenBSD, not Apache) SMTP proxy in front to handle
greylisting + blacklisting. (We load the Composite Blocking List,
cbl.abuseat.org, twice a day.) I know it's not strictly on topic here,
but it could be implemented with whatever MX backend you want to use. It
doesn't need a big box and I think technically it's a great approach, so
I mention it.
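
Roughly, the loading looks like this; note the spamd.conf stanza and the
fetch path below are illustrative placeholders, not our exact config
(see spamd.conf(5) and spamd-setup(8) for the real details):

    # /etc/spamd.conf (sketch -- the file path/URL is a placeholder)
    all:\
            :cbl:

    cbl:\
            :black:\
            :msg="Your address %A is listed in the CBL":\
            :method=http:\
            :file=cbl.abuseat.org/path/to/list.txt:

    # root crontab: reload the blacklists twice a day
    0 */12 * * *    /usr/libexec/spamd-setup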

If required, pf can do round-robin rdr rules if it turns out you need
more than one MX (a sketch follows below). So far our Courier esmtpd
daemons work fine for 100,000+ messages/day with ClamAV + sa-spamd +
spamd all on the same box. We run Courier-IMAP and lighttpd +
SquirrelMail on another box.
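
If it ever comes to that, the rule would look something like this (the
interface macro and backend addresses are made up for illustration):

    # pf.conf: spread inbound SMTP across two MX backends, round-robin
    rdr on $ext_if proto tcp from any to $ext_if port smtp \
            -> { 192.0.2.10, 192.0.2.11 } round-robin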

The two machines are not nearly as powerful as the ones you spec'd (dual
1.2 GHz, 3 GB RAM, 300 GB 10k SCSI software RAID1 via NFS). I haven't
put in monitoring yet (bad admin!) as they are not stressed at all
(0.15/0.23/0.35 load, no pageouts, lots of free disk space). There are
20k accounts and around 4k are active (that is, logging into IMAP and
sending mail--the others still receive mail).

When we first turned things on and the email backlog out on the net
(the other provider had gone down and not come back up) deluged us, we
were delivering a max of about two messages/second to disk, with over
225 Courier submit daemons doing their thing (at about 1 MB RES each).
It took around three hours to get through the backlog, and the load was
high (6-9).

If you figure that was roughly three full days of deliveries compressed
into one, I expect we could probably triple our volume with no problem.
To scale further, I would work on expanding the blacklists on the
proxy--probably by parsing logs and creating my own blacklists, along
the lines sketched below.
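
Something along these lines, though I stress the log path and the line
format here are assumptions -- read your own logs first and adjust the
pattern:

    # sketch: harvest repeat-offender IPs from spamd's syslog output
    # into a local blacklist file (threshold and paths are made up)
    grep 'spamd\[' /var/log/daemon |
        grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' |
        sort | uniq -c | sort -rn |
        awk '$1 > 10 { print $2 }' > /etc/mail/local-blacklist.txt

I believe spamd.conf can pull in a local file like that with
method=file, so the home-grown list would ride along with the CBL.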

Backing up is another issue.  Be careful with rsync; FWIU, it tries to
load the whole list of files to back up into memory.  (I assume it works
the same on Debian as on FreeBSD; a possible workaround is sketched
after the quote):

    Date: Sat, 26 Nov 2005 12:32:26 +0000 (GMT)
    From: Robert Watson <rwatson@FreeBSD.org>
    Subject: Re: Backup solutions

    The problem I've had with rsync is that it wants to build a list of
    all files to be backed up. On my cyrus server, I have file systems
    with >6m files. This causes rsync to core dump when it discovers it
    can't allocate memory to hold the entire list at once.
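
One way around it is to split the run up so rsync never sees the whole
tree at once -- a rough sketch, with made-up paths and backup host:

    # back up one account's maildir at a time to keep rsync's file list
    # small (source path and destination host are illustrative)
    for dir in /var/spool/mail/*/; do
        rsync -a --delete "$dir" backuphost:/backup/mail/"$(basename "$dir")"/
    done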

BTW, spamd is also a tarpit which, while not as effective as I had
hoped, gives me some satisfaction: the SMTP dialog is returned at one
byte per second to known spammers that are not smart enough to
disconnect (which is not that many :( ).

m


