
Re: Suggestions building a rather big mail system.

On Tue, 2003-10-07 at 17:05, Emmanuel Lacour wrote:

> What about using localization with ldap and a pop/imap proxy:
> Users are dispatched on several real pop/imap servers
> postfix deliver to the correct server according to the ldap entry
> pop/imap proxies are load balanced and connect to the right server
> according to the ldap entry for that user.
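
To illustrate the dispatch step described above, here is a minimal sketch of the lookup both Postfix and the POP/IMAP proxy would do. A plain dict stands in for the LDAP directory, and the attribute name `mailHost` and the server names are purely illustrative assumptions, not taken from any real setup:

```python
# Sketch of LDAP-based mailbox routing: the MTA and the POP/IMAP
# proxy both ask the directory which backend stores a user's mail.
# A dict simulates the directory; "mailHost" is a hypothetical
# attribute name.

DIRECTORY = {
    "alice": {"mailHost": "store1.example.com"},
    "bob":   {"mailHost": "store2.example.com"},
}

def backend_for(user):
    """Return the backend server holding this user's mailbox."""
    entry = DIRECTORY.get(user)
    if entry is None:
        raise KeyError("unknown user: %s" % user)
    return entry["mailHost"]
```

The proxy would then open a connection to `backend_for(user)` and relay the client's POP/IMAP session; Postfix would use the same attribute as its delivery nexthop for that recipient.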

Yes, that's also possible and a nice approach, but I think the
LVS/central-storage(s) setup is the "easier" solution, and I've already
deployed setups like the one I described - that's why I mentioned it.

I've read about perdition (http://www.vergenet.net/linux/perdition/)
when Russell Coker mentioned it on the list some time ago, but I haven't
found time to give it a try on a testbed (yet).

> Like this you avoid a central storage. If one pop/imap server crashes,
> it affects only the users on that server. Each pop/imap server needs
> RAID and backups ;-)

Well, that's not much different from my NFS approach: if one of your
storage servers crashes, only the users on that server are affected.

You'll probably want to use a bunch of "smaller" storage servers (think
0.5-1 TB) with fast U320 15k disks anyway, as you'll see quite a bit of
I/O, and as a bonus you get a good distribution of data across your
storage network. The I/O a mail server typically generates (i.e. LOTS
of small files) can quite easily saturate even a RAID5 built from
U320/15k disks.

Backup is something you'll definitely want to take a closer look at, as
500k users will generate enough data to keep larger tape libraries busy
for hours (and don't forget that a restore may take a long time too).
Incremental FS snapshots would be ideal for this, but I don't know of
any way to do this with Linux.
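
One userspace approximation worth mentioning (not from the original discussion) is the well-known hard-link rotation trick, as popularized by `cp -al` + `rsync`: each snapshot hard-links files that are unchanged since the previous snapshot, so only changed files consume disk space. A minimal sketch in Python, with all paths and the size/mtime change test being illustrative simplifications (no symlinks, no deletion handling):

```python
import os
import shutil

def take_snapshot(source, prev_snap, new_snap):
    """Create new_snap as an incremental snapshot of source.

    Files unchanged since prev_snap (same size and mtime) are
    hard-linked from the previous snapshot; changed or new files
    are copied. If prev_snap doesn't exist, everything is copied.
    """
    for root, dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        os.makedirs(os.path.join(new_snap, rel), exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(new_snap, rel, name)
            old = os.path.join(prev_snap, rel, name)
            st = os.stat(src)
            if (os.path.exists(old)
                    and os.stat(old).st_size == st.st_size
                    and os.stat(old).st_mtime == st.st_mtime):
                os.link(old, dst)       # unchanged file: share the inode
            else:
                shutil.copy2(src, dst)  # changed or new file: real copy
```

Note `shutil.copy2` preserves mtimes, which is what makes the comparison against the previous snapshot work on the next run. For maildir-style storage with millions of small files, the hard links keep nightly snapshots cheap in both time and space.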

best regards,
Markus Oswald <moswald@iirc.at>  \ Unix and Network Administration
Graz, AUSTRIA                     \ High Availability / Cluster
Mobile: +43 676 6485415            \ System Consulting
Fax:    +43 316 428896              \ Web Development
