
Re: Framework for web control panel.

Kris Deugau wrote:
> Unless you've got enough hosting customers that one physical machine
> can't handle all those sites

Then you bring up a second server and move some customers over; not a
big deal.

> never mind the added memory load of a
> leaky hog like BIND

That's a separate issue. Frankly, software like BIND is always singled
out as bad and buggy, but that's largely because everyone uses it, so we
hear about its problems loudly and quickly. I've never had any trouble
with it myself.

> plus the load of dealing with all the spam that
> inevitably comes in to the one customer who *insists* on using a
> catchall account.  (Never mind the mail load for the other 999 customers.)

That's where monitoring comes in handy and is very important: a shared
host without a good monitoring solution quickly turns into a mess.
Splitting your setup across multiple servers will NOT solve this issue
anyway. You should be able to point directly at that customer and have
him migrate to a VPS or a dedicated server if he is eating all the
resources. And by the way, we've never had a problem with catch-all
accounts either...

> I've done this dance of trying to wedge everything into one box.  Even
> on a very small scale (~40 domains, at a time when spam was only ~10% -
> if that - of the overall mail volume), it works for a while, but sooner
> or later something will blow up and *all* services for a bunch of
> customers will go down.

In years of hosting, this has never happened to me, even with thousands
of domains installed. Maybe your anti-spam system is inefficient and/or
doesn't care enough about load. What we implemented is (in the order the
checks run, in front of the mail queue):
- iptables incoming connection rate limiting (otherwise it goes BOOM
very fast, indeed).
- Basic domain checks (the sender domain exists, has a valid MX, etc.).
- Basic reverse DNS checks (the PTR doesn't contain DSL, DHCP, or the like...).
- RBL check (Spamhaus).
- SPF check: on a hard fail, block; on a soft fail, greylist the
incoming email (all of this done with tumgreyspf, which I maintain in Debian).
- DKIM check (using dkimproxy, which I maintain in Debian).
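The first item in that list can be sketched with two iptables rules; the
numbers here (15 concurrent connections, 30 new connections per minute)
are made-up examples, not recommendations, so tune them for your own
traffic:

```shell
# Sketch only: cap SMTP connections per source IP. Requires root.

# Reject sources holding more than 15 simultaneous connections to port 25.
iptables -A INPUT -p tcp --dport 25 -m connlimit --connlimit-above 15 \
  -j REJECT --reject-with tcp-reset

# Drop sources opening new connections faster than 30/minute (burst of 10).
iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
  -m hashlimit --hashlimit-name smtp-rate --hashlimit-mode srcip \
  --hashlimit-above 30/minute --hashlimit-burst 10 -j DROP
```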

THEN come the heavy stages like amavis, ClamAV, and SpamAssassin. If
you do things in this order, everything runs smoothly.
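The "cheap checks first" ordering maps naturally onto the MTA's
restriction list; assuming Postfix (the original doesn't name the MTA,
and the tumgreyspf socket path is an assumption), a sketch could look
like:

```
# /etc/postfix/main.cf excerpt (sketch; adapt paths and lists to your setup)
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain,                  # basic domain/MX check
    reject_rbl_client zen.spamhaus.org,            # RBL check
    check_policy_service unix:private/tumgreyspf,  # SPF + greylisting
    permit
```

The expensive content filters (amavis and friends) then only ever see
mail that survived these cheap rejections.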

>> - Running a MySQL service over network adds a lot of latencies and
>> issues, will load your switch, etc. (the one that will pretend that
>> running over Ethernet is faster than a Unix socket is just plain wrong).
>> So you want to avoid this if possible, especially if you don't have
>> enough load to need lets say 3/4 MySQL servers is master/slave mode.
> I'd say GigE latency is not an issue compared to running the server into
> swap because someone fired off a messy query. Ethernet might not be
> faster than a Unix socket, but running your database traffic to the next
> machine down the rack over the second gigabit port on a private (V)LAN
> reserved for just such traffic is really unlikely to be slower than
> trying to share out RAM sanely between your database processes and web
> traffic.

If you experience this, it means you haven't configured MySQL's limits
correctly and are letting your customers do dangerous things. Moving
MySQL to a remote server will NOT change that anyway: that server will
ALSO be loaded like crazy, and this WILL affect all the other sites (as
most of them also use MySQL). Again, tight monitoring and rule
enforcement are the solution here.
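One way to enforce such per-customer rules is MySQL's built-in account
resource limits; the account name and numbers below are made-up
examples:

```shell
# Sketch: cap a hosting customer's MySQL usage.
# Needs a privileged MySQL account; 'cust42' and the limits are examples.
mysql -e "GRANT USAGE ON *.* TO 'cust42'@'localhost'
            WITH MAX_USER_CONNECTIONS 10
                 MAX_QUERIES_PER_HOUR 50000;"
```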

>> - Then you may say you want to run DNS on another server as well, but
>> you will also have to run a DNS server on the apache and mail server to
>> speed-up resolving (caching name server).
> Mmm, not quite IME. A caching server is different from an authoritative
> server, and most best-practice documents I've seen say you shouldn't mix
> the two.

That's the first time I've heard of this! Why is that?

> There's also the issue of distributing the
> authoritative data to a second machine anyway - no DNS system I've used
> knows how to distribute new zones to another server automagically.  (I
> know there *are* a few, but IIRC the overhead is far greater than you'd
> gain by running full DNS on every box....  and most rely on, yep, an SQL
> database backend.)

Are you talking about the *zone list*, i.e., the list of domain names?
Well, that's a fairly trivial thing for a control panel to generate...
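As a rough sketch of how trivial: regenerate a BIND zone-list include
from a plain-text domain list, then push it to the secondary. All file
names, paths, and hostnames here are assumptions, and the sample input
stands in for what the panel would write:

```shell
#!/bin/sh
# Sketch: turn a one-domain-per-line list into a named.conf include.

printf 'example.com\nexample.net\n' > domains.txt  # sample panel output

OUT=zones.conf
: > "$OUT"
while read -r d; do
  [ -n "$d" ] || continue
  printf 'zone "%s" { type master; file "/etc/bind/db.%s"; };\n' \
    "$d" "$d" >> "$OUT"
done < domains.txt

# Push step (hostname assumed), commented out in this sketch:
# rsync "$OUT" secondary:/etc/bind/ && ssh secondary rndc reconfig
```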

> Further, load spikes on one service can affect any others;  if you have
> physically separate boxes for different services, your authoritative DNS
> won't stall out when you get a flood of spam coming in, and a sudden
> order-of-magnitude spike in web traffic to one site (linked from eg
> slashdot) won't kill your POP mail service (and SMTP, and webmail...).

Unlikely to happen if you do incoming connection rate limiting for SMTP
(using iptables) and use mod_bwshare and mod_cband for Apache. Again,
the issues you have on a single server you will also have on multiple
servers; you are just mitigating the consequences, not eliminating the
root cause, which is what you should be doing.

