
Re: Framework for web control panel.



I think I see where we're getting different perspectives: I'm looking at this from the point of view of an ISP that happens to do some webhosting, while you seem to be looking at it from the point of view of a company whose main or only business *is* hosting. If that's not the case... I think we'll just have to agree to disagree. <g>

(For instance, if you're already maintaining a moderately large system to handle your ISP-domain customer email, it's fairly trivial to scale that system to support more domains... and it makes the process of adding an email account exactly the same for *any* domain).

Thomas Goirand wrote:
> Kris Deugau wrote:
>> Unless you've got enough hosting customers that one physical machine
>> can't handle all those sites

> Then open a 2nd server and move customers around, not a big deal.

*nod* Except then you have to get in contact with those customers to tell them about any changes. (I admit that with some careful thought you can probably eliminate most of those changes, but if the company has been sold three times and everyone forgot to let you, the hosting provider, know...)

>> never mind the added memory load of a
>> leaky hog like BIND

> Totally unrelated,

Not sure how; if something is chewing up memory when it shouldn't, other services *will* suffer for it sooner or later.

> and frankly, software like BIND always gets pointed at as bad and
> having issues, but this is really because everyone is using it, so we
> hear about issues very loudly and fast. I never had any issue with it
> myself.

*nod* Neither have I, personally, but for whatever reason the BIND-based DNS system we used to run *did* leak memory according to the senior systems guy at head office. It caused problems for the few other things on those machines by driving them into swap when they really only *needed* *half* the physical memory they had (and BIND had supposedly been configured to use only <x> amount of RAM).
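For reference, the knob in question would have been something along these lines in named.conf (the 64M figure is purely illustrative; I don't know what value was actually set on those boxes):

    options {
        // cap the memory used for cached records; note this limits the
        // cache only, not the process as a whole, so a leak elsewhere
        // can still blow right past it
        max-cache-size 64M;
    };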

>> plus the load of dealing with all the spam that
>> inevitably comes in to the one customer who *insists* on using a
>> catchall account.  (Never mind the mail load for the other 999 customers.)

> That's where monitoring comes in handy and is very important. A shared
> host without a good monitoring solution quickly becomes trashy.
> Separating your server into multiple ones will NOT solve this issue
> anyway.

No, but if all of your mail services are on this cluster, web over there, and DNS on another set of servers... chances are each group will have enough spare capacity to deal with spikes and bumps, and this only gets more stable as the cluster grows.

> You should be able to point directly at that customer and let him
> migrate to a VPS or a dedicated server if he is taking all the
> resources. And by the way, we never had issues with catchalls either...

That was more of a generic comment; I've encountered a number of customers who were truly shocked at how much less junk mail they had to deal with (even after reasonably decent spam filtering) by not using a catchall on a domain with just 3 real, live addresses.

>> Mmm, not quite IME. A caching server is different from an authoritative
>> server, and most best-practice documents I've seen say you shouldn't mix
>> the two.

> First time I hear about this! Why is that?

I haven't quite managed to wrap my head around all the details, but as I understand it, a server that answers recursive queries has a cache that can be poisoned by spoofed responses, which a purely authoritative server doesn't have to worry about at all, so keeping the two roles on separate machines keeps that risk away from the boxes holding your authoritative data.

>> There's also the issue of distributing the
>> authoritative data to a second machine anyway - no DNS system I've used
>> knows how to distribute new zones to another server automagically.

I should qualify this by noting that tinydns doesn't even really have the concept of "zones" the way nearly every other DNS server does; to copy an authoritative zone list from one server to another you just copy the data file (and, if they're same-arch/distro, the .cdb file) and the second machine now knows about the same zones as the first. (If you want a *different* zone list on the second, you have some fancy data manipulation to do.) But you still have to transfer that data between systems *somehow*, and I'm not aware of any non-database-based ones that can transfer a new zone/domain to a slave server without manual intervention on the slave in some way.
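To give an idea of what I mean, the whole transfer boils down to something like this (the hostname is invented, and the paths assume the stock tinydns-conf layout):

    # push the master's data file to the secondary, then rebuild data.cdb there
    rsync -az /etc/tinydns/root/data ns2.example.net:/etc/tinydns/root/data
    ssh ns2.example.net 'cd /etc/tinydns/root && make'

A "new zone" is nothing more than a few extra lines appended to that data file, which is why there's no per-zone setup to do on the secondary.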

> Are you talking about the *zone list*, e.g. the list of domain names?
> Well, that's quite a trivial thing to write in a control panel...

*nod* But if you're going to push the zone off to another machine anyway, why not declare machines X, Y, and Z as The Authoritative DNS Servers, and just run a lean cache on local machines?
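If it helps make that split concrete, in BIND terms it's basically the difference between these two configs (zone name and netblock invented for the example):

    // on the authoritative boxes (X, Y, Z): serve zones, never recurse
    options { recursion no; };
    zone "customer-domain.com" { type master; file "customer-domain.com.db"; };

    // on the local caching boxes: recurse for our own netblock, serve no zones
    options {
        recursion yes;
        allow-recursion { 192.0.2.0/24; };
    };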

>> Further, load spikes on one service can affect any others; if you have
>> physically separate boxes for different services, your authoritative DNS
>> won't stall out when you get a flood of spam coming in, and a sudden
>> order-of-magnitude spike in web traffic to one site (linked from e.g.
>> slashdot) won't kill your POP mail service (and SMTP, and webmail...).

> Unlikely to happen if you do incoming connection rate limiting for SMTP
> (using iptables), and use mod_bwshare and mod_cband for Apache. Again,
> the issues you are having on a single server you will have on multiple
> servers as well; you are just mitigating the consequences here rather
> than eliminating the issue, which is what you should be doing.

*nod* Monitoring is good and useful, and the mitigation methods you mention will certainly help keep the server from being overloaded too badly.

But monitoring won't prevent sudden spikes from creating an overload; and if you rate-limit SMTP connections a sudden spam flood *will* leave legitimate mail stalled elsewhere. (We all know that SMTP is best-effort... sadly, many customers expect it to be instant messaging.) If you rate-limit web traffic in some manner, customers *will* call complaining that their site is slow. (Most commonly, customers who are lucky to get five hits a day on their site, three of which are them checking to see if it's up.)
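For the record, the kind of SMTP rate limiting Thomas is describing boils down to something like this (numbers pulled out of the air; real rules would also want to whitelist known-good hosts):

    # accept new SMTP connections only at a limited rate, with a small burst allowance
    iptables -A INPUT -p tcp --dport 25 --syn -m limit --limit 30/minute --limit-burst 60 -j ACCEPT
    # anything over that gets dropped; legitimate senders are left retrying later
    iptables -A INPUT -p tcp --dport 25 --syn -j DROP

And that's exactly the stall I mean: during a flood, the legitimate mail sits in somebody else's retry queue right alongside the junk.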

I see separated services running on independent clusters as a good way to reduce the impact of a flood of any kind of traffic, and any web-based customer controls for their domains had better be able to handle that configuration.

(There's also the issue, beyond a certain scale, of managing tens to hundreds of servers or more without having to log into each one individually; if you're already doing RPC stuff to handle that, why *not* at least *allow* services to be split?)

-kgd

