
Re: Framework for web control panel.



Thomas Goirand wrote:
> I've read about this motivation many times; however, there are very
> rare cases where the load makes it a real need. In such cases, you
> typically run a single website, and then using a control panel isn't
> really justified, as you would run only Apache and a single vhost per
> server, and maybe a reverse proxy like HAProxy to do the balancing.
> In all other cases, running everything on the same server reduces so
> much complexity that it's the best choice to make.

Unless you've got enough hosting customers that one physical machine can't handle all those sites, never mind the added memory load of a leaky hog like BIND, plus the load of dealing with all the spam that inevitably comes in to the one customer who *insists* on using a catchall account. (Never mind the mail load for the other 999 customers.)

I've done this dance of trying to wedge everything into one box. Even on a very small scale (~40 domains, at a time when spam was only ~10% - if that - of the overall mail volume), it works for a while, but sooner or later something will blow up and *all* services for a bunch of customers will go down. (Give them a pile of hemp strands, and some customers *will* industriously go about making a rope to hang themselves with... and in a fully shared-hosting all-in-one environment, they may well take a lot of others with them.)

> - Running a MySQL service over the network adds a lot of latency and
> issues, will load your switch, etc. (anyone who claims that running
> over Ethernet is faster than a Unix socket is just plain wrong). So
> you want to avoid this if possible, especially if you don't have
> enough load to need, let's say, 3/4 MySQL servers in master/slave mode.

I'd say GigE latency is not an issue compared to running the server into swap because someone fired off a messy query. Ethernet might not be faster than a Unix socket, but running your database traffic to the next machine down the rack over the second gigabit port on a private (V)LAN reserved for just such traffic is really unlikely to be slower than trying to share out RAM sanely between your database processes and web traffic.
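
(If you ever want to put a number on it, a quick-and-dirty probe along these lines does the job. It's Python with mysql-connector-python, and the socket path, VLAN address, and credentials are made-up placeholders rather than anything we actually run:)

#!/usr/bin/env python3
# Rough latency probe: time a trivial query over the local Unix socket
# versus TCP to a database box on a private VLAN. All paths, addresses
# and credentials below are placeholders.
import time
import mysql.connector

def mean_query_ms(n=200, **conn_args):
    conn = mysql.connector.connect(user="probe", password="secret",
                                   database="test", **conn_args)
    cur = conn.cursor()
    start = time.perf_counter()
    for _ in range(n):
        cur.execute("SELECT 1")
        cur.fetchall()
    elapsed = time.perf_counter() - start
    cur.close()
    conn.close()
    return elapsed / n * 1000.0   # mean round trip in milliseconds

if __name__ == "__main__":
    print("unix socket: %.3f ms" % mean_query_ms(unix_socket="/var/run/mysqld/mysqld.sock"))
    print("private LAN: %.3f ms" % mean_query_ms(host="10.0.1.2", port=3306))

I'd expect the two numbers to land within a few tenths of a millisecond of each other, which is noise next to what one bad query pushing the box into swap costs you.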

> - Then you may say you want to run DNS on another server as well, but
> you will also have to run a DNS server on the Apache and mail servers
> to speed up resolving (a caching name server).

Mmm, not quite IME. A caching server is different from an authoritative server, and most best-practice documents I've seen say you shouldn't mix the two.

Which means your authoritative zones shouldn't be local to the DNS cache on the web server. There's also the issue of distributing the authoritative data to a second machine anyway - no DNS system I've used knows how to distribute new zones to another server automagically. (I know there *are* a few, but IIRC the overhead is far greater than you'd gain by running full DNS on every box.... and most rely on, yep, an SQL database backend.)
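
(To make the cache/authoritative split concrete, here's a little sketch of the two query paths, in Python with recent dnspython; the addresses and zone name are placeholders, purely illustration rather than anything out of our setup:)

#!/usr/bin/env python3
# The web/mail boxes do ordinary recursive lookups against the shared
# cache; the outside world asks the authoritative servers directly,
# with recursion disabled. Addresses and names are placeholders.
import dns.flags
import dns.message
import dns.query
import dns.resolver

CACHE = "192.0.2.53"     # shared caching resolver
AUTH = "192.0.2.10"      # authoritative server for the zone
NAME = "www.example.org"

# Recursive lookup, the way a local caching resolver is meant to be used.
res = dns.resolver.Resolver(configure=False)
res.nameservers = [CACHE]
print("via cache:", [r.address for r in res.resolve(NAME, "A")])

# Direct, non-recursive query to the authoritative server.
q = dns.message.make_query(NAME, "A")
q.flags &= ~dns.flags.RD     # clear the "recursion desired" bit
resp = dns.query.udp(q, AUTH, timeout=2)
print("authoritative answer:", resp.answer)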

FWIW, here we seem to be getting along fine with our servers (10 that I can think of offhand that *need* responsive DNS service) all using the same two dnscache servers as our connectivity customers (dialup, DSL, cable, fibre, wireless, and various copper loop services). Two of our four authoritative tinydns servers happen to be on the same physical machines as the caches (largely due to power capacity issues, IIRC), but they could just as well be separate machines; the caches don't have any special knowledge about the authoritative zones.

> - You will also be running a mail server on all of your servers to at
> least receive mail for root.

mmmnope. nullmailer forwarding everything to a proper role account mailbox (or alias) is all you need (although nullmailer needs to be kicked once in a while IME). You *really* don't want to have to log into every single box to receive cronspam and other administrivia. (I've been there. It doesn't scale well beyond two or three machines.)
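
(For reference, the glue on a satellite box is tiny; something like the following, with the hostnames as placeholders - remotes points everything at the smarthost that does the real delivery, and adminaddr, if memory serves, is the file that redirects locally generated mail like root's cronspam to a real mailbox:)

/etc/nullmailer/remotes:
    mail.example.net smtp

/etc/nullmailer/adminaddr:
    hostmaster@example.net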

Further, load spikes on one service can affect any others; if you have physically separate boxes for different services, your authoritative DNS won't stall out when you get a flood of spam coming in, and a sudden order-of-magnitude spike in web traffic to one site (linked from, e.g., Slashdot) won't kill your POP mail service (and SMTP, and webmail...).

> I think you are really underestimating the programming work here.
> There's nothing really complicated, just YEARS of work, literally. Be
> prepared to have only half of the needed functions in 2 or 3 years'
> time. In the past, I've heard so many people claim that they would
> write such a panel, but in the end only a handful ended up with
> something that practically does what it's supposed to do, because
> it's just too much work.

I have to agree with this. I wrote a simple set of scripts to manage virtual hosting at one time, but the interface was pretty simpleminded. It was functional enough for a small ISP (~1500 dialup customers and a bit of domain hosting), and I opened the email management for domains to customers (add, remove, change password, change forwarding), but it still lacked a few things on the internal side (delete domain, make changes of any kind to DNS after initial domain addition), and never got integrated with any of the billing.

Our current billing system has been in use for a couple of years, after several years in development (and, IIRC, one false release some time before it really got put into full use), but it's *still* under active development - of course, it has to cope with all the new services we keep adding on as we buy smaller ISPs. <G>

On the other hand, it's only in the last 6 months or so that it's had any visibility into email accounts for hosted domains.

And this is with a more or less dedicated group all working on various aspects of the billing/recordkeeping side of the system - and another few people working on automation and scripting on the mail servers.

-kgd

