Re: How do you approach the problem of "MaxClients reached" with apache?
On Wed, Feb 22, 2012 at 3:49 PM, Bob Proulx <email@example.com> wrote:
> francis picabia wrote:
>> One of the most frustrating problems which can happen in apache is to
>> see the error:
>> server reached MaxClients setting
> Why is it frustrating?
Yes, and maybe you haven't run into this condition. Suppose you have
hundreds of users who decide to dabble in some PHP without knowing much
more than their textbook examples. In that case, part of the high
connection rate is merely code running on the server itself. It comes
from the server's own IP, so no, a firewall rate limit won't help. It is
particularly annoying when this happens after hours and we need to
understand the situation.
> Is that an error? Or is that protection against a denial of service
> attack? I think it is protection.
It does protect the OS, but it doesn't protect apache. Apache stops taking
new connections, and it is just as good as if the system had burned to
the ground in terms of what the outside world sees.
MaxClients isn't much of a feature when running the service is the
primary purpose of the server. When the maximum is reached, apache does
nothing to get rid of the problem. It can just stew there, not resolving
any hits for the next few hours. It is not as useful as, say, the OOM
killer in the Linux kernel.
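One way to at least see what the stuck children are doing when it wedges
is mod_status. A sketch of the 2.2-era config (the location and the ACL
here are illustrative, enable with a2enmod status):

```apache
# Sketch for Apache 2.2: expose the scoreboard so you can see what
# every child is busy with when MaxClients is reached.
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
# then, from the box itself:
#   curl http://localhost/server-status?auto
```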
> The default for Debian's apache2 configuration is MaxClients 150.
> That is fine for many systems but way too high for many light weight
> virtual servers for example. Every Apache process consumes memory.
> The amount of memory will depend upon your configuration (whether mod
> php or other modules are installed) but values between 20M and 50M are
> typical. On the low end of 20M per process hitting 150 clients means
> use of 1000M (that is one gig) of memory. If you only had a 512M ram
> server instance then this would be a serious VM thrash, would slow
> your server to a crawl, and would generally be very painful. The
> default MaxClients 150 is probably suitable for any system with 1.5G
> or more of memory. On a 4G machine the default should certainly be
> fine. On a busier system you would need additional performance tuning.
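That arithmetic is easy to redo for any box. A back-of-the-envelope
sketch, where the 20 MB/child figure is the low-end example from above
and not a measurement:

```shell
# Rough MaxClients sizing.  20 MB/child is the low-end example figure;
# measure your own on a live box with something like:
#   ps -ylC apache2 --sort=rss    (RSS is in KB)
per_child_mb=20
ram_for_apache_mb=1536    # memory you can afford to give Apache
echo "MaxClients ~ $(( ram_for_apache_mb / per_child_mb ))"
# -> MaxClients ~ 76
```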
Ours was at 256, already tuned. So the problem is, as I stated, not
about raising the limit, but about troubleshooting the source of
> Look in your access and error logs for a high number of simultaneous
> clients. Tools such as munin, awstats, webalizer and others may be
> helpful. I use those in addition to scanning the logs directly.
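For scanning the logs directly, a sort | uniq -c pipeline over the client
IP field is usually enough to spot a top talker. The log path below is
Debian's default and would need adjusting per vhost; the inline sample
is only there to make the snippet self-contained:

```shell
# Count hits per client IP in a combined-format access log.
# Point LOG at your real file, e.g. /var/log/apache2/access.log;
# a tiny inline sample is used here for illustration.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
10.0.0.1 - - [22/Feb/2012:15:49:01 -0400] "GET / HTTP/1.1" 200 512
10.0.0.1 - - [22/Feb/2012:15:49:02 -0400] "GET /a HTTP/1.1" 200 512
10.0.0.2 - - [22/Feb/2012:15:49:03 -0400] "GET /b HTTP/1.1" 200 512
EOF
awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn | head
rm -f "$LOG"
```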
I hate munin. Too much overhead on systems that are already close to
performance trouble at peak traffic. I already poll the load, and it had
not increased. I should add a scan of memory usage as well.
I prefer cacti for this sort of thing.
Shortening the timeout is another useful thing as you mentioned.
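For the archives, the relevant knobs live in apache2.conf on Debian. The
values below are illustrative, not a recommendation:

```apache
# /etc/apache2/apache2.conf -- illustrative values only
Timeout 60                # default 300; how long a stalled client holds a child
KeepAlive On
KeepAliveTimeout 3        # default 15; each idle keepalive pins a whole child
MaxKeepAliveRequests 100
```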