
Re: How do you approach the problem of "MaxClients reached" with apache?



francis picabia wrote:
> One of the most frustrating problems which can happen in apache is to
> see the error:
> 
> server reached MaxClients setting

Why is it frustrating?

Is that an error?  Or is that protection against a denial of service
attack?  I think it is protection.

The default for Debian's apache2 configuration is MaxClients 150.
That is fine for many systems but way too high for many lightweight
virtual servers, for example.  Every Apache process consumes memory.
The amount of memory will depend upon your configuration (whether
mod_php or other modules are loaded) but values between 20M and 50M
are typical.  Even at the low end of 20M per process, hitting 150
clients means using 3000M (that is three gigs) of memory.  If you
only had a 512M RAM server instance then this would cause serious VM
thrashing, slow your server to a crawl, and generally be very
painful.  The default MaxClients 150 is probably only suitable for a
system with several gigs of memory.  On a 4G machine the default
should certainly be fine.  On a busier system you would need
additional performance tuning.
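
To get a rough per-process figure on a running system you can average
the resident set size of the workers.  RSS counts shared pages
against every process, so this overstates the true total; treat it as
a ballpark only.  Assuming Debian's apache2 process name:

    $ ps -C apache2 -o rss= | awk '{sum += $1; n++}
        END {if (n) printf "%d processes, %.0fM average\n", n, sum/n/1024}'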

On small machines I usually decide how much memory I am willing to
allocate to Apache and then set MaxClients accordingly.  For example,
on a 512M RAM virtual private server running a small site, budgeting
about 256M for Apache at roughly 20M per process works out to a
MaxClients value of about 12, depending upon various factors.
Hitting MaxClients is then often exactly the protection you need
against external attacks.
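
On such a machine the prefork settings in /etc/apache2/apache2.conf
might look something like this.  The numbers are only illustrative
and need tuning for your own site:

    <IfModule mpm_prefork_module>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       5
        MaxClients           12
        MaxRequestsPerChild 500
    </IfModule>

A low MaxRequestsPerChild also recycles processes periodically, which
limits the damage if a module leaks memory.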

> After it, the server slowly spirals down.  Sometimes it mysteriously
> recovers.  This is difficult to diagnose after the problem has
> appeared and gone away.
>
> What have we for advice on:
> 
> a) diagnosis of the cause when the problem is not live

Look in your access and error logs for evidence of a high number of
simultaneous clients.  Tools such as munin, awstats, webalizer, and
others may be helpful.  I use those in addition to scanning the logs
directly.
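
When scanning directly, a quick tally of the busiest client addresses
is often all that is needed.  Assuming the usual combined log format
with the client address in the first field:

    $ awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head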

> b) walling off the problem so one bad piece of code or data does not
> affect all sites hosted on the same server

I haven't tried this myself and have no hard information, but it
seems reasonable to hack up some type of iptables rate limiting based
upon attack activity in the log files, much as fail2ban does for ssh
connections.
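
As an untested sketch of the iptables side, the "recent" match can
drop a client that opens too many new connections too quickly.  The
numbers here are arbitrary and a real setup would want a whitelist
for known proxies:

    # iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
        -m recent --set --name HTTP
    # iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
        -m recent --update --seconds 60 --hitcount 20 --name HTTP -j DROP

Anything beyond 20 new connections from one address within 60 seconds
gets dropped.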

> Obviously a Virtual Machine could handle item b, but I am hoping for
> something interesting on the apache side.  Obviously there is
> increasing the limit, but this is typically not the solution (the
> real problem for apache is bad code, or data outside of what the
> code can digest in a timely manner).

Usually the problem with MaxClients is an unfriendly Internet.
Robots hammering away at sites rapidly and in parallel can consume a
great deal of server resources.  The attack is usually external.
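
For the merely aggressive (as opposed to hostile) robots, a
Crawl-delay hint in robots.txt may slow down the crawlers that honor
it; the hostile ones of course will not:

    User-agent: *
    Crawl-delay: 10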

> One tip I've seen suggested is to reduce the apache timeout from 120
> seconds down to 60 seconds or so.

Reducing the Timeout value is also useful for shrinking the attack
window.
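
In Debian that is the Timeout directive in /etc/apache2/apache2.conf.
KeepAliveTimeout works on the same principle, freeing a process
sooner when a keep-alive client goes idle.  The values here are only
examples:

    Timeout 60
    KeepAliveTimeout 5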

Bob
