
Re: we were attacked



On Sat, Apr 08, 2006 at 02:03:49AM +0300, Juha-Matti Tapio wrote:
> On Fri, Apr 07, 2006 at 11:41:28PM +0100, Steve Kemp wrote:
> >   No the appalling part was you having a machine compromised
> >  resetting it to a "good" state and then letting it get compromised
> >  again, and again, and again.

gotta agree with that.

a problem like this is something that has to be fixed, not ignored - and
if you can't fix it yourself, you should hire someone who can.


> Problems like this aren't simple to diagnose on webhosting
> environments.

actually, they're not that hard - you can find most of them by grepping
for half a dozen or so likely strings in the apache access log - "wget",
"curl", "snarf", "/bin/sh", "/bin/perl", ";", and as a last resort,
"%20" (for encoded space characters, which nearly all shell exploits
will have in them).

> There could be a lot of requests in the logs 

use grep to both search for suspect strings and 'grep -v' to exclude the
"noise" (i.e. known good requests).

this is really basic sysadmin stuff - anyone who can't do it has no
business operating a publicly accessible server on the internet.

> and there could also be a lot of users whose scripts might have been
> the cause.

if you allow users to upload their own scripts, then you should make
clear in your terms and conditions that you can and will delete without
further notice any that are found to be a security or resource-hogging
problem.

there are numerous known bad scripts (e.g. the various Matt Wright
scripts) - start off by banning them outright. delete them on sight, and
make sure your users know you will do exactly that. 
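
something like this will hunt down the worst offenders by filename
(made up example - the real blacklist should be a lot longer):

	# formmail, wwwboard and guestbook are the classic Matt Wright scripts
	find /home/ -type f \( -iname 'formmail*' -o -iname 'wwwboard*' \
		-o -iname 'guestbook*' \) -print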

similarly, there are several well-known things that CGI scripts simply
shouldn't do (e.g. trusting user-supplied input, and especially passing
that input unchecked/unsanitised to external programs) - and, in many
cases, a cursory examination of a script will tell you whether it is
doing them or not. delete or disable any that are. if in doubt, disable
(e.g. "chmod -x").

be ruthless. running scripts on your server is a PRIVILEGE, not a
RIGHT.



"it's too hard" is a lame excuse. if it's "too hard" for you to do
properly then you SHOULD NOT be providing the service. at all.

as a web hosting provider, you have a responsibility to yourself, to
your customers AND to the rest of the net to keep up to date with web
security issues. if you can't be bothered doing that then you really
shouldn't be running a web hosting service. there are more than enough
negligent and incompetent operators on the net already - we don't need
any more.


> I do not think it is reasonable to take more drastic countermeasures
> immediately if there are no signs of attempts to gain root. 

you have no way of knowing exactly what they've done, what they might do
in future, or what someone else might do in exploiting the same hole.
you have to assume the worst.

> On large webhosting systems it is pretty much a normal event that some
> client ends up with compromised scripts.

i've run dozens of web hosting servers with many hundreds of virtual
hosts on each - not one of them has ever been compromised in this
way. the only machine i've ever had compromised in over 10 years of
using linux (which includes hundreds of debian servers i've built and
used over that time) was an old laptop i had out in the back room at
home and ignored for months (so, of course, the inevitable happened and
it got compromised. my own stupid fault for being so slack).

i've had so few compromised machines because a) i've regarded it as a
duty to keep my knowledge of security issues up-to-date, and b) i've
always been both strict and ruthless in what i've allowed to be run
on them. i've banned known bad scripts like MW's garbage (in fact,
banned whole classes of scripts - like all "formmail" type scripts
except for the one that I personally supplied), made rules restricting
certain behaviours (e.g. any script that sends mail has to conform to my
standards so that it can't be abused by spammers) - and regularly
searched for mail-sending scripts with cron jobs like:

	# list files under /home/ that look like web scripts, then search
	# them for mail-sending code and known-bad script signatures
	find /home/ -type f -print0 | \
		egrep -zZ "/cgi-bin/|\.(cgi|sh|pl|php|phtml|....)" | \
		xargs -0 egrep -il "sendmail|Net::SMTP|MIME::Lite|Matt Wright|...."

(made up example only - the find command was more complicated and the
actual regexp was longer, including the names of a few dozen well-known
bad scripts, and some particularly bad common code sequences. note:
use 'find -print0' and 'xargs -0' to cope with idiot users who like to
upload files with space chars in the filenames)

i also had cron jobs searching for suspicious activity in the access
logs.
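
again, a made up example (an /etc/crontab entry - cron mails any output
to root, so no matches means no mail):

	# nightly sweep of the access log for exploit fingerprints
	0 4 * * *	root	egrep -i 'wget|curl|/bin/sh|/bin/perl' /var/log/apache/access.log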

all that would give me a list of scripts to check more thoroughly, and
any that failed to make the grade were summarily deleted or disabled.
after a while, i got to know which customers were clued up (and thus
their scripts could be skimmed over quickly) and which were clueless
(and thus deserved more intensive scrutiny of their scripts).




> Stock Debian is not terribly well suited to this kind of environments (only
> little support for extra security and user separation), but I still do not
> think this kind of system management is appalling.

i do.  it's appalling and it's shameful.

craig

-- 
craig sanders <cas@taz.net.au>           (part time cyborg)


