
Re: Common security checks for a base installation - packages reviewed.

On Mon, Dec 30, 2002 at 03:33:35PM +0100, Javier Fernández-Sanguino Peña wrote:

> >  filesystem and create a database of all the files it finds:
> > 
> > 	name  size  ctime mtime atime user group perms
> 	But where would the database reside? Memory? Disk? If the situation
> changes (a file is modified) after, say, running a test that takes 30
> minutes to process the database, then the next check might miss an important
> issue.

  That's something that could come up even without a cache of information.

  If you scan the filesystem once looking for, say, permissions, and
 then later scan again to, say, test MD5 sums, the first file you examine
 could have been modified just after you tested it - at which point you
 won't find out until the next invocation.

  The advantage of having a lightweight scan, though, is that the scan
 could happen hourly without putting the system under undue load.
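 A lightweight scan of that sort needs little more than a single find
 pass.  A minimal sketch (GNU find assumed; the scanned directory and
 database path here are throwaway examples, not a proposed layout):

```shell
#!/bin/sh
set -e
# Sketch: one GNU find pass records name, size, ctime, mtime, atime,
# user, group and perms for every regular file.  The directory and
# database path are illustrative only; a real tool would scan the
# whole filesystem and keep the database somewhere tamper-resistant.
DIR=$(mktemp -d)
DB="$DIR.db"
echo hello > "$DIR/a"
echo world > "$DIR/b"

find "$DIR" -xdev -type f \
    -printf '%p %s %C@ %T@ %A@ %u %g %m\n' > "$DB"

cat "$DB"
```

 A later, heavier pass (MD5/SHA sums) could then be run only against
 entries whose recorded attributes have changed since the last scan.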

> >   (Hmmm, does locate/slocate store enough information in its database?)
> 	Not that I know of.

  I checked.  locate doesn't, and whilst slocate stores more information
 it doesn't store enough.

> 	I would say my preferences in this area are mail, syslog and snmp (only).
> Note that pages could be programmed through any of these. Syslog has the
> advantage of being logged (mail can get lost) and can also be sent to a
> remote system which cannot be tampered with easily. Snmp provides
> integration with Network Management tools, which could provide more
> effective alarm mechanisms (ticket integration, automatic response, or
> whatever).

  That sounds reasonable, however I'm surprised that you consider syslog
 more reliable than mail.  Mail has well-defined queuing and timeout
 behaviour before a message is dropped/bounced.
  Several of the more common syslog implementations make no guarantees
 about actually recording a message they're passed - and either way
 there is no notification to the invoking process of whether the message
 was handled, dropped, or even received.

> 	The other one I have is system load. The perl interpreter has way
> more overhead than a shell script. This might not be an issue with big
> tests (going through all the filesystem) but probably is with small tests
> (just running netstat and looking at the output).

  True, but for most of the tests other factors will probably come into
 play anyway.  Making fingerprints of files to do hash comparisons is
 going to involve lots of IO, during which the additional overhead of
 loading perl compared to /bin/sh is probably minimal - for example.
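 For a rough feel of the proportions, something like the following can
 be run (the numbers vary per machine, so only the shape of the
 comparison matters; perl is timed only if it is installed, and the
 32MiB scratch file is just an arbitrary size for illustration):

```shell
#!/bin/sh
# Rough sketch: interpreter start-up cost versus the I/O cost of
# hashing a file.  Not a benchmark - just an order-of-magnitude check.
ms() {  # print milliseconds taken to run "$@"
    t0=$(date +%s%N)
    "$@" >/dev/null 2>&1
    t1=$(date +%s%N)
    echo $(( (t1 - t0) / 1000000 ))
}

f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=32 2>/dev/null

echo "sh start-up:       $(ms sh -c ':') ms"
command -v perl >/dev/null 2>&1 && \
    echo "perl start-up:     $(ms perl -e '') ms"
echo "sha1sum of 32MiB:  $(ms sha1sum "$f") ms"
```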

> 	If I were to rewrite Tiger I would do it in C rather than Perl.
> It would take quite a bit more time too.

  I think C would be a worse choice, as the scripts are likely not processor
 bound - and the ease of modification goes away.  (Plus there are more
 issues, like string and memory handling.)  All the CPU-intensive operations,
 such as SHA generation, will be either a separate process in the /bin/sh
 case, or a C module in the perl case, so chances are there's not too
 much to be gained.  (Just the removal of the fork(), likely.)
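 For illustration only (sha1sum and perl's Digest::SHA are assumed to
 be available; the perl step is simply skipped if not): both approaches
 end up running the same compiled hashing code, just with or without the
 extra process:

```shell
#!/bin/sh
# Sketch: a /bin/sh checker forks an external binary for every
# checksum, while perl's Digest::SHA (a compiled C module) computes
# the same digest in-process - C does the hashing either way.
f=$(mktemp)
echo "some file content" > "$f"

# /bin/sh style: fork()+exec() of sha1sum per file
sha1sum "$f" | cut -d' ' -f1

# perl style: same digest, no extra process (run only if available)
if command -v perl >/dev/null 2>&1; then
    perl -MDigest::SHA=sha1_hex -e \
        'local $/; open my $fh, "<", $ARGV[0] or die; print sha1_hex(<$fh>), "\n"' \
        "$f" 2>/dev/null
fi
rm -f "$f"
```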

