
Re: Using squidguard blacklists in skolelinux(.de)



Hi Dirk

(personal CC not necessary)

On Thursday, 20 January 2005 at 11:44, Dirk Gómez wrote:
...

> > We should consider delivering the hash codes only - to avoid
> > potential legal consequences (IANAL)
>
> Could you explain please? Both legal issues and technical
> consequences?

Providing a link collection of rated or illegal web sites could be a 
legal issue. Hash codes could spare us these risks - however, somewhere 
we'd need to keep the source. Hard to decide. 
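
A minimal sketch of what I mean, assuming we'd ship SHA-1 hashes of 
the blacklisted domains instead of the domains themselves (the domain 
names below are placeholders):

    import hashlib

    # One SHA-1 hash per blacklisted domain; the plain-text domains
    # never have to leave the host that builds the list.
    def hash_entry(domain):
        return hashlib.sha1(domain.encode("utf-8")).hexdigest()

    blocked_hashes = {hash_entry(d) for d in ("bad.example", "worse.example")}

    # On the school server: hash the requested domain and look it up.
    def is_blocked(domain):
        return hash_entry(domain) in blocked_hashes

    print(is_blocked("bad.example"))     # True
    print(is_blocked("schule.example"))  # False

One consequence: a hashed list only supports exact lookups, so the 
regexp matching discussed further down could not run against it.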
>
> >> Skolelinux installations would fetch the blacklist periodically
> >> (which time intervals make sense?) and then there will be some
> >
> > like once a week? once a day? diffs versus whole lists?
>
> Whole lists - much easier to implement, less fragile in the long run
> - bandwidth is cheap these days. Do you know how often the lists
> should be fetched?

With rsync we could minimize the bandwidth anyway. As there are schools 
without a flat rate, automated updates are critical. How about having 
some Debian package like debian-edu-webmin-squidguard-simple-data :)
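
A rough sketch of the fetch job such a package could ship (the mirror 
URL and paths are invented, not an existing service; cron would call 
this once a day or week):

    import subprocess

    SOURCE = "rsync://blacklists.example.org/squidguard/"  # placeholder
    TARGET = "/var/lib/squidguard/db/"

    def fetch_blacklists():
        # rsync transfers only the files that changed, so even
        # frequent runs stay cheap on a metered line.
        subprocess.check_call(["rsync", "-az", "--delete", SOURCE, TARGET])

    if __name__ == "__main__":
        fetch_blacklists()

After a successful fetch, the databases would still have to be 
recompiled (squidGuard -C all) and squid told to reload.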
>
> >> postprocessing as well: teachers can add whitelist and blacklist
> >> entries through the webmin interface.
>
> I suggest this approach for starters: teachers can enter domains and
> each white- or blacklisted domain blocks out the whole domain. Or is
> this too "thorough"? Do you need to block specific URLs?

- whitelists are necessary to block "chat" but still allow "schatten"
- words like "chat" - or special keywords - need to be blocked anywhere 
in the URL (say: some.domain/badword.html)
- there are sometimes users providing "bad" pages beneath common 
providers, like yahoo.com/~baduser/lockeverythinghere.

Hence I believe we'd need regexps.
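
To make the "chat" versus "schatten" point concrete (a quick 
illustration; the patterns are examples only):

    import re

    # Naive substring match: blocks "schatten" along with "chat".
    naive = re.compile(r"chat")
    print(bool(naive.search("www.schatten.de/")))        # True - wrong

    # Word-boundary match: catches ".../chat/..." but spares "schatten".
    better = re.compile(r"\bchat\b")
    print(bool(better.search("www.schatten.de/")))       # False
    print(bool(better.search("some.domain/chat/")))      # True

    # Path match for the provider case:
    baduser = re.compile(r"/~baduser/")
    print(bool(baduser.search("yahoo.com/~baduser/page.html")))  # True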
>
> If you use regexes to block URLs/domains you inevitably have to
> explain those to users. And you also have to prepare for "weird" and
> "unexpected" side effects.

I know, so we have to limit this by offering options that we translate 
to regexps (and back), see the sketch after the list: 

Block all sites that
- start with
- end with
- contain
- match (expert users only)
_______

This rule applies to 
o entire URL (default)
o domain name only (before first '/')
o path name only (after first '/')

The latter distinction is open to discussion.
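
A rough sketch of that translation layer (the function and option 
names are invented for illustration, not an existing webmin module; 
URLs are assumed to arrive with the scheme already stripped):

    import re

    # Turn a webmin option plus pattern into a compiled regexp.
    def build_rule(option, pattern):
        escaped = re.escape(pattern)
        if option == "start":
            return re.compile("^" + escaped)
        if option == "end":
            return re.compile(escaped + "$")
        if option == "contain":
            return re.compile(escaped)
        if option == "match":          # expert users only: raw regexp
            return re.compile(pattern)
        raise ValueError("unknown option: " + option)

    # Apply the rule to the chosen part of the URL.
    def applies(rule, url, scope="url"):
        if scope == "domain":
            target = url.split("/", 1)[0]                # before first '/'
        elif scope == "path":
            parts = url.split("/", 1)
            target = parts[1] if len(parts) > 1 else ""  # after first '/'
        else:
            target = url
        return bool(rule.search(target))

    rule = build_rule("start", "chat")
    print(applies(rule, "chat.example.com/", scope="domain"))  # True
    print(applies(rule, "www.schatten.de/", scope="domain"))   # False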
>
> A simple solution as described by me is easy to explain. It is
> obviously imperfect, but no URL-blocker is perfect.
>
> > We should consider the definition of a set of black-/whitelists
> > that can be activated one by one. Blacklists you find on the web
> > often are classified into
> > - adult
> > - violence
> > - illegal
>
> Yes, the default squidguard blacklist is a set of files which are
> categorized. One could show the categories in the webmin interface
> and teachers could select which ones to use and which ones not.
>
> > We could add a group "chat" and "mail" to disallow such services
> > for special occasions. In practice, teachers shouldn't tweak around
> > with individual selections, but select one of a list of predefined
> > profiles (allow everything / ban adult stuff / kids only / ...)
>
> I prefer this approach to offering a fine-grained selection.
>
> I'll play around a little bit - especially with webmin - and then
> report back to this list.
>
Thank you. If this is not too intricate (as there have been requests 
for it, too): Is it possible to distinguish different user groups that 
operate on different rule sets? Underage - adult - teachers? Would 
this make it necessary to use squid authentication against LDAP - or 
could one use the 'last' command to see who is surfing from which IP?
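
The 'last' idea could look roughly like this (only a sketch; it 
assumes the logins actually show up in 'last' with their IP, which 
would need checking on a real Skolelinux network):

    import subprocess

    # Map an IP address to the most recent login 'last' knows about.
    # 'last -i' prints the remote host column as a numeric address.
    def user_for_ip(ip):
        out = subprocess.check_output(["last", "-i"], text=True)
        for line in out.splitlines():
            fields = line.split()
            # typical line: user  tty  ip  weekday month ...
            if len(fields) >= 3 and fields[2] == ip:
                return fields[0]
        return None

    print(user_for_ip("10.0.2.17"))   # e.g. 'pupil01', or None

It would be racy, though (stale entries, shared machines), which real 
squid authentication against LDAP would avoid.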

> -- Dirk

Regards
Ralf.


