
Re: (crazy?) idea for blocking p2p

Felipe Figueiredo <philsf@ufrj.br> writes:

> since I am fairly new to iptables, this may be old news to many of the
> gurus here. Consider it some food for thought.
> Since one can create rules that limit the number of packets (say) per
> second, one could use this feature to limit [in|out]bound traffic on
> EVERY port (except specific ones).
> The idea would be to block the downloading of big files/too much
> information from non-permitted services.
> Maybe something like: permit any quantity for HTTP, FTP, SMTP/POP (for
> email attachments), SSH (for sftp), (others?), and limit all other
> traffic to a reasonable quantity per [sec|min|...].

That wouldn't be terribly difficult to implement; iptables supports rate
limits quite well.
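As a minimal sketch of that idea (the port list, chain, and thresholds here are illustrative, not a recommendation), you could accept the permitted services outright and put the `limit` match on everything else:

```shell
# Allow the well-known service ports without any rate limit
iptables -A FORWARD -p tcp -m multiport --dports 21,22,25,80,110 -j ACCEPT

# Everything else: at most 10 packets/sec, with a burst of 20,
# and drop whatever exceeds that rate
iptables -A FORWARD -m limit --limit 10/second --limit-burst 20 -j ACCEPT
iptables -A FORWARD -j DROP
```

Note that `limit` counts packets, not bytes, so a bulk transfer over a permitted port is unaffected; this only throttles traffic on the non-permitted ports.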

Be aware, though, that most peer-to-peer applications (like Instant
Messaging clients) take the circumvention approach to the whole system:
they will work very, very hard to get around any firewall you put in
place, and may well conclude that tunnelling over standard ports like
HTTP is a performance improvement...

> However, I have heard of people having crashes when limiting the
> number of SSH connections, on some kernel versions. Apparently some
> sort of memory leak. It may very well be fixed by now, but I never
> really looked into it, since I resorted to userspace scripts for the
> job (in my case, I use fail2ban to limit SSH connections).

The standard rate limiting is fine.  Perhaps the 'ipt_recent' module,
which is often recommended for limiting SSH brute-force attacks, is
what caused the problems.
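For reference, the 'ipt_recent' rules people typically use against SSH brute forcing look something like this (the 60-second window and hit count are illustrative):

```shell
# Record the source address of every new SSH connection in the "SSH" list
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --set --name SSH

# Drop sources that have opened more than 4 new connections in 60 seconds
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP
```

Unlike the plain `limit` match, `recent` keeps per-source state in kernel memory, which is presumably where a leak in an older kernel would bite.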


Digital Infrastructure Solutions -- making IT simple, stable and secure
Phone: 0401 155 707        email: contact@digital-infrastructure.com.au
