
Re: loading huge number of rules in iptables (blocklist)



Andrew Sackville-West wrote:
> On Wed, Mar 21, 2007 at 02:30:06PM -0400, H.S. wrote:

> 
> okay, I follow... and you want otherwise unfettered p2p operating, but
> security from these particular sites. ugh. nasty problem.

Nasty problem, yes. But I can live without it since I don't do much p2p.
The problem did, however, give me a chance to try reading IP ranges from
a file and blocking them. I have been playing around with that idea for
blocking some IP addresses that seem to bombard you with ssh attempts,
though it is probably overkill for that and is sure to give many false
positives (using a different port than 22 seems to be the easiest
solution).
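
Something along these lines is what I have been playing with (just a
sketch, untested; ssh_block.txt is a made-up file name, and it assumes
the iprange match module is available):

  #!/bin/bash
  # drop ssh connections from every start-end range listed in the file,
  # one range per line, e.g. 10.0.0.1-10.0.0.255
  while read range; do
      [ -z "$range" ] && continue
      iptables -A INPUT -p tcp --dport 22 -m iprange --src-range "$range" -j DROP
  done < ssh_block.txt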


>> The result was the experiment to use the massive blocklist and to
>> automate the process in iptables firewall on a router -- needs iptables,
> bash, curl and maybe python or perl. I am giving it a shot. As I said
>> before, this is the first attempt.
> 
> so, is there some other way to use this info besides a massive
> iptables rule set? I'm in territory I don't understand, so feel free
> to ignore me ;). What about a proxy? instead of a ruleset in the
> firewall, run the whole thing through a proxy that is set up to read
> the list of denies. then a simple update of the list can result in new
> blocking without reloading a whole set of rules. I don't have a clue
> as to the mechanics of this.


I am already playing with a similar idea, but it needs to bootstrap the
iptables rules once. After that, I can work out a script that downloads
the IP-range blocklist, diffs it with the previous one, and deletes or
inserts only the rules that changed. However, even loading the rules once
is taking impractically long (I think it will take around 2~3 days to
load all of them on my old machine).
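
The update step itself should not be hard. Roughly (untested; the URL,
the file names and the dedicated chain name "blocklist" are placeholders,
and comm needs both lists sorted, hence the sort -u):

  #!/bin/bash
  # fetch the new list and reduce it to sorted, unique start-end ranges
  curl -s http://example.org/level1.gz | zcat | cut -d: -f2 | sort -u > ranges.new

  # ranges that dropped out of the list: delete their rules
  comm -23 ranges.old ranges.new | while read range; do
      iptables -D blocklist -m iprange --src-range "$range" -j DROP
  done

  # ranges that are new in this list: append rules for them
  comm -13 ranges.old ranges.new | while read range; do
      iptables -A blocklist -m iprange --src-range "$range" -j DROP
  done

  mv ranges.new ranges.old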


> Another possibility, from my reading it appears this is supposed to
> work with a program called moblock. moblock seems to do some parsing
> to eliminate duplicates and so forth. I've done a little grepping
> through the list and can tell that it could be done more
> efficiently. First, there are duplicates as shown here:
> 
> andrew@basement:~$ zcat level1.gz  | wc -l
> 151663
> andrew@basement:~$ zcat level1.gz | cut -d: -f2 | uniq | wc -l
> 150695
> 
> that's 1000 rules gone right there. (the cut eliminates the name,
> giving just the ip range.)

Interesting.
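
One small caveat: uniq only collapses adjacent duplicates, so sorting
first might catch a few more, e.g.

  zcat level1.gz | cut -d: -f2 | sort -u | wc -l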

> 
> Also, a little scripting could probably concatenate a lot of the
> ranges. just a cursory look through shows that there are contiguous
> ranges specified on different lines. I don't have time today to hack
> at it, but I think you might be able to cut as much as 25% out of the
> list that way. 

A 25% reduction won't cut it, I am afraid. If we were talking about an
80% reduction in the eventual number of rules, it would be workable.
Otherwise a different method is needed to handle this many rules.
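
Merging the contiguous ranges Andrew mentions is at least easy to
script, even if it falls short of 80%. A rough sketch (untested on the
full list; it assumes the level1 format name:first_ip-last_ip with no
colons in the name, as in Andrew's cut above):

  #!/bin/bash
  # convert each range to integer endpoints, sort by start address, then
  # merge ranges that overlap or touch and print the result back out

  ip2int() {
      local IFS=.
      set -- $1
      echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
  }

  int2ip() {
      local n=$1
      echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
  }

  zcat level1.gz | cut -d: -f2 | while IFS=- read first last; do
      [ -z "$first" ] && continue
      echo "$(ip2int "$first") $(ip2int "$last")"
  done | sort -k1,1n -k2,2n | {
      read cur_s cur_e
      while read s e; do
          if (( s <= cur_e + 1 )); then      # overlaps or touches: extend
              (( e > cur_e )) && cur_e=$e
          else                               # gap: print and start a new range
              echo "$(int2ip "$cur_s")-$(int2ip "$cur_e")"
              cur_s=$s; cur_e=$e
          fi
      done
      echo "$(int2ip "$cur_s")-$(int2ip "$cur_e")"
  }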

I am not going to follow up on my current method. A better one is
definitely needed.

In any case, an interesting experience.
->HS


