Re: Optimizing Kernel for huge iptables ruleset

On 19 Oct 2004, Martin G. H. Minkler wrote:
> AMD 1600 XP w/ 640 MB RAM @ 100 MHz FSB, one 3COM 905B eth1 connected to
> LAN, one 3COM 905C connected to ADSL Modem (1024/128 line).
> Two iptables rulesets:
> The first 'normal' ruleset is pretty restrictive against connections from
> the outside, more or less open towards connections opened from the LAN.
> The second ruleset inserted after the first is a huge IP blacklist 
> (1.4MB iptables script!) that takes nearly half an hour to be inserted 
> into the running ruleset.

Yes, it would.  This is because your number of rules is *insanely*
larger than the kernel was designed to cope with.

There was some (non-merged) work a year or so back aimed at improving
the performance of iptables with a large number of decisions, but
nothing really substantial.

Aside from the cost of searching a rule set that long, even if it could
be searched with a binary search, this will pin a *huge* quantity of
kernel memory on the system, none of which can be swapped out.

Couldn't you aim at reducing or removing that blacklist entirely, either
by aggregating the blocked hosts into larger network blocks, or by
moving the rules out of the kernel altogether?
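As a rough illustration of the aggregation idea, and assuming the
blacklist is just a flat list of addresses, something like Python's
standard ipaddress module can collapse adjacent hosts into CIDR blocks
so that one rule covers many hosts (the sample addresses below are
made up for illustration):

```python
# Sketch: collapse a flat list of blacklisted IPs into the minimal
# set of CIDR blocks, so a single iptables rule covers many hosts.
import ipaddress

blacklist = [
    "10.0.0.0", "10.0.0.1", "10.0.0.2", "10.0.0.3",  # collapses to a /30
    "192.0.2.7",                                      # lone host
]

nets = [ipaddress.ip_network(ip) for ip in blacklist]
collapsed = list(ipaddress.collapse_addresses(nets))

# Emit one DROP rule per collapsed block instead of one per host.
for net in collapsed:
    print(f"iptables -A INPUT -s {net} -j DROP")
```

Five host rules become two network rules here; with a real blacklist
of contiguous netblocks the reduction can be far larger.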
Perhaps the tcpd system, which runs in user space and whose lookup data
can be swapped out, would work better for some or all of the rules?
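If the blacklist is protecting TCP services that already run under
tcpd, a minimal sketch of that approach is to emit /etc/hosts.deny
entries instead of iptables rules; tcpd consults that file from user
space, so the data lives in pageable memory. The converter below is
hypothetical, assuming one IP or CIDR block per input line:

```python
# Sketch: turn a blacklist (one IP or CIDR per line) into tcpd-style
# /etc/hosts.deny entries. Names and sample data are assumptions.
import ipaddress

def hosts_deny_lines(lines):
    """Yield one 'ALL: ...' deny entry per blacklisted host or network."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        net = ipaddress.ip_network(line, strict=False)
        if net.num_addresses == 1:
            yield f"ALL: {net.network_address}"
        else:
            # hosts_access(5) accepts the net/mask form for ranges.
            yield f"ALL: {net.network_address}/{net.netmask}"

sample = ["198.51.100.25", "203.0.113.0/24"]
for entry in hosts_deny_lines(sample):
    print(entry)
```

The obvious caveat: tcpd only covers services started through it, so
this cannot replace kernel filtering for raw packets, only for the
TCP services it wraps.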

Anyway, if you actually tell us *why* you want 1.4MB of script building
the rules, maybe we can help you solve the problem of wanting that many
rules in the first place.
Species membership in Homo-sapiens is not morally relevant. If we compare a
dog or a pig to a severely defective infant, we often find the non human to
have superior capacities.
        -- Peter Singer
