
Re: Mixing firewall tools



On 26/02/17 03:19, Dan Ritter wrote:
> On Sat, Feb 25, 2017 at 07:54:32PM +1300, Richard Hector wrote:
>> I have a machine with a hand-rolled firewall script, which just runs
>> iptables commands - all well and good.
>>
>> The trickiest bits are for my LXC containers; I need to forward ports
>> etc - but that's ok.
>>
>> The complications start when I add fail2ban - now I have an extra bit in
>> my init script that reloads fail2ban after reloading my script, because
>> my script does a flush of all existing rules. This is now getting ugly,
>> but it still works.
>>
>> Does anyone have better ideas for that stage? Do any of the many
>> firewall tools cope with this adequately?
> 
> 
> Take a step back and describe your topology, please.

A remotely hosted VPS (yes, many layers ...) with a single IP address
allocated. Getting more addresses from this provider is tedious at best.

At the moment the host runs a few things itself, but most public
services are in LXC containers - everything public-facing is supposed
to be in a container. The host does, however, run nginx, which proxies
to the internal web services and terminates all the TLS. There's a mail
container running postfix and dovecot; that traffic all gets forwarded
by iptables.
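
For illustration, the forwarding is the usual DNAT-plus-FORWARD
pattern - something like this (the addresses here are examples, not my
real ones):

  # Example only - the public address and container address are placeholders
  PUB_IP=192.0.2.10      # the VPS's single public address
  MAIL_IP=10.0.3.25      # mail container on the LXC bridge

  # rewrite the destination of incoming SMTP to the mail container ...
  iptables -t nat -A PREROUTING -d "$PUB_IP" -p tcp --dport 25 \
      -j DNAT --to-destination "$MAIL_IP":25
  # ... and allow the rewritten traffic through the FORWARD chain
  iptables -A FORWARD -d "$MAIL_IP" -p tcp --dport 25 -j ACCEPT

  # similar rules for submission (587) and IMAPS (993), plus the usual
  # MASQUERADE/SNAT rule so the containers can reach the outside world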

There's a PostgreSQL server and an internal-only BIND on the host, both
listening on the host's internal IP.

SSH to the host is on a non-standard port; the containers are reachable
over SSH via forwarded non-standard ports. There is (or will be) a git
container which needs the standard port 22.
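
For the git container the idea would be to keep the host's sshd out of
the way and hand port 22 to the container - something like this (the
port number and addresses are examples, not the real ones):

  # /etc/ssh/sshd_config on the host - keep the host's sshd off port 22
  # (2222 is just an example, not the actual port in use)
  Port 2222

  # then port 22 on the public address can go to the git container
  iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 22 \
      -j DNAT --to-destination 10.0.3.30:22
  iptables -A FORWARD -d 10.0.3.30 -p tcp --dport 22 -j ACCEPT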

> Remember that fail2ban needs to run on the "machine" (host or
> container or VM or whatever) where both the daemon logs are
> stored and iptables decisions can be made. In order to cleanly
> accomplish that, you should have a fail2ban instance and an
> iptables instance inside each machine, and leave the host
> firewall to take care of the host and generically handle any
> needed NAT.

That would mean fail2ban needs to run on the host (for ssh and nginx)
_and_ in each container - and the host instance would still need to
coexist with my firewall script.
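
One way to make the coexistence less fragile might be for my script to
manage only its own chain rather than flushing everything, so
fail2ban's chains (f2b-* or fail2ban-*, depending on version) and its
jump rules survive a reload - a rough sketch, chain name made up, not
something I've tried yet:

  # create the script's own chain if needed, otherwise just empty it;
  # INPUT itself is never flushed, so fail2ban's chains and the jumps
  # it inserts into INPUT are left alone
  iptables -N my-input 2>/dev/null || iptables -F my-input

  # ... repopulate my-input with the usual rules, e.g.:
  iptables -A my-input -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  iptables -A my-input -p tcp --dport 2222 -j ACCEPT    # example ssh port

  # make sure exactly one jump from INPUT into the chain exists
  iptables -C INPUT -j my-input 2>/dev/null || iptables -A INPUT -j my-input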

However, AFAIK it doesn't make sense to run iptables inside a
container; it's all one kernel, and it doesn't currently work here
anyway. That means any fail2ban instance running in a container would
have to somehow control the host's iptables from inside, which is
probably not desirable. It's probably better to have a single fail2ban
on the host monitor the container filesystems (which are visible from
the host) instead.
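
Something like this in the host's jail.local, I imagine - the logpath
is a guess at where an LXC rootfs lives, and since the DNAT'ed traffic
traverses FORWARD rather than INPUT I assume the stock ban action needs
pointing at that chain:

  # /etc/fail2ban/jail.local on the host - untested sketch; the logpath
  # depends on where the container rootfs actually lives on this system
  [postfix-mail-container]
  enabled = true
  filter  = postfix
  port    = smtp,submission
  chain   = FORWARD
  logpath = /var/lib/lxc/mail/rootfs/var/log/mail.log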

> If you can, it's cleaner and easier to give an IP address to
> each machine and use routing instead of NAT. Push NAT to the
> perimeter of your network.

The only available public IP address is on the host. Everything else is
RFC1918.

>> Now the biggie: I want to add Docker. Docker wants to do its own thing
>> with iptables. Do I need to resort to just telling Docker to keep its
>> hands off, and do everything myself?
> 
> Doing Docker and LXC at the same time is oddly duplicative. But
> the same principles apply.

It is. I only started learning Docker after this machine was set up,
and I'm starting to see benefits - partly that much of the work might
already have been done for me, and partly that it would probably use
less disk space overall thanks to image layering.

I would certainly consider migrating the other containers from LXC to
Docker.
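
If it does come to telling Docker to keep its hands off, my
understanding is that the knob is the daemon-level iptables setting
(untested here):

  # /etc/docker/daemon.json - disables Docker's own iptables management,
  # which means all the NAT/forwarding for its containers would then
  # have to come from my own script
  {
      "iptables": false
  }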

Mind you, I'm also starting to think the poor thing's getting a bit
overloaded ...

Thanks,
Richard

