
Re: How does "making VPN accessible" works?



On 05/09/2018 05:37 AM, Yuri Kanivetsky wrote:
> To investigate I added the following rules to iptables:
> 
> iptables -A INPUT -p tcp --dport 443 -j LOG
> iptables -A OUTPUT -p tcp --sport 443 -j LOG
> 

> While it was running, I ran an HTTPS request to the server from my
> local computer. It hung; I waited for a while, then interrupted it. It
> didn't reach nginx running on the server, and there were no entries
> from iptables in the systemd journal.

> Then, I found another solution
> (https://unix.stackexchange.com/questions/182220/route-everything-through-vpn-except-ssh-on-port-22/242808#242808).
> I undid the four commands above and ran:
> 
> ip rule add table 128 from eth0.ip
> ip route add table 128 default via eth0.gw
> 
> This time it worked. And I saw log entries from iptables in the systemd journal.
> 
> My understanding is that before tinkering with iptables and the routing
> table, packets from my computer arrived at the server, but the ones
> coming back were routed via the VPN server's gateway (tun0.gw). There they
> were probably NAT'ed (given a different source IP), and as such were not
> recognized as a response by my local computer. But why then were there
> no entries from iptables in the systemd journal?
> 
> The last solution supposedly made packets return via the hoster's
> default gateway (eth0.gw). But when I do "traceroute 8.8.8.8", I see them
> going via tun0.gw. And "curl -sS ipinfo.io/ip" returns my VPN server's
> public IP. So, some packets are going via eth0.gw, some via tun0.gw.

From what I can understand, the last set of commands made responses to
packets entering the server through its public interface always leave
through that same interface. Since your regular traffic (not originally
intended for the server, only routed through it) is not received
through the public interface but through tun0, it is not affected by
that rule and does whatever it was doing before you ran the above
commands.
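
If you want to double-check what those two commands produced, the
policy rules and the extra table can be inspected directly (eth0.ip
and eth0.gw being the placeholders from your mail):

# The "from eth0.ip" rule should show up above the lookup of the
# main table:
ip rule show
# Table 128 should contain just the default route via eth0.gw:
ip route show table 128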

> How does it decide? This is the routing table before connecting to the
> VPN server:
> 
> default via eth0.gw dev eth0 onlink
> eth0.subnet/25 dev eth0  proto kernel  scope link  src eth0.ip
> 
> This is what the VPN server added:
> 
> vpn.server.ip via def.gw dev eth0
> tun0.subnet/24 dev tun0  proto kernel  scope link  src tun0.ip
> 0.0.0.0/1 via tun0.gw dev tun0
> 128.0.0.0/1 via tun0.gw dev tun0

It's quite weird that return packets were being routed via the VPN in
the first place. I don't think I've had to do any special setup on my
VPN server to achieve what you've done here. What you describe should be
the default behavior, not something that requires extra configuration.

In my setup, all connections except those to the VPN server's public
IP are routed through the VPN. If I connect to the server's public IP
from my local machine, the packets just exit and reach the server like
a normal connection, without going through the VPN, and the server,
seeing that the connection came in through its public interface, has
no reason to route the response out through another interface.
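
An easy way to confirm that on the server is to watch both interfaces
while you repeat the test from the client (tcpdump assumed to be
installed; port 443 as in your test):

# The client's packets should arrive on the public interface:
tcpdump -ni eth0 'tcp port 443'
# If the replies show up here instead of on eth0, they are being
# routed into the tunnel:
tcpdump -ni tun0 'tcp port 443'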

From the routing table shown above, you have the same setup on the
client side: connections to vpn.server.ip are sent through eth0, while
everything else (0.0.0.0/1 and 128.0.0.0/1) is sent through the VPN.

If an HTTPS request from the client never reached the server, there
might be some other issue elsewhere in your setup. It might be useful
to see the iptables-save output from both sides of the connection,
client and server.
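
Something like this on both machines (plain iptables assumed; adjust
if you use a frontend):

# Full ruleset, including the nat and mangle tables:
iptables-save
# Per-rule packet counters, to see whether your LOG rules were ever hit:
iptables -L -v -n
iptables -t nat -L -v -n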

My setup, for reference:

client:
No particular setup on the firewall, just a regular filtering firewall.
routes:
# Route the rest of the traffic through the VPN tunnel
0.0.0.0/1 via 10.8.0.1 dev tun0
128.0.0.0/1 via 10.8.0.1 dev tun0
# Route VPN traffic through the VPN tunnel
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.2
# Bypass vpn for direct connections to server (avoid routing loops)
{server ip} via 192.168.0.1 dev wlx88366cf24c67
# Defaults used before vpn/local network.
default via 192.168.0.1 dev eth0 onlink
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.2

server:
firewall has these extra lines:
# NAT so client packets routed through the server seem to be coming from
# the server ip.
-A POSTROUTING -s 10.8.0.0/24 -o enp0s20 -j SNAT --to-source {server ip}
# allow incoming vpn traffic
-A udp -p udp -m udp --dport 1194 -j ACCEPT
# Allow forwarding tun0 traffic through the public interface
-A FORWARD -i tun0 -o enp0s20 -m conntrack --ctstate NEW -j ACCEPT

routes:
default via {server ip} dev enp0s20
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.1
{server net}/24 dev enp0s20 proto kernel scope link src {server ip}

Plus the sysctl net.ipv4.ip_forward=1.
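
In case it is not already persistent on your side, the usual way to
set it (the file name is just my choice):

# Enable forwarding immediately:
sysctl -w net.ipv4.ip_forward=1
# And keep it across reboots:
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-forward.conf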

This seems to be the setup that makes the most sense (to me), and it
matches what seems to be the default behavior of OpenVPN. Try to see
whether some part of your setup, before you ran those extra commands,
is significantly different from mine.

> These lines seem like they should route most if not all packets
> originating from the server via eth0.gw:
> 
> ip rule add table 128 from eth0.ip
> ip route add table 128 default via eth0.gw

As explained above, they do that only for packets sourced from
eth0.ip, which is not the case for packets sent from the client
through the VPN tunnel: those carry the client's tunnel address as
their source when the routing decision is made.
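
You can ask the kernel directly which path each kind of packet would
take (203.0.113.10 stands in for your local computer and 10.8.0.2 for
a VPN client's tunnel address; both are made-up examples):

# A reply sourced from the public address matches your "from eth0.ip"
# rule and uses table 128, i.e. leaves via eth0.gw:
ip route get 203.0.113.10 from eth0.ip
# A packet forwarded for a VPN client falls through to the main table
# and leaves via tun0.gw; iif is needed because the source address is
# not local to the server:
ip route get 8.8.8.8 from 10.8.0.2 iif tun0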

> On second thought, it may depend on the source IP chosen. But how does it choose?

I don't really understand what you mean here... sorry.

> To sum it up, I'm not sure if I understand what was off. And I
> certainly don't understand how it works now. Any help is welcome.
> Including suggestions where to ask for help, and what to read. Thanks
> in advance.
> 
> Regards,
> Yuri
> 

I'm glad you managed to fix your problem; hopefully I was of some help
in figuring out what may have been wrong in the first place.

