
Re: How does "making VPN accessible" work?



Hello.

Sorry for taking so long to reply. I've had a lot going on lately. And
thanks for your reply; I thought I was on my own with this.

I decided to go over it once again. Last time it was a different
hosting provider (a VMware virtual machine). This time it's a virtual
machine on a VPS host with Xen virtualization. And I've changed VPN
providers: from Golden Frog (https://www.goldenfrog.com/vyprvpn) to
NordVPN (https://nordvpn.com/).

So, when I connect my server to the VPN server, I'm unable to ssh into the former.

my.ip - IP of my local machine
srv.eth0.ip - public IP of my server
srv.eth0.gw - default gateway of my server
srv.eth0.net - my server's network
srv.tun0.ip - public IP of my server supplied by VPN server
srv.tun0.gw - default gateway of my server supplied by VPN server
srv.tun0.net - VPN's network
vpn.ip - VPN server's public IP

checking if ping is logged (case 1)
-----------------------------------

On the server, I add the following rules:

$ iptables -t nat -A PREROUTING -p icmp -j LOG --log-prefix="nat: PREROUTING: "
$ iptables -t nat -A INPUT -p icmp -j LOG --log-prefix="nat: INPUT: "
$ iptables -t nat -A OUTPUT -p icmp -j LOG --log-prefix="nat: OUTPUT: "
$ iptables -t nat -A POSTROUTING -p icmp -j LOG --log-prefix="nat: POSTROUTING: "

$ iptables -t mangle -A PREROUTING -p icmp -j LOG --log-prefix="mangle: PREROUTING: "
$ iptables -t mangle -A INPUT -p icmp -j LOG --log-prefix="mangle: INPUT: "
$ iptables -t mangle -A FORWARD -p icmp -j LOG --log-prefix="mangle: FORWARD: "
$ iptables -t mangle -A OUTPUT -p icmp -j LOG --log-prefix="mangle: OUTPUT: "
$ iptables -t mangle -A POSTROUTING -p icmp -j LOG --log-prefix="mangle: POSTROUTING: "

$ iptables -t security -A INPUT -p icmp -j LOG --log-prefix="security: INPUT: "
$ iptables -t security -A FORWARD -p icmp -j LOG --log-prefix="security: FORWARD: "
$ iptables -t security -A OUTPUT -p icmp -j LOG --log-prefix="security: OUTPUT: "

$ iptables -t raw -A PREROUTING -p icmp -j LOG --log-prefix="raw: PREROUTING: "
$ iptables -t raw -A OUTPUT -p icmp -j LOG --log-prefix="raw: OUTPUT: "

$ iptables -t filter -A INPUT -p icmp -j LOG --log-prefix="filter: INPUT: "
$ iptables -t filter -A FORWARD -p icmp -j LOG --log-prefix="filter: FORWARD: "
$ iptables -t filter -A OUTPUT -p icmp -j LOG --log-prefix="filter: OUTPUT: "
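
For what it's worth, the same seventeen rules can be added in one loop
(a sketch, assuming bash):

$ for tc in raw:PREROUTING raw:OUTPUT \
      mangle:PREROUTING mangle:INPUT mangle:FORWARD mangle:OUTPUT mangle:POSTROUTING \
      nat:PREROUTING nat:INPUT nat:OUTPUT nat:POSTROUTING \
      security:INPUT security:FORWARD security:OUTPUT \
      filter:INPUT filter:FORWARD filter:OUTPUT; do
      # split each table:chain pair on the colon and log ICMP there
      iptables -t "${tc%%:*}" -A "${tc##*:}" -p icmp -j LOG --log-prefix="${tc%%:*}: ${tc##*:}: "
  done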

Then, to drop a start marker into the journal:

$ echo {{{ ---------- | systemd-cat

Locally:

$ tcpdump -i wlo1 icmp and host srv.eth0.ip
$ ping -c 1 srv.eth0.ip

On the server, the closing marker:

$ echo }}} ---------- | systemd-cat

And I see the echo request and reply in both the tcpdump and the `journalctl -e` output:

raw: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
mangle: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
nat: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip

mangle: INPUT: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
filter: INPUT: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
security: INPUT: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
nat: INPUT: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip

raw: OUTPUT: IN= OUT=eth0 SRC=srv.eth0.ip DST=my.ip
mangle: OUTPUT: IN= OUT=eth0 SRC=srv.eth0.ip DST=my.ip
filter: OUTPUT: IN= OUT=eth0 SRC=srv.eth0.ip DST=my.ip
security: OUTPUT: IN= OUT=eth0 SRC=srv.eth0.ip DST=my.ip

mangle: POSTROUTING: IN= OUT=eth0 SRC=srv.eth0.ip DST=my.ip

checking if packets reach my server when openvpn is running (case 2)
--------------------------------------------------------------------

On the server:

$ echo {{{ ---------- | systemd-cat
$ sleep 60; pkill openvpn    # schedule openvpn to be killed in 60 seconds
# switch to the other tmux window
$ openvpn provider.ovpn

Locally:

$ ping -c 1 srv.eth0.ip

tcpdump shows my request, but no response. After the server becomes
available again, `journalctl -e` shows:

raw: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
mangle: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
nat: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip

So, it's as if the packet disappears while the routing decision is being made:
https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg
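
One could also ask the kernel what decision it would make for such an
incoming packet; `ip route get` with `iif` simulates the routing
lookup for a packet arriving on that interface (a quick check, just a
sketch):

$ ip route get srv.eth0.ip from my.ip iif eth0

If that returned an error instead of a local route, it would at least
confirm the packet is rejected at the routing step.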

marking packets (solution 1)
----------------------------

$ iptables -t mangle -A PREROUTING -i eth0 -m conntrack --ctstate NEW -j CONNMARK --set-mark 1
$ iptables -t mangle -A OUTPUT -m connmark --mark 1 -j MARK --set-mark 2
$ ip rule add fwmark 2 table 3
$ ip route add table 3 default via srv.eth0.gw

My understanding is that the first command marks new connections
coming in on eth0 with mark 1. The second marks outgoing packets
belonging to those connections with mark 2. The third makes the kernel
look in table 3 when routing those packets. And the fourth adds a
default route for them.
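
One could check that the marks actually get applied (a sketch; it
assumes the conntrack tool from conntrack-tools is installed):

$ conntrack -L --mark 1    # list tracked connections carrying connmark 1
$ ip rule show             # confirm the fwmark rule is in place
$ ip route show table 3    # confirm the default route landed in table 3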

But again, tcpdump shows no reply packet. And after the server becomes
available, I see:

raw: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
mangle: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip
nat: PREROUTING: IN=eth0 OUT= SRC=my.ip DST=srv.eth0.ip

So, nothing has changed.

I thought it might have something to do with this:

> FIXME!! Don't forget to point out that fwmark with ipchains/iptables is a decimal number, but that iproute2 uses hexadecimal number. Thanks to Jose Luis Domingo Lopez for his post to the LARTC list!

http://linux-ip.net/html/adv-rpdb.html

But using numbers less than 10 (where decimal and hexadecimal coincide) didn't help.
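
The notation would only start to matter at 10 and above. For
illustration, with larger marks the pair of commands would look like
this (and modern iproute2 seems to accept both forms anyway):

$ iptables -t mangle -A OUTPUT -m connmark --mark 1 -j MARK --set-mark 16
$ ip rule add fwmark 0x10 table 3    # 16 decimal == 0x10 hexadecimal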

using from (solution 2)
-----------------------

$ ip rule add from srv.eth0.ip table 1
$ ip route add table 1 default via srv.eth0.gw

It works. tcpdump shows the reply packet. I can see in `journalctl -e`
all the records from the case when openvpn is not running. And my
ssh session stays responsive.
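
One could also see which route the replies take now; `ip route get`
with a `from` address performs an output-route lookup (a sketch):

$ ip route get my.ip from srv.eth0.ip

This should now resolve via srv.eth0.gw on eth0 rather than via tun0.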

As far as I know, routing tables are stateless:

> One other item to remember is that routing decisions are stateless.

http://linux-ip.net/html/adv-multi-internet.html#adv-multi-internet-inbound

That is, you can't route an outgoing packet based on the packet you're
replying to (an incoming one).

My only conjecture is that for reply packets the source IP is already
known (set before the routing decision), so the first command makes
the kernel look in table 1 for (outgoing) packets that are replies to
(incoming) packets received on srv.eth0.ip. And the second one adds a
default gateway for such (outgoing) packets.

And the fact that such a route exists keeps incoming packets from
disappearing at the routing decision.

But I've just checked it with http, and the results are the same. So
my conjecture is wrong.

what's still unclear
--------------------

What happens to the packets in case 2 (no solution applied, openvpn running)?

Why doesn't solution 1 work?

I can see that openvpn doesn't add any iptables rules. It only adds
the following to the main routing table:

0.0.0.0/1 via srv.tun0.gw dev tun0
srv.tun0.net dev tun0  proto kernel  scope link  src srv.tun0.ip
vpn.ip via srv.eth0.gw dev eth0
128.0.0.0/1 via srv.tun0.gw dev tun0

The main table is consulted after the tables added in solutions 1 and
2. That is, it has lower priority.
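
For reference, this is roughly what the rule list looks like with
solution 2 in place (illustrative output; the exact preference numbers
depend on how the rules were added):

$ ip rule show
0:      from all lookup local
32765:  from srv.eth0.ip lookup 1
32766:  from all lookup main
32767:  from all lookup default

Lower preference values are consulted first, so table 1 wins over main.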

I've found a couple of guides:

http://linux-ip.net/html/index.html
http://lartc.org/howto/index.html

But they didn't help to reveal all the mysteries.

Now as for your comments:

On Wed, May 9, 2018 at 1:01 AM, likcoras <likcoras@riseup.net> wrote:
> On 05/09/2018 05:37 AM, Yuri Kanivetsky wrote:
>> To investigate I added the following rules to iptables:
>>
>> iptables -A INPUT -p tcp --dport 443 -j LOG
>> iptables -A OUTPUT -p tcp --sport 443 -j LOG
>>
>
>> While it was running, I ran a https request to the server from my
>> local computer. It hang, I waited for a while, then interrupted it. It
>> didn't reach nginx running on the server, and there where no entries
>> from iptables in systemd journal.
>
>> Then, I found another solution
>> (https://unix.stackexchange.com/questions/182220/route-everything-through-vpn-except-ssh-on-port-22/242808#242808).
>> I undid the four commands above and ran:
>>
>> ip rule add table 128 from eth0.ip
>> ip route add table 128 default via eth0.gw
>>
>> This time it worked. And I saw log entries from iptables in systemd journal.
>>
>> My understanding is that before tinkering with iptables and routing
>> table, packets from my computer arrived to the server, but the ones
>> coming back were routed via VPN server's gateway (tun0.gw). There they
>> were probably NAT'ed (different source IP), and as such were not
>> recognized as a response by my local computer. But why then were there
>> no entries from iptables in systemd journal?
>>
>> The last solution supposedly made packets return via hoster's default
>> gateway (eth0.gw). But when I do "traceroute 8.8.8.8", I see them
>> going via tun0.gw. And "curl -sS ipinfo.io/ip" returns my VPN server's
>> public IP. So, some packets are going via eth0.gw, some via tun0.gw.
>
> From what I can understand, the last set of commands made it so responses
> to packets entering the server from its public interface will always leave
> through the public interface.

I believe routing tables are stateless; you can't make decisions
based on previous packets. "from" refers to the outgoing packet's
source address. So if we assume that the source address is set before
the routing decision is made, then you're right. But that doesn't
explain everything.

> Since your regular traffic (not originally
> intended to be sent to the server, only routed through) are not received
> through the public interface, but through tun0, they are not affected by
> the above rule and do whatever they were doing before you ran the above
> commands.
>
>> How does it decide? This is routing table before connecting to VPN
>> server:
>>
>> default via eth0.gw dev eth0 onlink
>> eth0.subnet/25 dev eth0  proto kernel  scope link  src eth0.ip
>>
>> This is what VPN server added:
>>
>> vpn.server.ip via def.gw dev eth0
>> tun0.subnet/24 dev tun0  proto kernel  scope link  src tun0.ip
>> 0.0.0.0/1 via tun0.gw dev tun0
>> 128.0.0.0/1 via tun0.gw dev tun0
>
> It's quite weird that return packets were being routed via the vpn in
> the first place. I don't think I've had to do any special setup on my
> VPN server to achieve what you've done here. What you describe should be
> the default behavior, not something that requires extra configuration.
>
> On my setup, all connections except those that go to the vpn server's
> public IP are routed through the vpn. If I were to connect to the server
> ip from my local machine, it will just exit and reach the server like a
> normal connection, not going through any VPN, and the server, seeing
> that the connection came in through its public interface, has no reason
> to route the response through another interface.

I'm not sure the packets were ever routed via the VPN. That was a
conjecture, and most likely a wrong one at that. It seems like
incoming packets disappear when the routing decision is made. And as
you have probably noticed, I'm using a third-party VPN service, most
likely targeted at end users.

>
> From the routing table shown above, you have the same setup on the
> client side, connections to vpn.server.ip are sent through eth0, others
> (0.0.0.0/1 and 128.0.0.0/1) are sent through the vpn.
>
> If a https request from the client never reached the server, there might
> be some other issue somewhere else in your setup. Might be useful to see
> iptables-save output from both sides of the connection, client and server.
>
> My setup, for reference:
>
> client:
> No particular setup on the firewall, just a regular filtering firewall.
> routes:
> # Route the rest of the traffic through the VPN tunnel
> 0.0.0.0/1 via 10.8.0.1 dev tun0
> 128.0.0.0/1 via 10.8.0.1 dev tun0
> # Route VPN traffic through the VPN tunnel
> 10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.2
> # Bypass vpn for direct connections to server (avoid routing loops)
> {server ip} via 192.168.0.1 dev wlx88366cf24c67
> # Defaults used before vpn/local network.
> default via 192.168.0.1 dev eth0 onlink
> 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.2
> server:
> firewall has these extra lines:
> # NAT so client packets routed through the server seem to be coming from
> # the server ip.
> -A POSTROUTING -s 10.8.0.0/24 -o enp0s20 -j SNAT --to-source {server ip}
> # allow incoming vpn traffic
> -A udp -p udp -m udp --dport 1194 -j ACCEPT
> # Allow forwarding tun0 traffic through the public interface
> -A FORWARD -i tun0 -o enp0s20 -m conntrack --ctstate NEW -j ACCEPT
>
> routes:
> default via {server ip} dev enp0s20
> 10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.1
> {server net}/24 dev enp0s20 proto kernel scope link src {server ip}
>
> Plus the sysctl net.ipv4.ip_forward=1.

I can't connect with a client setup like yours: "client" here refers
to my server. And as you probably understand by now, I have no
control over the VPN server.

>
> This seems to be the setup that makes the most sense (to me), and what
> seems to be default behavior of openvpn. Try to see if there's some part
> in your setup before running those extra commands that is significantly
> different from mine.
>
>> These lines seem like they should route most if not all packets
>> originating from the server to go via eth0.gw:
>>
>> ip rule add table 128 from eth0.ip
>> ip route add table 128 default via eth0.gw
>
> As explained above, only if they came in through eth0.ip, which is not
> true for packets that were sent from the client through the VPN tunnel.
>
>> On second thought, it may depend on source IP chosen. But how does it choose?
>
> I don't really understand what you mean here... sorry.

I was thinking about what happens first: does the packet get its
source address (for outgoing packets), or does the route get chosen?
Actually, the "from" clause looks more like it's meant for packets
being forwarded. Here's an example:
http://lartc.org/howto/lartc.rpdb.html#LARTC.RPDB.SIMPLE
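
If I read it right, that example boils down to something like this (a
sketch with hypothetical addresses, not my actual setup):

$ ip rule add from 10.0.0.10 table 4
$ ip route add default via 195.96.98.253 dev ppp2 table 4

That is, packets whose source address is 10.0.0.10, e.g. forwarded
from a particular host, get their own default route.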

>
>> To sum it up, I'm not sure if I understand what was off. And I
>> certainly don't understand how it works now. Any help is welcome.
>> Including suggestions where to ask for help, and what to read. Thanks
>> in advance.
>>
>> Regards,
>> Yuri
>>
>
> I'm glad you managed to fix your problem, hopefully I was of some help
> in figuring out what may have been wrong in the first place.
>

