
[Resolved] Re: Persistent sshfs mount from inside a Buster virtual machine?



Well, I think this is resolved and it turns out to be self-inflicted.

Yesterday I did a clean Stretch installation in a Qemu VM and did the
minimum needed to bring it up so I could do an sshfs mount from it;
that mount too would disconnect after five minutes or so.  On a whim I
copied the VM to my laptop running Bullseye this morning and the sshfs
connection stayed up for several hours.  I then copied the Buster VM
over and it too stayed up for well over an hour.

My attention turned to differences in the SSH configuration of each
host computer.  Several years ago, when it appeared I might need to use
a cellular 4G router for Internet access, I did not want to lose my
ability to SSH into the home LAN remotely.  The 4G router provided a
less than reliable connection, and on the cell network it was placed
behind carrier-grade NAT, which meant the IP address it was assigned on
the cell network was not reachable from the greater Internet.

I set up an AWS host at the time and created an SSH tunnel into it from
here that I could attach to with the laptop from elsewhere.  The flaky
connection caused me to set the sshd_config options ClientAliveInterval
to a value of 300 (five minutes) and ClientAliveCountMax to 1.  The
idea was to have SSH time out quickly when the router's 4G connection
was reset, and I had a script that would attempt a reconnect upon this
closure.  Commenting these options out and letting them fall back to
their defaults of 0 and 3, respectively, has apparently resolved my
issue.
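
For anyone following along, the change on the host amounts to something
like this in /etc/ssh/sshd_config (a sketch from memory, not my exact
file), followed by a reload of sshd to pick it up:

    # Old settings, meant to drop a dead 4G link quickly:
    #ClientAliveInterval 300
    #ClientAliveCountMax 1
    #
    # Now commented out, so sshd falls back to its defaults
    # (ClientAliveInterval 0, ClientAliveCountMax 3); an interval of
    # 0 means no keep-alive probes are sent to the client at all.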

Some months later the WISP built out a new system in this area; their
earlier statement that they would not do so was what had prompted the
4G experiment.  Once they did, I no longer needed the AWS host and SSH
tunnel.  As I had not seen any issue with connections over the LAN or
from the Internet with these options set as noted above, I forgot about
them, until now.  Funny how that works...

My guess is that, with the way Qemu sets up the network bridge, my host
could not deliver the keep-alive message to the guest and simply
dropped the connection as it was configured to do, assuming the guest
had gone away.  In the guest, systemd, seeing the closed connection,
dutifully unmounted the mounts.  No big bad bugs to report after all.
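
For completeness, if changing the server's sshd_config is not an
option, my understanding is that sshfs can keep the session alive and
reconnect from the client side by passing SSH options through to ssh,
something along these lines (user, host, and paths are placeholders,
and I have not needed to test this here):

    sshfs -o reconnect \
          -o ServerAliveInterval=60 \
          -o ServerAliveCountMax=3 \
          user@host:/remote/path /local/mountpoint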

- Nate

-- 

"The optimist proclaims that we live in the best of all
possible worlds.  The pessimist fears this is true."

Web: https://www.n0nb.us
Projects: https://github.com/N0NB
GPG fingerprint: 82D6 4F6B 0E67 CD41 F689 BBA6 FB2C 5130 D55A 8819
