
Efficient remotely backed, locally encrypted block devices?



Hello

There are a number of requirements on my use of remote storage that I
haven't figured out how to combine without conflict yet (if you find
these cryptic, see the explanation of my current solutions below first):

* local encryption, so that if the remote storage server is hacked, my
data is still protected (solved using dm-crypt on a block device that
is somehow backed by the remote storage server; see the sketch after
this list)

* speed: since encryption already happens locally, there's no need to
do additional encryption over the wire (accessing the remote storage
through sshfs fails here because it only offers encrypted tunnels).
Avoiding encryption on the wire matters even more because my server is
computationally weaker than some of my clients.

* stability (sshfs seems to fail here again: I've had at least one
case where I oops'ed my kernel and had to reboot before I could access
FUSE again; I guess that using losetup on files served by sshfs isn't
well supported (yet)--maybe using a file as the storage backend for a
filesystem is similar to swapping, something that NBD took some time
to get right)

* at least *some* protection against malicious destruction of my data
- this is for local networks only, so I'd be satisfied with requiring
authentication (with some challenge/response scheme, not a clear-text
password) on new TCP connections. (NBD through nbd-server fails here:
the only option it offers is restriction by source IP; admittedly I'd
be secure against man-in-the-middle attacks only with statically
configured IP->MAC tables anyway, at which point restriction by IP
address would be enough, *except* that this doesn't protect against
rogue (non-root) users on the *client* machine.)

* relatively easy to use (NBD fails here, because both nbd-server and
nbd-client are complicated to set up and handle)
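
For concreteness, the encryption layer looks roughly the same no
matter how the backing device is provided (device and mapping names
below are just examples; I'm using plain dm-crypt, but LUKS would
layer the same way):

    # /dev/backing is whatever the remote storage shows up as locally:
    # a loop device in case (A) below, /dev/nbd0 in case (B)
    cryptsetup create somecrypt /dev/backing  # plain dm-crypt, asks for a passphrase
    mkfs.ext3 /dev/mapper/somecrypt           # first use only
    mount /dev/mapper/somecrypt /mnt/secure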

I currently have two (non-)solutions to these requirements: in both
I'm using dm-crypt on the client; where they differ is in how the
backing device for dm-crypt is served by the server:

(A) using sshfs to mount part of the server's filesystem, then
"losetup -fs $some_path_on_the_sshfsmount/some.imgfile", then dm-crypt
on the loop device.
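
Spelled out with example paths and names, (A) amounts to:

    sshfs user@server:/some/path /mnt/sshfs
    losetup -fs /mnt/sshfs/some.imgfile     # -f: first free loop device, -s: print it
    cryptsetup create somecrypt /dev/loop0  # assuming loop0 was the free one
    mount /dev/mapper/somecrypt /mnt/secure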

(B) running nbd-server on $some_path_on_the_filesystem/some.imgfile on
the server and nbd-client on the client, then dm-crypt on the NBD
device.
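
Again with example names (and the old-style invocation where
nbd-server takes a port and a file directly on the command line), (B)
amounts to:

    # on the server:
    nbd-server 2000 /some/path/some.imgfile
    # on the client:
    nbd-client server 2000 /dev/nbd0
    cryptsetup create somecrypt /dev/nbd0
    mount /dev/mapper/somecrypt /mnt/secure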

As I've mentioned above, both methods fail my requirements in some ways.

sshfs would need to become a kind of "rshfs" (unencrypted connection),
but still with secure authentication, and the oops issue(s) would need
to be solved.

NBD would need safer authentication and some simplification of the
setup. Note that I'm using many such files, often from disks added to
the server temporarily, so statically configuring just a handful of
files in a global configuration doesn't cut it for me.
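
For reference, this is roughly what the static setup looks like in
/etc/nbd-server/config (section names are made up); editing this and
restarting the server for every temporarily attached disk is exactly
the hassle I want to avoid:

    [generic]
    [tempdisk1]
        exportname = /mnt/tempdisk1/some.imgfile
    [tempdisk2]
        exportname = /mnt/tempdisk2/some.imgfile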

Before I seriously start thinking about writing code anywhere, I'd
like to know what the best approach is. Is there another good
solution? I've never used NFS; would it work better for losetup on
mounted files? Would it be less of a pain to set up? (I like that NBD
is *technically* simple, so it has better theoretical potential to be
stable, secure and efficient.)
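
In case NFS is the way to go, I imagine the setup would be along these
lines (untested, example paths):

    # on the server, in /etc/exports (then run "exportfs -ra"):
    /some/path  192.168.0.0/24(rw,sync,no_subtree_check)

    # on the client:
    mount -t nfs server:/some/path /mnt/nfs
    losetup -fs /mnt/nfs/some.imgfile
    cryptsetup create somecrypt /dev/loop0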

Thanks for any ideas.
Christian.

