
Bug#989462: About bumping Vagrant box disk image to 1TB

Emmanuel Kasper:
I did some testing around
(not merged in master yet) and I am still reluctant to merge the branch.
I am OK to bump the default disk size to something like 40GB but not to 1TB.

The problem with a disk size of 1TB is as follows: when you do a lot of
write / erase cycles, the deletion of blocks is not propagated to the
qcow2 backing disk image, so even though the OS in the VM reports only
2GB of block usage, the disk image can grow to 1TB without the user
knowing it.

Yeah, I also thought about that. Would it be possible to ship the images with a disk/partition size of 1TB but keep a filesystem size of 20GB? It is easy to expand the filesystem as needed. The hard part is expanding the disk/partition size.
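A minimal sketch of that idea, using a plain image file instead of a real device (filenames and sizes here are illustrative, not from the actual boxes):

```shell
set -e
# Sparse 1 TB image: allocates almost no real space on the host.
truncate -s 1T disk.img
# Create a 20 GB ext4 filesystem inside the much larger image.
mkfs.ext4 -q -F disk.img 20G
# Later, grow the filesystem in place; the image size never changes.
resize2fs disk.img 40G
```

On a real box the grow step would be the same resize2fs call against the root device, with no partition table surgery needed.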

I could reproduce this behavior running `fio` in the guest in a loop.
I find this behavior dangerous.

At that point I see three possibilities:
- you add to your pull request a change of the virtualized disk
controller from virtio-blk to virtio-scsi, and add to the default libvirt
Vagrantfile the "unmap" option, so that deletions of blocks in the guest
are propagated to host storage

This sounds like the ideal solution. I have no idea how much work it would be. Do you?
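If I understand the vagrant-libvirt options correctly, the change might be as small as this (a sketch, not tested against our boxes):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # Attach the box disk via a virtio-scsi controller instead of virtio-blk...
    libvirt.disk_bus    = "scsi"
    # ...and pass guest discards through to the host image.
    libvirt.disk_driver :discard => "unmap"
  end
end
```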

- you're fine with a disk image size of 40GB, or let's say 80GB

Chromium builds can take more than 100GB, so either of those would mean we still need to make our own basebox.

- you use a shared folder for the builds. I just noticed vagrant-libvirt
also has support for virtio-fs, which according to its author has native
host performance. If there are security concerns, let's discuss them in
detail and involve upstream if needed. virtio-fs is mature enough that
it's used in production for Kata Containers in Kubernetes and OpenShift
Sandboxed Containers in the Red Hat Kubernetes offering.

We like the security isolation of throwing away everything that the build process has written, so this option is less appealing, though perhaps workable. If there were a host-controlled method of resetting the virtio-fs share to a previous snapshot, then it could work.
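For reference, a virtio-fs synced folder in vagrant-libvirt would look roughly like this (paths are hypothetical; virtiofs requires shared memory backing for the guest):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    # virtiofs requires the guest memory to be backed by shared memory.
    libvirt.memorybacking :access, :mode => "shared"
  end
  # Hypothetical paths: expose the host build tree to the guest over virtio-fs.
  config.vm.synced_folder "./builds", "/srv/builds", type: "virtiofs"
end
```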

