Re: What's the most Accessible Linux VM Server Platform?
Hi Al,
On Wed, Jul 12, 2023 at 01:09:43PM -0400, Al Puzzuoli wrote:
>I am thinking I'll run a Linux virtual machine and in that machine, I'll
>run several small docker containers such as Pihole, Plex, and a few other things.
No problem. This can be done with KVM/QEMU and libvirt without any
accessibility issues. You can set up the VM via a virtual serial console
and perform all the steps you would normally do in the installer in that
virtual terminal. Another approach is to use preseeding and let the
installer perform all steps automatically. I work with both approaches
very often, and I can provide instructions on how to start the installer
with a serial console attached or with preseeding, as well as a preseed
file containing all the instructions for the installer.
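As a minimal sketch of the preseeding approach (all hostnames, user names and package selections below are placeholder values, not taken from our setup), a preseed file answering the installer's questions could start like this, with a matching virt-install invocation shown as comments since it needs a libvirt host:

```shell
# Write a minimal Debian preseed file (example values; adjust to taste).
cat > preseed.cfg <<'EOF'
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/get_hostname string demo-vm
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/root-login boolean false
d-i passwd/username string demo
d-i passwd/user-fullname string Demo User
d-i passwd/user-password password changeme
d-i passwd/user-password-again password changeme
d-i partman-auto/method string regular
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i pkgsel/include string openssh-server
d-i finish-install/reboot_in_progress note
EOF

# Hand the file to the installer with a serial console attached
# (run on the VM host; requires libvirt and virt-install):
# virt-install --name demo-vm --memory 4096 --vcpus 2 --disk size=20 \
#   --location http://deb.debian.org/debian/dists/stable/main/installer-amd64/ \
#   --graphics none --console pty,target_type=serial \
#   --initrd-inject=preseed.cfg \
#   --extra-args 'auto=true priority=critical console=ttyS0,115200n8'
```

For a manual install over the serial console, drop the `--initrd-inject` and `auto=true` parts; the installer then runs interactively on the text console.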
Once the virtual machine is installed, you can SSH into it and install
Docker and Docker Compose to get all the applications you would like to
use up and running.
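One common way to do that inside a Debian-based guest is via the distribution packages (package names vary by release; Docker's own apt repository is an alternative):

```shell
# Inside the freshly installed VM (requires root and network access).
# On Debian the distro packages are docker.io and docker-compose;
# Docker's upstream repo provides docker-ce and the compose plugin instead.
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
sudo systemctl enable --now docker
# Let your regular user talk to the Docker daemon
# (takes effect after logging out and back in):
sudo usermod -aG docker "$USER"
```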
That's the setup we have at work, and it works very well. We install
virtual Linux machines more or less automatically, configure them with
Ansible, SSH into the virtual machines and set up our Docker-based
applications by providing the necessary docker-compose.yml files.
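As an illustration of such a file (the image name and default ports follow the upstream pihole/pihole documentation; the timezone, host ports and volume path are placeholders):

```shell
# Example docker-compose.yml for a single Pi-hole deployment.
cat > docker-compose.yml <<'EOF'
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"   # web admin UI on host port 8080
    environment:
      TZ: "Europe/Berlin"
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
EOF

# Bring the deployment up (run inside the VM, requires Docker):
# docker compose up -d
```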
>I'll also want to be able to run a Windows environment with at least one
>virtualized domain controller, so that will be its own full virtual machine.
That's a little trickier, because you can't set up the Windows VM fully
automated. In this case I'd do the installation manually on a local
machine, e.g. your workstation or laptop, and transfer the VM to a central
server later, once everything in the VM is working and properly
configured. The most difficult part of installing a virtual Windows with
KVM/QEMU is integrating the virtio drivers. Those drivers are not included
in the Windows installer by default. It is possible to set up a Windows VM
without them, but the virtio drivers offer the best performance. You
should use the virtio drivers for the virtualized hard disk, where a SCSI
disk is emulated, and for the network interfaces of the VM. The drivers
are all stored on an ISO file which can be downloaded and attached to the
VM when it is started for the first time for installation. After the
Windows installer has started, you can start Narrator, the Microsoft
screen reader, and tell the Windows installer to use the virtio drivers
for the hard disk and network interface. This is the most difficult part,
but once it has been done, the Windows installation can be finished with
the Microsoft screen reader like on a normal computer.
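A sketch of creating such a VM (run on a libvirt host; the VM name, ISO paths and sizes are examples, and virtio-win.iso is the driver ISO published by the Fedora project):

```shell
# Create a Windows VM with virtio disk and network, attaching the
# virtio-win driver ISO as a second CD-ROM so the installer can load
# the storage driver from it.
virt-install \
  --name win-dc1 \
  --memory 8192 --vcpus 4 \
  --os-variant win10 \
  --disk path=/var/lib/libvirt/images/win-dc1.qcow2,size=80,bus=virtio \
  --network network=default,model=virtio \
  --cdrom /srv/iso/Win10.iso \
  --disk /srv/iso/virtio-win.iso,device=cdrom \
  --graphics spice
```

During setup, when Windows reports no disks found, point it at the virtio-win CD-ROM to load the storage driver, and load the network driver from the same ISO.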
When the VM is up and running locally, you can configure your services and
also install another screen reader, e.g. NVDA or JAWS. You can also
configure remote access via SPICE, the protocol used to connect to virtual
machines running under KVM/QEMU. Once this is working and you can connect
from your local Linux machine to the still locally running VM via SPICE
and use the screen reader inside the VM, you can stop the VM and transfer
it to your server. After the transfer is done, you can connect to the
remote VM either by its new IP or name, or you can configure the local
management software for libvirt virtual machines to also connect to the
libvirt daemon running on your server.
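Both variants can be driven from the command line (host name, user and VM name below are examples):

```shell
# Attach to the VM's SPICE display on the remote server over SSH:
virt-viewer --connect qemu+ssh://user@server/system win-dc1

# Or point local libvirt tooling (virsh, virt-manager) at the remote
# libvirt daemon to manage the VMs there:
virsh --connect qemu+ssh://user@server/system list --all
```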
I've done all those things without sighted help. The biggest issue was
getting the Windows VM up and running and including the virtio drivers,
but that was more because I am not so familiar with Windows, the Narrator
screen reader and so on, not because of an accessibility problem. I've
done all these things with Windows 10, but I think it should also work
with Windows 11.
And of course you do not have to use SPICE to connect to the remote VM;
you could also use NVDA Remote, the JAWS Tandem feature, or whatever you
like and know that works with your screen reading software.
>Do I understand correctly that there's not much of a performance hit if
>you run docker containers within a VM as opposed to on a bare metal host
>system?
IMHO this depends on your hardware. If the host where the virtual machines
are running is powerful enough, you will not have any performance trouble.
We sometimes have 20 Docker deployments running inside a VM, and this is
OK as long as the host has enough RAM, fast disks (SSDs are best) and
enough CPU power, and as long as you have configured enough resources for
the virtual machine. Especially if you want to use ZFS as the host
filesystem: the more RAM the host system has, the better the
performance...
BTW: we use ZFS for the data storage of our VMs, and inside the VMs
themselves we use ext4 without LVM. If you make sure that the root
filesystem is the last partition of all your VMs, you can easily resize
the hard disk, the root partition and the filesystem of the virtual
machines without LVM. We do not have a separate partition for the data
stored inside the VMs, just the boot partition, the EFI partition and the
big root partition, which is always the last one. If you want to split
your VMs into more partitions, just make sure that the partition which may
have to be resized is the last partition of your virtual hard disk. We
also do not use a separate swap partition for the VMs. Either we have no
swap space configured at all, or we use a swapfile; but if a VM is
swapping, just give it more RAM and make sure no swapping happens, because
swapping in a virtual system can really be a performance killer.
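The resize then takes only three commands (image path, device name and partition number are examples; growpart comes from the cloud-guest-utils package):

```shell
# On the host, with the VM shut down: grow the disk image by 20 GiB.
qemu-img resize /var/lib/libvirt/images/demo-vm.qcow2 +20G

# Inside the booted VM: grow the last partition into the new space,
# then grow the ext4 filesystem to fill the partition.
sudo growpart /dev/vda 3      # root is partition 3, the last one
sudo resize2fs /dev/vda3
```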
Ciao,
Schoepp