I'd like to follow up on your statement of last Jan 25:
"I recently enabled a few virtualization extensions in the Kernel, SCSI controller driver for vmware
for example, in order to get Knoppix running faster in virtualized environments."
As a matter of fact, I am also experiencing strange disk-related behavior with Knoppix 6.4.4 on VMware Server 2.0.2.
When Knoppix is installed in a VM as the guest OS, it works perfectly fine.
But when used as a rescue CD to restore a bare-metal backup of Linux (Knoppix as well as CentOS, Ubuntu or SUSE),
the release does not seem stable.
Basically, I run an automated Perl procedure that chains mkfs, vgcreate, untar and grub install.
Sometimes umount fails; sometimes it succeeds but nothing is actually mounted.
Sometimes vgchange fails.
More often, grub fails: it can't see some of its files, even though I can access them from another shell.
Or it complains about syntax errors in scripts but finally claims it worked,
and then, at reboot time: black screen!
The point is that when I run it slowly, step by step, from the Perl debugger, it works fine.
Also, adding 10-second sleeps at certain points in the script improves things, though it still fails sometimes.
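Since fixed sleeps only paper over the races, what I've been considering as a workaround is wrapping each step so the kernel's view of the block devices settles before the next step runs. Here's a minimal sketch; the `settle_run` helper and the example step commands are mine, not part of my actual procedure, and the device names are placeholders:

```shell
#!/bin/sh
# settle_run: run one restore step, then flush buffers and wait for
# pending udev events before returning, so the next step sees a
# consistent device state.  (Illustrative helper, not Knoppix-specific.)
settle_run() {
    "$@" || return $?            # run the step; propagate its failure
    sync                         # flush dirty buffers to disk
    if command -v udevadm >/dev/null 2>&1; then
        udevadm settle --timeout=10   # wait for the udev event queue to drain
    fi
}

# Example sequence (placeholder devices/paths):
# settle_run mkfs.ext3 /dev/sda1
# settle_run vgchange -ay
# settle_run mount /dev/sda1 /mnt
# settle_run tar -xpf /backup/root.tar -C /mnt
# settle_run grub-install --root-directory=/mnt /dev/sda
```

From Perl, the same idea would be a small wrapper around `system()` that calls `sync` and `udevadm settle` after each command.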
I wonder whether the optimized SCSI drivers for VMware could be involved here? What do you think?
The same procedure works perfectly every time with Knoppix 5.1, but that release is too old now; some Linux features
are not covered.
Is there any cheatcode to tweak the SCSI drivers with conservative tunables? (Or better, to switch back to the previous drivers?)
Thanks in advance for your help.