
[Solved (kind of)] Re: Installing Wheezy on btrfs only (multi-device)



On Mon, 2012-05-07 at 03:11 +0200, Steven Post wrote:

It's been a while since I replied to this thread, and since I solved it
(well... kind of) I thought I'd mention how I got this working in the
end, and also what happens when a drive fails.
I don't have a blog, so I'll give it all here; it'll be a pretty long
read, perhaps only interesting to some.

> Another attempt, mixed success.
> I created the btrfs filesystem using the ubuntu live cd (2 subvolumes, 1
> for the root fs, 1 for /home, and set rootfs as the default subvolume),
> then started the Debian installation again from the daily netinstall
> iso.
[...]
> 
> The installer only fails to install the grub bootloader, I think because
> it cannot detect the multi-device btrfs file system.
> I then opted for skipping installing a bootloader, figuring I could do
> it afterwards in rescue mode. Installer finishes up without any further
> problems.


As far as I know GRUB supports only single-device btrfs file systems,
not multi-device ones. Having a single-device btrfs for /boot would
defeat the idea of RAID for /boot, so I opted for something else.

All 6 drives have 3 partitions: a very small one for BIOS boot, a
larger one (1 GB) so /boot will fit (with a lot of space left), and the
rest as a single partition of about 2.7 TB. The last 2 partitions are
of the default Linux partition type gdisk suggests.
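
For reference, that layout can also be scripted with sgdisk; this is
only a sketch with a hypothetical device name (/dev/sda), repeated for
each of the six drives:

  sgdisk -n 1:0:+1M -t 1:ef02 /dev/sda  # tiny BIOS boot partition
  sgdisk -n 2:0:+1G -t 2:8300 /dev/sda  # will hold the md RAID1 for /boot
  sgdisk -n 3:0:0   -t 3:8300 /dev/sda  # rest of the disk, for btrfs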

Next I use an Ubuntu 12.04 install disk to set up my RAID 10 btrfs
volume using the third (large) partition from each drive.
I create a couple of subvolumes for different purposes (one for the
root file system (/), one for /home, etc.).
Next I mark the subvolume for the root fs as the default subvolume
(when working with subvolumes this is needed because of limitations in
the Debian installer).
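
A minimal sketch of those steps, assuming hypothetical device names
/dev/sda3 through /dev/sdf3 and subvolume names of your own choosing:

  mkfs.btrfs -m raid10 -d raid10 /dev/sd[a-f]3
  mount /dev/sda3 /mnt
  btrfs subvolume create /mnt/rootfs
  btrfs subvolume create /mnt/home
  btrfs subvolume list /mnt             # note the ID of the rootfs subvolume
  btrfs subvolume set-default <id> /mnt # make it the default subvolume
  umount /mnt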

Once btrfs is set up I restart the machine with the Wheezy
netinstaller. I choose a standard install and proceed through the
installer until I get to the partitioner. Once in the partitioner, I
use ctrl+alt+F2 to switch to a console. After pressing enter I'm
greeted with a root prompt; from here I issue a btrfs device scan
using "btrfsctl -a", as the installer doesn't have the btrfs command.
I can then switch back to the partitioner using ctrl+alt+F1 and mark
the 3rd partition from one of the 6 drives as my root filesystem (/)
using btrfs as the filesystem, making sure to have the option "keep
existing data" enabled.
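
For clarity, the console step is just this one command (the installer
environment ships the older btrfsctl tool rather than btrfs):

  btrfsctl -a   # scan block devices, register multi-device btrfs members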

Still in the partitioner I choose to set up RAID: an md RAID1 array
using 3 devices and 3 spares, built from the second partition of each
drive. This array I mark for use with ext4 (I think btrfs on top of
this RAID array would work too, but I haven't tried it). I then mark
this ext4 partition as /boot.
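
Outside the installer the equivalent would look roughly like this; a
sketch with hypothetical device names:

  mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=3 \
        /dev/sd[a-f]2
  mkfs.ext4 /dev/md0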

With both / and /boot set up I can continue with the install. Note
that I don't have a swap partition; in this case I felt I didn't need
one, but everyone is free to add one, or I can add it later should the
need arise. Anyway, I just continue the install as with any system,
and once it is completed I tell the installer to install GRUB on every
device, so that should a device fail, I can still boot the system.
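
Done by hand from a rescue shell that would be something like this
(hypothetical device names again):

  for d in /dev/sd[a-f]; do grub-install $d; done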

After installation is complete I still need the installer (rescue
mode) or some live cd to fix /etc/fstab so the system doesn't hang
trying to do an fsck on the btrfs root fs. When all this is done I can
finally boot the new system.
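
The fix is just setting the fsck pass number (the last field) of the
root entry to 0; the UUID placeholder stands in for whatever blkid
reports for your volume:

  UUID={some long uuid}  /  btrfs  defaults  0  0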

Now when a drive fails, as happened earlier today (I don't know why,
but I'll get a replacement this Tuesday), the system will fail to boot
properly after removing that drive: you are dropped into a busybox
shell and options are limited. Don't be alarmed, this is because the
root filesystem refuses to mount with a missing drive. In the meantime
my RAID1 for /boot has taken a spare device and is syncing it with the
other 2. I let it sync before continuing. When the syncing is done, I
reboot, and when I get to the GRUB boot prompt I press 'e' to edit.
I look for the line starting with 'linux /vmlinuz' and add a rootflags
option, giving me something like this:
  linux /vmlinuz-3.2.0-2-amd64 root=UUID={some long uuid} rootflags=degraded ro quiet

Adding this 'degraded' rootflag allows your kernel to mount the root
filesystem again. Once started, you can remove the missing drive from
the btrfs array and issue a "btrfs filesystem balance /" (provided you
still have enough space to mirror everything without that missing
drive, and at least 4 drives), leaving you with a RAID 10 system with
5 drives.
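
In command form (assuming / is the degraded btrfs volume):

  btrfs device delete missing /   # drop the failed, now-missing drive
  btrfs filesystem balance /      # re-mirror across the remaining 5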

Later on I can shut down the system (it's not hot-swappable in my
case), add the new hard drive, partition it like before, add one
partition to the /boot RAID array as a spare device, and add the large
partition to the btrfs array, balance again, and I'm good to go.
If you have a replacement lying around, or not enough free space for
the balance with a drive out, you can omit that step and just add it
right away. But always balance after adding the device.
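
A sketch of the replacement steps, assuming the new drive shows up as
/dev/sdg and the /boot array is /dev/md0:

  mdadm /dev/md0 --add /dev/sdg2   # new spare for the /boot RAID1
  btrfs device add /dev/sdg3 /
  btrfs filesystem balance /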

Should anyone have any comments or improvements, please let me know.
I'm not such an expert on this as I might appear (or not) from this
post.

Kind regards,
Steven


