
Re: Possible to add an LVM to existing Wheezy box



Ron Leach <ronleach@tesco.net> wrote:
> Actually, the disk we are rescuing is the surviving member of a RAID1
> pair[1].  I realised, today, that I cannot simply install that in our
> Wheezy box because (I think) it needs a software RAID layer in order
> to read it (fstab refers to md1, md2 etc).

It depends on the RAID metadata version (0.90, 1.0, 1.1, 1.2, ddf,
imsm). Versions 0.90 and 1.0 keep the superblock at the *end* of the
member, so the filesystem starts at offset zero; 1.1 and 1.2 put it at
or near the start. Since your disk is from an older system, the chances
are quite good that you *can* mount the partition directly as a
non-RAID disk (sdaX, etc.). Once you've done that, though, it gets
quite exciting trying to grow it back into a RAID1 configuration. You
can do it - it's a matter of creating a RAID1 array with an "initially
missing" second disk and then adding it later - but you have to be
careful of the existing on-disk RAID1 definition.
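
For example (a sketch only; /dev/sdb1 stands in for your actual
surviving member, /dev/sdc1 for its eventual partner):

    # Check which metadata version the surviving member carries
    mdadm --examine /dev/sdb1

    # With 0.90 or 1.0 metadata the filesystem starts at offset zero,
    # so a direct, read-only mount should work:
    mount -o ro /dev/sdb1 /mnt/rescue

    # To grow back into RAID1: create a degraded array with one
    # "missing" member, then add the second disk later.  If you are
    # keeping the data in place, match the original metadata version
    # (e.g. --metadata=0.90) so the data offset doesn't move.
    mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 \
          /dev/sdb1 missing
    mdadm --manage /dev/md1 --add /dev/sdc1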


>> The RAID1+LVM solution can be implemented as part of your migration to
>> a new server

> OK.  If we were to do this (and I think we ought to, we do have 2 
> spare 2TB drives), I've a couple of quick planning queries.

> (a) Will Wheezy happily run RAID1 'only' on the 2 x 2TB disks, leaving 
> the OS on its non-RAID 250GB disk?

Yes, if that's what you want. You can choose to apply RAID and/or LVM
to any partition and/or disk.


> (b) More seriously, though, because we will need to expand the space, 
> there's only room on the motherboard for 4 SATA drives and to expand a 
> RAID1/LVM scheme I'll need another 2 drives, making 5 drives overall. 
>  I'll run out of SATA ports.  Happy to listen to any suggestions.

A few options spring to mind:

1. Buy a pair of 3TB disks instead of 2TB ones. Proceed as before.

2. Buy three 3TB disks and use RAID5 instead of RAID1. Otherwise proceed
as before.

3. Buy four 3TB disks, discarding the existing 250GB disk. Use either
RAID1+0 (6TB) or RAID5 (9TB) for your LVM/data partition.

You can replicate your OS across as many of the disks as you like. If
you do this, use either a simple /boot partition of 100MB or so or a
relatively small 20-50GB OS installation. You can RAID1 this across all
the disks if you want, but you have to remember to install Grub on each
disk individually.
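
A sketch of that last step, assuming the members are /dev/sda to
/dev/sdd:

    # Put GRUB on every member so the box still boots when any
    # single disk dies (GRUB 2, as shipped with Wheezy):
    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
        grub-install "$d"
    done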

Once you've done that, create a new partition from the remaining space
on each of the available data disks (1.95TB or whatever). You can RAID1
or RAID5 these data partitions, then create an LVM layer on top of the
resulting metadevice.
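
Something along these lines, where the second partition on each disk
(/dev/sda2, /dev/sdb2) is a stand-in for your data partitions:

    # Mirror the two data partitions:
    mdadm --create /dev/md2 --level=1 --raid-devices=2 \
          /dev/sda2 /dev/sdb2

    # Layer LVM on the resulting metadevice:
    pvcreate /dev/md2
    vgcreate data /dev/md2
    lvcreate -L 500G -n media data
    mkfs.ext4 /dev/data/media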


In my home situation I've got an HP ProLiant with four slots. One contains
the originally supplied 250GB disk; two of the other three contain 3TB
disks. I've got my OS on all the disks replicated with RAID1 (actually
across two with a hot spare). The remaining chunk of the 250GB disk has an
LVM Volume Group called "noraid", which I use for temporary allocations
that really don't need RAID. The remains of the two big disks are sliced
up into 500GB partitions. I can pair the corresponding partitions using
either RAID0 or RAID1. These then get added to either my "raid0" VG
or my "raid1" VG. Backups go to a filesystem on an LV carved from the
"raid0" VG; important stuff goes to an LV in the "raid1" VG.
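
Pairing the next free pair of slices and growing a VG looks roughly
like this (device names invented for the example):

    # Pair two matching 500GB slices as a new mirror...
    mdadm --create /dev/md5 --level=1 --raid-devices=2 \
          /dev/sda5 /dev/sdb5
    # ...and hand it to the "raid1" VG:
    pvcreate /dev/md5
    vgextend raid1 /dev/md5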


> A recent 
> posting on (I think) this list pointed to an Adaptec HW RAID card with 
> 4 ports, which might solve the ports problem and let us expand to 2 x 
> 2TB and (say) 2 x 3TB drives, albeit with some reconfiguration away 
> from software RAID.

The only "gotcha" I can see with hardware RAID is how to recover data from
the disks in the event of the card's failure. If it was a commercially
maintained corporate system (Dell, HP, IBM, whatever) under supplier
warranty then I'd not worry about this. For SOHO use I'd want to know
how to handle recovery from this situation.


> From an implementation point of view, presumably the steps are:
> - Build a RAID1 layer on the 2 2TB drives
> - Then build the LVM with pvcreate, etc,

Essentially, yes.
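
The "etc." mostly amounts to this (all names are placeholders):

    pvcreate /dev/md0
    vgcreate vg2tb /dev/md0
    lvcreate -l 100%FREE -n store vg2tb
    mkfs.ext4 /dev/vg2tb/store

    # And make sure the array reassembles at boot:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u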


> [1] We did try to rebuild the RAID1 but we couldn't get the rebuild
> to work.  The disks have multiple partitions, each is a separate RAID1
> (I think this isn't recommended) and while we were partitioning the
> replacement disk, we ended up confusing mdadm.

I don't see anything wrong with having multiple RAID1 slices. You do
have to be careful when rebuilding a dead RAID configuration, though,
and you possibly just needed to tell mdadm the new minor device numbers
for the devices you were (re)building. BICBW of course.
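
For example, something like this forces each slice back to the md
number that fstab expects (partition names assumed):

    # Start each degraded array explicitly under its expected minor:
    mdadm --assemble --run /dev/md1 /dev/sda2
    mdadm --assemble --run /dev/md2 /dev/sda3

    # Then add the freshly partitioned replacement to each one:
    mdadm --manage /dev/md1 --add /dev/sdc2
    mdadm --manage /dev/md2 --add /dev/sdc3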

Chris

