
Re: OT: harddrive addition for RAID [Warn: long message]



> On 18/10/2011 01:11, Stan Hoeppner wrote:
>> On 10/17/2011 5:09 PM, Raf Czlonka wrote:
>>> On Mon, Oct 17, 2011 at 06:12:00PM BST, Camaleón wrote:
>>>> 1. does the HD need to be exactly the same as the one it's being paired
>>>> with?
>>>
>>> Not necessarily, but you will lose the difference in space between the
>>> smaller and the larger of the disks. If you were referring to the
>>> brand/model/serial number of the disks, some people (myself included)
>>> think it is better that they match exactly; others think the opposite.
>>
>> It's not just size that matters ;^)
>> If you'd like your RAID array to perform better, it's best to have
>> identical disks - cache size, speed, etc. If you have drives which
>> don't match, your RAID will essentially perform only as well as your
>> worst drive.
> 
> Also keep in mind that with software RAID you won't be mirroring
> "drives" but partitions, since you're looking to mirror your boot/system
> drive.  Getting your BIOS, boot loader and mdraid setup correctly so
> that the surviving drive boots the system after the other fails can be
> very very tricky, especially for a Linux RAID novice.
> 
> If this is what you want to accomplish, then you have a lot of reading
> and research ahead of you, and likely some trial and error, along with
> headaches.
> 
> Given the costs, learning curve, and "ease of use" issues, if I were
> you, I'd simply purchase a good cheap real RAID0/1 card and two new
> matching 500GB drives.  Something like this combo:
> 
> 1 x http://www.newegg.com/Product/Product.aspx?Item=N82E16816116075
> 2 x http://www.newegg.com/Product/Product.aspx?Item=N82E16822136073
> 
> Setting up a RAID1 set will be pretty easy with this card, and if one
> drive fails the card simply boots the other automatically and writes the
> failure to a log file and/or sends you an email.  No hoops you have to
> jump through as with mdraid.  And you'll also get a nice little speed
> bump due to the 128MB of cache on board.  If your system is connected to
> a good working UPS you can enable write caching for even better
> performance.  Total cost of these parts from Newegg is about
> $270+shipping.  All you need is a free PCIe x1 slot.
> 
> If the cost isn't prohibitive, you'll be much happier with this solution.
> 

While everything that's been said is true, such a migration isn't that bad
and you'll learn a lot in the process, and mdadm RAID is a lot more
flexible than hardware RAID at the cost of a little overhead (it only makes
a difference if the system is very busy AND you can afford a real high-end
card). If you go the hardware route, you should buy two RAID cards, just in
case the first one fails between two backups and that model isn't available
anymore...

The easiest way is to back up, wipe the old drive, reinstall from scratch
onto a RAID setup and restore data as needed. The Debian installer will let
you do that just fine. You can even throw LVM and/or encryption in at the
same time and get a fresh start!

If you really want it the hard way, I'll try to break it down for you
from memory:

_BACKUP

_CHECK BACKUP CONSISTENCY

_Put the new drive in, partition as needed without creating any file-system.
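For example, something along these lines with parted (a sketch only; I'm
assuming the new drive shows up as /dev/sdb and that you want a single
partition spanning the disk - adapt the layout to your needs):

# assumes /dev/sdb is the new, empty drive -- double-check with dmesg/blkid
parted /dev/sdb -- mklabel msdos
parted /dev/sdb -- mkpart primary 1MiB 100%
parted /dev/sdb -- set 1 raid on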

_Install/check mdadm, initramfs-tools and busybox on your running system
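On Debian that boils down to something like:

# install (or confirm) the needed packages on the running system
apt-get install mdadm initramfs-tools busybox
# quick check that they are all installed
dpkg -l mdadm initramfs-tools busybox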

_Create a degraded raid setup

mdadm --create /dev/md0 --auto=md --level=1 --raid-devices=2 missing /dev/sdb1

Here "sdb1" would be your new drive partition, "missing" being the slot
where your old drive partition will fit in the end. "auto=md" is a
preference of mine, since the partition layout is set before the raid
creation, I create a "non partition-able" raid (See "man mdadm" and the
"-a" option in the "create" section).
There are several options that can be interesting to set at creation
time ("chunks", "name" ...), they are not necessary and your raid will
run just fine with the defaults.

_Check that the raid is started ("run" it if necessary), and format it:

cat /proc/mdstat

mkfs.ext4 /dev/md0

Repeat the process for other partitions.
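For instance, if you also carry a swap partition over (device names and
numbering here are only an assumption, adjust to your actual layout):

# second degraded array, here for swap on a hypothetical sdb2
mdadm --create /dev/md1 --auto=md --level=1 --raid-devices=2 missing /dev/sdb2
cat /proc/mdstat
mkswap /dev/md1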

From single user mode or from a live CD, copy over your existing data to
the RAID volumes (cp, rsync, whatever you like to use; make sure to keep
all permissions and file attributes).
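With rsync that could look like this (a sketch, assuming the new root array
is /dev/md0 and gets mounted on /mnt/newroot; "-x" keeps rsync from
descending into /proc, /sys and any other separate filesystem, so repeat the
run for /boot, /home, etc. if they live on their own partitions):

mkdir -p /mnt/newroot
mount /dev/md0 /mnt/newroot
# -a permissions/ownership/times, -H hard links, -A ACLs, -X xattrs,
# -x stay on the source filesystem
rsync -aHAXx --numeric-ids / /mnt/newroot/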


_Mount the new system, bind mount /proc, /sys and /dev from the running
system into the RAID one. Mount /boot from the new system too if it lives
on a separate partition.

_chroot into the raid system
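Roughly (again assuming the new root array is /dev/md0 mounted on
/mnt/newroot):

mount /dev/md0 /mnt/newroot            # skip if still mounted from the copy step
mount --bind /proc /mnt/newroot/proc
mount --bind /sys  /mnt/newroot/sys
mount --bind /dev  /mnt/newroot/dev
# mount /dev/mdX /mnt/newroot/boot     # only if /boot is a separate array
chroot /mnt/newroot /bin/bash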

_populate /etc/mdadm/mdadm.conf with:

su -c "mdadm --misc --detail --brief /dev/md* 2> /dev/null | tee -a
/etc/mdadm/mdadm.conf"

_Check mdadm hook scripts presence:

find /etc/initramfs-tools/ -name mdadm

If they aren't there, copy them over from "/usr/share/initramfs-tools/".
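If they're missing, something like this should do (the exact paths are from
memory, so treat them as an assumption and check what the find above and
"dpkg -L mdadm" actually report on your system):

# hypothetical paths -- verify them with: dpkg -L mdadm | grep initramfs
cp /usr/share/initramfs-tools/hooks/mdadm /etc/initramfs-tools/hooks/
cp /usr/share/initramfs-tools/scripts/local-top/mdadm /etc/initramfs-tools/scripts/local-top/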

As an extra safety measure (this should not be necessary), you can do:

echo -e "raid1\nmd_mod" >> /etc/initramfs-tools/modules

It's not needed if you have "MODULES=most" in
/etc/initramfs-tools/initramfs.conf.


_Take note of the filesystem UUIDs:

blkid /dev/md*

Compare with the output of: ls -l /dev/disk/by-uuid | grep md


_Adapt /etc/fstab. The arrays will be assembled in the initramfs, so you
want only filesystem UUIDs in there, not mdadm array UUIDs or partition
ones. This is the main catch when new to RAID; it's easy to get mixed up.
Do not use udev device names (like /dev/md0), they are not reliable.
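A sketch of what the result could look like (the UUIDs are placeholders,
use the ones blkid reported for your md devices):

# /etc/fstab -- filesystem UUIDs only, no /dev/mdX names
UUID=<uuid-of-md0-filesystem>  /     ext4  errors=remount-ro  0  1
UUID=<uuid-of-md1-swap>        none  swap  sw                 0  0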

_Update initramfs
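Still inside the chroot:

# rebuild the initramfs for all installed kernels
update-initramfs -u -k all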

_Update the grub config and install grub on the disk device ("/dev/sd?" or
"(hd?)" level); you'll install it on the other drive too at the end of the
process. Take a look at grub.cfg to make sure the "set root=" and linux
"root=" stanzas are valid. If /boot is on RAID then "set root=" should
be using the array UUID (mduuid/).
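With grub2 that is roughly (assuming the new drive is /dev/sdb):

update-grub
grub-install /dev/sdb
# sanity check the generated config
grep -E "set root=|linux.*root=" /boot/grub/grub.cfg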

From here you can shut down, unplug (or disable in the BIOS) the "old"
drive, and try to boot from your new RAID setup. If the RAID isn't started
at boot time, try adding "start_dirty_degraded=1" to the grub kernel line
(the one starting with "linux").
If you are stuck because of grub, use a grub recovery live CD
(supergrubdisk, rescatux) or the installer's rescue mode. As an alternate
method you can load grub from the "old" drive, and then change the "set
root=" and linux "root=" from the grub menu to point at the new RAID /.

Once in your new system, make sure everything is running smoothly and all
your data is there. Wipe the old drive and partition it exactly like the
"new" RAID one (no filesystem; use reliable tools like (c|g)fdisk or
parted, and be careful with the graphical X frontends). You can dump the
current layout with (for /dev/sda):

sfdisk -d /dev/sda
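That dump can also be replayed onto the other drive to clone the layout in
one go (device names here are an assumption: sdb being the drive already in
the array, sda the wiped old one - triple-check before running, this
overwrites the partition table):

# clone sdb's partition table onto sda -- destructive, check your devices!
sfdisk -d /dev/sdb | sfdisk /dev/sda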

Add the "old" drive's partitions to the RAID volumes, watch the rebuild in
/proc/mdstat, install grub on the newly added drive, and be done!
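Which boils down to something like (again assuming sda is the re-added old
drive and md0/md1 the arrays created earlier):

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
# follow the resync
watch -n 5 cat /proc/mdstat
grub-install /dev/sda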

Of course there is plenty of room for gotchas along the way, and I may
have forgotten/mixed up something (my last raid "migration" was more
than a year ago).


Do read all the READMEs in /usr/share/doc/mdadm and the man page.

Disclaimer: As others have said, this is a tricky and risky operation; if
you end up screwing up your system because of my instructions I won't buy
you a Window$ license, iSomething device or a hardware RAID card! ;-)
DO NOT even think about doing this "live" without a proper, recent,
double-checked backup.

Have fun.

