
Re: software raid



>
> hi ya lucas
>
> collection of um
> 	http://www.1U-Raid5.net
>
> "good sw raid" is already part of the linux kernel..
> you dont need anything else ... other than to turn on the raid options
> in the kernel and create your raid config files
>
>
> minimum testing process..
> 	http://www.1U-Raid5.net

>> good idea
> 	- do you raid just the data .. or the OS too ?? ( root raid )
I was planning to raid the root, and set up additional mirrors for data. If
you create smaller raid volumes, it's faster to resync when a volume
gets fubared.
Which happens on my redhat systems occasionally.
> 	- if you use raid5 ( you supposedly cannot boot off / that is
> 	raid5 .. but i think if you have a proper initrd, it works )
I'm pretty sure you cannot boot off raid5.
Everything I've seen seems to indicate it.
>
> and redundancy also comes from monitoring the raid setup
> 	- lots of scripts you can write to monitor the raid system
>
I like the mdadm package as it includes a raid monitoring script.
Although I am still trying to get a bootable root raid working.
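A minimal sketch of that kind of monitoring check (my own, not mdadm's script): a failed raid1 member shows up as `_` in the `[UU]` status field of /proc/mdstat, so a one-function grep is enough. It reads mdstat-format text on stdin so it can be tried without a live array:

```shell
#!/bin/sh
# Crude degraded-array check (assumes the 0.90-era /proc/mdstat format,
# where member status looks like [UU], [U_] or [_U]).
# Reads mdstat text on stdin; for real use feed it /proc/mdstat.
check_mdstat() {
    if grep -q '\[[U_]*_[U_]*\]' ; then
        echo DEGRADED
    else
        echo OK
    fi
}
```

In a cron job you would run `check_mdstat < /proc/mdstat` and mail anything that says DEGRADED; mdadm itself can do the same job properly with `mdadm --monitor --mail=root /dev/md0`.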

> for setup of a new system ( the right way?? )
> 	- make sure the partition type is FD(raid) not 82(linux)
>
> 	- install the / system as root  ( unfortunately, fortunately,
> 	redhat/suse makes installing onto hda/hdc trivially simple )
>
Works on my redhat systems, want it to work on my debian system.
On the positive side, I am learning how raid works, which should help me
when it fubars.
> 	- debians new installer should allow for root raid installs
Don't care about the new installer, as I want something now, on stable.

>
> 	- simpler/faster/easier to copy the existing data to a backup disk
>
So far I've just been trying to sync existing data with a degraded disk,
trying to get it to boot, and when it boots syncing the first disk with
the second disk. Have not lost any data yet.

> 	or just start with 2 fresh disks and leave the current disk alone and
> 	retire it after your new raid setup is tested/working and has
> 	been running a few months
>
> 	- new disks are $70 or less for 40GB ... that's 30 minutes of time or
> 	less ... ( cheaper/faster to get 2 new disks )
>
> - for existing systems ..
> 	- boot a standalone media ...
> 	- partition both target disks as FD partition types
> 	- config both disks and format
> 	- install as usual
Got the FD stuff done.
Seems somehow that cfdisk does not completely clean the partition up. What
is the most comprehensive clean command for completely rewriting all the
partitions and file systems on a disk?
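For what it's worth, one likely reason cfdisk alone looks incomplete: the old md (0.90) superblock sits near the *end* of each partition, so it survives repartitioning. A hedged sketch of the blunt fix, zeroing the front of the device (demonstrated on a scratch file here, since on a real disk it is destructive):

```shell
#!/bin/sh
# Blunt "clean" sketch: zero the first 1MB, killing the partition table and
# anything else at the front of the device.  DESTRUCTIVE on a real disk.
# (The 0.90 md superblock lives near the END of a partition, so zero the
# tail too, or the whole device, if you want that gone as well.)
wipe_front() {
    dd if=/dev/zero of="$1" bs=512 count=2048 conv=notrunc 2>/dev/null
}

# demo on a scratch file, not a disk:
f=$(mktemp)
head -c 1048576 /dev/urandom > "$f"
wipe_front "$f"
```

`conv=notrunc` overwrites in place without shortening the target, which is the behavior you want against a block device.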

>
> - make sure your final raid config is:
>
> 	# allow you to boot off hda or hdc
> 	#
> 	boot=/dev/md0
> 	...
>
> 	# root raid
> 	root=/dev/md0
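Those `boot=`/`root=` fragments are lilo.conf lines; a slightly fuller sketch for reference (device names are examples — `raid-extra-boot` is the lilo option for writing additional boot records, e.g. onto each member disk's MBR, which is what lets the box come up with either drive pulled):

```
# sketch of /etc/lilo.conf for root-on-raid1 (devices are examples)
boot=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdc   # boot record on each member disk
root=/dev/md0
image=/vmlinuz
        label=linux
        read-only
```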

>> I have read most of the documentation...but they lack items.
>
> yup.. lack the key problems ...
Saw this item for mdadm, which appears to describe boot switches that I have
not seen in ANY other documentation.
> Suppose /dev/hda1 was your live boot drive, and /dev/hdc1 was a second
> partition that you eventually wanted to raid1 together with /dev/hda1.
> Then
>   1/ create a degraded raid1 using /dev/hdc1 only:
>      mdadm -C /dev/md0 --level raid1 --raid-disks 2 missing /dev/hdc1
>   2/ create a filesystem on /dev/md0 and mount it:
>      mkfs /dev/md0
>      mount /dev/md0 /mnt
>   3/ copy everything from / to /mnt
>       cp -ax / /mnt
>   4/ modify /mnt/etc/fstab to think that / is on /dev/md0
>   5/ reboot with a kernel-parameter of:
>         md=0,/dev/hdc1 root=/dev/md0
>   6/ If this all seems to work properly, then add /dev/hda1 to the
>      raid1 array:
>        mdadm /dev/md0 -a /dev/hda1
>      and change the kernel-parameter line to
>         md=0,/dev/hda1,/dev/hdc1 root=/dev/md0
>
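Collected in one place, the six steps above look like this — a dry-run sketch that only echoes each command, so it is safe to run as-is (device names come straight from the quote; swap `echo "$@"` for `"$@"` to actually execute):

```shell
#!/bin/sh
# Dry-run of the quoted degraded-raid1 migration.  "run" only echoes, so
# this prints the plan instead of touching any disks.
run() { echo "$@"; }

run mdadm -C /dev/md0 --level raid1 --raid-disks 2 missing /dev/hdc1  # 1: degraded array
run mkfs /dev/md0                                                     # 2: filesystem
run mount /dev/md0 /mnt
run cp -ax / /mnt                                                     # 3: copy root
# 4: edit /mnt/etc/fstab so / is /dev/md0
# 5: reboot with kernel parameters:  md=0,/dev/hdc1 root=/dev/md0
run mdadm /dev/md0 -a /dev/hda1                                       # 6: add the first disk
# then boot with:  md=0,/dev/hda1,/dev/hdc1 root=/dev/md0
```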


>
>> I can recompile the kernel; use a new kernel if necessary.
Better to have a kernel with raid built in, so you don't have to bang
around with initrd.

>
> since these are for production .. you should do it the right way ... vs
> copying an existing system to a 2nd disk  ( it's NOT raid
> until you get the partition type to be "raid" )
Was easy to switch partition type; didn't even lose data on the switch.
Just cfdisk, blam blam blam. Done.


>
> raid when properly setup will be able to boot and keep running
> even if the any 1 other disk is pulled out of your system
Assuming you are using auto-detect of the partition types, which is what
you are referring to.

I am going to keep re-attempting this process until I get it figured out.
I'm pissed, so I'm not stopping.
Then post some notes for exactly what I am doing:
"Use only debian stable, switch an existing system to boot / off a raid-1
partition without using rescue disks or losing any data. Use ext3 or
reiserfs, and mdadm as the raid tools."
Exact notes covering the EXACT steps to accomplish this, skipping no
steps.

Then when I get it working once, wipe my disks and, following my writeup,
do it again. (Right after partying, because I completed the task.)

Going to continue to re-attempt this at work.
Alvin thanks for the feedback.

--Luke
> alvin


