
RE: Requesting advice on s/w RAID-1 install




On Wed, 28 Jul 2004, Steven Jones wrote:

> If you lose the first disk the machine's bios wont pick the second disk, 
> so the machine will not be bootable anyway. 

when the raid disks, kernel, and grub/lilo are configured properly,
it will boot off either (or any surviving) disk in the raid1 set when
the other one is dead
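
a rough sketch of that with (legacy) grub, assuming the two raid1 members
are /dev/hda and /dev/hdc (example device names only):

	# in the grub shell (run "grub" as root):
	#   point (hd0) at each raid1 member in turn and write its MBR,
	#   so either disk can boot on its own if the other one dies
	device (hd0) /dev/hda
	root (hd0,0)
	setup (hd0)
	device (hd0) /dev/hdc
	root (hd0,0)
	setup (hd0)
	quit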
 
> My advise, go to hardware raid pci plugin card for the OS. 

not worth the extra $$$ ... cheaper to buy 3-5 new spare disks
in lieu of one $300 hw raid card

c ya
alvin
 
> Sorry for asking this 

better to ask than to dive in blindly

> already, but I'm getting in to information overload

yup

> Short and to the point: the goal is to get software RAID-1 going on a new
> installation to be used as a home server for files and e-mail.  I need to
> decide how I'm going to go about doing this.

dont bother raiding your disk and your wife's disk together

keep both disks separate, and back up each one's data onto the other's disk
on 2 different PCs

>  Should I install the system
> first, then get the RAID going?  Or would it be easier to do RAID from the
> start?

it is 10x easier to install into raid the first time around ..
	- you will need to configure /dev/md0 for / manually,
	along with /dev/md1 for /tmp, /dev/md2 for /var, /dev/md3 for
	/usr, etc, before installing the distro (rough sketch below)
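
a minimal sketch with mdadm, assuming the two 200GB disks show up on the
promise card as /dev/hde and /dev/hdg and are partitioned identically
(device names are just guesses):

	mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1   # /
	mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hde2 /dev/hdg2   # /tmp
	# ... repeat for /var, /usr, /home ...
	mke2fs -j /dev/md0    # ext3 on each md device, same for md1, md2, ...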

> I want *everything* mirrored (including /boot and the root).  The
> idea is that if the primary drive crashes, I can take it out, smash it with
> a hammer, re-plug the cable and boot from the mirror drive as if nothing
> ever happened

if it's properly configured, you do NOT need to touch anything other
than pull the dead disk and smash it with the hammer

and hopefully install the replacement disk before the 2nd disk dies

- usually, when disks die, identical disks will all die within 30-60 days
  of each other under the same conditions

> (yeah, I know the RAID will be degraded until I can replace
> the smashed drive, this is fine).  Also, should I look at LVM or EVMS, or
> are they overkill?

lvm just allows you to [dynamically] grow /home if you run out of space
and grow it onto your new pair of [raid'd] disk drives
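
a rough sketch of that growth step, assuming /home already lives on an lvm
logical volume /dev/vg0/home and the new pair of raid'd disks shows up as
/dev/md4 (volume group, names and sizes are just placeholders):

	pvcreate /dev/md4                  # make the new raid1 device an lvm physical volume
	vgextend vg0 /dev/md4              # add it to the existing volume group
	lvextend -L +180G /dev/vg0/home    # grow the logical volume onto the new space
	umount /home
	e2fsck -f /dev/vg0/home            # resize2fs wants a freshly checked fs
	resize2fs /dev/vg0/home            # grow the ext3 filesystem to match
	mount /home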

> My wife and I had a server (running Sarge) with a 60G and a 40G drive in it
> that we were using for file storage and e-mail.  The drives were getting a
> bit full, and I had been planning on adding another drive to the mix.

- delete the spam ...
- compress last year's emails

> However, last week the 60G drive crashed. 

:-)

> Our most recent backup is about 5
> weeks old (yes, a better backup plan is also definitely on my to-do list...

# files under /home changed in the last 7 days -> one compressed tarball
find /home -mtime -7 -type f | tar zcf /mnt/backup/todays.date.tgz -T -
	- run it from cron
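
for example, as a nightly root crontab entry (time, path, and the dated
filename are just placeholders; note that % has to be escaped in crontabs):

	30 3 * * * find /home -mtime -1 -type f | tar zcf /mnt/backup/home-$(date +\%Y\%m\%d).tgz -T -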

raid will NOT solve your backup problems..
	- you're assuming raid is working properly too, when you haven't
	checked that it's actually raiding all your data

> I'll figure that out as soon as the system is working again).  My wife used
> to work in the mainframe world, and her comment was "I told you we should
> have had a tandem system."  D'oh!

tandem was bought by compaq, which was then bought by hp ...
( it's back in palo alto where it all started )

> I tend to agree, so I've scrapped my idea about adding more drives, and now
> I'm planning on just getting two 200G drives and hooking them up in a RAID-1
> array to provide mirroring.

more data to mirror and more data to be lost :-)

>  The 40 will be coming out once I copy the files
> off of it.  The drives will be parallel ATA, as I'm on a budget and don't
> need the performance of S-ATA or SCSI.

leave the 40GB in there for backups .... 
	- lots of jpgs/mpegs won't compress much, but text files
	easily compress 10x

> There is a bit of a drawback in that the motherboard I'm using is a bit old.
> It works and it's free, but the on-board controller won't support any drive
> larger than 80G.  I'm going to get a Promise controller for the new drives.

promise is good for sw raid ...

but if you use 200GB raid disks... you will NOT be able to boot from
them .. the bios is just too old

> I figure I'll have one drive on each bus of the add-on controller, and leave
> the CD burner on the motherboard controller.

good idea ... or better still ... get a free (new) motherboard when you
buy 2x 200GB disks

> Even though we only have two users (both trusted), I like to have things
> like /boot, /var, /tmp, /home, /usr and so forth split off in their own
> partitions.

always a good idea ... separate partitions are NOT about the people
using the box, but about how long it takes to fix the machine when it breaks

>  I always figured that if I needed to re-size any partitions, I
> would use Partition Magic.

bad idea ... just use fdisk/cfdisk 
	- make sure all raid partitions are "FD" type (example below)
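
a quick sketch, assuming the first new disk is /dev/hde and the second
is /dev/hdg (example names only):

	fdisk /dev/hde                          # n = new partition, t = type, enter "fd"
	                                        #   ("fd" = linux raid autodetect), w = write
	sfdisk -d /dev/hde | sfdisk /dev/hdg    # clone the partition table onto the 2nd disk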

>  It works great for me, and downtime is not a
> huge problem.  However, I'm guessing that PM isn't really a good idea on a
> RAID (anyone know for sure?).

yup and pm costs $$$

>  For this reason, I've been looking at either
> LVM on RAID,

lvm won't solve partition problems
lvm won't solve raid problems either, it just adds another layer on top

> or just using EVMS for everything.  EVMS in particular sounds
> pretty cool from what I've read, but it might be overkill for me.  There's
> also the question of / and /boot... but I have seen documentation that talks
> about options for dealing with those on either RAID or EVMS, so I know it
> can be done.

/boot is NOT needed as its own partition when ...
	- your / partition sits entirely below the 1024-cylinder BIOS limit
	( roughly the first 512MB on old hardware )
	( / == /boot /root /bin /sbin /lib == aka "root" )

	- or the bios is new enough to know how to talk to large disks
	( past the old 1024-cylinder / 8GB limit )

> I do have an old 8G drive that I could put in there temporarily if needed.

good thing to do ..

boot it ... build up the sw raid from the 2 new 200GB disks ... 
and install onto the raid drives

> I was thinking about installing to that, then creating a RAID on the 200G
> drives, then copying everything over to them.  I thought it might be easier
> to set up the RAID on clean drives than it would be if the drives were in
> use.

good idea .... save yourself oodles of headaches that way
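
a rough sketch of that copy-over step, assuming the raid filesystems get
mounted under /mnt/raid (all names here are just placeholders):

	mkdir -p /mnt/raid && mount /dev/md0 /mnt/raid
	mkdir /mnt/raid/var && mount /dev/md2 /mnt/raid/var   # same for the other md devices
	rsync -ax /     /mnt/raid/       # -x stays on one filesystem, so copy each fs separately
	rsync -ax /var/ /mnt/raid/var/
	# then edit /etc/fstab on the raid to point at the md devices and
	# reinstall grub/lilo before booting off the raid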
 
> I've also noticed that the beta debian-installer for Sarge supports RAID and
> LVM at install time.

testing new toys is always fun

>  I've seen lots of bug reports on it though, so I'm a little hesitant to
> try it.

best to test it yourself and see which reports are real, rather than
going by just another report

> Alternatively, I've read
> http://www.linuxmafia.com/faq/Debian/installers.html and noticed that a
> number of those support RAID and/or LVM installations.  Perhaps someone can
> recommend a good installer for what I need?

install anything on the 8GB disk
	- build your software raid out of the new disks
	- install your new deb distro onto the new raid devices

>  I have a couple of workstations
> still up and running with CD or DVD burners, and a nice fast DSL connection,
> so I'm fine with net installs, downloading ISO images, or whatever.

too much fun to play :-)

> About the bootloader, I've always used LILO in the past, as it worked for me
> and I never had any good reason to switch to Grub.  However, it looks like
> Grub understands /boot being in a RAID better than LILO does, so it might be
> time to go with Grub.  I know it's possible to do this with LILO, but if
> Grub is easier to work with that makes it good in my book.

makes no difference ... between grub/lilo for booting ..
	- both need to have the proper parameters for booting off the 2nd
	disk when one of the raid1 disks has died

lilo is a safer bet ... if you can avoid the dreaded "L 01 01 01 ..." boot loop
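
a minimal lilo.conf sketch for that, assuming / lives on /dev/md0 built
from the two disks' first partitions (paths and labels are placeholders):

	# lilo works out the underlying raid1 members from /dev/md0 and,
	# with raid-extra-boot=mbr-only, writes a boot record onto each member's MBR
	boot=/dev/md0
	raid-extra-boot=mbr-only
	image=/vmlinuz
		label=Linux
		root=/dev/md0
		read-only

rerun /sbin/lilo after every kernel or raid change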

> Anyway, I'd appreciate any opinions, suggestions, ideas or pointers to good
> information that would be relevant to my situation.  I've quickly read
> through the mailing list archives as well as the HOWTOs for software RAID,

not much activity in the raid mailing list lately ... problems all solved
;-)

have fun
alvin


