
RE: Requesting advice on s/w RAID-1 install



If you lose the first disk, the machine's BIOS won't pick up the second disk,
so the machine will not be bootable anyway.

It will probably stay up provided the disks are on separate channels. If they are on the same channel, there is a good chance that when one disk dies it will lock up the IDE channel it is on, and hence take out the second disk as well.
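If you do stick with software RAID, the second disk can at least be made bootable too, by telling the boot loader to write a boot sector to every disk in the array. A lilo.conf sketch (untested; the device name /dev/md0 and kernel path are assumptions):

```
# /etc/lilo.conf fragment -- sketch only, device names assumed
boot=/dev/md0              # install to the RAID device
raid-extra-boot=mbr-only   # also write an MBR to each disk in the array
root=/dev/md0
image=/vmlinuz
    label=Linux
    read-only
```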

My advice: go with a hardware RAID PCI plug-in card for the OS.

regards

S

-----Original Message-----
From: Jason Bleazard [mailto:jason.debian@bleazard.net]
Sent: Wednesday, 28 July 2004 2:45 p.m.
To: debian-user@lists.debian.org
Subject: Requesting advice on s/w RAID-1 install


Sorry for asking this when I know there's a lot of documentation out there
already, but I'm getting into information overload and I'd appreciate any
suggestions or opinions on the "best" or "easiest" or "most efficient" or
"most reliable" way to do what I want.  I write these things in quotes
because I'm expecting opinions to vary, which is fine.

Short and to the point: the goal is to get software RAID-1 going on a new
installation to be used as a home server for files and e-mail.  I need to
decide how I'm going to go about doing this.  Should I install the system
first, then get the RAID going?  Or would it be easier to do RAID from the
start?  I want *everything* mirrored (including /boot and the root).  The
idea is that if the primary drive crashes, I can take it out, smash it with
a hammer, re-plug the cable and boot from the mirror drive as if nothing
ever happened (yeah, I know the RAID will be degraded until I can replace
the smashed drive, this is fine).  Also, should I look at LVM or EVMS, or
are they overkill?
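From what I've read so far, the basic per-partition operation looks something like this (untested sketch; I'm assuming the new drives will show up as /dev/hde and /dev/hdg on the add-on controller, each with a partition of type 0xfd made with fdisk first):

```
# Sketch only -- device names are assumptions, commands not tested here
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
mkfs.ext3 /dev/md0
# and later, after replacing a failed drive:
mdadm /dev/md0 --add /dev/hde1
```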

More info:

My wife and I had a server (running Sarge) with a 60G and a 40G drive in it
that we were using for file storage and e-mail.  The drives were getting a
bit full, and I had been planning on adding another drive to the mix.
However, last week the 60G drive crashed.  Our most recent backup is about 5
weeks old (yes, a better backup plan is also definitely on my to-do list...
I'll figure that out as soon as the system is working again).  My wife used
to work in the mainframe world, and her comment was "I told you we should
have had a tandem system."  D'oh!

I tend to agree, so I've scrapped my idea about adding more drives, and now
I'm planning on just getting two 200G drives and hooking them up in a RAID-1
array to provide mirroring.  The 40 will be coming out once I copy the files
off of it.  The drives will be parallel ATA, as I'm on a budget and don't
need the performance of S-ATA or SCSI.

There is a bit of a drawback in that the motherboard I'm using is a bit old.
It works and it's free, but the on-board controller won't support any drive
larger than 80G.  I'm going to get a Promise controller for the new drives.
I figure I'll have one drive on each bus of the add-on controller, and leave
the CD burner on the motherboard controller.

Even though we only have two users (both trusted), I like to have things
like /boot, /var, /tmp, /home, /usr and so forth split off in their own
partitions.  I always figured that if I needed to re-size any partitions, I
would use Partition Magic.  It works great for me, and downtime is not a
huge problem.  However, I'm guessing that PM isn't really a good idea on a
RAID (anyone know for sure?).  For this reason, I've been looking at either
LVM on RAID, or just using EVMS for everything.  EVMS in particular sounds
pretty cool from what I've read, but it might be overkill for me.  There's
also the question of / and /boot... but I have seen documentation that talks
about options for dealing with those on either RAID or EVMS, so I know it
can be done.

I do have an old 8G drive that I could put in there temporarily if needed.
I was thinking about installing to that, then creating a RAID on the 200G
drives, then copying everything over to them.  I thought it might be easier
to set up the RAID on clean drives than it would be if the drives were in
use.
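The copy step I have in mind would be roughly this, repeated per filesystem (untested sketch; mount points and device names assumed):

```
# Sketch of the migration -- untested, /dev/md0 and /mnt/newroot assumed
mount /dev/md0 /mnt/newroot
cp -ax / /mnt/newroot       # copy the running system, one filesystem at a time
# ...repeat for /boot, /var, /home, etc., then fix /etc/fstab and the
# boot loader on the new array before pulling the 8G drive.
```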

I've also noticed that the beta debian-installer for Sarge supports RAID and
LVM at install time.  I've seen lots of bug reports on it though, so I'm a
little hesitant to try it.  I suppose I don't really have anything to lose
if it doesn't work, as I can always switch to a different plan.  Also, I'd
kind of prefer EVMS to LVM, and I don't see anything about EVMS in the
installer.  At this point I think EVMS has some nice features that make it
look like the better choice, but as long as things work it's not a huge
issue.

Alternatively, I've read
http://www.linuxmafia.com/faq/Debian/installers.html and noticed that a
number of those support RAID and/or LVM installations.  Perhaps someone can
recommend a good installer for what I need?  I have a couple of workstations
still up and running with CD or DVD burners, and a nice fast DSL connection,
so I'm fine with net installs, downloading ISO images, or whatever.

About the bootloader, I've always used LILO in the past, as it worked for me
and I never had any good reason to switch to Grub.  However, it looks like
Grub understands /boot being in a RAID better than LILO does, so it might be
time to go with Grub.  I know it's possible to do this with LILO, but if
Grub is easier to work with that makes it good in my book.
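From the documentation I've seen, putting Grub (legacy) on both disks looks roughly like this: the `device` trick makes the second disk install as if it were the first, so it still boots when the first disk is removed (sketch only; the (hd0)/(hd1) mapping and /dev/hdg are assumptions):

```
# grub shell sketch -- untested; assumes /boot is on the first partition
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/hdg   # pretend the second disk is (hd0), so it
grub> root (hd0,0)            # boots correctly if the first disk is gone
grub> setup (hd0)
```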

Anyway, I'd appreciate any opinions, suggestions, ideas or pointers to good
information that would be relevant to my situation.  I've quickly read
through the mailing list archives as well as the HOWTOs for software RAID,
LVM and EVMS, but like I mentioned I'm getting a bit of information overload
and I'm not sure which way to go.  Just ask if you'd like any further
clarification on anything.

Thanks in advance,
Jason



-- 
To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org 
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
