
Re: LVM root?



On 12.10.2006 at 16:09:26, dtutty@porchlight.ca wrote:
Thanks Len,

comments embedded below.

On Thu, Oct 12, 2006 at 09:26:53AM -0400, Lennart Sorensen wrote:
> On Wed, Oct 11, 2006 at 05:20:58PM -0400, dtutty@porchlight.ca wrote:
> > The board itself has hardware SATA raid available.  If I go for raid,
> > then I'll ask here for the advantages/disadvantages.
>
> Unless you have a high end server board, you do not have onboard
> hardware raid.  You have onboard fake raid (which is software raid done
> in the bios and the windows driver).  Linux's software raid is faster,

The board is an Asus M2N-SLI Deluxe (AM2); it says it has hardware raid
(RAID 0, RAID 1, RAID 0+1, RAID 5, and JBOD) via the onboard NVIDIA
MediaShield RAID controller.  This sounds like hardware raid to me and
is configured via the BIOS menus.

*Real* hardware raid doesn't need an OS layer / driver to work.
This kind of raid relies on the BIOS *and* on a Windows driver.
It is more a raid feature enabled in the BIOS and managed by the Windows driver.
Linux may or may not support this BIOS feature, depending on the chipset.

Most of the time, you have to disable the raid in the BIOS and use pure software raid.
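
For example, setting up a two-disk software RAID1 with mdadm could look something like this (an untested sketch; the device names /dev/sda1, /dev/sdb1 and /dev/md0 are assumptions, adjust them to your own layout):

    # create a RAID1 array from two partitions (assumed names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sda1 /dev/sdb1
    # watch the initial sync progress
    cat /proc/mdstat
    # then put a filesystem (or an LVM physical volume) on /dev/md0
    mkfs.ext3 /dev/md0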


> > More to the point for me, though, is where can I get current
howtos or
> > guides on fixing problems when things are in raid or LVM?  Its a
whole
> > new world for me and the LDP HOWTOs are too out of date, and
> > debian-reference doesn't cover it.
>
> The installer supports setting it all up.  It isn't very hard...

Can you give me either a URL or a thumbnail sketch of how to deal with a disk failure if I set it up as you suggest?

In the case of a raid1, if a disk fails you get a message from the system. If you have spare disks (configured and installed as such in the raid), the raid is rebuilt onto a spare disk; you will notice disk activity related to this mirroring. Then you can wait until you can shut down your system and remove/replace the defective disk, restart the system, and use mdadm to reinstall the disk in the array. If you have no spare, the raid runs degraded: you are running on the good disk without any redundancy, and, as before, you have to shut down your system and remove/replace the defective disk.
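
As a rough sketch, replacing a failed member with mdadm might look like this (the names /dev/md0 and /dev/sdb1 are assumptions):

    # mark the dying member as failed and pull it from the array
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1
    # after physically replacing the disk and partitioning it the
    # same way, add the new partition back; the mirror then resyncs
    mdadm /dev/md0 --add /dev/sdb1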

In the case of a raid0, you have no redundancy, and the filesystem relying on this raid will die.

Remarks:
- SATA is said to be hotpluggable, but most motherboards don't support hotplugging of disks on their SATA controller. This is why you have to shut down your system.
- There are disk failures and there are controller failures. If both your disks are on the same controller, your system will crash.
- The swap has to be on the raid as well (or on an LVM logical volume built on top of the raid); otherwise, you will probably crash your system on a failure.

You can download and install mdadm; the doc files in /usr/share/doc/mdadm contain valuable information.
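
For day-to-day monitoring, something like this shows array health (the array name /dev/md0 is an assumption):

    # kernel's view of all md arrays and any resync in progress
    cat /proc/mdstat
    # detailed state of one array: member disks, failed/spare counts
    mdadm --detail /dev/md0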



You suggest ext3 for the / system.  Why would I not just use JFS for
everything?
It is often easier to repair, or at least get access to, an ext3 filesystem (which is ext2 plus a journal) from a system booted off a live CD, just in case of a weird problem on the filesystem.
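
For instance, a minimal sketch from a live CD shell (the device name /dev/sda1 is an assumption for your root partition):

    # force a full check/repair of the ext3 filesystem
    e2fsck -f /dev/sda1
    # once it is clean, mount it to inspect or fix files
    mount -t ext3 /dev/sda1 /mnt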

Regards

Jean-Luc


