
Re: "big" machines running Debian?



On Fri, Feb 27, 2009 at 07:51:29AM +1100, Alex Samad wrote:
> my rule of thumb is to always have at least 2 partitions on the first 2
> drives (3 if I have them), for a raid1 /boot and a raid1 /. the rest of
> the space is put into a raid device then into lvm.  That gets rid of the
> interesting tweaks.
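
A minimal sketch of that layout, assuming two disks with partition 1
for /boot, 2 for /, and 3 for the LVM pool (sda/sdb and vg0 are just
example names):

  # raid1 /boot and raid1 /, the rest becomes an LVM pool
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  pvcreate /dev/md2
  vgcreate vg0 /dev/md2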

Even with software raid1, setting up reliable boot from either drive
if one fails can be interesting, but it has gotten a lot better than it
used to be.
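These days it mostly comes down to installing the boot loader onto both
drives so that either one can boot on its own.  With grub that is
roughly (untested, device names assumed):

  grub-install /dev/sda
  grub-install /dev/sdb
  update-grub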

> is that monitoring of the raid drives or the actual drives underneath, I
> like having smartctl to give me access to the actual drive health

Well, monitoring of raid health would be the minimum.  Getting more
detail would be nice.
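Something like this covers both levels (device names are examples):

  # raid array health
  cat /proc/mdstat
  mdadm --detail /dev/md0
  # health of the actual drives underneath
  smartctl -H /dev/sda
  smartctl -H /dev/sdb

mdadm --monitor --scan --daemonise can also mail you when an array
degrades, and smartd can do the same for the drives.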

> > The biggest advantage to software raid is that it is hardware independant.
> > You can move all the disks to another controller type on another system,
> > and linux's software raid will still work.  Hardware raid setups are
> > often very specific to one controller type so recovery from a controller
> > failure can be tricky if you don't have access to spares.
> 
> I have gone through a few cycles of changing the underlying drive
> sizes, i.e. a 3 disk raid5 made up of 3 x 500 GB replaced in line with
> 3 x 1 TB.  Pop out 1 disk, replace it with a 1 TB one, and once it has
> settled you can do an online expansion.  Not sure if you can do that
> on a HW raid.
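
In mdadm terms that cycle looks roughly like this (device and partition
names assumed):

  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
  # swap the 500 GB disk for a 1 TB one, partition it, then:
  mdadm /dev/md0 --add /dev/sda1
  # wait for the resync, repeat for the remaining disks, then:
  mdadm --grow /dev/md0 --size=max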

Some hardware raids can do lots of things.  Some can do no resizing at
all.  I have certainly used hardware raid cards where adding a disk to a
raid5 and expanding it was no problem.  It just did it in the background,
and when done you could reboot the system, the disk was suddenly bigger,
and software could do whatever it wanted to resize to the new larger
disk.  It also dealt with moving to larger disks in the raid by
rebuilding one drive at a time, and then when all were replaced you
could increase the raid to the size of the new disks.
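
The software side of that resize is the same either way; with LVM and
ext3 on top it is roughly (volume names are made up):

  pvresize /dev/md0               # or the hardware raid's disk device
  lvextend -L +400G /dev/vg0/data
  resize2fs /dev/vg0/data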

-- 
Len Sorensen

