
Re: [SOLVED] Re: Partitioning And Formatting A Large Disk (2086.09GB)



Just my 2 cents:

- I have four 300GB disks in a RAID5 array on a hardware RAID
controller. This gives me 900GB of space, and it's secure: if a drive
fails, the array is degraded but my data is safe. Just replace the
drive, let the array rebuild, and you're done.

- The second layer is LVM. LVM is not there to protect your data;
ignore anyone who suggests otherwise. LVM is there to give you
flexibility.

- I use this 900GB array as one big pool. I can create LVs on it,
resize them, etc. Also, what I find very nice about LVM: I can give
them logical names, which read a lot easier than /dev/sda7. (See the
sketch after this list.)

- If you happen to have a *lot* of disks in multiple hardware RAID5
arrays, you can use LVM to shuffle LVs around. You can move an LV
from one array to another, as long as both arrays are part of your VG.
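
To make this concrete, a rough sketch (the device and volume names are
invented; adjust for your own setup):

        # Pool the 900GB array into a volume group and create named LVs:
        pvcreate /dev/sda
        vgcreate datavg /dev/sda
        lvcreate -n home -L 200G datavg
        lvcreate -n mail -L 100G datavg

        # With a second array added to the same VG, shuffle an LV from
        # one array to the other, even while it stays mounted:
        pvcreate /dev/sdb
        vgextend datavg /dev/sdb
        pvmove -n home /dev/sda /dev/sdb

And /dev/datavg/home is a lot friendlier in /etc/fstab than /dev/sda7.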

Anyway, do some reading on LVM in the LVM-Howto, but don't confuse LVM
with RAID. My advice: keep your RAID array in place and put LVM on
top. Do some thinking about the layout of your volume groups and
logical volumes.

Oh, and for the sake of it: don't forget to create your ext3
filesystems (if you use ext3) with the -O resize_inode option, or you
will not be able to grow your fs... See the other thread I started on
this issue.
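
On the command line that looks something like this (a sketch;
double-check mke2fs(8) for your version):

        # -j makes it ext3; -O resize_inode reserves room in the fs
        # so it can be grown later with resize2fs:
        mke2fs -j -O resize_inode /dev/datavg/home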

Pim

On 2/1/07, Douglas Allan Tutty <dtutty@porchlight.ca> wrote:
On Thu, Feb 01, 2007 at 01:06:54PM -0500, Michael S. Peek wrote:
> So the consensus seems to be that LVM is the way to go.
>
> So what's the cutoff between building arrays of varying size versus
> grouping them under LVM?
>
> I.e. Right now I've got two large arrays.  Should I maybe break that
> down into just a bunch of disks and then use LVM to group them together
> (not use hardware RAID at all), or should I break the disks into
> bundles of three, make as many small raid5 arrays as I can, and then
> group them under LVM?
>
> What's the general consensus on actually using LVM with hardware RAIDs?

Separate the two concepts:
        raid protects you from drive failure

        LVM allows you to move partitions from one block device (drive,
        raid array, whatever) to another and to resize those partitions.

So do both.
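
As a concrete sketch of what doing both buys you (the names here are
made up, and for ext3 the fs must have been created so it can be
grown, as noted elsewhere in this thread):

        # Grow a logical volume by 20GB, then grow the fs into it:
        lvextend -L +20G /dev/datavg/home
        resize2fs /dev/datavg/home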

If you're using hardware raid and one disk starts failing, I would think
you'd want one port free on your hardware controller where you can add
a new disk (or leave a hot spare).  That lets you swap out a failing
(but not totally failed) drive without degrading the array by pulling
the failing drive first.

Since I've never had a hardware raid card, I could be wrong on this.

If I recall correctly, drive-space efficiency goes up with the number
of drives in a raid5 array, since you always lose one drive's worth of
space to parity.  If you broke it up into 3-drive arrays, you'd only
have about 67% efficiency vs whatever you have now.
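
For example, an n-drive raid5 gives you (n-1)/n of the raw space: six
300GB drives in one array yield 1500GB usable (~83%), while the same
six drives split into two 3-drive arrays yield only 1200GB (~67%).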

I'm assuming that your two large arrays aren't one huge array because of
using two hardware raid controllers.

Personally, I'd use both arrays as PVs for LVM.  Then the LVs can be
made striped for better performance (since the raid protects you from
drive failures).
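
For instance (a sketch, assuming the two arrays show up as /dev/sda
and /dev/sdb and are both PVs in the same VG):

        # Stripe an LV across both arrays: 2 stripes, 64KB stripe size:
        lvcreate -n scratch -L 500G -i 2 -I 64 datavg /dev/sda /dev/sdb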

FYI, remember that /boot can't be on lvm.  I have mine on a raid1.
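
If you do that with software raid, it's roughly (made-up partition
names):

        # Mirror two small partitions with md for /boot:
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
                /dev/sda1 /dev/sdb1
        mke2fs -j /dev/md0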

I have no idea how to set this up after install if you're wanting / on
LVM.  I did mine during Etch install.

Since you're going to all this trouble, choose a good FS like XFS or
JFS.

Enjoy your terror-bites of storage :-)

Doug.

