
Re: Debian+LVM+RAID



On Thu, Jul 09, 2009 at 03:33:01PM +1000, Alex Samad wrote:

> > Creating a swap partition on a software RAID device isn't ideal.
> 
> what happens if there is something important swapped out and that drive
> dies ?  I could understand not wanting to put it on lvm and then raid1.

When the swap partition quits working, the system might stop
working. So the question is what's more important: increased
reliability or faster swap.
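
For what it's worth, putting swap on an md device is mechanically the
same as putting it on a plain partition. A minimal sketch, with device
names that are only examples:

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkswap /dev/md1
    swapon /dev/md1
    # and in /etc/fstab:
    # /dev/md1  none  swap  sw  0  0

The swap code only sees a block device; the trade-off is the mirror's
write overhead versus surviving a disk failure with the system still up.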

Personally, I'd feel awkward about having swap partitions on
a software RAID, but when setting up something like a server to
provide important services for a company, I would insist on using a
good hardware RAID controller and likely put the swap partition onto
the RAID.

> > BTW, wouldn't you rather create a partitionable RAID from the whole
> > disks and then partition that? If not, why not? (Letting aside where
> > to put the swap partition ...)
> 
> I have had to rescue machines, and having a simple boot + / makes life
> a lot simpler - why make life hard?

Aren't you making it hard by having to partition all the disks
involved and then create RAIDs from the partitions? It's more work
when setting it up, and it can turn into a disaster when something
goes wrong with RAID discovery.
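
Just to make the overhead concrete, the per-partition approach ends up
looking roughly like this (a sketch only; the device names and layout
are assumed):

    # sda and sdb partitioned identically, then one array per partition pair
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # swap
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /

That's three md devices, every one of which has to be discovered
correctly at boot.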

For example, when I put the disks containing my RAID-1 into another
computer and installed the RAID tools, I was asked questions about
starting the RAID and answered that I wanted the arrays started at
boot. The disks were partitioned, and the RAID arrays had been created
from the partitions.

The result was that the RAID was started immediately (which I consider
a bug) rather than at boot, before I had any chance to check and
configure the RAID arrays so that they would be detected as they
should be. It started resyncing the md devices in a weird way. I was
simply lucky that it didn't go wrong; if it had, I could have lost all
my data.
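
What I would try to do the next time I move disks like that is to look
at the arrays by hand before anything is allowed to start them. A
rough sketch, assuming the Debian mdadm package and example device
names:

    mdadm --examine /dev/sdb1                        # inspect the superblock first
    mdadm --assemble --scan --no-degraded            # assemble only complete arrays
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record them once they look right
    update-initramfs -u

Whether the arrays are started automatically at boot can be changed
afterwards with dpkg-reconfigure mdadm.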

When I got new disks, I created the RAID array from the whole disks.
In this case I didn't partition the RAID array, but even if I had, the
number of md devices would have been reduced from the three I had
before to only one. The fewer md devices you have, the less likely it
seems that something can go wrong with discovering them, simply
because there aren't as many.
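
Creating the array from the whole disks is just (again only a sketch,
with assumed disk names):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # with older kernels/mdadm a partitionable array may need something like
    # mdadm --create /dev/md_d0 --auto=part --level=1 --raid-devices=2 /dev/sda /dev/sdb

One device to create, one device to discover.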

To me, it seems easier to have only one md device and to partition
that, if needed, than to do it the other way round (see the sketch
below). However, I went the easiest way in that I have another disk
with everything on it but /home. If that disk fails, nothing is lost,
and if there are problems, a single disk is the simplest to deal with.
I might have done it otherwise, but it was impossible to install onto
SATA disks because the modules required to access them were not
available to the installer. Maybe that has been fixed by now; if it
hasn't, it really should be.
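
Partitioning the single md device works like partitioning any other
block device. A sketch, assuming a partitionable array and example
filesystem choices:

    fdisk /dev/md0          # or parted; create partitions as on a plain disk
    mkfs.ext3 /dev/md0p1    # the kernel exposes the partitions as md0p1, md0p2, ...
    mkswap /dev/md0p2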


In what way did having many md devices make it easier for you to
perform rescue operations? Maybe there are advantages I'm not thinking
of but which would be good to know. I want to get rid of that IDE disk
and might have a chance to, so I'm going to have to decide whether I
want to install on a RAID. If it's better to partition the disks
rather than the RAID, I should do it that way.

