
Re: Debian+LVM+RAID



On Thu, Jul 09, 2009 at 02:19:44AM -0600, lee wrote:
> On Thu, Jul 09, 2009 at 03:33:01PM +1000, Alex Samad wrote:
> 
> > > Creating a swap partition on a software RAID device isn't ideal.
> > 
> > what happens if there is something important swapped out and that drive
> > dies ?  I could understand not wanting to put it on lvm and then raid1.
> 
> When the swap partition quits working, the system might stop
> working. So the question is what's more important, increased
> reliability or faster swap.

I am not sure that RAID 1 is noticeably slower than RAID 0 (or JBOD);
RAID 5, or any other parity RAID, maybe.
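
If you do want swap on software RAID 1, it is only a few commands. A
minimal sketch, assuming /dev/sda2 and /dev/sdb2 are spare partitions
and md1 is unused (the names are only examples):

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md1                  # put a swap signature on the mirror
swapon /dev/md1                  # enable it right away

# and in /etc/fstab so it comes back after a reboot:
# /dev/md1  none  swap  sw  0  0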


> 
> Personally, I'd feel awkward about having swap partitions on
> a software RAID, but when setting up something like a server to
> provide important services for a company, I would insist on using a
> good hardware RAID controller and likely put the swap partition onto
> the RAID.

Depends on how much you are going to spend on the controller and whether
or not you are going to have a battery-backed cache - if not, you might
as well go software RAID (only talking RAID 1 here).

If you do spend the money and have multiple machines, then you might as
well go for a SAN.

I would suggest that for most commercial situations a software RAID
setup of two RAID 1 disks is a far better solution than a proprietary
hardware RAID controller.
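
Just to sketch how little is involved in looking after an md mirror
(device names here are only examples):

cat /proc/mdstat                 # quick health overview
mdadm --detail /dev/md0          # per-array state

# replacing a failed member, assuming sdb1 died and sdc1 is its replacement:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1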

> 
> > > BTW, wouldn't you rather create a partitionable RAID from the whole
> > > disks and then partition that? If not, why not? (Letting aside where
> > > to put the swap partition ...)
> > 
> > I have had to rescue machines, and having a simple /boot + / makes
> > life a lot simpler - why make life hard?
> 
> Aren't you making it hard by having to partition all the disks
> involved and then creating RAIDs from the partitions? It's more work
> when setting it up, and it can turn into a disaster when something
> goes wrong with discovering the RAID.



sfdisk -d <master device> > raidlayout.out     # dump the partition table

sfdisk <new raid device> < raidlayout.out      # replay it onto the new disk

you could wrap it inside a for loop if you want 
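
For example, something like this, assuming raidlayout.out was dumped
from the master disk and sdb/sdc are the blank disks to copy it onto:

for d in sdb sdc; do
    sfdisk /dev/$d < raidlayout.out
done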

> 
> For example, when I put the disks containing my RAID-1 into another
> computer and installed the raid tools, I was asked questions about
> starting the raid and answered that I wanted to start the RAID arrays
> when booting. I had the disks partitioned and had created RAID arrays
> from the partitions.
> 
> The result was that the RAID was started immediately (which I consider
> a bug) instead of when booting, before I had any chance to check and

But you said above that you gave the okay to start all RAID devices, so
why complain when it does?
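
If you want discovery to be predictable on a Debian box, record the
arrays explicitly instead of relying on autodetection. A rough sketch
(check the ARRAY lines before committing them):

mdadm --examine --scan                         # prints ARRAY lines with the UUIDs md found
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u                            # so the initramfs assembles the same arrays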

> to configure the RAID arrays correctly so that they would be detected
> as they should. It started resyncing the md devices in a weird way. I
> was only lucky that it didn't go wrong. If it had gone wrong, I could
> have lost all my data.

And this is why we have backups.

> 
> Now when I got new disks, I created the RAID arrays from the whole
> disks. In this case, I didn't partition the RAID array, but even if I
> did, the number of md devices was reduced from the three I had before
> to only one. The lower the number of md devices you have, the less
> likely it seems that something can go wrong with discovering them,
> simply because there aren't so many.

I don't think you are going to have overflow problems with the number
of RAID devices.
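
For what it is worth, if you do want to mirror whole disks and carve
the array up afterwards, a partitionable array is a one-liner. A
minimal sketch, with hypothetical device names:

mdadm --create /dev/md_d0 --auto=part --level=1 --raid-devices=2 /dev/sda /dev/sdb
fdisk /dev/md_d0                               # then partition it like any other disk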

> 
> To me, it seems easier to only have one md device and to partition
> that, if needed, than doing it the other way round. However, I went
> the easiest way in that I have another disk with everything on it but
> /home. If that disk fails, nothing is lost, and if there are problems,

Well, except for /etc/.

> a single disk is the simplest to deal with. --- I might have done it
> otherwise, but it has been impossible to install on SATA disks because
> the modules required to access SATA disks are not available to the

If you have a look at the latest installer, I think you will find it has
all the necessary modules now.

> installer. Maybe that has been fixed by now; if it hasn't, it really
> should be fixed.

I was suggesting putting / on its own partition as well as /boot. /boot
I do out of habit from a long time ago; with busybox you can access the
system even if the other partition is corrupted and still try to
salvage stuff.


The suggestions I have made are for reducing risk; the gains made by
having a separate root and boot are, in my mind, worth it.
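
Concretely, the sort of layout I mean (device names and sizes are only
examples):

sda1 + sdb1 -> md0 -> /boot     small, can be mounted read-only
sda2 + sdb2 -> md1 -> /         the base system stays usable on its own
sda3 + sdb3 -> md2 -> swap
sda5 + sdb5 -> md3 -> LVM PV    /home, /var, ... as logical volumes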

In a production environment you have change management procedures or
at least some documentation.

> 
> 
> In which way having many md devices made it easier for you to perform
> rescue operations? Maybe there are advantages I'm not thinking of but
> which would be good to know. I want to get rid of that IDE disk and
> might have a chance to, so I'm going to have to decide if I want to
> install on a RAID. If it's better to partition the disks rather than
> the RAID, I should do it that way.

You have missed the point: the advantage is having a separate / and a
separate /boot to protect them; you can even mount /boot as ro.
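
A minimal sketch of what that looks like, with md0 standing in for
whatever /boot lives on - the fstab line, plus remounting rw only when
a kernel upgrade needs to write there:

/dev/md0   /boot   ext3   ro,nodev,noexec   0   2

mount -o remount,rw /boot        # before installing a new kernel
mount -o remount,ro /boot        # afterwards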

All the partitions are raided, again for protection.  If you are worried
about a little complexity, then managing HA and/or production systems
might not be up your alley.

> 
> 

Having said all that, there is no golden rule; each situation needs to
be addressed individually, weighing up the risks compared to the gains.

For me, as a creature of habit, I usually follow the layout I specified
before, because it has worked, it has saved me time, and it is very
flexible; your mileage may vary.


Alex

-- 
"Kosovians can move back in."

	- George W. Bush
04/09/1999
CNN interview
