
Re: Debian+LVM+RAID



On Thu, Jul 09, 2009 at 08:43:13PM +1000, Alex Samad wrote:
> On Thu, Jul 09, 2009 at 02:19:44AM -0600, lee wrote:
> > On Thu, Jul 09, 2009 at 03:33:01PM +1000, Alex Samad wrote:
> > 
> > > > Creating a swap partition on a software RAID device isn't ideal.
> > > 
> > > what happens if there is something important swapped out and that drive
> > > dies ?  I could understand not wanting to put it on lvm and then raid1.
> > 
> > When the swap partition quits working, the system might stop
> > working. So the question is what's more important, increased
> > reliability or faster swap.
> 
> I am not sure that raid1 is noticeably slower than raid0 (or jbod);
> raid 5 maybe, or any other parity raid

Maybe you're right ... When you have all partitions on a RAID to
improve reliability, it doesn't make sense to make an exception for
the swap partitions. And RAM isn't as much of an issue as it used to
be: prices have come down so much that it's affordable to have enough
RAM that swapping rarely occurs.
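
For what it's worth, putting swap on a RAID-1 md device is only a few
commands, something like this (the device names are just examples,
not my actual setup):

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkswap /dev/md1
  swapon /dev/md1

plus an entry like "/dev/md1 none swap sw 0 0" in /etc/fstab so it
comes back after a reboot.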

> > Personally, I'd feel awkward about having swap partitions on
> > a software RAID, but when setting up something like a server to
> > provide important services for a company, I would insist on using a
> > good hardware RAID controller and likely put the swap partition onto
> > the RAID.
> 
> Depends on how much you are going to spend on the controller and whether
> or not you are going to have battery-backed cache - if not, you might
> as well go software raid (only talking raid1 here).
> 
> If you do spend the money and have multiple machines then you might as
> well go for a san......

Maybe. I need to learn more about SANs; I'll have to read up on them
and find out what they can do.

> I would suggest for most commercial situations a software raid setup of
> 2 raid-1 disks is a far better solution than a proprietary hardware raid
> controller

Hm, interesting. What makes you think so? Getting a good balance
between reliability and cost?

> > > I have had to rescue machines, and having a simple boot + / makes
> > > life a lot simpler - why make life hard?
> > 
> > Aren't you making it hard by having to partition all the disks
> > involved and then creating RAIDs from the partitions? It's more work
> > when setting it up, and it can turn into a disaster when something
> > goes wrong with discovering the RAID.
> 
> sfdisk -d <master device> > raidlayout.out
> 
> sfdisk <new raid device> < raidlayout.out
> 
> you could wrap it inside a for loop if you want 

I wouldn't do that. I don't have that much trust in software and
hardware.
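
Just so we're talking about the same thing, the loop version would be
something along these lines (device names made up):

  for d in /dev/sdb /dev/sdc /dev/sdd; do
      sfdisk "$d" < raidlayout.out
  done

It's not that I doubt it works; I just don't like writing a partition
table to a disk without looking at it first.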

> > For example, when I put the disks containing my RAID-1 into another
> > computer and installed the raid tools, I was asked questions about
> > starting the raid and answered that I wanted to start the RAID arrays
> > when booting. I had the disks partitioned and had created RAID arrays
> > from the partitions.
> > 
> > The result was that the RAID was started immediately (which I consider
> > a bug) instead of when booting, before I had any chance to check and
> 
> but you said above you gave the okay to start all raid devices, so why
> complain when it does it ?

I told it to start the RAID *when booting*, not at any time before,
and I didn't boot. I wanted to install the raid tools and then make
sure that the configuration was ok, and only after that would I have
started the md devices. But they were started immediately, before I
could do anything; they didn't wait for a reboot.

What would you expect when you're being asked "Should X be done when
booting?"? When I say "Yes", I expect that X will be done when
booting, not that it will be done immediately.

You could say that in that case, I trusted the software too
much. Never do that ...
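
What I had in mind was doing it by hand, roughly like this (from
memory, so don't take the exact commands as gospel):

  mdadm --examine --scan     # see which arrays the disks claim to belong to
  # compare that against /etc/mdadm/mdadm.conf and fix the config if needed
  mdadm --assemble --scan    # only then assemble the arrays

instead of having the arrays assembled and resynced the moment the
package is installed.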

> > to configure the RAID arrays correctly so that they would be detected
> > as they should. It started resyncing the md devices in a weird way. I
> > was only lucky that it didn't go wrong. If it had gone wrong, I could
> > have lost all my data.
> 
> And this is why we have backups

I didn't have a backup. I used to have tape drives, but with the
amount of data to back up steadily increasing with disk sizes, you get
to the point where tape gets too expensive and there isn't any
affordable, good solution anymore. You can't buy tape drives and tapes
that fast ... I still don't have a real backup solution. I'm making
backups on disks now, but that isn't a good solution, only a little
better than no backup.
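
In practice that just means rsync to an external disk, something like
(paths made up):

  rsync -aHx --delete /home/ /mnt/backupdisk/home/

which at least gives me a copy, but it isn't versioned and the backup
disk sits right next to the machine.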

> > Now when I got new disks, I created the RAID arrays from the whole
> > disks. In this case, I didn't partition the RAID array, but even if I
> > did, the number of md devices was reduced from the three I had before
> > to only one. The lower the number of md devices you have, the less
> > likely it seems that something can go wrong with discovering them,
> > simply because there aren't so many.
> 
> I don't think you are going to have overflow problems with the number
> of raid devices

I'm not afraid of that. But the point remains that having fewer md
devices reduces the chances that something goes wrong with their
discovery, and that reducing the complexity of a setup makes it easier
to handle (unless you reduce the complexity too much).

> > To me, it seems easier to only have one md device and to partition
> > that, if needed, than doing it the other way round. However, I went
> > the easiest way in that I have another disk with everything on it but
> > /home. If that disk fails, nothing is lost, and if there are problems,
> 
> well except for /etc/

There's nothing in /etc that isn't replaceable. It's nice not to lose
it, but it doesn't really matter. If I lost my data of the last 15
years, I would have a few problems, not unsolvable ones, I guess, but
it would be utterly inconvenient. Besides that, a lot of that data is
irreplaceable. That's what I call a loss. Considering that, who cares
about /etc?

> > a single disk is the simplest to deal with. --- I might have done it
> > otherwise, but it has been impossible to install on SATA disks because
> > the modules required to access SATA disks are not available to the
> 
> if you have a look at the latest installer, I think you will find it
> has all the necessary modules now

That would be great :)

> You have missed the point: the advantage is having a separate / and a
> separate /boot to protect them; you can even mount /boot as ro.

Oh, ok.
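
If I understand you right, the read-only part is just an fstab entry
along the lines of (device name made up):

  /dev/md0  /boot  ext3  ro,nosuid,nodev  0  2

plus a "mount -o remount,rw /boot" before kernel upgrades.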

What I was wondering about is: what is the advantage of partitioning
the disks and creating RAIDs from the partitions vs. creating a RAID
from the whole disks and partitioning the RAID?
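
To spell out the two variants I mean (device names are only examples):

  # variant 1: partition first, one md device per pair of partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

  # variant 2: one md device from the whole disks, partition (or LVM) on top
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

With variant 2 there's only one array to discover, which is what I
meant by keeping things simple.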

> All the partitions are raided again for protection.  If you are
> worried about a little complexity, then managing HA and/or production
> systems might not be up your alley.

Well, I've done it for quite a while. But I found it very helpful to
make things no more complex than needed. The simpler something is, the
lower the chances are that problems come up, and if problems do come
up, the less complex things are, the easier they are to deal with.

If you're making things more complex than they have to be, there has
to be a very good reason to do so. I would like to know that reason,
since I could learn something from you that I'd never have thought of.

> Having said all that, there is no golden rule; each situation needs to
> be addressed individually, weighing up the risks compared to the gains

indeed

> For me as a creature of habit, I usually follow the layout I
> specified before - because it has worked, it has saved me time, and
> it is very flexible; your mileage might vary.

Is it more flexible when you make partitions and then RAIDs from them
than it is to make a RAID from the disks and partition that?

I also tend to stick with my habits, but that isn't always a good
thing. One needs to keep learning, and that can mean changing a habit
because there's a better way of doing something.

