
Re: Debian+LVM+RAID



[snip]

> > > > what happens if there is something important swapped out and that drive
> > > > dies ?  I could understand not wanting to put it on lvm and then raid1.
> > > 
> > > When the swap partition quits working, the system might stop
> > > working. So the question is what's more important, increased
> > > reliability or faster swap.
> > 
> > I am not sure that raid1 is noticeably slower than raid0 (or jbod), raid

i.e. the old rule of 2 * physical RAM size = swap size. I have a
machine with 256G of RAM; I don't need 512G of swap space.

> > 5 maybe or any other parity raid

[snip]

> > 
> > Depends on how much you are going to spend on the controller and whether
> > or not you are going to have battery-backed cache - if you're not, you
> > might as well go software raid (only talking raid1 here).
> > 
> > If you do spend the money and have multiple machines, then you might as
> > well go for a SAN...
> 
> Maybe --- I need to learn more about SAN. I'll have to read up on it
> and find out what it can do.

The downside of hardware raid is being stuck to dedicated hardware and
firmware. Look at the large data centre builds: all moving towards white-box
1RU boxes - generic hardware, keeping it simple. Can you take an array from
your SmartRAID (HP raid controller) and attach it to a Dell PERC controller?
If you use software raid, then yes you can.
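
For instance (a minimal sketch - device and array names are examples), a
software raid array built with mdadm can be assembled on any machine that
can see the member disks, regardless of whose controller they hang off:

  # Scan all block devices for md superblocks and assemble any arrays found
  mdadm --assemble --scan

  # Or assemble a specific array from known member devices
  mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

  # Check the result
  cat /proc/mdstat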

> 

[snip]

> > > involved and then creating RAIDs from the partitions? It's more work
> > > when setting it up, and it can turn into a disaster when something
> > > with discovering the RAID goes wrong.
> > 
> > sfdisk -d <master device> > raidlayout.out
Document it and have a change management doco/protocol. At some point in
time you have to trust something!
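
For example (a sketch - the device names are assumptions), the dumped
layout can be replayed onto a replacement disk when a raid member dies:

  # Save the partition table of the good disk
  sfdisk -d /dev/sda > raidlayout.out

  # Write the same layout onto the replacement disk
  sfdisk /dev/sdb < raidlayout.out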

> > 

[snip]

> 
> I told it to start the RAID *when booting*, not any time before ---
> and I didn't boot. I wanted to install the raid tools and then make
> sure that the configuration was ok, and only after that I would have
> started the md devices. But they were started immediately before I
> could do anything, they didn't wait for a reboot.
> 
> What would you expect when you're being asked "Should X be done when
> booting?"? When I say "Yes", I expect that X will be done when
> booting, not that it will be done immediately.

My apologies, I misread.

> 

[snip]

> > And this is why we have backups
> 
> I didn't have a backup. I used to have tape drives, but with the
> amount of data to backup steadily increasing with the disk sizes, you
> get to the point where that gets too expensive and where there isn't
> any affordable and good solution. You can't buy tape drives and tapes
> that fast ... I still don't have a backup solution. I'm making backups
> on disks now, but that isn't a good solution, only a little better
> than no backup.

It comes down to how much you value your data. My home server has 10 x 1T
drives in it, a mix of raid1 + raid5 + raid6, and I have a second server
with 9 x 1T drives in it (in the garage) to do my backups - because it
would take too long to send off site and I don't want to spend money on a
tape system. I value my data - well, I could afford to throw money at the
problem, but I have some important info there, photos & video of my
daughter etc ...
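
A disk-to-disk setup like that can be driven by something as simple as
rsync over ssh (a sketch - the hostname and paths are made up):

  # Mirror the data onto the backup box in the garage; -aH preserves
  # permissions and hard links, --delete keeps it an exact mirror
  rsync -aH --delete /home/ backupbox:/backups/home/
  rsync -aH --delete /etc/ backupbox:/backups/etc/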

> 

[snip]

> 
> I'm not afraid of that. But the point remains that having fewer md
> devices reduces the chances that something goes wrong with their
> discovery. The point remains that reducing the complexity of a setup
> makes it easier to handle it (unless you reduced the complexity too
> much).

OK, to take this analogy even further: why have 1T drives, why not stick
with 1G hard drives - less data, less chance of errors?

If you are building a large or !!complex!! system, it takes a bit of
planning beforehand. I set mine up and haven't had a problem with md, and
I have lost some drives during the life of this server - the hardest thing
is matching the drive letter to the physical drive, since I didn't attach
them in incremental order to the motherboard (silly me).
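
One way around that (a sketch - the device name is an example) is to match
serial numbers instead of attachment order:

  # Read the serial number of a drive and compare it to the sticker
  # on the physical disk
  smartctl -i /dev/sdb | grep -i serial

  # Or list the stable by-id names, which embed model and serial number
  ls -l /dev/disk/by-id/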

> 
> > > To me, it seems easier to only have one md device and to partition
> > > that, if needed, than doing it the other way round. However, I went
> > > the easiest way in that I have another disk with everything on it but
> > > /home. If that disk fails, nothing is lost, and if there are problems,
> > 
> > well except for /etc/
> 
> There's nothing on /etc that isn't replaceable. It's nice not to lose
> it, but it doesn't really matter. If I lost my data of the last 15
> years, I would have a few problems --- not unsolvable ones, I guess,
> but it would be utterly inconvenient. Besides that, a lot of that data
> is irreplaceable. That's what I call a loss. Considering that, who
> cares about /etc?

Really? What about all your certificates in /etc/ssl, or your machine's
ssh keys, or all that configuration information for your system: mail,
ldap, userids, passwords, apache setup, postgres setup.

Admittedly you could recreate some of these from memory, but there are
some things that you can't.
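
Since /etc is tiny, keeping a copy of it costs almost nothing (a sketch -
the destination path is made up):

  # A dated tarball of /etc takes seconds and a few megabytes
  tar czf /backups/etc-$(date +%Y%m%d).tar.gz /etc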

> 
> > > a single disk is the simplest to deal with. --- I might have done it
> > > otherwise, but it has been impossible to install on SATA disks because
> > > the modules required to access SATA disks are not available to the
> > 
> > if you have a look at the latest installer i think you will find it has
> > all the necessary modules now 
> 
> That would be great :)
> 
> > You have missed the point: the advantage of having a separate / and a
> > separate /boot is to protect them; you can even mount /boot as ro.
> 
> Oh, ok.
> 
> What I was wondering about is what the advantage is of partitioning
> the disks and creating RAIDs from the partitions vs. creating a RAID
> from whole disks and partitioning the RAID?

I have to admit I have evaluated partitioning + raid vs. raid +
partitioning, and I would go with the former: more systems (old Linux
boxes, Windows boxes, Mac boxes) understand partitions, whereas not every
OS understands raid + partitioning. And currently I don't see the
advantage to raid + partitioning.
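
In the partition-first layout each array is built from one partition per
disk (a minimal sketch - device names and mount points are examples):

  # Partition both disks identically first (e.g. with sfdisk as above),
  # then mirror partition by partition
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # /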

> 
> > All the partitions are raided again for protection.  If you are
> > worried about a little complexity then managing HA and/or production
> > system might not be up your alley.
> 
> Well, I've done it for quite a while. But I found it very helpful to
> make things no more complex than needed. The simpler something is, the
> lower the chances are that problems can come up --- and if problems
> come up, it's easier to deal with them the less complex things
> are.
> 
> If you're making things more complex than they have to be, there has
> to be a very good reason to do so. I would like to know that reason
> since I could learn something from you I'd never have thought of.

I believe the complexity is not that high and the returns are worth it; I
haven't lost any information that I have had protected in a long time.

> 
> > Having said all that, there is no golden rule; each situation needs to
> > be addressed individually, weighing up the risks compared to the gains
> 
> indeed
> 
> > For me, as a creature of habit, I usually follow the layout I
> > specified before - because it has worked, it has saved me time and
> > it is very flexible; your mileage may vary.
> 
> Is it more flexible when you make partitions and then RAIDs from them
> than it is to make a RAID from the disks and partition that?
> 
> I also tend to stick with my habits, but that isn't always a good
> thing. One needs to keep learning, and that can mean to change a habit
> because there's a better way of doing something.

True, which is why I throw the book away and try different things
sometimes; it's always good to learn new stuff.




-- 
"Now, there are some who would like to rewrite history--revisionist historians is what I like to call them."

	- George W. Bush
06/16/2003
Elizabeth, NJ
