
Re: Debian+LVM+RAID



On Sat, Jul 11, 2009 at 04:40:02PM -0600, lee wrote:
> On Fri, Jul 10, 2009 at 07:04:26AM +1000, Alex Samad wrote:
> 
> > comes down to how much you value your data.
> 
> It comes down to how much money you can spend on securing it.

just about the same thing


> 
> > My home server has 10 x 1T drives in it, a mix of raid1 + raid5 +
> > raid6. I have a second server with 9 x 1T drives in it (in the
> > garage) to do my backups - because it would take too long to send
> > off site and I don't want to spend money on a tape system. I value
> > my data - well, I could afford to throw money at the problem. But I
> > have some important info there, photos & video of my daughter etc
> > ...
> 
> Well, that's like $3000+ you spent on the drives alone, plus about
> another $2000 or so for the controller cards. About $8k in total? Then
> replace at least the disks about every three years. I don't have that
> kind of money (and not that much data). And the more drives you have,
> the more disks can fail.

Umm, the 1TB drives were ~$115 each, so 19 of them (both servers) = ~$2.2K

A motherboard with 6 SATA ports was ~$200, plus 2 x Adaptec SATA
controllers @ ~$120 each.


The drives were expensive, yep, but I have that much stuff (well, once
you take out the backup server that leaves 10 drives, and raid6 costs
another 2 drives, so ~8TB worth of space, which really equates to about
6TB of data to leave some head room)

So roughly ~$3K (Australian dollars) all up - ~$2.2K of drives plus the
board and controllers.


> 
> > > I'm not afraid of that. But the point remains that having fewer md
> > > devices reduces the chances that something goes wrong with their
> > > discovery. The point remains that reducing the complexity of a setup
> > > makes it easier to handle it (unless you reduced the complexity too
> > > much).
> > 
> > Ok, to take this analogy even further: why have 1T drives, why not
> > stick with 1G hard drives - less data, less chance of errors.
> 
> Yes --- but you probably have a given amount of data to store. In any
> case, the more complexity your solution to store the data involves,
> the better the chances are that something goes wrong. That can be
> hardware or software as well as the user making a mistake. The more
> complex the system a user is dealing with, the easier it is to make a
> mistake --- and software or hardware you are not using can't give you
> problems.

yes

> 
> > If you are building a large or !!complex!! system, it takes a bit of
> > planning beforehand. I set mine up and haven't had a problem with md; I
> > have lost some drives during the life of this server - the hardest thing
> > is matching drive letter to physical drive - I didn't attach them in
> > incremental order to the motherboard (silly me)
> 
> Yeah, I know what you mean. The cables should all be labeled and
> things like that ...

It's all about making assumptions - we do it all the time, based on our
previous experiences.

> 
> > > There's nothing on /etc that isn't replaceable. It's nice not to lose
> > > it, but it doesn't really matter. If I lost my data of the last 15
> > > years, I would have a few problems --- not unsolvable ones, I guess,
> > > but it would be utterly inconvenient. Besides that, a lot of that data
> > > is irreplaceable. That's what I call a loss. Considering that, who
> > > cares about /etc?
> > 
> > really? what about all your certificates in /etc/ssl, or your machine's
> > ssh keys,
> 
> There are certificates and ssh keys? I didn't put any there.

You don't run any https site, nor use ldaps or SSL postgres connections?
I think you will find your system's ssh host keys are there :)
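
For example, on a stock Debian box (paths from memory - check your own
system):

    # the machine's ssh host keys live here
    ls -l /etc/ssh/ssh_host_*
    # and locally installed certificates usually end up under /etc/ssl
    ls -l /etc/ssl/certs /etc/ssl/private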

> 
> > or all that configuration information for your system mail,
> > ldap, userids, passwords, apache setup, postgres setup.
> 
> It's easy to keep a copy of the configuration file of the mail server
> on the /home partition --- and it's easy to re-create. There are only
> two userids, no ldap, no postgres, and the config for apache is
> totally messed up on Debian anyway since they split up the config file
> so that nobody can get an idea how it's configured.
> 
> Anyway, you can always have backups of /etc; it doesn't change as
> frequently as /home does.
> 
> > Admittedly you could re-create these from memory, but there are some
> > things that you can't
> 
> If you have data like that on /etc, you need a backup.

I would say that you are very lucky not to have to back up your /etc
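
It doesn't have to be anything fancy either; a minimal sketch (the
destination path is made up - put the tarball wherever suits you):

    # drop a dated snapshot of /etc onto another filesystem
    tar czf /home/backup/etc-$(date +%F).tar.gz /etc

Or look at the etckeeper package, which keeps /etc in a version control
repository for you.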

> 
> > > What I was wondering about is what the advantage is of partitioning
> > > the disks and creating RAIDs from the partitions vs. creating a RAID
> > > from whole disks and partitioning the RAID?
> > 
> > I have to admit I have evaluated partitioning + raid v's raid +
> > partitioning, and I think I would go with the former - more systems
> > (old linux boxes, windows boxes, mac boxes) understand partitions,
> > whereas not all OSes understand raid + partitioning. And currently I
> > don't see the advantage to raid + partitioning
> 
> Hm, is it possible to read/use a partition/file system that is part of
> a software-RAID without the RAID-software? In that case, I could see
> how it can be an advantage to use partitions+RAID rather than
> RAID+partitions. But even then, can the "other systems" you're listing
> handle ext4fs? I still don't see the advantage of partitioning+RAID.

With RAID1 you can mount a single member partition and use it as is if
you want, but I would suggest only doing that in emergencies.
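
Something like this, assuming a RAID1 whose members carry the old 0.90
superblock (it sits at the end of the device, so the filesystem starts
at block 0; with the newer 1.1/1.2 formats the data is offset and a
direct mount won't work):

    # mount one half of the mirror without assembling the array
    mount -o ro /dev/sdb1 /mnt

Keep it read-only, otherwise the halves diverge and md has to resync
the whole array later.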

> 
> > I believe the complexity is not that high and the returns are worth it,
> > I haven't lost any information that I have had protected in a long time.
> 
> Maybe that's because we've had different experiences ... To give an
> example: I've had disks disconnecting every now and then that were
> part of a RAID. The two disks were partitioned, RAID-1s created from
> the partitions. Every time a disk would lose contact, I had to
> manually re-add all the partitions after I turned the computer off and
> back on and the disk came back.

I have had faulty hardware - I got it fixed

> 
> Since there were three partitions and three md devices involved, I
> could have made a mistake each time I re-added the partitions to the
> RAID by specifying the wrong partition or md device.

If you did it on such a regular basis, why not write a script?
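
Even a dumb one would have done the job; a sketch only - the md devices
and sdb partitions below are invented, substitute your own layout:

    #!/bin/sh
    # re-add the flaky disk's partitions to their arrays
    mdadm /dev/md0 --re-add /dev/sdb1
    mdadm /dev/md1 --re-add /dev/sdb2
    mdadm /dev/md2 --re-add /dev/sdb3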

> 
> Now having only one md device instead of three doesn't offer that kind
> of chance to get it wrong. If one disk went missing, there's only
> one disk I could add, and I don't have to worry at all about assigning
> the right partitions to the right md devices.

true

> 
> To me, that is already an advantage of RAID+partitions. It may be a
> small improvement, but things like that can add up. It's just so much
> easier to maintain X simple things than it is to maintain X complex
> things which provide the same functionality. The day might come when
> two or three complex things go wrong at the same time and overwhelm
> you with their complexity, while you could have fixed them easily if
> you had used simpler solutions.

RAID is not a backup solution. There is always a chance !!something!!
might go wrong; I have tried to reduce that by pairing my !!complex (I
disagree that it is)!! setup with the backup server, to cut the chance
of losing my vital stuff, and my really important data I also copy to a
server off site.

> 
> > true, which is why I throw the book away and try different things
> > sometimes - always good to learn new stuff
> 
> Hehe, try RAID+partitions ;)

For now I still don't see any benefit, but at least I know the option
exists.
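
If I ever do try it, I gather it looks roughly like this (a sketch only
- device names invented, and as far as I know you need a partitionable
array, hence --auto=mdp):

    # mirror two whole disks, then partition the md device itself
    mdadm --create /dev/md_d0 --auto=mdp --level=1 \
        --raid-devices=2 /dev/sda /dev/sdb
    fdisk /dev/md_d0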


> 
> 

-- 
BOFH excuse #405:

Sysadmins unavailable because they are in a meeting talking about why they are unavailable so much.
