
Re: PostgreSQL+ZFS



On Saturday 01 January 2011 20:30:31 Stan Hoeppner wrote:
> Boyd Stephen Smith Jr. put forth on 1/1/2011 2:16 PM:
> > Is your problem with RAID5 or the SSDs?
> 
> RAID 5
> 
> > Sudden disk failure can occur with SSDs, just like with magnetic media. 
> > If
> 
> This is not true.

This is true.  While single-block failures are most likely, a controller
failure will take out the whole disk.  This is similar to a daughter card
failing.  While rare, I've certainly seen it happen, and some NAS setups use
multipath across two HBAs to avoid the downtime associated with an HBA
failure.  This is very similar to RAID 1 across 2 SSDs.

> The failure modes and rates for SSDs are the same as
> other solid state components, such as system boards, HBAs, and PCI RAID
> cards, even CPUs (although SSDs are far more reliable than CPUs due to
> the lack of heat generation).

Agreed.

> > you are going to use them in a production environment they should be
> > RAIDed like any disk.
> 
> I totally disagree.

Respectfully disagree.  However, I do see your point that RAIDing SSDs is not 
*as* critical as RAIDing magnetic media.

> 
> > RAID 5 on SSDs is sort of odd though.  RAID 5 is really a poor man's
> > RAID; yet, SSDs cost quite a bit more than magnetic media for the same
> > amount of storage.
> 
> Any serious IT professional needs to throw out his old storage cost
> equation.  Size doesn't matter and hasn't for quite some time.  Everyone
> has more storage than they can possibly ever use.  Look how many
> free* providers (Gmail) are offering _unlimited_ storage.

I know I don't have all the local storage I need, and I have 6TB attached to 
my desktop.  It's currently full to the point where I can't archive data that 
I acquire on less reliable media.

I think the old equations are still useful.  If capacity is not a priority,
or is easily satisfied, your observations are particularly valuable.

> Also, I really, really, wish people would stop repeating this crap about
> mdraid's various extra "RAID 10" *layouts* being RAID 10!  They are NOT
> RAID 10!
> 
> There is only one RAID 10, and the name and description have been with
> us for over 15 years, LONG before Linux had a software RAID layer.

> Also, it's not called "RAID 1+0" or "RAID 1/0".  It is simply called
> "RAID 10", again, for 15+ years now.

Simply not true.  The correct naming for layered RAID has never been 
standardized.  I frown on the "RAID 10" naming because it looks like it should 
be pronounced "RAID Ten".

> It requires 4, or more, even
> number of disks.  RAID 10 is a stripe across multiple mirrored pairs.
> Period.  There is no other definition of RAID 10.  All of Neil's
> "layouts" that do not meet the above description _are not RAID 10_ no
> matter what he, or anyone else, decided to call them!!

While this is pedantically true, it is a rather silly distinction to make.
With all the layouts, the disks are divided into a number of blocks, pairs of
these blocks are mirrored, and the data is striped across all the mirrors.
That builds a RAID 1/0 where the "disks" are just parts of the physical disks.
The "D" in RAID refers to physical disks, but RAID has been implemented on top
of various abstraction layers for quite a while now, so the mdadm blocks
certainly qualify.
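
To make the comparison concrete, here is a rough sketch (plain Python, written
for this mail; the function names and the simplified chunk accounting are
mine, not mdadm's) of where a classic stripe-over-mirrored-pairs puts each
chunk versus mdadm's "near" layout with 2 copies:

# Rough sketch, not mdadm source: compare where classic RAID 1+0 and
# mdadm's RAID10 "near=2" layout place the two copies of each chunk.
# A placement is a list of (disk, stripe_row) slots.

def raid1plus0(chunk, disks):
    """Classic RAID 10: stripe across fixed mirrored pairs of whole disks."""
    pairs = disks // 2
    pair = chunk % pairs
    row = chunk // pairs
    return [(2 * pair, row), (2 * pair + 1, row)]

def md_raid10_near2(chunk, disks):
    """mdadm RAID10, near layout, 2 copies: consecutive slots get the copies."""
    slots = [2 * chunk, 2 * chunk + 1]
    return [(s % disks, s // disks) for s in slots]

if __name__ == "__main__":
    for disks in (4, 3):          # even vs. odd number of member disks
        print("disks =", disks)
        for chunk in range(4):
            classic = raid1plus0(chunk, disks) if disks % 2 == 0 else "n/a"
            print("  chunk", chunk, "classic:", classic,
                  "md near2:", md_raid10_near2(chunk, disks))

With an even number of disks the two placements come out identical -- fixed
mirrored pairs -- which is why I call the distinction silly.  With an odd
number of disks the md layout still works, but the copies no longer stay on
one fixed pair, and that is the only place the naming argument really bites.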
-- 
Boyd Stephen Smith Jr.           	 ,= ,-_-. =.
bss@iguanasuicide.net            	((_/)o o(\_))
ICQ: 514984 YM/AIM: DaTwinkDaddy 	 `-'(. .)`-'
http://iguanasuicide.net/        	     \_/
