Re: HDD vs. RAID (was Re: Lilo Q)
hi ya anthony
yes... good point on MTBF...
- and if a drive's gonna fail... i say it's more likely to die
  within the first 30 days ... ( some disks are more likely to die than
  others irrespective of the MTBF and name-brands.. )
- i have a pile of "bad/flaky IBM disks" ...
  about 1-5% failure rates ( basically not as good as one would expect )
- but if one does have 4 drives in raid5 and a disk dies..
  that's still recoverable and you're still limping along until you
  can replace that dead disk and get back to "normal operation"...
- what's the likelihood of 2 drives failing ...
  rendering the raid subsystem just blank disks..
( hopefully one can rest a little better after the first disk
( dies... or is more of the same fate to happen to the rest of
( the disks ...
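a quick back-of-envelope sketch in python of that "2nd disk dies too" worry,
assuming independent failures, exponential lifetimes at the 50,000 hr MTBF
quoted below, and a 72 hr window to swap the dead disk ( the window is my
made-up number, just illustrative ):

```python
# Odds that a second drive dies before the first one is replaced,
# assuming independent failures and a constant failure rate
# (exponential lifetimes). MTBF_HR and REPLACE_WINDOW_HR are
# illustrative assumptions, not measured values.

import math

MTBF_HR = 50_000.0        # per-drive mean time between failures
REPLACE_WINDOW_HR = 72.0  # time until the dead disk is swapped out
SURVIVING_DRIVES = 3      # drives left in a 4-disk raid5 after one death

rate = 1.0 / MTBF_HR  # failures per hour, per drive

# P(at least one of the 3 survivors fails within the replacement window)
p_second_failure = 1.0 - math.exp(-SURVIVING_DRIVES * rate * REPLACE_WINDOW_HR)

# Combined MTBF of N identical drives (time to FIRST failure anywhere)
combined_mtbf_4 = MTBF_HR / 4

print(f"combined MTBF of 4 drives: {combined_mtbf_4:.0f} hr")
print(f"chance of a 2nd failure within {REPLACE_WINDOW_HR:.0f} hr: {p_second_failure:.3%}")
```

so with those assumed numbers the 2nd-failure risk in a 72 hr window is well
under 1%... but it climbs fast if the dead disk sits unreplaced for weeks..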
- i still prefer 1 large disk.. instead of many small ones...
- if the server needs to stay up 24x7 ... then i'd like to have 2 or 3
servers to be looking like 1 server...
magic...
c ya
alvin
On 10 Jun 2002, Anthony DeRobertis wrote:
> On Sun, 2002-06-09 at 20:33, Alvin Oga wrote:
>
> > if you have a nearly full 80GB disks ... it wont matter
> > if you have 1x 80GB or 4x 20GB ( striping )
>
> No, it does matter. You can expect at least one of four 20GB drives to
> fail much sooner than one 80GB drive, assuming same MTBF numbers on all
> drives.
>
> The MTBF for one 50,000hr MTBF disk is 50,000hr. For four of them, it is
> 12,500hr (50,000hr / 4).
>
> [ And, if you operate the four for a year, you can expect 1 to fail. ]
>
> > best best...
> > ===
> > === backup data regularly to DIFFERENT systems ..
> > ===
>
> Or tape. But whatever you do, make sure you:
>
> 1) Test your ability to restore data. Do this regularly. You'd hate
> it if you couldn't.
> 2) Verify your backups. Very important for tape.
>
>