
Re: "big" machines running Debian?



Alex Samad <alex@samad.com.au> writes:

> On Sat, Feb 28, 2009 at 09:50:06AM +0100, Goswin von Brederlow wrote:
>> Alex Samad <alex@samad.com.au> writes:
>> 
>
> [snip]
>
>> > True, it depends on whose rule of thumb you use. I have seen places that
>> > mandate FC drives only in the data center - it gets very expensive when
>> > you want lots of disk space.
>> 
>> The only argument I see for FC is a switched storage network. As soon
>> as you dedicate a storage box to one (or two) servers there is really
>> no point in FC. Just use a SAS box with SATA disks inside. It is a)
>> faster, b) simpler, c) works better and d) costs a fraction.
>
> The problem I have seen is that the person who controls the purse strings
> doesn't always have the best technological mind.  There was a while
> back when having fibre meant fibre to the disk. So managers wanted fibre
> to the disk, and they paid for fibre to the disk.

And now they have to learn that there are new technologies, new
requirements and new solutions. What was good 5 years ago isn't
necessarily good today. Sadly enough a lot of purse strings seem to
be made of stone and only move on geological timescales. :)

>> And hey, we are talking big disk space for a single system here. Not
>> sharing one 16TB RAID box with 100 hosts.
>> 
>> > Also, the disk space might not be needed for feeding across the network;
>> > databases aren't the only thing that chews through disk space.
>> >
>> > The OP did specify enterprise. I was thinking of a very large enterprise,
>> > the sort of people who mandate SCSI or SAS only drives in their data centre.
>> 
>> They have way too much money and not enough brains.
>
> I would have to disagree. Sometimes the guidelines that you set for
> your data storage network mandate having the reliability (or the
> performance) of SCSI (or now SAS); they could be valid business
> requirements.

Could be. If you build storage for a DB you want SAS disks and
RAID1. If you build a petabyte storage cluster for files >1GB then you
would rather have 3 times as many SATA disks. An "XYZ only" rule will
always be bad for some use cases.
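
To put some (completely made up) numbers on that, here is the kind of
back-of-the-envelope comparison I mean - the prices and disk sizes in
this little Python sketch are just assumptions for illustration, not
quotes:

    # Rough cost per usable TB when every byte is stored 'copies' times.
    # Disk sizes and prices below are made-up assumptions, not real quotes.
    def cost_per_usable_tb(disk_tb, disk_price, copies):
        return disk_price / (disk_tb / copies)

    sas = cost_per_usable_tb(disk_tb=0.3, disk_price=300, copies=2)   # SAS pair in RAID1
    sata = cost_per_usable_tb(disk_tb=1.0, disk_price=100, copies=3)  # 3 SATA replicas

    print("SAS RAID1   : %4.0f per usable TB" % sas)
    print("SATA, 3-way : %4.0f per usable TB" % sata)

Even with 3 copies of everything the bulk SATA setup comes out far
cheaper per usable TB, which is why a blanket "SAS only" rule hurts the
large-file case.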

> Traditionally SCSI drives had a longer warranty period and were meant to be
> of better build than cheap ATA (SATA) drives.
>
> Although this line is getting blurred a bit.

There surely is a difference between a 24/7, 5-year-warranty server
SCSI disk and a cheap home-use SATA disk. But then again there are
also 24/7, 5-year-warranty server SATA disks.

I don't think there is any quality difference anymore between the SCSI
and SATA server disks.

> Unless we talk about a specific situation: storage, like other areas of IT,
> is very fluid, and there are many solutions to each problem.

Exactly.

> Look at the big data centers of Google and such that use pizza boxes -
> a machine dies, who cares, it's clustered and they will get around to
> fixing it at some point. Compare that to 4-8 node clusters of Oracle
> that are just about maxed out - one server goes down and ....

Same here. Nobody builds HA into an HPC cluster. If a node fails the
cluster runs with one node less. Big deal.

Sadly enough, for storage there is a distinct lack of
software/filesystems that can work with such lax reliability. With
space requirements growing and the increase in disk sizes stalling,
there are more and more components in a storage cluster. I feel that
redundancy has to move to a higher level: away from the disk level,
where you have RAID, and towards true distributed redundancy across
the storage cluster as a whole.
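
As a toy example of what I mean by redundancy at the cluster level
rather than per disk (the node and disk counts are made-up assumptions,
not a real design), again in Python:

    # Toy model: every object is stored on 'replicas' different nodes,
    # so losing a whole box only costs raw capacity, not data.
    # Node and disk counts are made-up assumptions for illustration.
    def usable_capacity_tb(nodes, disks_per_node, disk_tb, replicas):
        raw = nodes * disks_per_node * disk_tb
        return raw / replicas

    usable = usable_capacity_tb(nodes=20, disks_per_node=12, disk_tb=1.0,
                                replicas=3)
    print("240 TB raw -> %.0f TB usable, any 2 whole nodes may die" % usable)

The factor 3 overhead is no worse than keeping three copies locally,
but it now covers dead controllers, nodes or whole racks instead of
just a dead platter.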

MfG
        Goswin

