
Re: Reasonable maximum package size?



On Mon, Jun 11, 2007 at 10:11:25PM +0100, Steve McIntyre wrote:
> In article <1181593352.3560.6.camel@tomoyo> you write:
> >
> >On Monday 11 June 2007 at 21:16 +0100, Wouter Verhelst wrote:
> >> The point wasn't that you can't set up a professional RAID array using
> >> cheap desktop hard disks; you can, if you really want to, though I
> >> wouldn't recommend it. And yes, you're completely free to ignore that
> >> particular advice, so long as you don't expect me to become a customer
> >> of yours.
> >
> >You seem to strongly believe the cheap desktop hard disk is different
> >from the server hard disk. This is entirely wrong. Apart from 10k and
> >15k rpm disks, these are all strictly the same. Only the electronics
> >change.
> 
> Sorry, but you're utterly wrong.

  FWIW, who is wrong does not matter. If you're not ftpmaster.debian.org
or one of the five or six primary Debian mirrors, you don't _need_ to be
safe, you just need to be always online. You can achieve that with a
simple array of ordinary desktop SATA drives (sorry, they work well; 10k
rpm desktop-grade SATA drives even exist) in a sufficiently redundant
RAID array. If you choose your hardware properly (a rough sketch follows
the list):
  * it will be hot-pluggable (yes, even desktop SATA drives support
    this),
  * you will be able to monitor it,
  * you will be able to change the drives before they fail. Even if you
    burn two 500 GB disks every 6 months, it will still be cheaper in
    the end than the "carrier-grade" hardware that is sold. The really
    funny part is that as time passes, your replacement disks get bigger
    and bigger, faster than the archive grows. Isn't that nice?
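  To make that concrete, here is a minimal sketch using mdadm and
smartmontools, run as root (the device names /dev/sd[b-g] are made up
for the example):

    # create a RAID-6 array out of six desktop SATA drives;
    # it survives two simultaneous drive failures
    $ mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

    # monitor the array and get mailed when a drive misbehaves
    $ mdadm --monitor --scan --daemonise --mail=root

    # watch SMART health so you can swap a drive *before* it dies
    $ smartctl -H /dev/sdb

    # proactive replacement: kick a suspicious drive out, hot-swap it,
    # and let the array rebuild onto the new one
    $ mdadm /dev/md0 --fail /dev/sdc
    $ mdadm /dev/md0 --remove /dev/sdc
    $ mdadm /dev/md0 --add /dev/sdc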

  And even if all of that should fail, rebuilding a Debian mirror is
fast (I rebuild my x86+amd64 mirror in a few hours behind a DSL line),
and it costs a fraction of the bandwidth the mirror would consume in the
long term anyway.
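  For instance, such a per-architecture mirror can be pulled with
debmirror; a minimal sketch, assuming http access to ftp.debian.org and
a target directory of /srv/mirror/debian (both placeholders):

    $ debmirror --host=ftp.debian.org --root=debian --method=http \
        --dist=sid --section=main,contrib,non-free \
        --arch=i386,amd64 --nosource --progress /srv/mirror/debian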

  My point is: disk space is expensive because people haven't realized
that disks are expendable. Well, some people have; Google did realize it.

  And if I needed to build a very efficient mirror, I'd put my money
into RAM, so that the most heavily used parts of the mirror would stay
in the page cache anyway.
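  You can check how well that works on a running mirror; a quick sketch
(the paths are placeholders, and vmtouch is a small third-party tool):

    # how much RAM is currently used as page cache
    $ grep -i '^cached' /proc/meminfo

    # how much of the hot part of the archive is resident in cache,
    # and optionally pull it in
    $ vmtouch -v /srv/mirror/debian/dists
    $ vmtouch -t /srv/mirror/debian/dists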


  The other fun part is that my real point was that the real problem is
bandwidth: not so much for mirror syncs, but because of the downloads
from the clients (I have no real data to back up that claim, of course,
but if a mirror uses more bandwidth to be kept in sync than what its
usual clients use, then it's worthless).
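  A back-of-the-envelope example of that criterion, with made-up but
plausible numbers: if keeping the mirror in sync pulls about 2 GB of
changes per day, then serving as few as four clients that each download
a 500 MB upgrade per day already breaks even, and everything beyond that
is pure gain. A mirror that cannot attract even that much client traffic
fails the test.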

  And yes, unlike disks, bandwidth is still a very real constraint.
Still, I'd say we could work on distributed mirror infrastructures,
because disk is cheap for our users too, and even a smallish fraction of
their bandwidth could be of use. As soon as such a distribution service
exists, I can certainly contribute a few dozen gigabytes and 10 to 20
Mbit/s on a server of mine to the network. _That_ would be 100x more
productive than trying to take shortcuts on the archive for bad reasons.
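  Again with made-up numbers, just to show the scale: a thousand
volunteers each donating 20 GB and 10 Mbit/s would add up to roughly
20 TB of distributed storage and 10 Gbit/s of aggregate upstream
capacity, likely far more than any single mirror provides.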


PS: Oh, and I'm not saying it's a good idea to see the archive grow just
    because we have the space. I have two RM: bugs open on old packages
    of mine that are worthless and uselessly bloat the archive.

-- 
·O·  Pierre Habouzit
··O                                                madcoder@debian.org
OOO                                                http://www.madism.org
