
Re: Offtopic : Large hostings and colocations ?where?



On 5/12/07, Douglas Allan Tutty <dtutty@porchlight.ca> wrote:

No apologies needed.  Useful insight.

Could you define/describe cold storage?  Since tape is "lame and dead",
how do you handle drives for archival storage?  Do you put a raw drive
in a cardboard box on a shelf, or have them in removable carriers?


It all depends.  Currently cold storage consists of cardboard boxes full of drives that have been scanned with a barcode reader.  We have manifests of which files/md5s were on which drive by serial number.
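Not our exact tooling, but a minimal sketch of how such a manifest can be built, one line per file (the mount point and serial number here are passed in by hand; on Linux you could pull the serial from hdparm or smartctl):

    # Sketch: build a cold-storage manifest for one mounted drive.
    # Mount point and serial are assumptions -- feed in whatever the
    # barcode scanner / hdparm gives you.
    import hashlib, os, sys

    def manifest(mount_point, serial, out):
        for root, _, files in os.walk(mount_point):
            for name in files:
                path = os.path.join(root, name)
                h = hashlib.md5()
                with open(path, 'rb') as f:
                    for chunk in iter(lambda: f.read(1 << 20), b''):
                        h.update(chunk)
                # one line per file: serial, md5, path relative to the drive
                out.write('%s %s %s\n' % (serial, h.hexdigest(),
                                          os.path.relpath(path, mount_point)))

    if __name__ == '__main__':
        manifest(sys.argv[1], sys.argv[2], sys.stdout)

Grep the manifests for an md5 later and the serial number tells you which box to pull off the shelf.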

In a more general case, I think it all depends on your architecture and how much space and power (and money) you have.  You can spin disks down, or power off entire nodes and only wake them in batches via WOL once a month to scrub the disks and make sure what you thought you had is still there.
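The magic packet for the wake-up part is trivial if you want to script it yourself instead of using the wakeonlan tool; a sketch (the MACs and broadcast address are placeholders):

    # Sketch: Wake-on-LAN magic packet -- 6 x 0xFF, then the target MAC
    # repeated 16 times, UDP-broadcast to port 9.
    import socket

    def wake(mac, broadcast='255.255.255.255'):
        raw = bytes.fromhex(mac.replace(':', ''))
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(b'\xff' * 6 + raw * 16, (broadcast, 9))
        s.close()

    # wake one batch, scrub against the manifests, power down, repeat
    for mac in ('00:16:3e:00:00:01', '00:16:3e:00:00:02'):
        wake(mac)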

Removable carriers are an option, but they are expensive, as is hot-swap.  Behold:

1500 servers * 4 drives/server * $15/hotswap-carrier = $90,000.

That $90K goes a long way towards paying someone for one hour of labor per day to swap out two disks (assuming it takes about 30 minutes to swap a disk out of a non-hotswap-capable box).
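Spelled out, with the wage as an assumption:

    # Sketch: up-front carrier cost vs. paying for manual swaps.
    carriers = 1500 * 4 * 15        # $90,000 for hot-swap carriers
    hours_per_day = 2 * 0.5         # two disks/day at 30 min each
    wage = 25.0                     # $/hour -- assumed
    print(carriers / (hours_per_day * wage))  # 3600 days, ~10 years of swaps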

A good way to do the math for this is to look at how much you spend per block device: take case/CPU/RAM/PSU and divide by the number of drives per box.  You get interesting numbers like $overhead/drive, megabits/GB-of-storage, RAM/GB, Watts/drive, etc.  How you adjust it all depends on your application.  Some of our servers use VIA Epia motherboards.  Those, loaded with 4x750 GB disks, mean we can get 3 TB online and spinning for less than 100 Watts.  Two 15 A circuits can power a 120 TB rack (decent storage, way speedier than tape, and you get 40 CPUs to do whatever you want with).
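One way to sketch that per-box math; the component price is a made-up placeholder, while the drive count, capacity, and wattage are the numbers above:

    # Sketch: per-drive overhead for one VIA Epia storage node.
    case_cpu_ram_psu = 300.0     # $/box excluding drives -- assumed
    drives_per_box = 4
    gb_per_drive = 750
    watts_per_box = 90.0         # "less than 100 Watts" above

    print(case_cpu_ram_psu / drives_per_box)   # $overhead/drive
    print(watts_per_box / drives_per_box)      # Watts/drive

    # a rack fed by two 15 A / 120 V circuits:
    rack_watts = 2 * 15 * 120                  # 3600 W
    boxes = int(rack_watts / watts_per_box)    # 40 boxes
    print(boxes * drives_per_box * gb_per_drive / 1000.0)  # 120 TB/rack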



On a side note, Lennart, when you have 1500 servers, how do you arrange
for console access to a server if required to solve a problem with the
BIOS or early booting (before SSH is available)?


I just got in something from Lantronix, ~$400-500 for one box for KVM over IP.  It's not the best, but it's OK.  I'll get the guy at the datacenter to hook it up to a machine that I want to play with.  I don't use RAID, and if I did, it certainly wouldn't be hardware RAID, so that's not a big issue.

I usually set machines to PXE boot, and have an NFS-rooted image that any box can boot into for troubleshooting if the local install is failing, some drives are bad, etc.
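A minimal pxelinux.cfg/default for that kind of rescue setup might look like this (the kernel paths and NFS server address are placeholders):

    DEFAULT rescue
    LABEL rescue
        KERNEL rescue/vmlinuz
        APPEND initrd=rescue/initrd.img root=/dev/nfs nfsroot=10.0.0.1:/srv/nfsroot ip=dhcp ro

The kernel needs NFS-root support built in (or in the initrd), but once that's in place any box on the subnet can be flipped over to the rescue image just by changing its pxelinux config entry.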
