
Re: Desktop user: Etch or the next testing?



Douglas Allan Tutty wrote:
> On Fri, Apr 06, 2007 at 11:42:06AM +0800, Bob wrote:
>> Douglas Allan Tutty wrote:
>>> On Fri, Apr 06, 2007 at 09:39:28AM +0800, Bob wrote:
>>> 8< snip

>> An experiment I want to run: get a 4200 RPM SATA laptop drive and a 10,000 RPM SATA Raptor of similar size, compare the responsiveness of the same install on the same hardware across the two drives, and then work out which mountpoints should go on a CF card in a DMA-capable IDE (or SATA) adapter to achieve similar speed. My hunch is I should put at least /var and /home on the hard drive and the rest on the CF card.
> How fast do you need OO to load?

Personally, not particularly, but I'm planning on putting my Dad on etch next month as the remote Windows admin workload is too much.
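For what it's worth, the CF-versus-disk split mentioned above might look something like this in /etc/fstab. Device names and partition layout here are made up for illustration; the point is that the write-heavy trees go on the spinning disk to spare the CF card's limited write cycles:

```
# CF card behind the IDE adapter: mostly-read-only system
/dev/hda1  /      ext3  noatime   0  1
# hard drive: busy, write-heavy trees
/dev/hdb1  /var   ext3  defaults  0  2
/dev/hdb2  /home  ext3  defaults  0  2
```

noatime on the CF root also saves a metadata write on every file read.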

> Then again, I'm a CLI person (except for some GUI frontends that make
> life a little easier).  For a while, I had a box with a 171 MB drive
> that only held the stuff that couldn't easily come over NFS (like /boot
> and /).  Everything else came over NFS; the catch was that the network
> was a serial line running PPP (no NIC on the box).  I ran icewm.  Sure,
> mozilla took a while, but partly that's because it was a 486.  It was
> still faster than running mozilla via ssh X forwarding.

Soon I'm going to move two P!!! 667 network media players (plus a bit of web browsing and AbiWord for guests) onto net boot and remove their hard drives; that way the only moving parts in them will be the PSU fans, which only spin up if the temperature hits 50C. I'm undecided whether to disable swapping, or to reduce swappiness and have a small swap on a local ramdisk (a useful indicator of RAM running out) with a larger, lower-priority one on a remote disk or ramdisk mounted via NFS, or some other option. They only have 256 MB of RAM, so I'm reluctant to turn off swapping.
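For the swappiness/priority option, the knobs would be something like the following. The values are illustrative guesses, not recommendations, and note the kernel can't swap directly over NFS, so the remote swap would have to live on a file reached through a loop device or similar:

```
# /etc/sysctl.conf -- 0-100, default 60; lower values make the kernel
# prefer dropping cache over swapping process pages out
vm.swappiness = 10

# /etc/fstab -- higher pri is used first, so the small local swap fills
# before the big remote one (devices and priorities are made up)
/dev/ram0   none  swap  sw,pri=10  0 0
/dev/loop0  none  swap  sw,pri=1   0 0   # loop over a file on the NFS mount
```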

>> 8< snip
>> RAID is not a substitute for backup anyway; it's meant to be a tool to increase availability by adding redundancy. In this case I could slap in a new drive, do a reinstall, rebuild my RAID1 array, and I'm back up without having to do all that tedious mucking about with DDS tapes (or DVD-Rs, rsync, Gmail, whatever you use). But even if your data is hosed you can restore from backup (you *do* back up, right?) and you're fine.

> For the cost of one disk, if you put the system stuff on RAID1 too,
> there is no downtime other than that required to physically change the
> drive.  No tedious mucking about with /etc.

And while these days you do see some speed increase on reads from RAID1, I don't think it's as much as with RAID0, plus you've got a penalty on writes.

> Sure I back up.  Anything that apt didn't put there, I back up.  The
> first place my backup sets go is /var/local/backup (protected by RAID1);
> the second place they go is another box via rsync.  Then they go to CD.
> I don't yet have a removable hard drive.  I've looked into backup media
> and it _seems_ that hard drives in a ruggedized case are more rugged
> than tape, cheaper, and don't require an expensive tape drive.

I actually do a similar thing, until everything ends up on my server where I eventually slap it on tape, although rather than backing up my config files in /etc I document the changes I make in a copy-paste-friendly way. As for the relative price and longevity of tapes vs hard drives: right now I'd slap big drives in my media frontend box and have them rsync on boot, then unmount and spin down. But per GB, tape media is so cheap that it's hard to beat, and if you want a serious backup strategy you want multiple copies of things in different locations. Until hard drive manufacturers realize we don't all want or need our big drives to be fast (I'd happily buy a drive with quadruple the seek time and a quarter of the throughput if it had four times the capacity of today's biggest 750 GB power-sucking rattlers), that means well-stored tapes are probably the best way.

Actually, I don't mean to sound quite so sanctimonious about it. My backup frequency is dictated by how much it's going to piss me off if I lose everything since the last one; when I get to the point of losing sleep over it, I dig out my DDS drive and SCSI card, drop them in and do a backup (I really need a bigger box), then I sleep well for a couple of months.

> Perhaps instead of a bigger box, you need a spare box dedicated to doing
> backups.  There's supposed to be a way to back up from box A to the tape
> drive attached to box B without having the backup set sit on box B's
> hard drive.

Heh, yes, that sounds like a job for the very splendid and worthwhile netcat. My ideal backup solution involves changing tapes once or twice a day, with a quarterly full-backup day/hell where you train a monkey to change the tapes for you (I sooo want to buy one of those graduate-project robot arms and automate the whole process on the cheap). I have three 24/7 machines here: file server (dual P!!! 700), firewall (VIA Samuel 700 that used to run off a CF card and will again when I have time), and webserver (Geode 1500, nice), and I'm adding a Duron 700 MythTV backend that has no spare PCI slots for SCSI and will be very busy (I expect to see high load averages, but I'm not using a frame grabber so that's OK). The only appropriate place for the tape drive is in the file server, unless I start using WOL and shutdown scripts.
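For box-A-to-tape-on-box-B, the usual trick is to pipe tar through ssh (or netcat) straight into the tape device, so the backup set never lands on B's disk. The hostname and device below are made up, and netcat flag syntax varies between implementations; the runnable part of this sketch streams into a file instead of a tape:

```shell
#!/bin/sh
# Real thing (not run here -- needs the remote box and a tape in the drive):
#   tar -cf - /home | ssh fileserver 'dd of=/dev/st0 bs=64k'
# or, with netcat:  on B: nc -l -p 7000 > /dev/st0
#                   on A: tar -cf - /home | nc fileserver 7000
# Runnable demo of the same pipeline, streaming into a file instead:
mkdir -p /tmp/stream-demo/src
echo "payload" > /tmp/stream-demo/src/file.txt
tar -C /tmp/stream-demo -cf - src | dd of=/tmp/stream-demo/backup.tar 2>/dev/null
tar -tf /tmp/stream-demo/backup.tar
```

The ssh variant encrypts the stream at the cost of CPU on both ends; on a trusted LAN netcat skips that overhead.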

I read an article on Debian Administration about virtual machines, and now I find myself eyeing up Opteron boxes while thinking about cutting down to two 24/7 boxes and flogging the rest. How much is a fairly well-lunged Duron 700 with a gig of RAM worth these days? Not a lot, I think.

Hey ho, have a good weekend.


