
Re: WinBlow$ Home Server equivalent

On Saturday 11 April 2009, Alex Samad wrote:

> availability - mainly done by looking at raid levels
> manageablility - lvm / mdm
> backups - only way to do this is backuping up info, raid doesn't help
> with rm -fr /

Not just answering your post in particular, but I thought I would jump 
into this thread at this point mainly because you raised backups.

Some while ago, after looking at all the options, I came to the conclusion 
that RAID 1 + LVM + daily cron-driven backups to another machine was my 
best solution for keeping data safe.  Fortunately, I haven't had the need 
to test the full hypothesis, since to date none of my disks have failed. 
Most problems have come from finger trouble (your rm -fr comment above), 
and whilst I have had situations where key data was sitting on a single 
unraided backup disk, I did manage to recover it (you have no idea how 
nervous I was, and how often I checked the command, when I did the 
restore).

I had two situations: an old server with a range of different-sized PATA 
drives (the complex system), and a newer desktop machine with a pair of 
SATA drives of the same size.

Firstly, my desktop system.  Each drive has 4 partitions: /boot @ 100MB, 
swap @ 2GB, / @ 4GB, and the remainder of the 160GB drive as an LVM 
partition.  Grub is installed on both /boot partitions separately, so 
theoretically (as I said, it has not been tested) I can boot from either 
one if the other fails.  Other than swap, the partitions are all RAID 1 
pairs.
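For anyone setting up the same layout, it amounts to something like the 
following sketch (device names and md numbers are assumptions; adjust 
sda/sdb and the partition numbers to your own disks):

```shell
# RAID 1 mirrors for /boot, / and the LVM partition
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4

# Install grub onto both disks so either one can boot on its own
grub-install /dev/sda
grub-install /dev/sdb
```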

I allocate and re-allocate Logical Volumes on the LVM partition all the 
time, as I balance my needs for working storage (particularly videos) 
against backup storage (I have a six-month-plus chain of copies of 
deleted data waiting to archive).
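That juggling is just the usual LVM commands; a sketch (the volume group 
name "vg0", the LV names and the sizes are all made up for illustration):

```shell
# Carve out a working-storage volume and put a filesystem on it
lvcreate -L 50G -n videos vg0
mkfs.ext3 /dev/vg0/videos

# Later, after moving the data off, reclaim the space for backups
lvremove /dev/vg0/videos
lvcreate -L 80G -n backups vg0
```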

My server is similar for boot, swap and root, but then, because of the 
mix of disk sizes, some of the rest is a RAID 1 device with an LVM PV on 
top of it, and some is just a set of LVM PVs on the raw partitions; 
these all combine into a single Volume Group.  There is one exception: I 
made /var/log a RAID 0 pair across two disks, my reasoning being that 
this was where I expected most writes to take place, so striping should 
improve performance.  (I have no idea whether it was needed; my server 
is an old 1.7GHz Celeron, and despite acting as Apache, Tomcat, 
Postgres, Asterisk, Exim, DHCP, Bind, SSH and Rsync server, plus the 
firewall and Internet gateway for the house, it barely has any load 
above about 2%.)
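The mixed layout looks roughly like this (device names, md numbers and 
the VG name "vgdata" are illustrative, not my actual ones):

```shell
# A mirrored pair becomes one PV...
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/hda5 /dev/hdb5

# ...and joins raw, unmirrored partitions in the same Volume Group
pvcreate /dev/md3 /dev/hdc1 /dev/hdd1
vgcreate vgdata /dev/md3 /dev/hdc1 /dev/hdd1

# The striped (RAID 0) pair for /var/log
mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/hdc2 /dev/hdd2
```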

Here, the important point is that Logical Volumes are allocated entirely 
from one PV before the next is used, so failure of one disk in the 
non-raided part will take out only a subset of the files. [I have also 
thought somewhat about the failure modes of a whole-disk failure taking 
out bits of RAID on one partition and bits of an unraided PV on another. 
I was dealt the disk sizes I had, so I just had to make the best of 
them; it's not perfect, but it's about as good as the hardware allows.]
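You can make that allocation explicit rather than relying on LVM's 
default fill order, by naming the PV when creating the LV (names again 
hypothetical):

```shell
# Pin this LV to a single physical volume, so losing one disk
# only takes out the LVs that live on it
lvcreate -L 40G -n backup1 vgdata /dev/hdc1
```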

Overnight, I run automated rsync backups that copy key datastores from 
one machine to another (including taking dumps of some Postgres 
databases and transferring those from server to desktop).  I'm also the 
admin of a site held at a hosting company, and I back that up (rsync 
over ssh) to my server too.  I back up my work laptop in the same way.
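The nightly job is nothing more exotic than a cron-driven script along 
these lines (paths, hostnames and the database name are invented for the 
example):

```shell
#!/bin/sh
# Dump a Postgres database, then mirror the dumps to the desktop
pg_dump -U postgres mydb | gzip > /srv/dumps/mydb.sql.gz
rsync -a --delete /srv/dumps/ desktop:/backup/server/dumps/

# Pull the hosted site down over ssh as well
rsync -a -e ssh hostinguser@hosted.example.com:/var/www/ /backup/hosted/
```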

I try to organise things so that the unraided storage on the server 
holds backups of raided data from my desktop, while the raided storage 
on the server holds operational data plus the backups from the hosted 
system, the work laptop, etc., which are not raided.

I am currently planning my next upgrades: my desktop machine has run out 
of disk space, and my server is getting old and has also almost run out 
of disk space, so I need to plan an upgrade path for that too.  I think 
RAID 1 will allow me to "fail" one disk and replace it with a larger 
one, then "fail" the other and replace that too.  The only step I am not 
sure of is how to grow into the new spare space (but I am sure I read an 
article somewhere on the internet about it).
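For what it's worth, the one-disk-at-a-time upgrade I have in mind would 
go something like this sketch (device names assumed; the crucial caveat 
is to let each resync finish completely before touching the second 
disk, since the array has no redundancy in between):

```shell
# Pull one half of the mirror, swap in the bigger disk, re-add it
mdadm /dev/md2 --fail /dev/sdb4 --remove /dev/sdb4
# ...physically replace and repartition the disk, then:
mdadm /dev/md2 --add /dev/sdb4

# Repeat for the other disk, then grow into the new space
mdadm --grow /dev/md2 --size=max
pvresize /dev/md2
# After this the extra extents are free for lvcreate/lvextend
```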

Alan Chandler
