
Happy success with LVM and RAID



Buried in this is a little bit of grumpiness, but only a little. See if you
can find it. Mainly, this is about a success that I thought was worth
sharing.

Starting in 1999 or so I had Debian running on a PowerCenter Pro 180 (PPC
604 180MHz, 1997 vintage Mac clone) maxed out to 512MB of memory. I ran an
8-drive RAID5 on an Adaptec SCSI card in an HP RAID rack with 18GB 10K RPM
disks, for a (loud) total of 118GB of disk space. Just for fun, I also
ran a cryptoloop on top of it. On top of that, of course, I had LVM (1.0,
since 2 wasn't out yet and neither was the 2.6 kernel) and ext3. I even
threw in a Tulip-based "fast" (100baseT) ethernet card.
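
The layering, reconstructed from memory with the raidtools-era commands
(device names, volume names, and sizes here are illustrative, not my actual
configuration), looked roughly like this:

    # RAID5 across the 8 SCSI disks, as defined in /etc/raidtab
    mkraid /dev/md0

    # cryptoloop: an encrypted loop device layered on the array
    losetup -e aes /dev/loop0 /dev/md0

    # LVM 1.0 on the encrypted loop, ext3 on each logical volume
    pvcreate /dev/loop0
    vgcreate vg0 /dev/loop0
    lvcreate -L 20G -n home vg0
    mke2fs -j /dev/vg0/home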

This machine served files via NFS (fast enough to play audio and video from
it directly, which meant it was plenty fast for my purposes), served IMAP,
ran spam filtering, retrieved email from a number of remote accounts, ran a
web server (with very minimal traffic, but also webmail), and basically
chugged along like a good little server.

Then, some time in 2003 or 2004, the motherboard fried. No idea what
happened, exactly, but it was toast. I had a Celeron box lying around
(already running Debian, even, though it had started out running Knoppix),
so I tried to get everything working on it. Moving the PCI cards was
trivial, but the RAID array wasn't recognized. It turns out that Linux
software RAID was endian-specific until kernel 2.6.13, so I couldn't use an
Intel box to run my existing array. Fortunately, I had a 2000-era dual G4
800 I'd been using as my primary machine, and I got Linux installed on it
with tolerable ease and that became my server. I even gave it the same name
and IP address, so it wound up being a drop-in replacement from the
perspective of the other machines depending on it.
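
For what it's worth, the culprit is the old version 0.90 md superblock,
which is written in the host's byte order; version-1 superblocks use a
fixed byte order regardless of architecture. If you're creating an array
that may need to migrate between machines, something like this should
avoid the problem (device names illustrative):

    # see which superblock format an existing member disk carries
    mdadm --examine /dev/sda1

    # create an array with a version-1 (endian-independent) superblock
    mdadm --create /dev/md0 --metadata=1.0 --level=5 --raid-devices=8 \
        /dev/sd[a-h]1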

Eventually, in 2005, I got tired of the noise and heat of 8 10K RPM drives.
If I'd been smart, I would have gotten tired of it before moving them 400
miles, but hindsight is 20/20. So I went and bought a couple of 250GB
Firewire drives, planning on setting them up in a RAID1. When I plugged one
in, it was noticed immediately by the kernel. When I plugged the other one
in, however, the kernel thought the serial number had changed on the first
one and was still convinced that there was only one drive. It didn't matter
which I plugged in first. It turns out that the firewire enclosures were
cheap, non-compliant pieces of crap that presented identical identifiers, so
the kernel couldn't tell them apart. One replacement enclosure and some
unscrewing and rescrewing later, I had a
workable pair. Setting up the RAID1 was trivial, and this time it shouldn't
be endian-specific. Creating a dm-crypt mapping in place of the legacy
cryptoloop was almost trivial. Setting up LVM2 was trivial. Copying the
volumes from the old RAID to the new one was tedious, but not as slow as I
would have expected. Thus I doubled my capacity, cut my heat and power
consumption by 3/4, and reduced the noise level to a bare minimum.
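
For reference, the new stack went together with something like the
following (the names are illustrative, and this is cryptsetup's old plain
mode rather than LUKS):

    # RAID1 across the two firewire drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # dm-crypt plain mapping, replacing the old cryptoloop
    cryptsetup create crypt0 /dev/md0

    # LVM2 on top of the encrypted mapping
    pvcreate /dev/mapper/crypt0
    vgcreate vg0 /dev/mapper/crypt0
    lvcreate -L 20G -n home vg0

Copying each volume was then a matter of creating a matching logical volume
in the new group and copying the old contents across.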

Recently, I started running low on space on a couple of volumes. Since I'd
copied them from the old RAID, I'd left them the same size they'd been
there. This meant that I had some 130GB of unused disk space that I could
allocate to them. Two lvextend commands later (one for each volume), and I
had enough space. One ext2resize command later, and I was pissed:

    ext2resize v1.1.19 - 2001/03/18 for EXT2FS 0.5b
    Can't resize on big endian yet!

If ext2resize is its own package, and it doesn't work on big endian, why is
it even available for the ppc architecture? I then dug around and looked for
e2fsadm. Turns out it's in the lvm10 package, which I didn't have
installed because I am using lvm2. I installed it, looked at the e2fsadm man
page, and saw that it can use resize2fs, which I already had installed.
I looked at the resize2fs man page, uninstalled lvm10, and resized my
filesystems to fill the newly extended volumes.
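
So, for anyone else who hits the ext2resize wall on a big-endian box, the
working recipe is simply lvextend followed by resize2fs (volume name and
sizes illustrative; the filesystem needs to be unmounted and checked first):

    umount /home
    lvextend -L +65G /dev/vg0/home
    e2fsck -f /dev/vg0/home    # resize2fs insists on a clean check first
    resize2fs /dev/vg0/home    # with no size given, it grows to fill the LV
    mount /home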

In summary, I've been having success managing my home digital storage with
Debian GNU/Linux on PPC since around 1999, and the only hiccups have been
broken hardware (dead motherboard, non-compliant firewire enclosures) and
endianness issues. Not bad.

--Greg


