
Re: Using Alioth Opteron net install to config MD raid1



On Fri, May 13, 2005 at 01:05:21PM +0100, Rupert Heesom wrote:
> I'm using an Alioth net install CD to put Debian onto a new dual opteron
> PC using 2 SATA drives.
> 
> I'm getting confused during the drive setup process.....
> 
> What I'm wanting to do is use the raid 1 setup with each disk having 3
> partitions (as in a workstation install):  root, swap, home.
> 
> How do I setup md to work with these partitions?
> 
> I seem to be able to use EITHER the 3-partition structure OR the RAID 1
> structure (the install puts 1 ext3 partition into the RAID1 device).
> 
> Can I do what I want to with this install?
> 
> If I am not able to change these options, can I at least change the ext3
> file system to an XFS system?  (I think XFS is cool!)

I just went through the hassle of converting a couple of partitions from
XFS to ext3 (i386 running a 2.6 kernel) because I had such frequent
crashes where xfs leaked so many buffers that the OS ran out of RAM and
died.

No more XFS for me for a long time, at least when running nfs and samba
off it on top of LVM and MD raid.

With my setup, it seems ext3 has about 5 times the throughput of xfs,
and metadata access is probably 50 times faster than with xfs.
Something is really wrong with xfs in 2.6.5-2.6.10.  Not sure about
2.6.11 yet, and hopefully I won't have to find out now that I have
managed to change to ext3 instead.

As for the setup, I have done this:

rceng02:~# fdisk -l /dev/sd[ab]

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sda1   *           1          16      128488+  fd  Linux raid autodetect
   /dev/sda2              17        3663    29294527+  fd  Linux raid autodetect
   /dev/sda3            3664       30401   214772985   fd  Linux raid autodetect

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
   /dev/sdb1   *           1          16      128488+  fd  Linux raid autodetect
   /dev/sdb2              17        3663    29294527+  fd  Linux raid autodetect
   /dev/sdb3            3664       30401   214772985   fd  Linux raid autodetect
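
If you want to build the same layout by hand from the installer shell,
it would look roughly like this (from memory, so double-check the mdadm
syntax and the device names against your own disks before running it):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3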

rceng02:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      29294400 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      128384 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      214772864 blocks [2/2] [UU]
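
Making the filesystems on the first two arrays is nothing special,
roughly:

  mkfs.ext2 /dev/md0
  mkfs.ext3 /dev/md1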

md0 is mounted as /boot with ext2, md1 is mounted as / with ext3, and
md2 is a PV for lvm with 3 LVs inside, like this:

rceng02:~# pvscan
  PV /dev/md2   VG MainVG   lvm2 [204.82 GB / 0    free]
  Total: 1 [204.82 GB] / in use: 1 [204.82 GB] / in no VG: 0 [0   ]
rceng02:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "MainVG" using metadata type lvm2
rceng02:~# lvscan
  ACTIVE            '/dev/MainVG/Swap' [2.00 GB] inherit
  ACTIVE            '/dev/MainVG/Home' [20.00 GB] inherit
  ACTIVE            '/dev/MainVG/Data' [182.82 GB] inherit
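
Setting that part up is just the usual LVM sequence; roughly something
like the following, with the sizes adjusted to taste (the sizes here are
only examples, not exactly what I typed):

  pvcreate /dev/md2
  vgcreate MainVG /dev/md2
  lvcreate -L 2G -n Swap MainVG
  lvcreate -L 20G -n Home MainVG
  lvcreate -L 182G -n Data MainVG
  mkswap /dev/MainVG/Swap
  mkfs.ext3 /dev/MainVG/Home
  mkfs.ext3 /dev/MainVG/Data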

I didn't want to put / in LVM yet (although now that grub supports raid
for /boot I have a much simpler setup than I used to have).  It might be
possible to just have /boot as one raid and then the rest of the disk as
another raid with lvm on top to allocate space to each volume as
needed.  Being able to add space and resize volumes and filesystems
easily is really nice, so lvm is great.
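
Growing a volume later is just something like:

  lvextend -L +10G /dev/MainVG/Home
  resize2fs /dev/MainVG/Home

(depending on your kernel and tools you may need to unmount the
filesystem first, or use ext2online instead of resize2fs, so check
before relying on it.)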

I suspect you can make partitions on md devices, but I never had much
luck with it since there aren't actually device names allocated for
that.  It is simpler to use lvm to carve up one md device and keep a few
separate md devices for special partitions (like /boot).

Len Sorensen


