
Re: Very slow LVM performance



Arcady Genkin put forth on 7/12/2010 12:45 PM:
> I just tried to use LVM for striping the RAID1 triplets together
> (instead of MD).  Using the following three commands to create the
> logical volume, I get 550 MB/s sequential read speed, which is
> considerably faster than before, but still about 10% slower than what a
> plain MD RAID0 stripe can do with the same disks (612 MB/s).
> 
>   pvcreate /dev/md{0,5,1,6,2,7,3,8,4,9}
>   vgcreate vg0 /dev/md{0,5,1,6,2,7,3,8,4,9}
>   lvcreate -i 10 -I 1024 -l 102390 vg0
> 
> test4:~# dd of=/dev/null bs=8K count=2500000 if=/dev/vg0/lvol0
> 2500000+0 records in
> 2500000+0 records out
> 20480000000 bytes (20 GB) copied, 37.2381 s, 550 MB/s
> 
> I would still like to know why LVM on top of RAID0 performs so poorly
> in our case.

I'm curious as to why you're (apparently) giving up 2/3 of your storage to
redundancy.  Have you considered a straight RAID 10 across those 30
disks/LUNs?  Performance should improve by roughly 50% or more over your
current setup (assuming you're not already hitting your Ethernet bandwidth
limits), and you'd only be losing half your storage to fault tolerance
instead of two-thirds of it.  RAID 10 offers excellent fault tolerance among
the standard RAID levels and higher performance than anything but a straight
stripe.
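
If you want to try it, a minimal sketch of a single flat mdadm RAID 10 over
all 30 LUNs might look like the following (after tearing down the current
arrays; the /dev/sd* names and 512 KiB chunk size are placeholders for your
actual devices and preferred chunk):

  # one flat RAID 10, two copies of each chunk (near layout, the default)
  mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 \
        --raid-devices=30 /dev/sd[b-z] /dev/sda[a-e]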

I'm guessing LVM wouldn't have any problems atop a straight mdadm RAID 10
across those 30 disks.  I'd also guess the LVM performance problem you saw
earlier was due to running it atop nested mdadm RAID devices.  Straight mdadm
RAID 10 doesn't create or use nested devices.
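
Layering LVM on top of that would then be the simple, non-nested case.  A
sketch, reusing the vg0/lvol0 names from your test (no -i/-I striping
options needed, since md is already doing the striping):

  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  # one linear LV spanning the whole VG; md handles the striping
  lvcreate -l 100%FREE -n lvol0 vg0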

I'm also curious as to why you're running software RAID at all, given that
most iSCSI targets are themselves array controllers with built-in hardware
RAID.  Can you tell us a little about your iSCSI target devices?

-- 
Stan

