
Re: Very slow LVM performance



On Mon, Jul 12, 2010 at 02:05, Stan Hoeppner <stan@hardwarefreak.com> wrote:

> lvcreate -i 10 -I [stripe_size] -l 102389 vg0
>
> I believe you're losing 10x performance because you have a 10 "disk" mdadm
> stripe but you didn't inform lvcreate about this fact.

Hi, Stan:

I believe the -i and -I options are for having *LVM* itself do the
striping, am I wrong?  In our case (where LVM sits on top of a single
RAID0 MD stripe) the -i option does not seem to make sense:

test4:~# lvcreate -i 10 -I 1024 -l 102380 vg0
  Number of stripes (10) must not exceed number of physical volumes (1)

My understanding is that LVM should be agnostic of what underlies it
as the physical storage, so it should treat the MD stripe as one
large disk and let the MD device handle the load balancing (which it
seems to be doing fine).
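For what it's worth, as I understand it -i/-I would only apply if each
triplet were handed to LVM as a separate PV and LVM did the striping
itself, roughly like the sketch below (illustrative only; the VG/LV
names are placeholders, and this is not how we set things up):

  pvcreate /dev/md{0..9}
  vgcreate vg0 /dev/md{0..9}
  # LVM itself stripes across the ten PVs with a 1 MB stripe size
  lvcreate -i 10 -I 1024 -l 102380 -n lv0 vg0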

Besides, the speed we are getting from the LVM volume is less than
half of what an individual component of the RAID10 stripe can do.
Even if we assume that LVM somehow manages to distribute its data so
that it always hits only one physical disk (one disk triplet in our
case), the question would remain why it is *that* slow.  It's 57
MB/s vs 134 MB/s that an individual triplet can do:

test4:~# dd of=/dev/null bs=8K count=2500000 if=/dev/md0
2500000+0 records in
2500000+0 records out
20480000000 bytes (20 GB) copied, 153.084 s, 134 MB/s
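(The 57 MB/s figure comes from the equivalent sequential read of the
LVM logical volume; the LV path below is just a placeholder for ours,
and I have trimmed the output.)

test4:~# dd of=/dev/null bs=8K count=2500000 if=/dev/vg0/lv0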

> If you specified a chunk size when you created the mdadm RAID 0 stripe, then
> use that chunk size for the lvcreate stripe_size.  Again, if performance is
> still lacking, recreate with whatever chunk size you specified in mdadm and
> multiply that by 10.

We are using a chunk size of 1024 (i.e. 1 MB) with the MD devices.
For the record, we used the following commands to create the md
devices:

For N in 0 through 9:
mdadm --create /dev/mdN -v --raid-devices=3 --level=raid10 \
  --layout=n3 --metadata=0 --bitmap=internal --bitmap-chunk=2048 \
  --chunk=1024 /dev/sdX /dev/sdY /dev/sdZ

Then the big stripe:
mdadm --create /dev/md10 -v --raid-devices=10 --level=stripe \
  --metadata=1.0 --chunk=1024 /dev/md{0,5,1,6,2,7,3,8,4,9}
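And, in case it matters, the LVM layer on top of /dev/md10 was created
in the straightforward way, roughly as follows (the LV name here is a
placeholder):

pvcreate /dev/md10
vgcreate vg0 /dev/md10
lvcreate -l 102380 -n lv0 vg0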

Thanks,
-- 
Arcady Genkin

