Re: Very slow LVM performance
On Mon, Jul 12, 2010 at 20:06, Stan Hoeppner <email@example.com> wrote:
> I had the same reaction Mike. Turns out mdadm actually performs RAID 1E with
> 3 disks when you specify RAID 10. I'm not sure what, if any, benefit RAID 1E
> yields here--almost nobody uses it.
The people who are surprised to see us do RAID10 over three devices
probably overlooked that we create the RAID10 with three members,
which, in combination with "--layout=n3", is almost equivalent to
creating a three-way RAID1 mirror. I say "almost" because it is
equivalent insofar as each of the three disks is an exact copy of
the others; the difference is in performance.
We found out empirically (and then confirmed by reading a number of
posts on the 'net) that MD does not implement RAID1 in, let's say,
the most desirable way. In particular, it does not exploit the data
redundancy for reading when only one process is doing the reading.
In other words, if you have a three-way RAID1 mirror and only one
reader process, MD reads from only one of the disks, so the mirror
gives you no read-performance benefit. If you have more than one
large read, or more than one process reading, then MD does the right
thing and spreads the reads across the disks in what seems to be a
round-robin fashion (I may be wrong about this).
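One way to observe this for yourself (a sketch; the array and device
names are examples, and you need root plus iostat from sysstat):

```shell
# Single sequential reader: with RAID1, iostat will typically show
# only one member disk doing the work.
dd if=/dev/md0 of=/dev/null bs=1M count=4096 &
iostat -x 1 5    # watch per-member %util while dd runs
wait

# Two concurrent readers at different offsets: MD can now spread
# the reads across the mirror members.
dd if=/dev/md0 of=/dev/null bs=1M count=4096 &
dd if=/dev/md0 of=/dev/null bs=1M skip=8192 count=4096 &
iostat -x 1 5
wait
```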
When we tried RAID10 with --layout=n3 instead of RAID1, we saw much
better read performance, and we verified that all three disks remain
bit-for-bit identical copies.
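For reference, here is roughly how the two setups are created (device
names are hypothetical; this needs root and scratch partitions):

```shell
# Classic three-way RAID1 mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# RAID10 with the "near 3" layout: three members, three copies of
# every block, so each disk ends up a full copy of the others --
# same on-disk redundancy, different read scheduling:
mdadm --create /dev/md0 --level=10 --layout=n3 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
```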
> I just hope the OP gets prompt and concise drive failure information the
> instant one goes down, and has a tested array rebuild procedure in place.
> Rebuilding a failed drive in this kind of setup may get a bit hairy.
Actually, it's the other way around, because three-way mirroring
gives you quite a bit of redundancy: the array is still redundant
after losing one drive, and still functional after losing two. We
are also planning to keep about four global hot spares standing by
in case a drive fails.
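The hot-spare part might look something like this (again a sketch
with made-up device names):

```shell
# Add a hot spare to a running array; MD starts rebuilding onto it
# automatically when a member fails:
mdadm --add /dev/md0 /dev/sdd1

# To share spares across several arrays, give them a common
# spare-group in mdadm.conf, e.g.:
#   ARRAY /dev/md0 ... spare-group=pool1
#   ARRAY /dev/md1 ... spare-group=pool1
# and run "mdadm --monitor" so it can move a spare to whichever
# array in the group needs one.
```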