
Bug#607327: mount: Performance issues with losetup (and therefore XEN)



Control: tags -1 + moreinfo

Hi,

On Fri, Dec 17, 2010 at 02:04:01AM +0100, Gregory Auzanneau wrote:
> Package: mount
> Version: 2.17.2-3.3
> Severity: important
> 
> Hello all,
> 
> I'm currently experimenting with Xen and some I/O-intensive parallel workloads on disks.
> I've noticed some performance issues on Xen introduced by losetup, which drops the NCQ/TCQ (command queuing) functionality (really useful for parallel random disk access).
> 
> I'm using this C program to measure the performance impact: http://box.houkouonchi.jp/seeker_baryluk.c (found on this website: http://www.linuxinsight.com/how_fast_is_your_disk.html )
> 
> Please find some benchmarks below:
> 
> Performance of /dev/dm-2 (LVM) with 1 thread: 210 seeks/sec
> root@srv-xen1:~# ./seeker_baryluk /dev/dm-2 1
> [1 threads]
> Results: 210 seeks/second, 4.755 ms random access time (33493245 < offsets < 60740308117)
> 
> Performance of /dev/dm-2 with 32 threads: 699 seeks/sec (more than 3x better)
> root@srv-xen1:~# ./seeker_baryluk /dev/dm-2 32
> [32 threads]
> Results: 699 seeks/second, 1.430 ms random access time (8670248 < offsets < 60740120558)
> 
> We map /dev/dm-2 onto /dev/loop0 (yes, just a mapping of the LVM volume, without any filesystem interaction):
> root@srv-xen1:~# losetup /dev/loop0 /dev/dm-2 
> 
> Performance of /dev/loop0 with 1 thread: exactly the same as direct random access (good point here)
> root@srv-xen1:~# ./seeker_baryluk /dev/loop0 1
> [1 threads]
> Results: 210 seeks/second, 4.757 ms random access time (4255332 < offsets < 60739140845)
> 
> Performance of /dev/loop0 with 32 threads: still only 211 seeks/sec <- Here we have "catastrophic" performance, because we completely lose the benefit brought by NCQ/TCQ/Queuing!
> root@srv-xen1:~# ./seeker_baryluk /dev/loop0 32
> [32 threads]
> Results: 211 seeks/second, 4.735 ms random access time (14948337 < offsets < 60737675221)
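For readers without access to the linked seeker_baryluk.c, the kind of measurement it performs can be sketched as below. This is a hypothetical simplification, not the actual program: the function name `run_benchmark`, the 512-byte block size, and the fixed measurement window are all assumptions. Each thread issues random `pread()`s against the target; the aggregate completed reads per second approximates seeks/second, and on a device with working command queuing the rate should scale with the thread count.

```c
/* Sketch of a threaded random-seek benchmark in the spirit of
 * seeker_baryluk.c (assumption: the real program differs in details).
 * Build with: gcc -O2 -pthread seek_sketch.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK 512

struct worker {
    int fd;                 /* shared read-only descriptor */
    off_t blocks;           /* target size in BLOCK units */
    volatile int *stop;     /* set to 1 to end the measurement */
    unsigned int seed;      /* per-thread rand_r() state */
    long count;             /* completed reads by this thread */
};

static void *seeker(void *arg)
{
    struct worker *w = arg;
    char buf[BLOCK];
    while (!*w->stop) {
        /* pick a random block-aligned offset and read one block */
        off_t off = (off_t)(rand_r(&w->seed) % w->blocks) * BLOCK;
        if (pread(w->fd, buf, BLOCK, off) == BLOCK)
            w->count++;
    }
    return NULL;
}

/* Run nthreads random readers against path for duration seconds and
 * return the aggregate seeks/second, or -1 on error. */
long run_benchmark(const char *path, int nthreads, int duration)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    off_t size = lseek(fd, 0, SEEK_END);
    if (size < BLOCK) {
        close(fd);
        return -1;
    }

    volatile int stop = 0;
    struct worker *w = calloc(nthreads, sizeof *w);
    pthread_t *tid = calloc(nthreads, sizeof *tid);
    for (int i = 0; i < nthreads; i++) {
        w[i] = (struct worker){ fd, size / BLOCK, &stop,
                                (unsigned int)i + 1, 0 };
        pthread_create(&tid[i], NULL, seeker, &w[i]);
    }
    sleep(duration);
    stop = 1;

    long total = 0;
    for (int i = 0; i < nthreads; i++) {
        pthread_join(tid[i], NULL);
        total += w[i].count;
    }
    free(w);
    free(tid);
    close(fd);
    return total / duration;
}
```

On a queuing-capable device, `run_benchmark("/dev/dm-2", 32, 10)` should report several times the single-thread rate; the report above shows that the same comparison on /dev/loop0 stays flat, i.e. the loop driver serializes the concurrent requests.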

I assume this is no longer reproducible, or is this still the case? For
now I'm closing the bug report; please feel free to reopen it if the
issue can still be triggered.

Regards,
Salvatore

