
Re: dd performance test differences



Forgot to mention: both run Debian (7.1 and 9.5) and the filesystems are ext4 on both.
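Since the two boxes are on quite different Debian releases, the kernels and possibly the ext4 mount options differ as well; a quick way to compare that side of things (filesystem names may need adjusting) could be:

# kernel version (wheezy ships a 3.2 kernel, stretch a 4.9 kernel)
uname -r

# ext4 mount options in effect (barriers, journal mode, etc.)
grep ext4 /proc/mounts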


On 02/11/18 11:58, Adam Weremczuk wrote:
Hi all,

Can somebody explain this huge difference between 2 (almost) identical servers:

-------------------------------------------------------------------------------------

dd if=/dev/zero of=test.bin bs=512 count=1024 oflag=sync

524288 bytes (524 kB) copied, 0.00133898 s, 392 MB/s

vs

524288 bytes (524 kB, 512 KiB) copied, 0.3026 s, 1.7 MB/s
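(Note that bs=512 with oflag=sync makes every 512-byte write synchronous, so this mostly measures per-sync latency rather than throughput. A comparison with larger blocks and a single flush at the end, sketched below with example sizes, might show whether that is where the two boxes diverge.)

# flush once at the end instead of syncing every write
dd if=/dev/zero of=test.bin bs=1M count=1024 conv=fdatasync

# bypass the page cache entirely
dd if=/dev/zero of=test.bin bs=1M count=1024 oflag=direct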

-------------------------------------------------------------------------------------

With "hdparm -tT /dev/sda" discrepancies are much smaller but still noticeable:

 Timing cached reads:   15976 MB in  2.00 seconds = 7996.66 MB/sec
 Timing buffered disk reads: 2134 MB in  3.00 seconds = 710.98 MB/sec

vs

 Timing cached reads:   14282 MB in  1.99 seconds = 7161.56 MB/sec
 Timing buffered disk reads: 1172 MB in  3.00 seconds = 390.61 MB/sec
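The buffered-read gap could also come from block-layer tuning rather than the hardware; it might be worth comparing readahead and scheduler settings on both servers (same device name as above):

# readahead, in 512-byte sectors
blockdev --getra /dev/sda

# active I/O scheduler for the device
cat /sys/block/sda/queue/scheduler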

-------------------------------------------------------------------------------------

I would have thought all meaningful aspects were identical:
- server model (IBM/Lenovo X3650 M3)
- identical disks (count, brand, model, capacity) and RAID controller cards (LSI ServeRAID M5014 SAS/SATA Controller)
- RAID settings (see the controller-query sketch after this list):

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3
Size                : 3.629 TB
Sector Size         : 512
Parity Size         : 929.458 GB
State               : Optimal
Strip Size          : 128 KB
Number Of Drives per span:5
Span Depth          : 2
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disabled
Encryption Type     : None
Is VD Cached: No

- both have BBUs in optimal states
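Assuming the output above came from MegaCli (the binary name/path, MegaCli64 below, varies between installs), the cache policy actually in effect and the BBU state can be re-checked on both boxes with:

# virtual drive settings, including the current cache policy
MegaCli64 -LDInfo -Lall -aALL

# battery/BBU health
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL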

-------------------------------------------------------------------------------------

There is a slight difference in RAID firmware:

FW Package Build: 12.12.0-0085

vs

FW Package Build: 12.15.0-0248

The server with the newer firmware gives the lower results.
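Before escalating, it may also be worth dumping the full adapter configuration on each server and diffing the two, since a firmware update can change defaults (again assuming MegaCli64 as the binary name):

# run on each server, then compare the two files
MegaCli64 -AdpAllInfo -aALL > adpinfo-$(hostname).txt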

Shall I start digging with Lenovo / LSI or am I missing something?

Thanks,
Adam


