
[SOLVED] Is squeeze compatible with WD20EARS and other 2TB drives?



Attention: long post ahead!
I don't use line wrapping because it breaks long URLs. If that makes you or your e-mail client cringe, you may as well read this at http://bufferoverflow.tiddlywiki.com instead (same text, nicer formatting).

First of all, let me thank all of you who responded. As promised, I am giving feedback to the list so that future purchasers of Western Digital WD EARS/EADS models and similar "Advanced Format" hard drives may benefit.

The first thing of note is that, by default, the drive parks its heads every 8 seconds, incrementing the Load_Cycle_Count attribute each time. As widely reported on the Internet, this may pose a problem in the long run, since these drives are "guaranteed" to sustain only a limited number of such head parking cycles; the figure given varies from 300,000 to 1,000,000, depending on where you look. The first thing I did was therefore launch a shell script that wrote something to the drive every second. Not content with this dirty workaround, I proceeded to download WD's proprietary utility wdidle3.exe, and the first link obtained by googling for "wdidle3.exe" did the trick: http://support.wdc.com/product/download.asp?groupid=609&sid=113

I then downloaded a FreeDOS bootable floppy image, copied it to a floppy disk using dd, and copied wdidle3.exe onto the resulting floppy. Reboot computer, change BIOS boot order to floppy first, save & exit, the floppy boots, and I run wdidle3.exe. The utility offers three command-line switches: one to view the current idle timer setting, one to change it, and one to disable it. No drive can be specified, so if you change or disable the timer, you are doing it to ALL and ANY WD drives in your system. I chose to disable head parking, and since I also have an older 160 GB WD IDE disk in the box, the utility disabled head parking for BOTH drives.

Except that ... there be problems. Unlike on the old 160 GB drive, the setting didn't work on the new 2 TB drive. Instead, the frequency of the load cycles increased 16-fold, to a whopping 7200 cycles per hour! This quickly increased my Load_Cycle_Count attribute (checked by issuing smartctl --all /dev/sda) by several thousand ticks overnight. Interestingly enough, the drive loaded and unloaded its heads at the amazing rate of twice per second even while sustained copying was underway (copying a 10 GB directory subtree from one drive to another).

I didn't notice the increased cycle count until the next morning, however. When I did, I rebooted the machine with the FreeDOS floppy again and changed the interval from "disabled" to "every 300 seconds", which appears to be the maximum interval allowed. For the time being at least, this seems to have made the Load_Cycle_Count stay put at 22413. Whew! So, setting this bugger straight is probably the first thing you'll want to do after getting one of these WD drives.
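To keep an eye on the counter without digging through the full report, the raw value can be pulled out of smartctl's attribute table with awk. A minimal sketch; the sample line below is illustrative (the field layout follows smartctl's usual attribute output, but the values themselves are made up):

```shell
#!/bin/sh
# Extract the raw Load_Cycle_Count value from smartctl attribute output.
# On a live system you would pipe in real data, e.g.:
#   smartctl --all /dev/sda | awk '$2 == "Load_Cycle_Count" {print $NF}'
# Here we parse a sample attribute line (illustrative values only):
sample='193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       22413'
count=$(printf '%s\n' "$sample" | awk '$2 == "Load_Cycle_Count" {print $NF}')
echo "Load_Cycle_Count: $count"
```

Run from cron and appended to a log file, a one-liner like this makes it obvious within hours, not overnight, whether the timer setting has stuck.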

Now, the second issue: the hardware/logical sector alignment.
Since it will affect real-world transfer speeds, let's first check out the theoretical speeds of this drive in this particular environment -- a 3 GHz Pentium 4 motherboard with a humble integrated SATA controller (I think it's an early SATA-I generation part).

Before partitioning and formatting:

obelix# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   1726 MB in 2.00 seconds = 713.98 - 862.86 MB/sec (several iterations performed)
 Timing buffered disk reads:  336 MB in 3.01 seconds = 100.01 - 111.72 MB/sec (several iterations performed)

After partitioning the drive, aligned on modulo 8 sector boundaries:

obelix# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   1264 MB in  2.00 seconds = 631.97 MB/sec
 Timing buffered disk reads:  252 MB in  3.08 seconds =  81.80 MB/sec

Hmm, while we're at it, why don't we also check the antiquated 160 GB drive on the obsolete IDE interface?

obelix# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   1348 MB in  2.00 seconds = 674.14 MB/sec
 Timing buffered disk reads:  206 MB in  3.02 seconds =  68.26 MB/sec

Well, so much for the alleged superiority of serial ATA over IDE...

Anyway. I should preface this by saying that, Squeeze not yet having reached stable, all of the following was performed on a stock Lenny i386 system (the reason being I have no Squeeze system yet). Many of the following points may thus become obsolete in a matter of weeks when Squeeze, with a newer kernel and updated partitioning tools, reaches stable.

The first issue is that fdisk in Lenny doesn't support GPT partitioning, so I had to use parted. I first used its GNOME variant, GParted, and must say that it can't align the partitions. Even if you align the first sector by hand (in parted, since GParted can't do it) and de-select the "Round to cylinders" option in GParted, as recommended in http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html (which was my main guide and reference in this adventure), GParted will end your partition on an aligned sector -- which means that, by default, the next partition will start on a non-aligned sector again.

Be that as it may, I then proceeded to use the new partitions created by GParted for some cursory "benchmarks". The typical copy speed reached in mc was about 20 MB/s, while rsync reached a maximum of 51 MB/s on these unaligned partitions, copying from hda (WD1600AAJB) to sda (WD20EARS).
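Calculating aligned start sectors by hand is simple modulo arithmetic; what GParted wouldn't do for me can be sketched in a few lines of shell (align_up is a hypothetical helper, not a feature of parted or GParted):

```shell
#!/bin/sh
# Round a proposed start sector up to the next aligned boundary.
# Usage: align_up SECTOR BOUNDARY
# Boundary 8 gives 4 KiB alignment on 512 B logical sectors; 64 gives 32 KiB.
align_up() {
    sector=$1
    boundary=$2
    echo $(( (sector + boundary - 1) / boundary * boundary ))
}

# Example: a partition ending at sector 8194055 means the next one would
# start at 8194056, which happens to be modulo-8 aligned already:
align_up 8194056 8      # -> 8194056
# ... but not modulo-64 aligned; rounding up gives:
align_up 8194056 64     # -> 8194112
```

Feed the rounded value to parted as the start of the next partition (with unit set to sectors) and every partition lands on a boundary.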

Then I tried to re-align my partitions by manually calculating the starting sectors of all the partitions so as to have them divisible by 8. This could only be done in parted, not in GParted. On the other hand, parted couldn't create ext3 filesystems, so manually created partitions had to be subsequently formatted in GParted. In short, a combination of both tools had to be used to successfully create AND format the partitions. Here's my final result as seen in parted (fdisk doesn't understand GPT):

(parted) print
Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sda: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start        End          Size         File system  Name     Flags
 1      128s         8194055s     8193928s     linux-swap
 2      8194056s     49154055s    40960000s    ext3         primary
 3      49154056s    90114055s    40960000s    ext3         primary
 4      90114056s    1998569479s  1908455424s  ext3         primary
 5      1998569480s  3907024064s  1908454585s  ext3
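Whether the start sectors in such a listing are actually aligned is easy to verify mechanically. A minimal sketch, using the start sectors from the listing above:

```shell
#!/bin/sh
# Check each partition's start sector against an alignment boundary.
# Start sectors taken from the parted listing above; boundary 8 = 4 KiB
# on 512 B logical sectors.
for start in 128 8194056 49154056 90114056 1998569480; do
    if [ $(( start % 8 )) -eq 0 ]; then
        echo "start $start: aligned (mod 8)"
    else
        echo "start $start: NOT aligned (mod 8)"
    fi
done
```

All five starts check out modulo 8; note that several of them (8194056, for instance) are not modulo-64 aligned, which is what the re-partitioning below addresses.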

I was just curious whether aligned partitions would yield any noticeable speed improvement (especially in the file write department, since file reads, according to the above IBM article, should not be hit that heavily by misalignment). The "benchmarks" I performed, consisting of copying random files from the other drive to the WD20EARS using mc and rsync, generally yielded something between 15 and 35 MB/s, sometimes falling under 10 MB/s and at times going as high as 56 MB/s. The latter figure, however, was usually reached in the initial moments of a large file rsync (an Ubuntu CD ISO file) and would decrease after several seconds to about 40 MB/s, so it may very well be due to the 64 MB cache on these drives. Just for the heck of it, I decided to re-align the partitions modulo-64, thus:

Partition Table: gpt

Number  Start        End          Size         File system  Name        Flags
 1      128s         8194047s     8193920s     linux-swap   linux-swap
 2      8194048s     49154047s    40960000s    ext3         ext3
 3      49154048s    90114047s    40960000s    ext3         ext3
 4      90114048s    1998569472s  1908455425s  ext3         ext3
 5      1998569473s  3907024064s  1908454592s  ext3

Rsyncing the good old Ubuntu ISO file yielded transfer rates of around 60 MB/s, with the exception of the last partition, which was written to at under 50 MB/s. It made me wonder. I checked the mount options in fstab, double-checked that the CPU governor was set to max performance, all to no avail. Then I fired up parted again and noticed that the 5th partition was actually one sector off. I corrected my error thus:

Partition Table: gpt

Number  Start        End          Size         File system  Name        Flags
 1      128s         8194047s     8193920s     linux-swap   linux-swap
 2      8194048s     49154047s    40960000s    ext3         ext3
 3      49154048s    90114047s    40960000s    ext3         ext3
 4      90114048s    1998569471s  1908455424s  ext3         ext3
 5      1998569472s  3907024064s  1908454593s  ext3         ext3

As expected, the rsync results for the last partition became consistent with the other partitions (i.e. around 60 MB/s).
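For what it's worth, the off-by-one that cost me 10 MB/s would have been caught immediately by checking each start sector against the boundary, as a quick sanity check shows:

```shell
#!/bin/sh
# The 5th partition's original start sector (1998569473) is one sector
# off a 64-sector boundary; the corrected value (1998569472) is not.
for start in 1998569473 1998569472; do
    echo "$start mod 64 = $(( start % 64 ))"
done
```

A habit worth adopting: after any manual partitioning, loop over the start sectors and verify that every remainder is zero before formatting.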

Conclusions:
By default, these WD drives are not Linux-ready. They do work out of the box, but are not configured optimally speed-wise. Given that we're talking about "green" (marketing mumbo-jumbo for "slow") drives, this additional performance hit is noticeable and quite undesirable. Aligning the partitions on 8-sector boundaries improved transfer speeds by almost 20%; aligning them on 64-sector boundaries didn't yield further noticeable improvement, though. Or, more precisely: the tests I performed were too coarse to substantiate potential small differences, because as the differences become smaller, other factors -- the CPU governor used, fstab parameters, or the actual load on the CPU at a given moment -- may prevail, completely masking them. The CPU governor seems to be the most influential of those secondary factors (see below). So, there are indications that 64-sector alignment "may" give slightly better performance than 8-sector alignment, but they are nothing more than indications, really; proper benchmarks would be required to ascertain that.
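As a crude but at least repeatable alternative to timing mc/rsync copies, sequential write speed can be sanity-checked with dd. A sketch under assumptions: /mnt/p2 is a hypothetical mount point for one of the ext3 partitions, and conv=fdatasync makes dd flush to disk before reporting, so the drive's 64 MB cache doesn't inflate the figure:

```shell
#!/bin/sh
# Crude sequential-write check: write ~700 MB of zeros and let dd report
# the effective throughput. /mnt/p2 is a hypothetical mount point; adjust
# to taste. conv=fdatasync forces the data to stable storage so the
# drive's 64 MB cache doesn't inflate the result.
dd if=/dev/zero of=/mnt/p2/ddtest.bin bs=1M count=700 conv=fdatasync
rm -f /mnt/p2/ddtest.bin
```

This still measures only large sequential writes, of course, but unlike copying files it removes the source disk from the equation.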

Curiosa:
All testing was done with a ~700 MB ISO file; copying many smaller files may (and will) incur additional performance hits. Dropping the CPU governor to powersave reduced file writes to under 20 MB/s, i.e. to about a third of the maximum speed achievable. The mount options for the partitions and the performance of the source disk are also major factors in these tests. In my case, the source from which the files were copied was an oldish 160 GB WD IDE drive (model WD1600AAJB).

The only downtime needed was about 10 minutes -- the time it took to actually install the drive into the chassis. Had WD provided a tool for modifying the drive's head-parking timer online, no further reboots would have been needed; once the hard drive was installed, it could have been taken into production use without so much as a single reboot. Due to my own mistake, however, a superfluous reboot was needed. Namely, while messing with parted and GParted and modifying partition sizes, at one point I forgot to unmount the partitions before deleting the partition table in parted. After that, I kept getting the warning that a reboot would be required for the kernel to re-read the partition tables, preventing me from creating the last two filesystems and wrapping it up. Neither umount nor swapoff would help. Instead of digging for the offending process and killing/restarting it, I preferred to reboot the system, since it wasn't in use at the moment anyway.

Besides the physical installation of the drive in the chassis, which was done during off hours, virtually everything else was done remotely via ssh, without interrupting the work of the currently logged-in user. To enable graphical tools such as GParted to be used, ssh was run with the -XC options, and GParted was then launched remotely by issuing "gksu gparted". The flexibility of GNU/Linux is simply mind-boggling.

I have no kind words for WD. Their drives as provided are severely underoptimized for GNU/Linux.
On the drive label and on their site they state that no further configuration is required for using the drive in Linux, which is quite simply untrue. In addition, the head parking feature is heavily flawed, and is only accessible via a proprietary DOS tool, and only by taking the entire system offline. I am quite disappointed in WD, but am thoroughly confident that the GNU/Linux community will compensate for WD's shortcomings, as always. We'll see what hdparm and smartmontools in Squeeze will bring along; the Lenny versions are too old to be of much use with this disk (for example, hdparm -B doesn't work). The foregoing user experience is nothing more than that -- a user experience. Copying a handful of files is not to be considered a "test" or "benchmark" in any meaningful sense whatsoever, so take it with a huge lump of salt!
Happy computing!

--
Cheerio,

Klistvud http://bufferoverflow.tiddlyspot.com Certifiable Loonix User #481801 Please reply to the list, not to me.

