
Re: SATA disk detected as IDE? SOLVED



Anand Sivaram put forth on 6/30/2010 10:25 PM:

> Why do you say that it is detected as IDE.  Normally IDE disks using

I don't get this either.  Nothing in anything he posted shows that the kernel
is detecting this drive as IDE.  Quite the contrary: it's being detected as a
SATA device, and if he had posted his dmesg output it would state so clearly,
but he did not.
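For anyone curious what to look for, libata prints an unmistakable "SATA link
up" line at boot.  A grep sketch against a hypothetical dmesg excerpt (the
drive model and bus numbers below are made up for illustration, not taken
from the OP's machine):

```shell
# Hypothetical dmesg excerpt -- the lines below illustrate the format
# libata uses; the specific drive model is invented for this example.
sample='ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1.00: ATA-8: WDC WD5000AAKS-00V1A0, 05.01D05, max UDMA/133
sd 0:0:0:0: [sda] Attached SCSI disk'

# A SATA drive under libata logs a "SATA link up" line; a disk driven
# by the old IDE layer never does.
printf '%s\n' "$sample" | grep 'SATA link up'
```

On a real system you'd run the grep against `dmesg` output instead of a
canned sample.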

> deprecated IDE driver are shown as hda, hdb etc. where as SATA and the same
> IDE disks with newer PATA driver are shown as sda, sdb etc.  For you it is
> showing the disk as sda.  Take a look at "lspci -k" to see which kernel
> driver is getting used.
> Also a very easy method to see the reading speed of the disk is

You're talking about libata, the current all-in-one SATA/PATA/ATAPI driver.
And yes, regardless of whether a drive is PATA or SATA, if it's under the
control of libata it will show up as /dev/sdX, or, if it's a CD/DVD-ROM, as
/dev/srX.
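The "Kernel driver in use:" line in `lspci -k` output is the one that settles
it.  A sketch of pulling that line out, run here against a made-up sample so
it's self-contained (the controller shown is hypothetical, not the OP's
hardware):

```shell
# Hypothetical "lspci -k" output for illustration only.
sample='00:1f.2 SATA controller: Intel Corporation 82801 SATA AHCI Controller
        Kernel driver in use: ahci'

# Extract the driver name; "ahci" (or any libata driver) means the
# disks appear as /dev/sdX.
driver=$(printf '%s\n' "$sample" | sed -n 's/.*Kernel driver in use: //p')
echo "$driver"
```

On a real box, drop the sample and pipe `lspci -k` straight into the sed.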

> dd if=/dev/sda of=/dev/null bs=1M count=1024
> This will read the first 1024MB of your disk.  I think a good
> disk/controller gives you more than 70MB per second or so.

That depends on many factors, the big one being whether the drive and
controller both support NCQ, and whether they both implement it well.  Look
at the kernel's ATA_HORKAGE_NONCQ blacklist entries in libata for a group of
drives whose performance _drops_ considerably with NCQ enabled, or that
suffer more serious problems with NCQ enabled such as filesystem corruption,
data loss, etc.
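If you want to see where NCQ stands on a given disk, queue_depth under sysfs
is the knob libata exposes: a depth of 1 means NCQ is effectively off, 31 is
the usual value with NCQ on.  A quick sketch (the disk name sda is an
assumption, and the little helper function is just for illustration):

```shell
# Interpret a queue_depth value (illustrative helper, not a standard tool).
ncq_status() {
    if [ "$1" -gt 1 ]; then
        echo "NCQ enabled (depth $1)"
    else
        echo "NCQ disabled"
    fi
}

# Assumption: the disk of interest is sda.
qd_file=/sys/block/sda/device/queue_depth
if [ -r "$qd_file" ]; then
    ncq_status "$(cat "$qd_file")"
fi
# To disable NCQ for testing (as root): echo 1 > "$qd_file"
```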

Other factors affecting sequential read performance (dd) are the elevator
used, and the nr_requests and read_ahead_kb settings.  Bumping read_ahead_kb
up from the default 128 to 512 or 1024 will produce a decent increase in
sequential read performance, about 10-20%.  For example, a quick test on one
of my lower end systems produces a 16% increase in sequential read performance:

/$ dd if=/dev/sda of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 16.8026 s, 63.9 MB/s

/$ echo 1024 > /sys/block/sda/queue/read_ahead_kb
/$ dd if=/dev/sda of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 14.4375 s, 74.4 MB/s

(This system has only 384MB RAM, so little to none of the performance
increase was due to buffers/cache from the first dd run.)

_But_ a high read_ahead_kb setting causes a huge rise in the size of kernel
I/O buffers, eating system memory like candy.  This one test caused a
six-fold increase in my kernel buffer size, to over 260MB.  Playing with
read_ahead_kb can be useful for measuring absolute hardware performance, but
I wouldn't run day-to-day with a setting much higher than the default.  There
are some specific server workloads where a high read_ahead_kb is useful, such
as streaming media servers, but they are few and far between.
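One unit gotcha worth noting: blockdev expresses the same setting in 512-byte
sectors, while the sysfs read_ahead_kb file is in KiB.  A sketch of the
conversion (device name /dev/sda assumed, as above):

```shell
# read_ahead_kb is in KiB; blockdev --setra takes 512-byte sectors,
# so multiply KiB by 2 to get the equivalent sector count.
kb=1024
sectors=$((kb * 2))
echo "blockdev --setra $sectors /dev/sda  # same as echo $kb > /sys/block/sda/queue/read_ahead_kb"
```

So the default 128 KiB corresponds to `blockdev --getra` reporting 256.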

-- 
Stan
