
tools to improve hard-disk performance by short-stroking?



Thanks to the national holiday (Beijing) I have begun to read some
articles I marked for free-time reading a few years ago. One of them is
about short-stroking.

http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html

The article is awfully long just to convey a simple idea: by using only
the first 20% of the hard-disk space, you get roughly 3 to 4 times
better hard-disk performance. The less of the disk you use, the bigger
the gain; by using less than 10% of the disk you can get as much as a
5-times-faster HDD.
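
If anyone wants to check the claim on their own drive, here is a rough
sketch I put together (not from the article) that compares average
random-read latency over the first 20% of a block device with latency
over the whole device. The device name /dev/sdX, the 20% fraction and
the sample count are placeholders; it needs root, and it is Linux-only
because it uses O_DIRECT to keep the page cache out of the measurement.

#!/usr/bin/env python3
# Rough short-stroking check: compare random-read latency on the first
# 20% of a disk with latency over the whole disk. Linux-only, needs root.
# DEV, FRACTION and SAMPLES are placeholders -- adjust for your setup.
import mmap, os, random, time

DEV = "/dev/sdX"        # hypothetical device name -- change this
FRACTION = 0.20         # the "short stroke" portion of the disk to test
SAMPLES = 200           # random reads per measurement
BLOCK = 4096            # read size; O_DIRECT wants aligned offsets/sizes

def avg_latency(fd, span_bytes, buf):
    """Average seconds per random BLOCK-sized read within span_bytes."""
    total = 0.0
    for _ in range(SAMPLES):
        # Align the offset so O_DIRECT accepts it.
        offset = random.randrange(0, span_bytes - BLOCK) // BLOCK * BLOCK
        t0 = time.perf_counter()
        os.preadv(fd, [buf], offset)
        total += time.perf_counter() - t0
    return total / SAMPLES

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
try:
    disk_size = os.lseek(fd, 0, os.SEEK_END)
    # mmap gives a page-aligned buffer, which O_DIRECT requires.
    buf = mmap.mmap(-1, BLOCK)
    short = avg_latency(fd, int(disk_size * FRACTION), buf)
    full = avg_latency(fd, disk_size, buf)
    print(f"first {FRACTION:.0%}: {short * 1000:.2f} ms/read")
    print(f"whole disk : {full * 1000:.2f} ms/read")
    print(f"ratio      : {full / short:.2f}x")
finally:
    os.close(fd)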

The article says the only disadvantage of this method is that you give
up the rest of the hard disk.

Having finished the article, it seems obvious to me that if this
"technology" is really so powerful, it should already have been
implemented in OSes such as Linux, not by abandoning the slow part of
the disk but by putting rarely used data there.

Technically it cannot be too difficult to design file-system tools that
tend to place rarely-accessed files at the end of the partition holding
the file system. I am sure I have many files on my computer whose atime
is several years old (e.g. the man page for hier, or files I moved to
Trash years ago); they could be moved to the bottom of the HDD space for
a performance gain.
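
Finding those rarely-accessed files looks like the easy half of the job.
Here is a small sketch of my own (not an existing tool) that lists files
whose atime is older than a few years; the starting directory and the
age threshold are arbitrary, and it assumes the file system is not
mounted with noatime.

#!/usr/bin/env python3
# List files whose atime is older than AGE_YEARS -- candidates for
# relegation to the slow end of the disk. Purely illustrative; the
# starting directory and age threshold are arbitrary choices.
import os, time

ROOT = os.path.expanduser("~")   # where to scan; adjust as you like
AGE_YEARS = 3
cutoff = time.time() - AGE_YEARS * 365 * 24 * 3600

for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.lstat(path)   # lstat: do not follow symlinks
        except OSError:
            continue              # vanished or unreadable file
        if st.st_atime < cutoff:
            age = time.strftime('%Y-%m-%d', time.localtime(st.st_atime))
            print(f"{age}  {path}")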

Since the idea seems to be decades old (some references to it appear in
the 1990s under different names), if the article's several-fold HDD
performance gain is real, there should by now be a hard-disk speedup
tool that moves rarely accessed files to the bottom of the file system
once a week or so and gives me a 3x boost in HDD performance.

The question is how to find such a tool. A Google search for one ended
in vain, which seems to suggest the tool does not exist, which in turn
suggests the article is probably wrong.

Note that I have read about partitioning the hard disk to get the
short-stroking advantage, but that would not be optimal: a human has to
decide which files are frequently accessed (put them in the first
partition) and which are rarely accessed (put them in later partitions),
and a tool could do this much better than a human. Besides, a human
doing partition optimization can only pick one of /var, /usr, /home,
/tmp and swap as "rarely accessed"; in fact none of them is rarely
accessed as a whole, while most of them contain rarely accessed files.
That is very bad granularity compared to what a specialized tool could
achieve.
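
To make the per-file idea concrete: as far as I know, ordinary user
space cannot tell ext4 where on the platter to place a file's blocks, so
the closest approximation I can imagine is a separate "cold" partition
carved out of the slow inner region, with the tool moving cold files
there and leaving symlinks behind. The sketch below only illustrates
that relocation step; /mnt/cold and cold_files.txt are made-up names,
and a real tool would also have to handle hard links, permissions and
races.

#!/usr/bin/env python3
# Sketch of the "relocate cold files" step: move each file to a partition
# mounted on the slow part of the disk and leave a symlink in its place.
# /mnt/cold is a made-up mount point and cold_files.txt a made-up input.
import os, shutil, sys

COLD_ROOT = "/mnt/cold"                 # partition on the slow inner tracks

def relegate(path):
    """Move `path` to COLD_ROOT, preserving its directory layout,
    and replace the original with a symlink to the new location."""
    dest = os.path.join(COLD_ROOT, path.lstrip("/"))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.move(path, dest)             # copies across file systems
    os.symlink(dest, path)              # applications keep their old path

if __name__ == "__main__":
    # Feed it the list produced by an atime scan, one path per line.
    with open(sys.argv[1] if len(sys.argv) > 1 else "cold_files.txt") as f:
        for line in f:
            path = line.strip()
            if path and os.path.isfile(path) and not os.path.islink(path):
                relegate(path)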

Thanks in advance for comments!

