On Fri, Nov 30, 2007 at 02:53:32PM -0800, Alvin Oga wrote:
> > David Brodbeck wrote:
> > >
> > > On Nov 30, 2007, at 9:45 AM, Stefan Monnier wrote:
> > > No. The NTFS file system does not need defragmentation.
>
> all file systems can use a defragmentor

...description of interleaving deleted...

> the defragmentor can be used to move sectors around to optimize
> reading the whole file w/o waiting for the next revolution

This is called interleaving, and it has nothing to do with fragmentation.
Defragmenters do nothing about interleave. If one were to interleave at the
filesystem level, performance would quickly degrade because of the huge
number of pointers to chunks of the file. Interleaving is done at the
hardware format level, and because the controller is integrated with the
media, it is completely hidden from the host.

Note that in the sequential-read case, performance would be half or worse
for interleaved sectors, as you'd suffer through at least two rotations to
read each track.

> - how the defragmentor displays used and unused sectors
>   can make a big difference in the pretty pic you see vs the
>   actual performance
>
> what you see the defragmentor showing would be a continuously
> allocated file instead of scattered across various sectors
> within a track or having to move the heads to a different track
> to get to the next 512 bytes
>
> there's only 512 bytes per sector
> 63 sectors per track
> and any number of cylinders depending on your disk size

I'll bet you can't find a hard drive made in the past decade with 63
sectors per track. Hard drives have a sector count that varies with the
circumference of the track. The 63 spt figure is an artificial construct
kept for backwards compatibility.

> lba ...
> maps all the cylinder/heads/sector into other whacky numbers ( lba blocks )

A continuous range from 0 to (# sectors - 1) is whacky? Compared to using
physical C/H/S numbers that bear no relation to reality?

--
Rob
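[Editor's note: Rob's claim that reading an interleaved track costs at least
two rotations can be checked with a toy simulation. This is a hypothetical
model, not anything from the thread: it lays logical sectors around a track
with a given interleave factor, then counts how many rotations a sequential
read takes, assuming the head passes one sector slot per time step.]

```python
def interleave_layout(n_sectors, factor):
    """Assign logical sectors 0..n-1 to physical slots around a track,
    spacing consecutive logical sectors `factor` slots apart."""
    layout = [None] * n_sectors
    slot = 0
    for logical in range(n_sectors):
        while layout[slot] is not None:   # skip slots already filled
            slot = (slot + 1) % n_sectors
        layout[slot] = logical
        slot = (slot + factor) % n_sectors
    return layout

def rotations_to_read(layout):
    """Rotations needed to read logical sectors 0..n-1 in order; the head
    waits for each target slot to come around, then reads it in one step."""
    n = len(layout)
    slot_of = {logical: slot for slot, logical in enumerate(layout)}
    time, pos = 0, 0                      # pos = slot currently under the head
    for logical in range(n):
        target = slot_of[logical]
        time += (target - pos) % n + 1    # wait for the slot, then read it
        pos = (target + 1) % n
    return time / n
```

With a 64-sector track, a 1:1 layout reads in 1.0 rotation and a 2:1
interleave takes 2.0 rotations, matching the "half or worse" figure above.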
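[Editor's note: the "continuous range" LBA mapping Rob describes is the
standard translation used by the legacy BIOS geometry. A minimal sketch,
assuming the conventional 255-head / 63-sector translated geometry:]

```python
HEADS = 255    # heads per cylinder in the legacy translated geometry
SECTORS = 63   # sectors per track -- the artificial backwards-compat value

def chs_to_lba(c, h, s):
    """Map a (cylinder, head, sector) triple to a linear block address.
    CHS sectors are 1-based, hence the (s - 1)."""
    return (c * HEADS + h) * SECTORS + (s - 1)

def lba_to_chs(lba):
    """Inverse mapping: linear block address back to (cylinder, head, sector)."""
    c, rem = divmod(lba, HEADS * SECTORS)
    h, s0 = divmod(rem, SECTORS)
    return c, h, s0 + 1
```

The two functions round-trip: CHS (0, 0, 1) is LBA 0, and any LBA maps back
to the same C/H/S triple it came from. Real drives ignore this geometry
entirely and service the LBA directly.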