
Re: dd to clone a drive

On Tue, Sep 26, 2017 at 04:28:10PM +0200, Thomas Schmitt wrote:
> The first suspect for slow dd is small block size.
> So you should in any case ask dd for larger chunks as already proposed
> by Michael Stone. For copying Debian ISOs to USB sticks the FAQ proposes
> 4 MiB. But i think 1 MiB is surely enough:

32k is surely enough. :) You're not going to see much performance difference on a USB drive after 8k or so. Eventually performance may actually start going down as the block size gets more ridiculous, and very large blocks tend to exhibit inconsistent behavior, because you're generally observing transient quirks in the interaction with the OS cache rather than real performance differences.

(In fact, regardless of what block size you use in this particular dd command line, the OS is going to repackage it into a different size that's optimal for the device before actually writing it out. In my experience that's usually something around 1 KByte for USB drives. So why use a block larger than 1k at all? The point of a larger block isn't to send a giant block to the disk, it's to reduce the overhead of calling the kernel to write each block. With the default 512 byte blocks the overhead is large enough to restrict performance; after a few kbytes the overhead is low enough that the USB drive is the bottleneck. I have no idea why anyone would suggest 4M as a good starting point, but that's well into the range of diminishing returns.)
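If you want to see the effect yourself, a rough sketch like the following works against an ordinary scratch file (the file path and sizes are just examples I picked; on a scratch file you're measuring syscall overhead and the OS cache rather than a real USB stick, so treat the numbers as indicative only):

```shell
#!/bin/sh
# Compare dd throughput at a few block sizes by writing 8 MiB of zeroes
# each time. oflag=dsync forces each block out before the next write, so
# the per-call overhead isn't completely hidden by the cache.
for bs in 512 8192 32768 1048576; do
    count=$((8388608 / bs))
    echo "bs=$bs:"
    # dd reports its throughput on the last status line (stderr).
    dd if=/dev/zero of=/tmp/dd-test.img bs=$bs count=$count oflag=dsync 2>&1 | tail -n 1
done
rm -f /tmp/dd-test.img
```

On most machines the jump from 512 to 8k is dramatic and everything after that is noise, which matches the point above.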

Back to the original question, it's also worth mentioning that if the source disk really is failing, the copy might run extremely slowly as the drive retries bad blocks over and over again. Watching the progress through one of the mechanisms already mentioned will tell you whether the process has gotten stuck that way. If that's the problem, you can just leave it running, and if the disk isn't too bad it may eventually finish. If it errors out, adding "conv=noerror,sync" will make dd write zeroes to the new disk when it hits a bad block, but keep going. (Be aware that this means there will be spots with garbage data, maybe in a file, maybe making the filesystem unreadable.) Note that the entire input block containing the bad sector will be zeroed, so you may want something like dd ibs=512 obs=64k conv=sync,noerror ... to minimize the amount of lost data. There is no point in reading less than 512 bytes at a time, as the hardware block will be at least that large.
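To make the ibs/obs behavior concrete, here is a sketch using ordinary files as stand-ins for the two disks (the /tmp paths are placeholders I chose; on a real rescue you'd substitute the actual source and target devices, and you'd probably add status=progress to watch for stalls):

```shell
#!/bin/sh
# Stand-ins for the failing disk and its replacement -- placeholders only.
SRC=/tmp/failing-disk.img
DST=/tmp/new-disk.img

# Fabricate a 50 KiB "source disk" of random data.
dd if=/dev/urandom of="$SRC" bs=512 count=100 2>/dev/null

# Small input blocks limit how much gets zero-filled around a bad sector;
# the larger output block keeps write overhead down despite the tiny reads.
# conv=noerror,sync keeps going past read errors, padding short reads.
dd if="$SRC" of="$DST" ibs=512 obs=64k conv=noerror,sync 2>/dev/null

# With no read errors the copy should be byte-identical.
cmp "$SRC" "$DST" && echo "copy matches"
```

Since this source file has no bad sectors, nothing gets zeroed and the two files come out identical; on a failing disk each unreadable 512-byte input block would become 512 zero bytes in the output.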

Mike Stone
