Re: Overwriting a 4096 byte sector harddisk drive with random data



On Thu, 21 Jul 2011 21:36:28 +1000
yudi v <yudi.tux@gmail.com> wrote:

> On Thu, Jul 21, 2011 at 8:28 PM, Robert Blair Mason Jr.
> <rbmj@verizon.net>wrote:
> 
> > On Thu, 21 Jul 2011 19:47:44 +1000
> > yudi v <yudi.tux@gmail.com> wrote:
> >
> > > Hi
> > >
> > > I need to write random data to a partition before encrypting it.
> > > Suggested way is to use urandom:
> > >
> > > #dd if=/dev/urandom of=/dev/sda2
> > >
> > > What is the use of operand "bs" in the following case? I see the
> > > above command executed as follows sometime:
> > >
> > > #dd if=/dev/urandom of=/dev/sda2 bs=1M
> > >
> > > For the hard drive I got, should I use the following command:
> > >
> > > #dd if=/dev/urandom of=/dev/sda2 bs=4096
> > >
> > > From what I understand, the first command above will write data
> > > in 512 byte blocks, the second one in 1 MB blocks, and the third
> > > in 4096 byte blocks. Right?
> > > I am a bit confused about the usage of this operand in this case.
> > >
> >
> > That is correct.  They will all do effectively the same thing;
> > however, the version with a larger block size will probably run
> > faster on a modern machine.
>
> If performance is linked to the block size, why not use a very high
> number? Is this purely arbitrary, or is there an optimum size?
> 

The increase in speed comes from a few factors, AFAIK:
 - decreased overhead from switching between reads and writes (small)
 - minimizing the number of reads from the hard disk, since with very
   small block sizes a single read from disk may return more data than
   the requested block, thus wasting data
 - better use of hardware buffers (big)
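
Incidentally, since the subject line mentions a 4096 byte sector
drive: you can ask the kernel what sector sizes the drive reports
before picking a block size.  A quick sketch, assuming the disk is
/dev/sda (blockdev ships with util-linux):

$ cat /sys/block/sda/queue/logical_block_size
$ cat /sys/block/sda/queue/physical_block_size

or, as root:

# blockdev --getss --getpbsz /dev/sda

Any bs that is a multiple of the physical sector size (presumably
4096 here) avoids read-modify-write cycles inside the drive.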

As with most things in computers, the optimum number is most
likely a power of two and is specific to your hardware.  dd usually
outputs transfer rate statistics, which you can use to test, e.g.:

$ dd if=/dev/urandom of=qwertyuiop bs=1k count=32768
32768+0 records in
32768+0 records out
33554432 bytes (34 MB) copied, 4.6973 s, 7.1 MB/s
$ dd if=/dev/urandom of=qwertyuiop bs=1M count=32
32+0 records in
32+0 records out
33554432 bytes (34 MB) copied, 4.26189 s, 7.9 MB/s

Note that the results are much more interesting with /dev/zero:

$ dd if=/dev/zero of=qwertyuiop bs=1k count=32768
32768+0 records in
32768+0 records out
33554432 bytes (34 MB) copied, 0.16304 s, 206 MB/s
$ dd if=/dev/zero of=qwertyuiop bs=1M count=32
32+0 records in
32+0 records out
33554432 bytes (34 MB) copied, 0.0222486 s, 1.5 GB/s

Thus it seems that the bottleneck is the speed of /dev/urandom.  When a
faster device is used, block size plays a much larger role.
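
If you want to find the sweet spot for your own hardware empirically,
a loop over power-of-two block sizes does the job.  A rough sketch
(testfile is just a scratch name; each pass writes the same 32 MiB,
so the throughput lines are directly comparable):

$ for bs in 4096 65536 1048576 16777216; do
    echo "bs=$bs"
    dd if=/dev/zero of=testfile bs=$bs count=$((33554432 / bs)) 2>&1 | tail -n1
  done
$ rm testfile

Swap in if=/dev/urandom to see whether the random source or the disk
is the limiting factor at the sizes you care about.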

Lastly, please send replies to the list unless you have a specific
reason for keeping it personal.

--
rbmj

