
Re: Unidentified subject!



Hi,

David Christensen wrote:
> Concurrency:
> threads throughput
> 8       205+198+180+195+205+184+184+189=1,540 MB/s

There remains the question of how to join these streams without losing
speed, in order to produce a single checksum. (Or one would have to
divide the target into 8 areas which are checked separately, as sketched
below.)
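
A minimal sketch of the second option, assuming a hypothetical target
/dev/sdX split into 8 regions of 100 GiB each (device name, region size,
and output file names are illustrative, not taken from scdbackup):

  $ for i in 0 1 2 3 4 5 6 7
  > do
  >   # region i starts at block i*102400 (1 MiB blocks), 102400 blocks long
  >   dd if=/dev/sdX bs=1M skip=$((i * 102400)) count=102400 2>/dev/null \
  >       | md5sum > region_$i.md5 &
  > done
  $ wait

Verification then compares 8 region checksums instead of one whole-target
sum. On an SSD the 8 readers can run concurrently; on spinning media the
competing seek streams would probably eat most of the gain.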

Does this 8-thread generator cause any problems with the usability of
the rest of the system? Sluggish program behavior or the like?

The main reason for writing my own low-quality implementation of a random
stream was that /dev/urandom was too slow for 12x speed (= 1.8 MB/s) CD-RW
media, and that anything of higher random quality put too much load on a
single-core 600 MHz Pentium system. That was nearly 25 years ago.


I wrote:
> > Last time i tested /dev/urandom it was much slower on comparable machines
> > and also became slower as the amount grew.

> Did you figure out why the Linux random number subsystem slowed, and at what
> amount?

No. I cannot even remember when and why I had reason to compare it with
my own stream generator. Maybe 5 or 10 years ago. My generator's
throughput was more than 100 times higher.
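
For a current reference point, the throughput of /dev/urandom can be
measured with plain dd; GNU dd reports the rate in its final status line
(the 1 GiB amount is arbitrary):

  $ dd if=/dev/urandom bs=1M count=1024 of=/dev/null

Repeating with growing counts would also show whether the rate still
degrades as the amount grows.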


I have to correct my previous measurement on the 4 GHz Xeon, which was
made with a debuggable build of the program that produces the stream.
The production binary, compiled with -O2, can write 2500 MB/s into a
pipe to a pacifier program which counts the data:

  $ time $(scdbackup -where bin)/cd_backup_planer -write_random - 100g 2s62gss463ar46492bni | $(scdbackup -where bin)/raedchen -step 100m -no_output -print_count
      100.0g bytes

  real    0m39.884s
  user    0m30.629s
  sys     0m41.013s

(One would have to install scdbackup to reproduce this and to see raedchen
 count the bytes while spinning the classic SunOS boot wheel: |/-\|/-\|/-\
   http://scdbackup.webframe.org/main_eng.html
   http://scdbackup.webframe.org/examples.html
 Oh, nostalgia ...
)
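
Without scdbackup one can at least approximate the cost of the pipe and
of a minimal counting sink, with /dev/zero as a (non-random) stand-in
source:

  $ time dd if=/dev/zero bs=1M count=102400 2>/dev/null | wc -c

This measures only pipe plus counter, not random generation, and thus
gives an upper bound for any generator writing through this path.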

Totally useless but yielding nearly 4000 MB/s:

  $ time $(scdbackup -where bin)/cd_backup_planer -write_random - 100g 2s62gss463ar46492bni >/dev/null

  real    0m27.064s
  user    0m23.433s
  sys     0m3.646s
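
(If "100g" is taken as 100 GiB: 102,400 MiB / 27.064 s ≈ 3784 MiB/s,
 i.e. about 3968 MB/s, hence "nearly 4000".)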

The main bottleneck in my proposal would be the checksummer:

  $ time $(scdbackup -where bin)/cd_backup_planer -write_random - 100g 2s62gss463ar46492bni | md5sum
  5a6ba41c2c18423fa33355005445c183  -

  real    2m8.160s
  user    2m25.599s
  sys     0m22.663s

That's almost exactly 800 MiB/s (102,400 MiB / 128.16 s ≈ 799 MiB/s),
i.e. roughly 6.7 Gbps.
Still good enough for vanilla USB-3 with a fast SSD, I'd say.
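
The checksummer's ceiling can also be measured in isolation, because
MD5's cost per byte does not depend on the content of the data, so
/dev/zero serves as a cheap source (amount again arbitrary):

  $ time dd if=/dev/zero bs=1M count=102400 2>/dev/null | md5sum

If that lands near the same 800 MiB/s, then md5sum rather than the
generator is the limit, and only a faster hash or the per-region sums
from above could raise it.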


> TIMTOWTDI.  :-)

Looks like another example of a weak random stream. :))


Have a nice day :)

Thomas

