
Re: Why hexcat?



On Mon, Feb 10, 2003 at 10:54:34PM +0200, Richard Braakman wrote:
> Looks like hextype is only three times as fast now, and hexcat is
> orders of magnitude slower.

[snip]

> file was loaded into memory first.  For hexcat I used a 1 MB file because
> I got impatient.  The file contained many blocks of zeroes, so hexdump
> had an advantage from its duplicate-compression.)
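
(For anyone who hasn't run into it: the duplicate-compression in question
is hexdump/hd collapsing runs of identical output lines into a single "*".
A quick illustration on a zero-filled file, assuming the hd from Debian's
bsdmainutils:

  $ dd if=/dev/zero of=zeroes bs=1024 count=1 2>/dev/null
  $ hd zeroes
  00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
  *
  00000400

so on a file made of zero blocks hd does almost no formatting work at all.)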

On non-zero data the results are quite different:

(222)osgiliath:/tmp> dd if=/dev/urandom of=testfile bs=1024 count=204800
204800+0 records in
204800+0 records out
209715200 bytes transferred in 143.756790 seconds (1458819 bytes/sec)
0.090u 139.960s 2:23.75 97.4%   0+0k 0+0io 130pf+0w
(223)osgiliath:/tmp> hextype testfile | dd of=/dev/null
1996800+1 records in
1996800+1 records out
1022361639 bytes transferred in 51.766556 seconds (19749462 bytes/sec)
31.320u 4.010s 0:51.77 68.2%    0+0k 0+0io 226pf+0w
(224)osgiliath:/tmp> hd testfile | dd of=/dev/null
2022400+1 records in
2022400+1 records out
1035468809 bytes transferred in 368.855245 seconds (2807250 bytes/sec)
359.780u 5.100s 6:08.86 98.9%   0+0k 0+0io 246pf+0w
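
(Working the wall-clock numbers out: hd took 368.9 seconds over the 200 MB
file against hextype's 51.8, so on incompressible data hextype comes out
roughly seven times as fast rather than three.  If you want to rule the
squeezing out of the earlier zero-heavy tests as well, hd accepts -v to
print every line instead of collapsing duplicates:

  hd -v testfile | dd of=/dev/null

though on random data like this there is nothing to collapse anyway.)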

That still leaves open the very good question of why hexcat exists,
since it duplicates functionality and is ridiculously slow as well.
Mike Stone


