
Re: transfer speed data



On Wed, Dec 23, 2020 at 07:27:49PM -0600, David Wright wrote:
> I thought Michael Stone had already covered that, by suggesting sparse
> files (with which I'm not familiar)

A sparse file is one whose empty (zero-filled) blocks are allocated logically but not backed by physical disk blocks. You can create one easily with something like "truncate -s 1G testfile" and use "ls -l testfile ; du testfile" to confirm that it's logically 1G but using 0 disk blocks. This is convenient for storing certain data structures with a lot of empty space (e.g., /var/log/lastlog). On some ancient unix systems it could actually be slower to access sparse files than real files, but you're unlikely to run into those anymore, and sparse files can be useful in certain kinds of testing. You do want to make sure you're not testing something that compresses data, as a file full of zeros will skew results for that sort of application.
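To make that concrete, a quick session might look like this (testfile is just a throwaway name; the exact du output can vary a little by filesystem):

    $ truncate -s 1G testfile
    $ ls -l testfile    # reports the logical size, 1073741824 bytes
    $ du testfile       # reports 0 blocks actually allocated
    $ rm testfile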

On Thu, Dec 24, 2020 at 11:06:50AM +0200, Andrei POPESCU wrote:
> I was rather referring to real use ;)
>
> Speed tests under optimized conditions do have their purpose (e.g. is my
> network interface working properly?[1]), but they might be misleading
> when the bottleneck for real world transfers is elsewhere (like the
> limited storage options on the PINE A64+).

Generally you'd want to test multiple things in isolation to understand where the bottlenecks are. I was speaking specifically about the encryption algorithms because someone suggested that was the problem. If a null-disk and null-network copy performed well, testing the disk I/O in isolation might be a logical next step; if it didn't perform well, then you'd have established an upper bound on what to expect from scp. (This would be relevant mainly on very low-power hardware these days, and though you're talking about an A64 I don't see where the OP said that was what he was using.)
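For what it's worth, the kind of isolated tests I have in mind might look roughly like the following; user@host, bigfile, and testfile are just placeholders, and for the read test you'd want a file larger than RAM (or drop caches first) so you're measuring the disk rather than the page cache:

    # cipher + network path only, no disk at either end
    # (ssh doesn't compress by default, so the zeros don't skew this)
    dd if=/dev/zero bs=1M count=1024 | ssh user@host 'cat > /dev/null'

    # local disk read in isolation
    dd if=bigfile of=/dev/null bs=1M

    # local disk write in isolation (fsync so you're not just measuring cache)
    dd if=/dev/zero of=testfile bs=1M count=1024 conv=fsync

Comparing those numbers against what scp actually achieves usually makes it fairly obvious which piece is the limiting factor.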

