
understanding iperf output



Hi,

I'm using iperf to measure the throughput between two systems, 'local'
and 'remote'.  Each is connected via 802.11g wireless to a wireless
AP / router, which is in turn connected to a reasonably fast broadband
provider (I don't have the exact bandwidth figures).  So:

local - (802.11g) - router1 - (broadband connection) - internet - (broadband connection) - router2 - (802.11g) - remote

Remote is behind a firewall, with only port 22 open, so my setup is
something like this:

[On local:]

local$ ssh -f -L 5001:localhost:5001 remote iperf -s
local$ iperf -c localhost
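
As a sanity check, something like the following confirms that the
forwarded port is actually listening on local before the client is
started (assuming ss from iproute2 is available; netstat -tln would
show the same thing):

local$ ss -tln | grep 5001   # should show a LISTEN socket on 127.0.0.1:5001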

Using these default settings, iperf pretty consistently manages to
transfer ~5.3 MBytes during its default ten-second window.  The local
iperf client reports:

~$ iperf  -c localhost
------------------------------------------------------------
Client connecting to localhost, TCP port 5001
TCP window size: 49.4 KByte (default)
------------------------------------------------------------
[  3] local 127.0.0.1 port 40594 connected with 127.0.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  5.29 MBytes  4.40 Mbits/sec

But the remote iperf reports:

[  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 45664
[  4]  0.0-21.5 sec  5.29 MBytes  2.06 Mbits/sec

The bandwidth reported by the remote side keeps coming in at roughly
half of what the local client reports, apparently because the remote
interval is about twice as long (21.5 sec vs. 10.1 sec) for the same
5.29 MBytes transferred.
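
If I'm reading iperf's units right (MBytes as 2^20 bytes, Mbits as
10^6 bits), both bandwidth figures are just that same 5.29 MBytes
divided by the two different intervals:

~$ echo 'scale=2; 5.29 * 1048576 * 8 / 1000000 / 10.1' | bc
4.39
~$ echo 'scale=2; 5.29 * 1048576 * 8 / 1000000 / 21.5' | bc
2.06

(The 4.39 vs. 4.40 difference is presumably just rounding in the
displayed 5.29 MBytes and 10.1 sec.)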

What does all this mean?  Why do the two ends report such different
interval times, and what does each one actually measure?  Sorry if this
is obvious, but I can't quite figure it out or find where it's
documented.  Maybe it's too simple ;)

Celejar
-- 
foffl.sourceforge.net - Feeds OFFLine, an offline RSS/Atom aggregator
mailmin.sourceforge.net - remote access via secure (OpenPGP) email
ssuds.sourceforge.net - A Simple Sudoku Solver and Generator

