
Re: [OT] Network throughput

Thomas Krennwallner wrote:

On Fri Apr 18, 2003 at 12:22:30PM +0200, David Fokkema wrote:

I have a laptop with a PCMCIA 10 Mbit adapter connected to a 10 Mbit NIC on
my server. In between is a fast ethernet switch (10/100 Mbit).
Even as I write this mail, I'm transferring a 1 GB file from my laptop to
my server. A simple, very approximate calculation of the transfer time
gives me:

	10 Mbit/s ~= 1 MB/s
	1 GB ~= 1000 MB

	=> transfer takes ~1000 secs ~= 17 min.
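Using the exact factor of eight instead of the rough 10 Mbit/s ~= 1 MB/s shortcut, the ideal (overhead-free) time comes out a bit shorter. A small Python sketch of the same back-of-the-envelope calculation:

```python
# Back-of-the-envelope transfer-time estimate, matching the
# calculation above: decimal units, no protocol overhead.
def transfer_time_seconds(file_bytes, link_mbit_per_s):
    """Ideal transfer time: file size divided by raw link speed."""
    link_bytes_per_s = link_mbit_per_s * 1_000_000 / 8
    return file_bytes / link_bytes_per_s

# 1 GB file over a 10 Mbit/s link:
seconds = transfer_time_seconds(1_000_000_000, 10)
print(seconds)        # 800.0 seconds
print(seconds / 60)   # ~13.3 minutes
```

The rougher 1 MB/s approximation gives 1000 s; either way, the observed 360 Kbps is far below any reasonable estimate.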

However, the transfer is already taking much longer than that, so I decided
to try out ntop, and I notice that the reported throughput is only 360 Kbps!

As always, there are several possible answers, including the one that
says I've lost all ability to handle numbers. Of course, another
possible answer is that I made a stupid mistake.

According to ntop, it's virtually a one-way flow. I'm using dd to transfer
the image from the hard drive to a file that resides on a Samba-exported
filesystem on my server.

10 Mbit is the raw line rate, not counting IP header information, so you
can NEVER reach a full 10 Mbit of payload. The quality of your NICs and
switches can also decrease network throughput significantly (see
http://www.fefe.de/linuxeth/ and http://www.fefe.de/linuxeth/realtek.txt).
It should also be considered that the packets have to pass through the
network stacks; that cost should be minimal, but you have to take it into
account. Your laptop's and server's hard disks may also limit network
throughput. The next thing that throttles your connection significantly is
the overhead produced by Samba.

So your net throughput depends on various factors and cannot be easily
generalized.
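The per-packet overhead described above can be put into rough numbers. The sketch below assumes standard 1500-byte Ethernet frames carrying 20-byte IP and 20-byte TCP headers; the constants are textbook figures for Ethernet framing, not measurements from this particular network:

```python
# Rough best-case TCP payload throughput on 10 Mbit Ethernet,
# assuming full-size 1500-byte MTU frames (a sketch of the
# per-packet overhead, not a measurement).
LINK_MBIT = 10
MTU = 1500                  # IP packet size
IP_TCP_HEADERS = 40         # 20-byte IP + 20-byte TCP header
ETH_OVERHEAD = 38           # 14 header + 4 FCS + 8 preamble + 12 gap

payload = MTU - IP_TCP_HEADERS          # 1460 bytes of data per frame
wire = MTU + ETH_OVERHEAD               # 1538 bytes on the wire
efficiency = payload / wire             # ~0.95

payload_mbit = LINK_MBIT * efficiency
print(f"{payload_mbit:.2f} Mbit/s payload")   # ~9.49 Mbit/s
print(f"{payload_mbit / 8:.2f} MB/s")         # ~1.19 MB/s
```

So framing and headers alone only cost about 5%; the gap down to 360 Kbps has to come from elsewhere (disks, Samba, driver or hardware problems).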

so long

I second that! I recently looked into this on my home LAN. It was supposed to be a 100 Mbit network, but I was getting less than 10 Mbit transfers. I originally had a math problem converting between mega-"bits"/sec and mega-"bytes"/sec, and the symbols associated with each. After getting this straightened out, it still showed pitiful performance compared to what I expected.

The REAL bottleneck turned out to be my HD controllers and/or HDs "throttling" the transfers! I had all IDE HDs, but some were old and slow while others were fairly modern and fast. Transfer speeds varied widely depending on which controller/HD PAIR was involved in the transfer. After doing some tuning with hdparm on the IDE drives and some hardware swaps, I was able to achieve about 50%-60% of the expected throughput on a "modern" ---> "modern" pair of controllers and HDs. I think that is about all you can expect, given all the overhead of an Ethernet network. Other things, like Samba, can slow this down further.
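As a rough way to check whether a disk (rather than the network) is the bottleneck, one could time a large sequential read. This is a hypothetical sketch, and the path is an example; note that the OS page cache can inflate the result on a second run, which is part of why tools like hdparm -t exist:

```python
# Quick-and-dirty sequential read benchmark to estimate disk
# throughput. Beware: a file already in the OS page cache will
# read far faster than the disk can actually deliver.
import time

def read_throughput_mb_s(path, block_size=1024 * 1024):
    """Read the file sequentially and return throughput in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1_000_000

# Example usage (hypothetical path):
# print(read_throughput_mb_s("/tmp/bigfile.img"))
```

If the disk-to-disk number is close to the network number, tuning the drives (as described above) is the place to start.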

I also had a secondary "problem" with a dual-speed hub in one section of my network that wouldn't work in full-duplex mode. I replaced it with a switch and gained a small amount of extra throughput, but nothing like what I got from tuning the HDs' transfer rates.

-Don Spoon-
