
Re: IP performance question



On Sun, 24 May 2015 13:20:04 +0300
Reco <recoverym4n@gmail.com> wrote:

>  Hi.
> 
> On Sun, 24 May 2015 11:28:36 +0200
> Petter Adsen <petter@synth.no> wrote:
> 
> > > On Sun, 24 May 2015 10:36:39 +0200
> > > Petter Adsen <petter@synth.no> wrote:
> > > 
> > > > I've been trying to improve NFS performance at home, and in that
> > > > process i ran iperf to get an overview of general network
> > > > performance. I have two Jessie hosts connected to a dumb switch
> > > > with Cat-5e. One host uses a Realtek RTL8169 PCI controller, and
> > > > the other has an Intel 82583V on the motherboard.
> > > > 
> > > > iperf maxes out at about 725Mbps. At first I thought maybe the
> > > > switch could be at fault, it's a really cheap one, so I
> > > > connected both hosts to my router instead. Didn't change
> > > > anything, and it had no significant impact on the load on the
> > > > router. I can't try to run iperf on the router (OpenWRT),
> > > > though, as it maxes out the CPU.
> > > > 
> > > > Should I be getting more than 725Mbps in the real world?
> > > 
> > > A quick test in my current environment shows this:
> > > 
> > > [ ID] Interval       Transfer     Bandwidth
> > > [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
> > > 
> > > Two hosts, connected via Cisco 8-port unmanaged switch, Realtek
> > > 8168e on one host, Atheros Attansic L1 on another.
> > > 
> > > On the other hand, the same test with a Realtek 8139e on one side
> > > and a lowly Marvell ARM SoC on the other shows this:
> > > 
> > > [ ID] Interval       Transfer     Bandwidth
> > > [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
> > > 
> > > So - yes, you definitely can, and yes, it depends.
> > 
> > That last one, would that be limited because of CPU power?
> 
> That too. You cannot extract that much juice from a single-core ARMv5.
> Another possibility is that Marvell is unable to design a good chipset
> even if it were a matter of life and death :)

That might be why I'm not using the Marvell adapter :) I remember
reading somewhere that either Marvell or Realtek was bad, but I
couldn't remember which one, so I kept using the Realtek one since I
had obviously switched for a reason :)

> > > Try the same test but use UDP instead of TCP.
> > 
> > Only gives me 1.03 Mbits/sec :)
> 
> iperf(1) says that by default UDP is capped at 1 Mbit/s. Use the -b
> option on the client side to set the desired bandwidth to 1024M, like
> this:
> 
> iperf -c <server> -u -b 1024M
> 
> Note that -b should be the last option. Gives me 812 Mbits/sec with
> the default UDP buffer settings.

I didn't notice that. I get 808 Mbits/sec, so close to what you get.

> > > Increase TCP window size (those net.core.rmem/wmem sysctls) on
> > > both sides.
> > 
> > It is currently 85KB and 85.3KB, what should I try setting them to?
> 
> Try these:
> 
> net.core.rmem_max = 4194304
> net.core.wmem_max = 1048576
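
For reference, one way to make settings like these survive a reboot on
Debian is a sysctl drop-in file (the filename below is just a
convention; the values are the ones suggested above):

```
# /etc/sysctl.d/local-net-buffers.conf -- example drop-in
# Apply immediately with: sysctl --system
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
```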

OK, I've set them on both sides, but it doesn't change the results, no
matter what values I give iperf with -w.
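That matches a back-of-the-envelope check: on a switched LAN the
bandwidth-delay product is tiny, so the default window already covers
it. A rough sketch (the 0.2 ms RTT is an assumed typical LAN figure,
not a measurement):

```shell
# Max TCP throughput is roughly window / RTT. With the default
# ~85 KB window and an assumed LAN RTT of 0.2 ms, the window
# ceiling sits far above gigabit, so enlarging it changes nothing.
window_bytes=87380   # typical default tcp_rmem middle value
rtt_sec=0.0002       # assumed switched-LAN round-trip time
awk -v w="$window_bytes" -v r="$rtt_sec" \
    'BEGIN { printf "window ceiling: %.0f Mbit/s\n", (w * 8) / r / 1e6 }'
```

That works out to roughly 3.5 Gbit/s, well above what gigabit Ethernet
can carry, which is consistent with -w having no visible effect here.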

> > > Try increasing MTU above 1500 on both sides.
> > 
> > Likewise, increase to what?
> 
> As much as your NICs support. There is no way of knowing the maximum
> unless you try. A magic value seems to be 9000. Any value above 1500
> is non-standard (so nothing is guaranteed).
> 
> A case study (that particular NIC claims to support MTU of 9200 in
> dmesg):
> 
> # ip l s dev eth0 mtu 1500
> # ip l s dev eth0 mtu 9000
> # ip l s dev eth0 mtu 65536
> RTNETLINK answers: Invalid argument
> 
> Of course, for this to work you would need to increase the MTU on
> every host between your two, so it's kind of a last-resort measure.

Well, both hosts are connected to the same switch (or right now, to the
router, but I could easily put them back on the switch if that
matters). One of the hosts would not accept a value larger than 7152,
but it did have quite an effect: I now get up to 880Mbps :)
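That gain lines up with simple framing arithmetic. A sketch (the
overhead constants are the standard Ethernet/IPv4/TCP figures; it
ignores ACK traffic and TCP options, so it's an upper bound):

```shell
# Goodput ceiling on gigabit Ethernet for a given MTU.
# Per frame: 38 bytes of Ethernet overhead on the wire
# (preamble 8 + header 14 + FCS 4 + inter-frame gap 12),
# plus 40 bytes of IPv4+TCP headers inside the frame.
for mtu in 1500 7152 9000; do
    awk -v mtu="$mtu" 'BEGIN {
        payload = mtu - 40        # TCP payload bytes per frame
        wire    = mtu + 38        # bytes occupying the wire
        printf "MTU %5d: ~%.0f Mbit/s max\n", mtu, 1000 * payload / wire
    }'
done
```

So going from MTU 1500 to 7152 raises the theoretical ceiling from
about 949 to about 989 Mbit/s; the remaining gap down to 880 is most
likely per-packet CPU and interrupt cost.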

Will this setting have an impact on communication with machines where
the MTU is smaller? In other words, will it have a negative impact on
general network performance, or is MTU adjusted automatically?

And what is the appropriate way of setting it permanently - rc.local?
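
On a Jessie box where ifupdown manages the interface, a post-up hook
in /etc/network/interfaces is one common spot (interface name and
addressing method below are placeholders, not taken from this thread):

```
# /etc/network/interfaces fragment -- hypothetical example
auto eth0
iface eth0 inet dhcp
    post-up ip link set dev $IFACE mtu 7152
```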

> > Also, the machine with the Realtek PCI adapter has a Marvell
> > 88E8001 on the motherboard, but I haven't used it for years since
> > there were once driver problems. Those are probably fixed now, I
> > will try that once I can. Didn't think of it before.
> 
> If the Marvell NIC uses the sky2 kernel module - I would not even
> hope. Like I said earlier, Marvell is unable to design a good chip
> even if someone's life depended on it.

Then I will keep using the Realtek card :)

Thanks to you, I now get ~880Mbps, which is a lot better. It seems
increasing the MTU was what had the most effect, so I won't bother with
TCP window size.

Petter

-- 
"I'm ionized"
"Are you sure?"
"I'm positive."
