
Re: IP performance question



 Hi.

On Sun, 24 May 2015 11:28:36 +0200
Petter Adsen <petter@synth.no> wrote:

> > On Sun, 24 May 2015 10:36:39 +0200
> > Petter Adsen <petter@synth.no> wrote:
> > 
> > > I've been trying to improve NFS performance at home, and in that
> > > process I ran iperf to get an overview of general network
> > > performance. I have two Jessie hosts connected to a dumb switch
> > > with Cat-5e. One host uses a Realtek RTL8169 PCI controller, and
> > > the other has an Intel 82583V on the motherboard.
> > > 
> > > iperf maxes out at about 725Mbps. At first I thought maybe the
> > > switch could be at fault, it's a really cheap one, so I connected
> > > both hosts to my router instead. Didn't change anything, and it had
> > > no significant impact on the load on the router. I can't try to run
> > > iperf on the router (OpenWRT), though, as it maxes out the CPU.
> > > 
> > > Should I be getting more than 725Mbps in the real world?
> > 
> > A quick test in my current environment shows this:
> > 
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
> > 
> > Two hosts, connected via a Cisco 8-port unmanaged switch: Realtek
> > 8168e on one host, Atheros Attansic L1 on the other.
> > 
> > On the other hand, the same test, Realtek 8139e on one side, but with
> > a lowly Marvell ARM SoC on the other side, shows this:
> > 
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
> > 
> > So - yes, you definitely can, and yes, it depends.
> 
> That last one, would that be limited because of CPU power?

That too. You cannot extract that much juice from a single-core ARMv5.
Another possibility is that Marvell is unable to design a good chipset
even if it were a matter of life and death :)


> > > Could there be a driver issue, or some settings that aren't optimal?
> > 
> > Check your iptables rules, if you have any - especially the nat
> > and mangle tables.
> 
> None. iptables are currently disabled on both sides.

It was worth a try :)
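
For the record, a quick way to double-check that nothing is loaded is
to list all three tables (every chain empty, policy ACCEPT, zeroed
counters):

# iptables -t filter -nvL
# iptables -t nat -nvL
# iptables -t mangle -nvL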


> > Try the same test but use UDP instead of TCP.
> 
> Only gives me 1.03Mbits/sec :)

iperf(1) says that UDP is capped at 1 Mbit/s by default. Use the -b
option on the client side to set the desired bandwidth, e.g. 1024M,
like this:

iperf -c <server> -u -b 1024M

Note that -b should be the last option. That gives me 812 Mbits/sec
with the default UDP buffer settings.
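
For completeness, the matching invocation on the receiving host
(assuming the stock iperf 2 that Jessie ships) is simply:

# iperf -s -u

The UDP server report is the interesting part - it shows the bandwidth
that actually arrived, plus jitter and packet loss.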


> > Increase TCP window size (those net.core.rmem/wmem sysctls) on both
> > sides.
> 
> It is currently 85KB and 85.3KB - what should I try setting them to?

Try these:

net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
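
To apply them on the fly (same values as above - they are only a
starting point, adjust to taste):

# sysctl -w net.core.rmem_max=4194304
# sysctl -w net.core.wmem_max=1048576

To make them survive a reboot, put the same two lines into a file
under /etc/sysctl.d/ (the file name is up to you, e.g.
/etc/sysctl.d/local-net.conf).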


> > Try increasing MTU above 1500 on both sides.
> 
> Likewise, increase to what?

Whatever your NICs support. There is no way of knowing the maximum
other than trying. The usual magic value is 9000. Keep in mind that
any value above 1500 is non-standard (so nothing is guaranteed).

A case study (that particular NIC claims to support an MTU of 9200 in
dmesg):

# ip l s dev eth0 mtu 1500
# ip l s dev eth0 mtu 9000
# ip l s dev eth0 mtu 65536
RTNETLINK answers: Invalid argument

Of course, for this to work you would have to increase the MTU on
every device between your two hosts, so it's kind of a last-resort
measure.
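
A simple way to verify that jumbo frames actually make it end to end
is ping with the don't-fragment bit set: 8972 bytes of payload + 20
bytes of IPv4 header + 8 bytes of ICMP header = a 9000-byte packet
(<other-host> being whichever of the two you are not typing on):

# ping -M do -s 8972 <other-host>

If anything in the path is still at 1500 you get "message too long" /
fragmentation-needed errors instead of replies.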


> > Use crossover cable if everything else fails.
> 
> If I have one. I read somewhere that newer interfaces will
> auto-negotiate if you use a straight cable as a crossover - is that
> true?

They should. I have not encountered a desktop-class NIC that could
not auto-negotiate the cable (Auto MDI-X) in the last 15 years.
Consumer-grade routers, on the other hand … <shrugs>.
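
If in doubt, ethtool shows what was actually negotiated (eth0 here is
just a stand-in for your interface name):

# ethtool eth0

Look at the Speed and Duplex lines - 1000Mb/s and Full are what you
want. Some drivers also report the MDI-X status there.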

 
> Also, the machine with the Realtek PCI adapter has a Marvell 88E8001 on
> the motherboard, but I haven't used it for years since there were once
> driver problems. Those are probably fixed now, I will try that once I
> can. Didn't think of it before.

If the Marvell NIC uses the sky2 kernel module - I would not even
hope. Like I said earlier, Marvell is unable to design a good chip
even if someone's life depended on it.
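
You can check which module is bound to it like this (eth1 being a
stand-in for whatever name that NIC got):

# ethtool -i eth1

or with 'lspci -k', which prints the 'Kernel driver in use' for every
PCI device.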

Reco

