
Re: IP performance question



On Sun, 24 May 2015 15:53:17 +0300
Reco <recoverym4n@gmail.com> wrote:

>  Hi.
> 
> On Sun, 24 May 2015 13:26:52 +0200
> Petter Adsen <petter@synth.no> wrote:
> 
> > > On Sun, 24 May 2015 11:28:36 +0200
> > > Petter Adsen <petter@synth.no> wrote:
> > > 
> > > > > On Sun, 24 May 2015 10:36:39 +0200
> > > > > Petter Adsen <petter@synth.no> wrote:
> > > > > 
> > > > > > I've been trying to improve NFS performance at home, and in
> > > > > > that process i ran iperf to get an overview of general
> > > > > > network performance. I have two Jessie hosts connected to a
> > > > > > dumb switch with Cat-5e. One host uses a Realtek RTL8169
> > > > > > PCI controller, and the other has an Intel 82583V on the
> > > > > > motherboard.
> > > > > > 
> > > > > > iperf maxes out at about 725Mbps. At first I thought maybe
> > > > > > the switch could be at fault, it's a really cheap one, so I
> > > > > > connected both hosts to my router instead. Didn't change
> > > > > > anything, and it had no significant impact on the load on
> > > > > > the router. I can't try to run iperf on the router
> > > > > > (OpenWRT), though, as it maxes out the CPU.
> > > > > > 
> > > > > > Should I be getting more than 725Mbps in the real world?
> > > > > 
> > > > > A quick test in my current environment shows this:
> > > > > 
> > > > > [ ID] Interval       Transfer     Bandwidth
> > > > > [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
> > > > > 
> > > > > Two hosts, connected via Cisco 8-port unmanaged switch,
> > > > > Realtek 8168e on one host, Atheros Attansic L1 on another.
> > > > > 
> > > > > On the other hand, the same test with a Realtek 8139e on one
> > > > > side and a lowly Marvell ARM SoC on the other shows this:
> > > > > 
> > > > > [ ID] Interval       Transfer     Bandwidth
> > > > > [  3]  0.0-10.0 sec   534 MBytes   448 Mbits/sec
> > > > > 
> > > > > So - yes, you definitely can, and yes, it depends.
> > > > 
> > > > That last one, would that be limited because of CPU power?
> > > 
> > > That too. You cannot extract that much juice from a single-core
> > > ARM5. Another possibility is that Marvell is unable to design a
> > > good chipset even if it were a matter of life and death :)
> > 
> > That might be why I'm not using the Marvell adapter :) I remember
> > reading somewhere that either Marvell or Realtek were bad, but I
> > couldn't remember which one, so I kept using the Realtek one since I
> > had obviously switched for a reason :)
> 
> Both are, actually. Realtek *was* good at least 5 years ago, but
> since then they have introduced multiple chips that are all driven
> by the same r8169 kernel module, so it has become a matter of luck:
> either your NIC works flawlessly without any firmware (mine does),
> or you get all kinds of weird glitches.

The Realtek is not at all new, but I have no idea exactly how old it
is, as it was given to me by a friend. 5 years sounds about right,
though. I do have the firmware installed; I haven't tried without it.

I'm slowly beginning to think about getting another NIC, but what? I've
heard good things about Intel, and the Intel in the other box is
behaving well. Are there any specific chipsets to buy or stay away
from? The one I have is an 82583V.

I haven't bought a separate NIC since the days of the DEC 21140 :)

> > > > > Try the same test but use UDP instead of TCP.
> > > > 
> > > > Only gives me 1.03Mbits/sec :)
> > > 
> > > iperf(1) says that by default UDP is capped at 1 Mbit. Use the -b
> > > option on the client side to set the desired bandwidth, e.g. to
> > > 1024M, like this:
> > > 
> > > iperf -c <server> -u -b 1024M
> > > 
> > > Note that -b should be the last option. Gives me 812 Mbits/sec
> > > with the default UDP buffer settings.
> > 
> > Didn't notice that. I get 808, so close to what you get.
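
(To be explicit about what produced that 808 Mbits/sec, the full UDP
test looks roughly like this - "otherhost" is just a placeholder, and
as far as I can tell the server also needs -u, otherwise it only
listens for TCP:

  # on the receiving host
  iperf -s -u

  # on the sending host
  iperf -c otherhost -u -b 1024M
)
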
> 
> Good. The only thing left to do is to apply that 'udp' flag to the
> NFS clients, and you're set. Just don't mix it with the 'async'
> flag, as Bad Things ™ can happen if you do (see nfs(5) for the gory
> details).

Yes, I always use 'sync' anyway - performance isn't _that_
important, data integrity is :)
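
If I do try the 'udp' flag, I'd expect the client's fstab line to look
roughly like this - the server name and paths are made up, and nfs(5)
is the real reference, not me:

  # /etc/fstab on the NFS client
  fileserver:/export/data  /mnt/data  nfs  udp,sync,hard,vers=3  0 0

(vers=3 because, as far as I understand, NFSv4 won't do UDP anyway.)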

> > > net.core.rmem_max = 4194304
> > > net.core.wmem_max = 1048576
> > 
> > OK, I've set them on both sides, but it doesn't change the results,
> > no matter what values I give iperf with -w.
> 
> Now that's weird. I picked those sysctl values from one of the NFS
> performance tuning guides. Maybe I misunderstood something.

I'll do a little more searching online; I need to better understand
what I'm messing with in any case. I seriously dislike setting
parameters I don't understand. In my bookcase is a copy of "Computer
Networks" by Tanenbaum - I guess that's my next stop.

> > > Of course, for this to work you would need to increase the MTU
> > > on every host between your two, so that's kind of a last-resort
> > > measure.
> > 
> > Well, both hosts are connected to the same switch (or right now, to
> > the router, but I could easily put them back on the switch if that
> > matters). One of the hosts would not accept a value larger than
> > 7152, but it did have quite an effect: I now get up to 880Mbps :)
> 
> Consider yourself lucky as MTU experiments on server hardware usually
> lead to a long trip to a datacenter :)

I'd be hard pressed to call any of this server hardware or a
datacenter :)
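
(For anyone repeating the experiment: I only raised the MTU by hand
for testing, with something along the lines of

  ip link set dev eth0 mtu 7152

where eth0 is whatever the interface happens to be called. That of
course doesn't survive a reboot, hence my question below about making
it permanent.)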

> > Will this setting have an impact on communication with machines
> > where the MTU is smaller? In other words, will it have a negative
> > impact on general network performance, or is MTU adjusted
> > automatically?
> 
> Well, back in the old days it was simple: hardware simply rejected
> all frames whose size exceeded its configured MTU (i.e. 1500).
> Since then they have introduced all those smart switches which
> presumably fragment such frames down to their own MTU (i.e. one big
> frame is turned into several small ones). That is considered
> CPU-costly and inefficient, but no packets are lost that way.
> 
> Still, since MTU is strictly an L2 matter (according to the OSI
> model), and you managed to send this very e-mail I'm replying to - a
> big MTU should not hamper your ability to communicate with the
> outside world. That means that about the only problem you should
> encounter will manifest itself as soon as you introduce a third
> computer into your LAN.

I have several devices on my home network, but only a few are on
Gbit. Everything that goes to and from the outside world goes through
the OpenWRT router, so I guess that takes care of transforming the
frames. But see my other mail sent today on this.

> > And what is the appropriate way of setting it permanently -
> > rc.local?
> 
> /etc/network/interfaces supports an 'mtu' setting. So doing it the
> Debian way means you locate the 'eth0' (or whatever) interface
> definition and set the MTU there. The only caveat I can think of is
> that the 'mtu' setting does not seem to be supported for 'dhcp'. See
> interfaces(5).

OK, thanks. I'm using static DHCP mapping right now as a matter of
convenience, but it would be trivial to switch to static addressing, if
necessary.
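
If I do go static, I'd expect the stanza in /etc/network/interfaces to
end up looking something like this - the addresses are only examples,
not my actual LAN:

  auto eth0
  iface eth0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1
      mtu 7152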

And thank you very much for all the advice; it has been invaluable.

Petter

-- 
"I'm ionized"
"Are you sure?"
"I'm positive."


