
Re: http.us rotation Was: PGI installer ISO image (BETA) for woody now available



* Puts on professional IP network engineer hat *

On Fri, Jun 21, 2002 at 09:45:17AM -0400, Michael Stone wrote:
> On Fri, Jun 21, 2002 at 02:05:01PM +0200, Josip Rodin wrote:
> > Surely if you change the _size_ of packets, their number decreases, not
> > increases? The real problem, I would think, would be with clients to which
> > there is much less latency -- compared to the defaults, they would get
> > larger packets in the same small number of ms, which could then become a
> > problem if they've got a TCP/IP implementation incapable of handling this.
> > I'm also told that if a machine's got insufficient RAM, it would be unable
> > to handle huge window sizes.
> 
> Changing the window size doesn't change the size of packets, which is
> determined by the MTU (MSS)--and which is likely going to be restricted
> by intermediate routers. What a larger window size does is allow more
> in-flight packets, or packets that have been sent without an
> acknowledgement. As link speed and/or latency increase, you need a
> larger tcp window (more outstanding packets) to ensure that you saturate
> a link. 

A precise and accurate summary. For what it's worth, the current 'maximum
useful' MTU for most things that exit the local network is 1500, because
that is the maximum MTU of Ethernet (including the Fast variety), and it
ends up getting set as a default on a lot of other things, even if it isn't
the true limit. In some local situations (using Gigabit Ethernet and "jumbo
MTU" capable routers/switches), you can get it much larger; 9k is one of the
most commonly seen maximums, but the details are a bit gory.
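
To put numbers on that, here's a quick sketch (Python, illustrative
only) of how the usable TCP payload (the MSS) falls out of the MTU once
you subtract the IPv4 and TCP headers - 20 bytes each, assuming no
options:

    # Rough sketch: deriving the TCP MSS from a link's MTU.
    # Assumes 20-byte IPv4 and 20-byte TCP headers (no options).
    IP_HEADER = 20
    TCP_HEADER = 20

    def mss_for_mtu(mtu):
        """Largest TCP payload per packet for a given MTU."""
        return mtu - IP_HEADER - TCP_HEADER

    print(mss_for_mtu(1500))  # standard Ethernet   -> 1460
    print(mss_for_mtu(9000))  # "jumbo frame" gear  -> 8960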

> Your bandwidth is theoretically limited to [window size/round trip
> latency]. So with a 1MB window on a 1 second link you'd get 1MB/s; on a
> .5 second link you'd get a max of 2MB/s -- regardless of the link's
> potential bandwidth. If you increase that to a 2MB window, you could
> transmit 2MB of packets per 1s on the first link, for 2MB/s; on the second
> link you'd get 2MB per .5s, or 4MB/s.  That's the basic idea. There are
> obviously other factors at work, and the real numbers would be smaller
> and harder to multiply. :) It follows that you can calculate your
> theoretical optimal window by multiplying [bandwidth*round trip
> latency]. You'll find that the defaults work for low-bandwidth and
> low-latency connections (e.g., modems & lans). Note that increasing the
> window too much will generally hurt performance, and that some systems
> have trouble with certain sizes. It helps to benchmark different
> combinations if you need performance, with the bandwidth*delay as the
> starting point.

It's not quite this simple, but the general point is correct, yes. :)
The main issue with having a large window comes when you're faced with a
small, lossy connection - you send a lot of data that then gets dropped
before you hear anything back about it having been dropped. For the
purposes of this exercise (where most of the servers are well-connected
and on reasonably stable links), it isn't a huge issue.
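
To make the window arithmetic above concrete, a back-of-the-envelope
sketch (Python; it ignores headers, slow start, and loss recovery, so
treat the numbers as theoretical ceilings):

    # Bandwidth-delay product: the theoretical optimal TCP window.
    # throughput = window / RTT, so optimal window = bandwidth * RTT.

    def max_throughput(window_bytes, rtt_sec):
        return window_bytes / rtt_sec

    def optimal_window(bandwidth_bytes_per_sec, rtt_sec):
        return bandwidth_bytes_per_sec * rtt_sec

    # The examples from the quoted text:
    print(max_throughput(1 * 2**20, 1.0))  # 1MB window, 1s RTT   -> 1MB/s
    print(max_throughput(1 * 2**20, 0.5))  # 1MB window, 0.5s RTT -> 2MB/s
    print(max_throughput(2 * 2**20, 0.5))  # 2MB window, 0.5s RTT -> 4MB/s

    # Window needed to fill a 10Mbit/s link at 100ms RTT:
    print(optimal_window(10e6 / 8, 0.1))   # -> 125000 bytes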

> > The problem with the default value is that even if the remote side had rmem
> > increased, there was no increase in throughput unless I increased wmem.
> 
> There are three values in each field, which includes both default and
> maximum values. The suggestion was (AIUI) that you increase the maximums
> for both rmem and wmem, but not the default. Whether that works depends
> on who you're talking to, and with what (as does much of this
> discussion.)

Increasing the maximum but not the default means that a connection's
window will grow over time until it reaches that point. It is much kinder
to potential opposite ends, but it does mean you won't gain the benefits
on most 'short' connections, because there isn't enough time to build the
window up.

Of course, for servers which spend most of their time doing multi-gig
rsync runs (i.e., those in our specific case)... this isn't going to cost
much. Things do come up to speed reasonably quickly, if the link is both
fast and solid. Turning the max up to something substantial, but leaving
the default alone, should produce significant improvements, I'd expect.
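
For the curious, a rough sketch (Python, against the standard Linux
/proc paths; run as root to actually write, and treat the 2MB ceiling as
purely illustrative - benchmark around bandwidth*delay instead) of what
'raise the max, leave the default' looks like against those three-value
fields:

    # Sketch: inspect and adjust the Linux tcp_rmem/tcp_wmem triples.
    # Each file holds three numbers: min, default, max (bytes).

    def read_triple(path):
        with open(path) as f:
            return [int(x) for x in f.read().split()]

    for name in ("tcp_rmem", "tcp_wmem"):
        path = "/proc/sys/net/ipv4/" + name
        minimum, default, maximum = read_triple(path)
        print(name, minimum, default, maximum)
        # Raise only the maximum; connections still start at the
        # default and grow toward the new ceiling over time.
        # (2MB is an illustrative value, not a recommendation.)
        new_max = max(maximum, 2 * 1024 * 1024)
        with open(path, "w") as f:
            f.write("%d %d %d\n" % (minimum, default, new_max))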

Remember, a lot of these defaults originated in a day and age when a
56k frame relay link that lost 1 out of 10 packets to burst discard was
one of the better links you could get... or even before that.
-- 
***************************************************************************
Joel Baker                           System Administrator - lightbearer.com
lucifer@lightbearer.com              http://users.lightbearer.com/lucifer/

