
Re: http.us rotation Was: PGI installer ISO image (BETA) for woody now available



On Fri, Jun 21, 2002 at 02:05:01PM +0200, Josip Rodin wrote:
> Surely if you change the _size_ of packets, their number decreases, not
> increases? The real problem, I would think, would be with clients to which
> there is much less latency -- compared to the defaults, they would get
> larger packets in the same small number of ms, which could then become a
> problem if they've got a TCP/IP implementation incapable of handling this.
> I'm also told that if a machine's got insufficient RAM, it would be unable
> to handle huge window sizes.

Changing the window size doesn't change the size of packets, which is
determined by the MTU (MSS)--and which is likely going to be restricted
by intermediate routers anyway. What a larger window size does is allow
more in-flight packets, i.e. packets that have been sent without an
acknowledgement. As link speed and/or latency increases, you need a
larger tcp window (more outstanding packets) to ensure that you saturate
the link.
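
For illustration only, a minimal sketch of what an application can do to
ask for bigger per-socket buffers (which is what lets TCP use a larger
window); the 1MB figure is arbitrary, and the kernel will clamp the
request to the configured rmem/wmem maximums:

/* Sketch: request larger socket buffers; the kernel clamps the request
 * to the rmem/wmem maximums.  The 1MB figure is just an illustration. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int want = 1 << 20;             /* ask for ~1MB send/receive buffers */
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want));
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want));

    int got;
    socklen_t len = sizeof(got);
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, &got, &len);
    printf("SO_SNDBUF granted: %d bytes\n", got);   /* Linux reports double the stored value */

    close(s);
    return 0;
}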

Your throughput is theoretically limited to [window size/round trip
latency]. So with a 1MB window on a 1 second link you'd get 1MB/s; on a
.5 second link you'd get a max of 2MB/s -- regardless of the link's
potential bandwidth. If you increase that to a 2MB window, you could
transmit 2MB per 1s on the first link, for 2MB/s; on the second link
you'd get 2MB per .5s, or 4MB/s. That's the basic idea. There are
obviously other factors at work, and the real numbers would be smaller
and harder to multiply. :) It follows that you can calculate your
theoretical optimal window by multiplying [bandwidth*round trip
latency]. You'll find that the defaults work for low-bandwidth and
low-latency connections (e.g., modems & LANs). Note that increasing the
window too much will generally hurt performance, and that some systems
have trouble with certain sizes. It helps to benchmark different
combinations if you need performance, with the bandwidth*delay product
as the starting point.
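
To make that arithmetic concrete, here's a throwaway program that runs
the numbers above and then goes the other way, computing a
bandwidth*delay window for a made-up 10Mbit/100ms path:

/* Runs the window/latency arithmetic from the text; all numbers are
 * illustrative, not measurements. */
#include <stdio.h>

int main(void)
{
    double window = 1.0 * 1024 * 1024;      /* 1MB tcp window, in bytes */
    double rtt[] = { 1.0, 0.5 };            /* round trip latency, seconds */

    for (int i = 0; i < 2; i++)
        printf("1MB window, %.1fs rtt -> max ~%.1f MB/s\n",
               rtt[i], window / rtt[i] / (1024 * 1024));

    /* Going the other way: optimal window ~ bandwidth * round trip latency. */
    double bandwidth = 10e6 / 8;            /* 10Mbit/s path, in bytes/s */
    double delay = 0.1;                     /* 100ms round trip */
    printf("optimal window ~ %.0f KB\n", bandwidth * delay / 1024);

    return 0;
}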

> The problem with the default value is that even if the remote side had rmem
> increased, there was no increase in throughput unless I increased wmem.

There are three values in each field: minimum, default, and maximum. The
suggestion was (AIUI) that you increase the maximums for both rmem and
wmem, but not the defaults. Whether that works depends on who you're
talking to, and with what (as does much of this discussion).
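
For reference, those fields live in /proc/sys/net/ipv4/tcp_rmem and
tcp_wmem; this trivial program just prints the three values so you can
see what you'd be raising (only the third, the maximum, under the
suggestion above):

/* Dump the min/default/max triples from the tcp_rmem and tcp_wmem sysctls. */
#include <stdio.h>

static void show(const char *path)
{
    FILE *f = fopen(path, "r");
    long min, def, max;

    if (!f) { perror(path); return; }
    if (fscanf(f, "%ld %ld %ld", &min, &def, &max) == 3)
        printf("%s: min=%ld default=%ld max=%ld\n", path, min, def, max);
    fclose(f);
}

int main(void)
{
    show("/proc/sys/net/ipv4/tcp_rmem");
    show("/proc/sys/net/ipv4/tcp_wmem");
    return 0;
}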

-- 
Mike Stone

