
Re: Problems with some web sites (tuning?)



owens@netptc.net writes:
>>---- Original Message ----
>>From: carlj@peak.org
>>To: debian-user@lists.debian.org
>>Subject: Re: Problems with some web sites (tuning?)
>>Date: Sat, 28 Mar 2009 15:32:49 -0700
>>
>>>owens@netptc.net writes:
>>>
>>>> I don't know what could be causing this but the behavior might
>>>> suggest that the packets are undergoing a very high error rate.
>>>> The timeout or "lockup" can indicate that the packets cannot be
>>>> reassembled at the destination (your computer) and the TCP
>>>> protocol times out waiting for one or more missing packets.
>>>
>>>That makes sense to me, but why is it only a very few web sites?  I
>>>haven't heard complaints about poor wikipedia access, so it appears
>>>that most other people don't have problems with that either.  One
>>>idea I thought of is that maybe they have very tight timeout
>>>limits, and since I am on dialup, I often exceed those limits and
>>>they then drop packets.  I have heard about a couple of things (ECN
>>>and SACK?) that can cause problems with some sites, but I think I
>>>have already disabled them in the sysctl.conf settings I included
>>>in my original message.
>>>
>>>Any idea how I could trace something like dropped packets?  Looking
>>>for the absence of something can be very difficult if I don't know
>>>what I should be looking for.
>>>
>>>Thanks for your suggestions.
>>>-- 
>>>Carl Johnson		carlj@peak.org
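As a quick check on the ECN and SACK settings mentioned above: on a
Debian system those are kernel sysctls, so something like the
following should show whether they really are disabled (just a
sketch; 1 means enabled, 0 disabled):

    # Show the current values of the two TCP options
    sysctl net.ipv4.tcp_ecn net.ipv4.tcp_sack

    # The matching /etc/sysctl.conf lines to disable both would be:
    #   net.ipv4.tcp_ecn = 0
    #   net.ipv4.tcp_sack = 0

On the tracing question, one rough approach is to watch a problem
site with tcpdump on the dialup link and look for duplicate sequence
numbers, which mark retransmitted (i.e. lost) packets; the interface
name ppp0 and the host are only placeholders:

    # Capture traffic to one problem site on the PPP interface
    tcpdump -n -i ppp0 host www.example.com

    # Kernel-wide TCP retransmission counters; compare the numbers
    # before and after fetching a problem page
    netstat -s | grep -i retrans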
> Oy, dial-up!  Since others have not seemed to experience this and
> since you are on dial-up, this seems to point to your local dial-up
> connection: either excessive errors (a possibility) or substantial
> delay (also a possibility).  Either one potentially requires (TCP)
> retransmission.  A couple of tests to run (the second is a bit
> difficult):
> 1.  Try the difficult web sites in the middle of the night, when the
> ambient noise (and hence the error rate) should be lower.
> 2.  Move your computer to another location (a friend's house across
> town, perhaps) and try it from there.

Actually, that brings up another possibility.  I hadn't mentioned that
I also use wwwoffle (an offline www caching proxy), and I just
realized that it seems to play a part.  While offline, wwwoffle
queues up any http requests until I go online, and then it fetches
them up to 4 at a time.  The problem mostly goes away when I fetch
articles 1 at a time, so it appears that the congestion makes the
problem much worse.  Having wwwoffle fetch queued-up jobs is
convenient, so for now I'll just experiment with reducing the number
of parallel fetches to find a reasonable compromise.
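In case it is useful to anyone else, the parallel-fetch limit appears
to live in the StartUp section of /etc/wwwoffle/wwwoffle.conf; the
option name below is from memory, so check wwwoffle.conf(5) before
relying on it:

    StartUp
    {
     # How many queued requests wwwoffle fetches in parallel when it
     # goes online; dropping this from the default of 4 toward 1
     # should reduce the congestion on a dialup link.
     max-fetch-servers = 1
    }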

Thanks for your suggestions.  I think they have given me some idea of
what is happening.
-- 
Carl Johnson		carlj@peak.org

