
Re: APT do not work with Squid as a proxy because of pipelining default



Bjørn Mork <bjorn@mork.no> writes:

> Goswin von Brederlow <goswin-v-b@web.de> writes:
>
>> An HTTP/1.1-conforming server or proxy
>
> This is not the real world...
>
>> is free to process pipelined
>> requests serially one by one. The only requirement is that it does not
>> corrupt the second request by reading all available data into a buffer,
>> parsing the first request and then throwing away the buffer and thereby
>> discarding the subsequent requests in that buffer. It is perfectly fine
>> for the server to parse the first request, think, respond to the first
>> request and then continue to parse the second one.
>
> Yes, this can be done.  But you should ask yourself what proxies are
> used for.  The serializing strategy will work, but it will make the
> connection slower with a proxy than without.  That's not going to sell
> many proxy servers.

Sure. I was talking about conformance. A proxy or server that screws that
up deserves to be shot. And from the description of the problem this part
actually works in Squid; people WOULD have noticed otherwise. Squid's
problem seems to be that it breaks later, when processing the
response. But that is a guess from the description of the problem. Still,
no tcpdump of what actually happens has been posted.
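
To make the conformance point concrete, here is a minimal sketch of the
serializing strategy (Python, my own illustration, not Squid code): read
whatever is available into a buffer, peel off one complete request at a
time, and keep the leftover bytes around instead of discarding them with
the buffer:

    import socket

    def respond(conn, request):
        # Placeholder response.  A real server would parse the request
        # line and honour Content-Length; APT's GETs have no body, so
        # splitting on the blank line is enough for this sketch.
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

    def serve_serially(conn):
        buf = b""
        while True:
            data = conn.recv(4096)
            if not data:
                break
            buf += data
            # The crucial part: bytes after the first complete request
            # stay in buf for the next iteration, so pipelined requests
            # are processed one by one but never lost.
            while b"\r\n\r\n" in buf:
                request, buf = buf.split(b"\r\n\r\n", 1)
                respond(conn, request)

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)
    conn, addr = srv.accept()
    serve_serially(conn)

And for settling the guesswork, something like

    tcpdump -s 0 -w apt-squid.pcap port 3128

run on the client (assuming Squid's default port) would show exactly
where the requests or responses get mangled.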

>> Note that that behaviour in the server already gives a huge speed
>> increase. It cuts away the round-trip time between the last response
>> and the next request. For static web content the speedup of processing
>> pipelined requests truly in parallel is negligible anyway. Only dynamic
>> pages, where formulating the response to a request takes time, would
>> benefit from working on multiple responses on multiple cores. And those
>> cores are probably busy handling requests from other connections. So I
>> wouldn't expect servers to actually do parallel processing of pipelined
>> requests at all.
>
> This is true for a web server.  It is only true for a proxy server if it
> can either forward a pipelined request or parallelize it.  That's where
> we get the complexity.
>
> If you keep your simple serial strategy, then a pipelined request will
> be slower than a parallel one.

But no slower than without pipelining. I was talking specifically about
servers and not proxies for a reason, by the way. :)
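
Back-of-the-envelope, with made-up numbers (r = round-trip time, t =
per-response transfer time, n requests):

    n, r, t = 10, 0.1, 0.02          # made-up numbers
    no_pipelining = n * (r + t)      # one full round trip per request: 1.2 s
    serial_pipelined = r + n * t     # one RTT, then back-to-back: 0.3 s

Even a strictly serializing server saves roughly (n - 1) * r over no
pipelining at all; true parallel processing could only shave off part of
the remaining n * t.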

>> And your argument 1 applies perfectly to fixing squid, by the way. It
>> should accept pipelined requests, and then it can process them one by
>> one and send them on non-pipelined if it likes. It should NOT corrupt
>> the requests/responses.
>
> Sure.  Squid should be fixed.  But I'm afraid fixing squid in Debian
> won't fix all the http proxies around the world.

Let's start with the beam in our own eye before we go chasing motes in
other people's.

>> So just from your argument 1 APT should default not to pipeline and
>> squid should be fixed.
>
> Good.

Let me clarify this. APT should default to not pipelining in stable; that
is the trivial workaround for the problem. In unstable something better
can be tried, like detecting when pipelining fails and automatically
falling back to depth 1. But defaulting to depth 1 until someone
implements that is an option too.
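
A sketch of what that detection might look like -- fetch_pipelined and
ResponseMismatch are hypothetical names for illustration, not APT's
actual internals:

    class ResponseMismatch(Exception):
        # Raised when a pipelined response does not belong to the
        # request we sent (wrong size, checksum, ...).  Hypothetical.
        pass

    def fetch_all(urls):
        try:
            # fetch_pipelined() stands in for APT's http method;
            # depth is the number of requests kept in flight.
            return fetch_pipelined(urls, depth=10)
        except ResponseMismatch:
            # The proxy mangled the pipeline: reconnect and retry
            # everything one request at a time.
            return fetch_pipelined(urls, depth=1)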

> Bjørn

Regards
        Goswin

