
Bug#810796: HTTP pipelining is broken and causes download failures



On Tue, Jan 12, 2016 at 02:36:28PM +0100, Julian Andres Klode wrote:
> Well, yes, nobody really uses HTTP/1.0 servers, so it's not really tested much.

I just checked and a newer squid (3.1.19) that I have handy also
responds with HTTP/1.0. I don't have a squid 3.4 deployment to check,
but 3.1 shipped in wheezy (Debian LTS project EOL is 2018) and in Ubuntu
12.04, which is not EOL until 2019. I think apt needs to be able to work
by default through squid proxies that are still commonly deployed.

> If we have hashes, we will try to do pipelining and then fall back if
> the server messes up the response.
> 
> Maybe it helps to also disable pipelining if the server responds with
> HTTP/1.0, like this:

I think this would help. You will reduce the number of pipelined
requests, and thus the number of times the race is lost. But you won't
eliminate the race completely. Really you need to only enable pipelining
after you see an HTTP/1.1 or higher response, rather than the other way
round. That is trickier to code correctly but is required by the
standards, AIUI.

How about disabling pipelining by default if Acquire::http::Proxy is
defined? This still wouldn't be technically correct, but it might be a
good interim workaround that saves users from having to research the
fix themselves.
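For reference, the workaround users currently dig up by hand is an apt.conf fragment along these lines (the file path is just an example):

```
// /etc/apt/apt.conf.d/99no-pipelining (example path)
Acquire::http::Pipeline-Depth "0";
```

Defaulting to the equivalent of this whenever a proxy is configured would spare them that research.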

And in many cases a locally configured proxy will also sit on the local
network, so there is less to lose by forgoing pipelining in those cases
anyway, even if the proxy does support it.
