
Re: apt and libcurl



On Thu, Apr 17, 2014 at 02:09:43PM +0100, Tim Retout wrote:
> I was looking into what would be required to add socks proxy support to
> apt (bug #744934), and I found that the HTTP and HTTPS acquire methods
> are implemented completely differently.

Well, they aren't completely different. I have 'recently' moved them a
bit closer together to fix some longstanding bugs in https regarding
partial-file handling, as our http code is relatively advanced and deals
with this and other things, while curl doesn't really handle it all that
well …

This leaves us in a state in which curl's job in https is basically
doing the 's' part, while most of the 'http' part is done by our own
http code (not all bits, but we parse the response ourselves, for
example). So in theory we could implement https on our own, but well,
the world really doesn't need another buggy SSL/TLS implementation,
I presume…


> Am I right in thinking that making libcurl a dependency of the HTTP and
> FTP transports would not be acceptable?  Or some other networking
> library?  I can't find any previous discussion on this.

Which is also why it could be difficult to replace our http with
something else. Pipelining, partial files and an army of If-headers
aren't handled by many clients (and, for that matter, not by all
servers either).
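
To give a flavour of what that means in practice (purely illustrative,
not copied from apt's source): resuming a half-downloaded index ends up
as a conditional range request along the lines of

    GET /debian/dists/sid/main/binary-amd64/Packages.gz HTTP/1.1
    Host: archive.example.org
    Range: bytes=131072-
    If-Range: Sat, 12 Apr 2014 20:15:01 GMT

and the client has to cope both with servers that honour it and with
those that ignore the range and send the full file again anyway.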

FTP, well, I guess it doesn't have that many users anymore. The http
comment from 1998 already says that http is better in every way, so 16
years later I guess everyone got the memo. I can't remember changing
anything in the ftp code at least, so it is probably happily bit-
rotting in the source tree…


> Pros: Reduced code in apt.  Possibly easier to share code between HTTP
> and HTTPS methods.  Looking ahead, features like HTTP 2 might be easier
> to add via curl.  Cons: Increased size of base system.  Possibly
> performance?

I am not really sure you will actually end up with "reduced code" once
you reach feature parity. Proving me wrong would be nice, though. There
is also the problem that there is no OpenSSL licence exception, so
GnuTLS it is at the moment, whatever that means with regard to HTTP/2.

Technically, apt is 'just' Priority: important, so if you can promote a
library into this rank (or higher) you are good to go and can use it. So
far https lives in an optional package, so in that regard you can do
whatever you want there.

Our acquire system is pluggable, so just go ahead, implement a
'yourhttp' method and use it in your sources.list to convince others.
Probably better than talking theoretically about the endless options we
might have.
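
In case anyone wants to try: a minimal sketch of such a method, assuming
only the documented stdin/stdout message interface (the '100 Capabilities',
'600 URI Acquire', '201 URI Done' and '400 URI Failure' messages); the
'yourhttp' name, the choice of Python and the lack of any error handling,
hashes or partial-file support are all mine, not apt's:

    #!/usr/bin/python3
    # Sketch of an acquire method; would live in /usr/lib/apt/methods/yourhttp
    import sys
    import urllib.request

    def send(code, text, **fields):
        # Messages are "<code> <text>", followed by "Header: Value" lines
        # and terminated by a blank line.
        lines = ["%s %s" % (code, text)]
        lines += ["%s: %s" % (k.replace("_", "-"), v) for k, v in fields.items()]
        sys.stdout.write("\n".join(lines) + "\n\n")
        sys.stdout.flush()

    # Announce ourselves before apt sends any work our way.
    send(100, "Capabilities", Version="1.0", Single_Instance="true")

    status, fields = None, {}
    for raw in sys.stdin:
        line = raw.rstrip("\n")
        if line:
            if status is None:
                status = line                      # e.g. "600 URI Acquire"
            else:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
            continue
        # Blank line: one complete message received.
        if status and status.startswith("600"):    # 600 URI Acquire
            uri, filename = fields["URI"], fields["Filename"]
            try:
                # The made-up scheme is mapped back to plain http here.
                data = urllib.request.urlopen(
                    uri.replace("yourhttp://", "http://", 1)).read()
                with open(filename, "wb") as f:
                    f.write(data)
                send(201, "URI Done", URI=uri, Filename=filename, Size=len(data))
            except Exception as exc:
                send(400, "URI Failure", URI=uri, Message=str(exc))
        status, fields = None, {}

Dropped there and made executable, a line like

    deb yourhttp://archive.example.org/debian sid main

in sources.list should be enough to exercise it, as the URI scheme is
what selects the method.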

Regarding performance, well, who cares… no, I don't trust comments from
1998. Of course we don't want it to be dead slow, but not at any cost,
and if apt really were that good with a double buffer I am sure at least
some others would have caught up in the last 16 years…
Recent benchmarks or it didn't happen ;)
And http2 might be able to teach proxies/servers how to support
pipelining correctly, which might very well be the biggest performance
win for apt (pipelining is disabled by default at the moment…).
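(If someone wants to benchmark that: pipelining can be re-enabled via
apt.conf, e.g. something like

    Acquire::http::Pipeline-Depth "5";

assuming the mirror and every proxy on the way actually get it right.)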


Best regards

David Kalnischkies
