
Re: Please test gzip -9n - related to dpkg with multiarch support



On Thu, 2012-02-09 at 21:50:17 +0200, Riku Voipio wrote:
> On Thu, Feb 09, 2012 at 03:34:28AM +0100, Guillem Jover wrote:
> > Riku mentioned as an argument that this increases the data to download
> > due to slightly bigger Packages files, but pdiffs were introduced
> > exactly to fix that problem. And as long as the packages do not get
> > updated, one should not get pdiff updates. And with the splitting of
> > Description there's even less data to download now.
> 
> Off-topic, but often pdiffs don't really speed up apt-get update: the
> added round-trip latency of pulling several small files slows down the
> download, unless you run update nightly.

One of the reasons for this, I think, is that the current pdiff
implementation in apt is really not optimal; see #372712.
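
To put the round-trip argument in numbers, here's a back-of-the-envelope
model (all figures below are made-up assumptions, not measurements):

    # Each fetched file costs roughly one round trip plus transfer time.
    rtt_ms=100       # assumed round-trip time to the mirror
    rate_kbs=10000   # assumed throughput in kB/s (a fast link)
    pdiffs=30        # assumed pdiffs accumulated since the last update
    pdiff_kb=3       # assumed size of a single pdiff
    full_kb=8000     # assumed size of the full Packages file

    echo "pdiffs: $(( pdiffs * rtt_ms + pdiffs * pdiff_kb * 1000 / rate_kbs )) ms"
    echo "full:   $(( rtt_ms + full_kb * 1000 / rate_kbs )) ms"

With those figures the thirty round trips cost about 3000 ms against
roughly 900 ms for a single full fetch; with only one or two pdiffs
pending (i.e. nightly updates) the balance flips the other way.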

> But the more interesting slowdown is that the number of packages in general
> slows down apt operations at a rate that is around O(dependencies^2) (pure
> guess, perhaps someone has better knowledge?). We do remember apt-get
> slowing down to a crawl on Maemo platforms with much smaller repositories...

Well, if we take the number of new packages Steve quoted (even without
taking into account the stuff I mentioned that could be reduced), and
round it up to 200 new packages, that's really insignificant compared to
the number of packages one will inject into apt per new foreign arch
configured. I really fail to see the issue here.
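
To put rough numbers on that (round, hypothetical figures; sid carries
on the order of 35000 binary packages per architecture):

    # ~200 packages from the one-time splits vs the ~35000 extra
    # packages every configured foreign architecture adds to apt's view.
    awk 'BEGIN { printf "splits: +%.1f%%, one foreign arch: +100%%\n",
                 200 / 35000 * 100 }'

That prints around +0.6% for the splits, which is noise next to what a
single foreign architecture adds.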

> > Adding shared file support into dpkg introduces additional unneeded
> > complexity that can never be taken out, and which, it seems clear to
> > me, should be dealt with at the package level instead.
> 
> However, if we add the complexity to dpkg, we don't need to add it to
> all of the 1000+ multiarched packages. It would not be wise to do something
> 1000 times in packages when it could be done once in dpkg. Even if doing it
> once in dpkg is harder, it is still a lot less total work. Since Debian has
> a chronic lack of active hands working on packages, solutions that add to
> the workload of maintainers will just slow down the development of Debian
> even further.

If this were something that dpkg could do reliably, that was
future-proof, introduced no issues at all and was technically sound,
then I'd agree with you that even if it were harder to implement (which
is not the case) and to maintain (maybe), it would be well worth it.
But given the amount of problems, the inconsistent handling between
M-A: same and other packages, and the corner cases and general
fragility it introduces, all for the supposed benefit of a size
reduction (which does not seem to be significant at all) and of
avoiding a possible one-time package split, it seems clear this is
completely the wrong approach.

Also, except for the package splits, most of the arch-qualified path
changes should be easily handled automatically by something like
debhelper or cdbs.
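
As a minimal sketch (the libfoo package name is hypothetical): the
multiarch triplet comes from dpkg-architecture, and debhelper install
files can match it with a wildcard instead of hardcoding it:

    $ dpkg-architecture -qDEB_HOST_MULTIARCH
    x86_64-linux-gnu

    # debian/libfoo1.install can then list:
    #   usr/lib/*/libfoo.so.*
    # and dh_install will pick up the triplet directory on any arch.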

In any case, when I was talking about complexity here, I did not mean
code-wise, but the implications it has on the handling of packages in
general. I'll write more about this in a summary mail I'm finishing up.

regards,
guillem

