
Re: Please test gzip -9n - related to dpkg with multiarch support



Steve Langasek <vorlon@debian.org> writes:

> On Thu, Feb 09, 2012 at 10:29:53PM +0100, Guillem Jover wrote:
>> > But the more interesting slowdown is that the number of packages in general
>> > slows down apt operations at a rate that is around O(dependencies^2) (pure guess,
>> > perhaps someone has better knowledge?). We do remember apt-get slowing down
>> > to a crawl on maemo platforms with much smaller repositories.
>
>> Well, if we take the number of new packages Steve quoted (even w/o
>> taking into account the stuff I mentioned that could be reduced), and
>> round it to 200 new packages, that's really insignificant compared to
>> the amount of packages one will inject into apt per new foreign arch
>> configured. I really fail to see the issue here.
>
> That's based on a sample of 1200 packages currently tagged Multi-Arch: same
> in the Ubuntu precise archive.  If we have all packages in sections libs and
> libdevel converted for multiarch (which I suppose we eventually will), this
> number will be closer to 7000.  Does 700 more of these support packages
> approach the level that it starts to be a problem?

Currently we have 36706 packages in main/contrib/non-free amd64 sid.
Adding 700 would be an increase of less than 2%. If the slowdown is
linear, that isn't a problem. If it is quadratic or even exponential,
that could be different. But then we have bigger problems anyway, since
the number of packages in Debian will probably grow by more than 700 by
wheezy+1 regardless of multiarch. And doubling (tripling, quadrupling)
the number of packages, as multiarch systems do, would totally kill
performance. Since apt still works even with 4 archs, I strongly doubt
the O(dependencies^2) guess.

Adding 700 packages still seems minor compared to the 36706 (73412,
110118) that multiarch adds.
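
For illustration, a rough back-of-the-envelope sketch (not apt's actual
cost model; the linear/quadratic/exponential rules are just toy
assumptions, with the exponential one doubling the cost per extra 36706
packages) comparing the effect of +700 packages against doubling or
quadrupling the archive for multiarch:

BASE = 36706  # packages in main/contrib/non-free amd64 sid

def relative_cost(n, model):
    """Cost of handling n packages relative to BASE under a toy model."""
    if model == "linear":
        return n / BASE
    if model == "quadratic":
        return (n / BASE) ** 2
    if model == "exponential":
        # assumed toy rule: cost doubles per extra BASE packages
        return 2 ** ((n - BASE) / BASE)
    raise ValueError(model)

for label, n in [("+700 support packages", BASE + 700),
                 ("2 architectures", 2 * BASE),
                 ("4 architectures", 4 * BASE)]:
    costs = ", ".join(f"{m} {relative_cost(n, m):.2f}x"
                      for m in ("linear", "quadratic", "exponential"))
    print(f"{label:<22} {costs}")

Even under the quadratic toy model, +700 packages costs only about
1.04x, while 4 architectures costs 16x; the support packages are noise
next to the multiarch package injection itself.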

MfG
        Goswin
