
Re: APT branch using CMake and debhelper 7 available



On Sun, Nov 22, 2009 at 01:20:46PM +0100, David Kalnischkies wrote:
> Hello Julian & Goswin and hi deity@ in general
> 
> 2009/11/20 Julian Andres Klode <jak@debian.org>:
> >    * Merge apt-utils into apt (+400kB)
> >      REASON: libdb4.7 is required, and they otherwise have the same
> >      dependencies; they also have the same priority (important);
> >      and 400kB are not very much.
> The dpkg team at least plans to "Merge back apt-extracttemplates" [0].
> (but no message to deity@ about this or the other plans so far)
I did not know this, but it sounds good.

> As i think the package is only important because of this application and
> the others are only useful for repositorybuilders i would leave the package
> as is for now and after the back merge drastically downgrade the priority.
> (btw as a nitpicker i need to say:
>  apt already builds itself against libdb4.8 ;) )
Sorry, I only checked Ubuntu, as I am still running Ubuntu 9.10 here.


> 
> >    * Move libapt-pkg and libapt-inst into a libapt-pkg4.8 package.
> >      REASON: When they are not shipped in the apt package we can handle
> >      ABI breaks more easily without breaking most systems (like requesting
> >      removal of python-apt just because it is not recompiled yet).
> Yes, but is it really a good idea to allow a maintainer of a package related
> to package management to be so lazy that two different ABIs are shipped
> in a stable release? The discussion about this seems to be pretty old
> (if not as old as apt itself) and as much as i hate it to destroy unstable
> for a few days or weeks just because the apt team has uploaded an ABI or API
> break it would be even harder for me to see half of the reverse dependencies
> using an obsolete API (or ABI) in testing/stable and therefore using
> obsolete logic to retrieve, install, remove and manage packages.
> I don't think this is something we should support. Therefore we would need to
> enforce a single version in stable versions which we already have now for free.
> (also: maintaining multiple versions - the ones in unstable and the ones in
>  stable - would be near to impossible with the current size of the APT Team)
Stable won't ship with two different ABI versions as the older one would have
no source package. And as Goswin said, co-installability is a requirement of
Policy §8.2.

> 
> It would also expose us to other problems like the Cache as Goswin mentioned.
> (It would be possible that apt accepts an old format as correct which
>  would result in "funny" things... )
Our cache files are versioned, so I don't expect any problems here; see:
   if (HeaderP->MajorVersion != DefHeader.MajorVersion ||
       HeaderP->MinorVersion != DefHeader.MinorVersion ||
       HeaderP->CheckSizes(DefHeader) == false)
      return _error->Error(_("The package cache file is an incompatible version"));

If the version differs, APT can fall back to a memory-only cache, reading
the data directly from disk.

> I am also considering apt as pseudo-essential, therefore it should work in
> "unpack" (e.g. after dpkg was interrupted by something) which is not given
> with a versionmismatch between apt and libapt...

AFAIK:
If dpkg fails while installing libapt-pkg, APT would continue to work,
because it still uses the old library. And libapt is always unpacked before
apt, because apt depends on libapt.

> 
> I think it would be better to stabilize the API more & more, e.g.
> with the help of fvisibility ... especially as we don't break the API
> so often and an ABI break-fix is just a simple binNMU away...
People complain a lot when we break the ABI, because APT has had far too
many ABI breaks already.

> 
> Packages we should really build from the source would be
> (at least in my eyes)
> * apt-dbg (obsolete if the automatic debug-package build goal is reached)
> * apt-common to collect locales & translations of manpages in an arch-indep
> package instead of shipping them for every arch independently.
> (lintian doesn't complain about it now, but if we get a few more manpage
> translations it should start soonish to do it and it would support nicely
> a few hacks for embedded devices and could come in handy if tdebs step out
> of the dark draft state - so more or less a win-win situation)
> The good part is that I already spent a few minutes this week to do this
> and i intended to propose that to Michael for the next upload - it is just
> a question of how fast we complete the real tasks for the next version ...
> 
> > I would like to get your comments about the build system, the proposals,
> > and receive patches for documentation and translation building.
> ... - no offense intended - but it looks to me that cmake will fix a problem
> which doesn't really exist or isn't as major as other problems we should
> tackle (at first), e.g. all the funny resolver stuff needed for multi-arch
It's not a major problem, but it is a problem. And using cmake would make
building much easier, reducing the time needed to test one's changes;
especially if one builds multiple times using sbuild/pbuilder.

> (the download of the Packages files is trivial and a [unfinished] patch
> from Goswin already exists). As some already noticed the acquire system also
> needs some work, not so much on the speed part (come on, i can't even imagine
> a situation in which 20,000 items are in the queue, so why discuss it as
> if it were a major problem) but on the extensibility side:

At least from the DebImg perspective, 20,000 items are common. Building images
requires many files, and debimg would add 10 * 20,000 = 200,000 items to the
queue if it built 10 architectures at once. On the other hand, debimg is not
being developed at the moment; maybe I'll continue next year.

And since the required time is quadratic, things get even worse as the number
of items grows: each of the n enqueued items triggers a traversal of the
queue, so enqueueing n items costs O(n²) overall. As far as I can tell, a
hash map (mapping URI to item) would solve this problem most efficiently.

The traversal part seems to be:
   // move to the end of the queue and check for duplicates here
   for (; *I != 0; I = &(*I)->Next)
      if (Item.URI == (*I)->URI) 
      {
	 Item.Owner->Status = Item::StatDone;
	 return false;
      }
and is located in bool pkgAcquire::Queue::Enqueue(ItemDesc &Item).


> Quite a few clients have metadata which should be in sync with the Packages
> files and should therefore be updated also in an "apt-get update":
> debtags comes to my mind, all the fancy stuff Ubuntu Software Center
> wants to show is another one and even the multi-arch thing above and
> things like the ability to download multiple Translation-files would benefit
> from it, while it is not strictly required for the last two things.
> (bonus: if it is done right all these files could get checksum, pdiffs
> and/or future extensions like zsync basically for free)

Those clients can handle this themselves by providing a command and adding it
to APT::Update::Post-Invoke-Success. Debtags can do this, and so can others.
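For instance, a hypothetical apt.conf fragment (the file name and the exact
debtags invocation are assumptions, not an existing package's configuration):

```
// /etc/apt/apt.conf.d/90debtags (hypothetical)
APT::Update::Post-Invoke-Success { "command -v debtags > /dev/null && debtags update || true"; };
```

The `|| true` keeps a debtags failure from being reported as an update error.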
-- 
Julian Andres Klode  - Debian Developer, Ubuntu Member

See http://wiki.debian.org/JulianAndresKlode and http://jak-linux.org/.

