
Re: APT branch using CMake and debhelper 7 available



Hi again *,

>> >    * Move libapt-pkg and libapt-inst into a libapt-pkg4.8 package.
>> >      REASON: When they are not shipped in the apt package we can handle
>> >      ABI breaks more easily without breaking most systems (like requesting
>> >      removal of python-apt just because it is not recompiled yet).
>> Yes, but is it really a good idea to allow a maintainer of a package related
>> to package management to be so lazy that two different ABIs are shipped
>> in a stable release? The discussion about this seems to be pretty old
>> (if not as old as apt itself) and as much as i hate it to destroy unstable
>> for a few days or weeks just because the apt team has uploaded an ABI or API
>> break it would be even harder for me to see half of the reverse dependencies
>> using an obsolete API (or ABI) in testing/stable and therefore using
>> obsolete logic to retrieve, install, remove and manage packages.
>> I don't think this is something we should support. Therefore we would need to
>> enforce a single version in stable versions which we already have now for free.
>> (also: maintaining multiple versions - the ones in unstable and the ones in
>>  stable - would be near to impossible with the current size of the APT Team)
> Stable won't ship with two different ABI versions as the older one would have
> no source package.
But the library package is not removed from the user's system (yes,
autoremove could do that, but it doesn't apply whenever some package
still depends on the library: maybe a self-built one, a package from a
third-party source, or even a no-longer-supported (and therefore
removed) package from Debian), and that is what worries me. Also, if we
don't ship and maintain different library versions, why ship them as a
separate package in the first place - just to be gentler to unstable?
(remember: unstable is meant to break your toys -- and yes, I am an
 unstable user, so I know it breaks my toys, but better my toys than
 the business of testing and stable users. At some point in the future
 the software needs to be rebuilt against the new version anyway, so why
 not directly in unstable, where FTBFS bugs can be fixed immediately?
 To me, libapt is just not a "normal" library that should follow
 the same rules as, say, libqt in a 3-to-4 transition.)

>And as Goswin said, co-installability is a requirement of Policy §8.2.
Huh? [0] I have now read Chapter 8 multiple times and I can't find a MUST
in it regarding co-installability, nor a MUST regarding a SONAME - only a
"should" (which is treated as a must, with the possibility of unnamed
exceptions). We could even argue that the library breaks the ABI/API way
too often to be considered stable and should therefore be linked
statically [1]. (NO, I don't recommend that.) Heck, a few people have
already joked that Debian is built on software that has been in
pre-release versions for more than 10 years, so how could we ever
provide a stable API for it. ;)
We could revisit whether to use a SONAME after stabilizing the library
(this is what the dpkg team is trying to do with libdpkg), but right
now, with ABI and possibly API breaks happening every now and then, it
would be (in my eyes) unwise.
(btw: the last ABI break was 4 months ago.)

I agree that breaking uploads should be better coordinated with the
release team, but whether we need to go through NEW or through a bunch
of binNMUs doesn't really make a big difference (apart from which teams
are involved).

>> It would also expose us to other problems like the Cache as Goswin mentioned.
>> (It would be possible that apt accepts an old format as correct which
>>  would result in "funny" things... )
> Our cache files are versioned, so I don't expect any problems here; see:
>   if (HeaderP->MajorVersion != DefHeader.MajorVersion ||
>       HeaderP->MinorVersion != DefHeader.MinorVersion ||
>       HeaderP->CheckSizes(DefHeader) == false)
>      return _error->Error(_("The package cache file is an incompatible version"));
The cache is versioned differently from libapt-pkg - just in case... And
I can't even provoke that message, so apt is apparently intelligent
enough to rebuild the cache automatically, or ...

>> I am also considering apt as pseudo-essential, therefore it should work in
>> "unpack" (e.g. after dpkg was interrupted by something) which is not given
>> with a versionmismatch between apt and libapt...
> AFAIK:
> If dpkg fails at the libapt-pkg installation, APT would continue to work,
> because it uses the old library. And libapt is always unpacked before apt,
> because APT depends on libapt.
Why? Because some logic in APT defines it? dpkg would be perfectly happy
to unpack apt first and libapt afterwards, as dependencies only matter
at configuration time.

>> > I would like to get your comments about the build system, the proposals,
>> > and receive patches for documentation and translation building.
>> ... - no offense intended - but it looks to me that cmake will fix a problem
>> which doesn't really exist or isn't as major as other problems we should
>> tackle (at first), e.g. all the funny resolver stuff needed for multi-arch
> It's not a major problem, but it is a problem. And using cmake would make building
> much easier, reducing the time needed to test one's changes; especially if one
> build multiple times using sbuild/pbuilder.
I compile apt pretty often on my (slow) laptop, and I am not really sure
the pure overhead of the build system and make is the big time sink that
CMake would fix. ccache and distcc help a lot more...
(and a way to disable building the locales and manpages would be good)

>> (the download of the Packages files is trivial and a [unfinished] patch
>> from Goswin already exists). As some already noticed the acquire system also
>> need some work, not so much on the speed part (come on, i can't even imagine
>> a situation in which 20.000 items are in the queue, so why discuss it as if
>> it would be a major problem) but on the extensibility side:
> At least from the DebImg perspective, 20,000 items are common. Building images
> requires many files; and debimg would add 10*20,000 = 200,000 items to a queue
> if it wants to build 10 architectures at once. But on the other hand, debimg is
> not developed at the moment; maybe I'll continue next year.
I didn't say nobody would accept patches; I was just thinking about the
normal everyday use case of apt and its library: package management on a
user's system. I haven't seen a (real-world) system with 20,000 packages
yet, and I guess I never will.* What I wanted to say in the previous
mail is that focusing on the common case would be a great start; we
(or the actual developer of app X) can think about these not-so-common
use cases later. Building CDs for 10 architectures is not something I
consider a normal use case - and even if I did, I am slightly tempted
to say the acquire system would not be the bottleneck there either.

>> Quite a few clients have metadata which should be in sync with the Packages
>> files and should therefore be updated also in an "apt-get update":
>> debtags comes to my mind, all the fancy stuff Ubuntu Software Center
>> wants to show is another one and even the multi-arch thing above and
>> things like the ability to download multiple Translation-files would benefit
>> from it, while it is not strictly required for the last two things.
>> (bonus: if it is done right all these files could get checksum, pdiffs
>> and/or future extensions like zsync basically for free)
> Those clients can do this themselves by providing a command somewhere
> and adding it to APT::Update::Post-Invoke-Success. Debtags can do this,
> and others can do this as well.
Yep, and they can reimplement sources.list parsing, checksums, pdiff
and everything else as well - or apt could simply provide a way to do
this for those applications. The Packages file is already full of
rarely used metadata just because nobody wants to reinvent apt - SCNR -
and so metadata ends up in files which apt already downloads.
Such a feature would be no black magic, not entirely useless or
misplaced, and at least in my eyes way more useful than a new build
system. But your mileage may vary, of course, since I am neither a
D{M,D} nor an Ubuntu relative...
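Just for illustration, the hook approach quoted above would look roughly
like this as an apt.conf snippet; the file name and the exact debtags
invocation are assumptions on my part, not something apt or debtags
ships today:

```
// /etc/apt/apt.conf.d/90debtags (hypothetical example)
APT::Update::Post-Invoke-Success { "debtags update || true"; };
```

Every client would have to ship and maintain such a snippet itself,
which is exactly the duplication described above.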



Best regards / Mit freundlichen Grüßen,

David "DonKult" Kalnischkies

(* I think this will come back to haunt me someday, like the famous
 "640K ought to be enough for anybody" comment.)

