Re: britney changes
Yo,
Heh. So much for the "working out when to do -foo%bar etc" being the
easy part.
# for src in testing.sources:
#     if src in upgrademe:
#         rem_sources = unstable
#         rem_packages = unstablepkgs
#     else:
#         rem_sources = testing
#         rem_packages = None
So, we need to look at each package; and for the ones we're upgrading,
we want to compare the "kept around" packages in testing against the
source/related packages in unstable, not testing.
#     src_bin_cur = [{},{}]
#     src_arch_cur = [{},{}]
So, obviously a list of two hashes. The first hash is for the
"non-current" packages, the second for the "current" packages.
They're accessed as src_*_cur[False] or src_*_cur[True], and should be
read as "packages that aren't/are current".
The hash takes a package or an arch, and gives you a hash of
arches/packages where the pairing is non-current/current as appropriate.
Simple, right?
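For example, with some made-up data (a source "foo" with binaries
"libfoo0" and "libbar0", on i386 and alpha only), the structures might
end up looking like:

    # index 0 (False): not current; index 1 (True): current
    src_bin_cur = [
        {"libfoo0": {"i386": 1}, "libbar0": {"i386": 1, "alpha": 1}},
        {"libfoo0": {"alpha": 1}},
    ]
    src_arch_cur = [
        {"i386": {"libfoo0": 1, "libbar0": 1}, "alpha": {"libbar0": 1}},
        {"alpha": {"libfoo0": 1}},
    ]

ie, libfoo0 is out of date on i386 but current on alpha, and libbar0 is
out of date everywhere. (Indexing a list with False/True works because
bools are just 0 and 1.)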
#     srcv = rem_sources.get_version(src)
srcv is the source version we're working towards.
#     for arch in arches:
#         rem_bins = []
#         if rem_packages:
#             rem_bins = rem_packages.binaries(src, arch)
Binaries that will be upgraded. Hrm, horribly named.
#         for b in testing.binaries(src, arch):
So, let's look at the packages that could potentially be out of date.
#             if b in rem_bins: # will be upgraded anyway
#                 continue
We assume upgraded packages will be up to date. Not necessarily true,
but it'll be caught later so whatever.
#             n = same_source(testing[arch].get_sourcever(b), srcv)
Is the package (going to be) up to date?
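same_source being whatever helper decides "built from the same source
upload"; plain string equality isn't quite enough if binNMUs are meant
to still count as current. A rough sketch of the semantics (the ".0.N"
rebuild-suffix handling is an assumption, adjust to the real
convention):

    import re
    def same_source(sv1, sv2):
        # same version string: trivially the same source upload
        if sv1 == sv2:
            return 1
        # assumption: a binNMU-style rebuild suffix should still count
        # as "built from the same source"
        strip = lambda v: re.sub(r"\.0\.\d+$", "", v)
        return strip(sv1) == strip(sv2)

Returning 0/1 or False/True doesn't matter, either works as the index
below.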
#             src_bin_cur[n].setdefault(b, {})
#             src_bin_cur[n][b][arch] = 1
#             src_arch_cur[n].setdefault(arch, {})
#             src_arch_cur[n][arch][b] = 1
Add an entry to the appropriate hash, creating the subhash if necessary.
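setdefault just saves spelling out the longhand; for one (n, b, arch)
triple it's equivalent to:

    if b not in src_bin_cur[n]:
        src_bin_cur[n][b] = {}
    src_bin_cur[n][b][arch] = 1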
#             if n and b not in rem_bins:
#                 print "XXX WARNING CAN'T DEAL"
Drat, this is still not the right test for the un-handle-able special
case mentioned in the previous mail. Horrible. I guess the test has to
go somewhere else entirely :(
Anyway, once we've collected all that info for all arches...
#     bins_to_remove = []
#     undone_arch = {}
We start working out which binaries we have to remove specially, and
which architectures need further out-of-date ("ood") removals done.
#     for b in src_bin_cur[False].keys():
First we process all the out of date binaries.
#         if not src_bin_cur[True].has_key(b):
#             bins_to_remove.append("-%s%%%s" % (src, b))
Is this binary up to date on any architecture? If not, then it can be
removed entirely.
#         else:
#             for a in src_bin_cur[True][b].keys():
#                 undone_arch[a] = 1
Otherwise, this arch will have to be processed specially, next.
Really, all this could be done with the per-arch tests, but when
"libfoo0" needs to be removed on all architectures anyway, it seems
better to me to remove it on all of them simultaneously.
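To make that concrete, here's the same pass pulled out as a stand-alone
function (sketch only), run against the made-up libfoo0/libbar0 data
from earlier:

    def removal_pass(src, src_bin_cur):
        # decide which binaries can be removed on all arches at once,
        # and which arches need the per-arch treatment instead
        bins_to_remove = []
        undone_arch = {}
        for b in src_bin_cur[False].keys():
            if b not in src_bin_cur[True]:
                # out of date everywhere: remove outright
                bins_to_remove.append("-%s%%%s" % (src, b))
            else:
                # still current somewhere: flag those arches
                for a in src_bin_cur[True][b].keys():
                    undone_arch[a] = 1
        return bins_to_remove, undone_arch

    # with the earlier example data:
    #   removal_pass("foo", src_bin_cur)
    #   => (["-foo%libbar0"], {"alpha": 1})

libbar0 goes outright; libfoo0 leaves alpha flagged for the next step.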
#     #XXX not yet implemented:
#     #for a in undone_arch.keys():
#     #    bins_to_remove.append("-%s/%s" % (src, a))
Pretty obvious. It's just that the "-foo/i386" form hasn't been
implemented elsewhere yet (doop_source, britney-py.c, maybe dpkg.c).
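For reference, the removal item forms as this mail uses them (example
strings only, the "/arch" form being the unimplemented one):

    "-foo"           # remove source foo from testing entirely
    "-foo%libbar0"   # remove just the libbar0 binary of foo, all arches
    "-foo/alpha"     # per-arch removal for foo on alpha (not implemented)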
#     if src in upgrademe:
#         hidden[src] = bins_to_remove
#     else:
#         upgrademe.extend(bins_to_remove)
And with all that done, we either queue the removals directly, or defer
them until "src" itself has been done.
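A sketch of how I'd expect the deferred entries to get flushed, with a
hypothetical do_upgrade() standing in for the real per-item processing:

    i = 0
    while i < len(upgrademe):
        item = upgrademe[i]
        do_upgrade(item)   # hypothetical; whatever processes one item
        # once "src" itself has gone in, queue its deferred removals
        if item in hidden:
            upgrademe.extend(hidden.pop(item))
        i += 1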
FYI only; this was mostly for my benefit. I haven't tried testing it,
as the leading hashes might have indicated, and I think I saw a couple
of syntax errors anyway :)
Cheers,
aj