ITM: Britney patch series



Hi,

I have devised a set of 16 patches for Britney2, which I intend to
merge into master given no objections within a week or a request for
further time to review the patches.

The patch series contains:

 * Bug fix in reverse dependency handling that could make the nuninst
   sets go out of sync (the reworked closure is sketched after this
   list).
   - Thanks to Ivo for finding this and devising a minimal test case.
     Not to mention Adam helping us with debugging the issue. :)
   - The series also includes a patch to make britney assert that she
     does not screw up the nuninst sets again.

 * "No change" re-factoring to:
   - remove RDEPENDS, RCONFLICTS plus register_reverses
   - use the "pkg id" tuples (name, version, "parch") more
   - move a computation out of get_dependency_solvers

 * Rewrite migration code to interleave "auto-hints" in the main run.
   - this has some changes to the output format


Output changes/interleaving hints in the main run
=================================================

As part of supporting "auto-hints" in the main run, I rewrote the
output from:

  trying: item
  (skipped|accepted): item

To:

  trying: item1 item2 [...]
  (skipped|accepted): item1 item2 [...]

I considered this more readable than reusing the current "easy" hint
output.  On failed auto-hints, the runner currently splits the hint
into single items and tries them individually (sketched below).
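
A minimal sketch of that fallback (the names here are illustrative,
not the actual runner API):

  # Hypothetical fallback: try the combined set of items first; if
  # the result is not accepted, retry each item on its own.
  def process_auto_hint(self, items):
      accepted = self.try_migration(items)
      if not accepted:
          for item in items:
              self.try_migration([item])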

Results
=======

With the patch series, Britney passes 96 out of 101 tests (also,
thanks to Ivo for getting us up to 101 \o/).  The current master
branch passes 91.  Notably, 3 of the newly passing tests are
different ways of making britney screw up her nuninst sets.

Live-data tests also saw some result changes (see live-data-tests.diff):

 * 2011-12-13: Interleaved hinting makes two extra packages migrate

 * 2012-01-04:
  - Interleaved hinting makes haskell-math-functions able to migrate
  - The rdep fixes make Britney realise she needs to keep 4 extra
    binaries around to avoid breaking packages.


Thanks,
~Niels

From 8eb1abede397986e49f53fa6ad45b8902d74cc71 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Wed, 9 Sep 2015 17:38:33 +0200
Subject: [PATCH 01/16] InstTester: Clarify some documentation

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 installability/tester.py | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/installability/tester.py b/installability/tester.py
index e148c62..d4810c1 100644
--- a/installability/tester.py
+++ b/installability/tester.py
@@ -25,15 +25,16 @@ class InstallabilityTester(object):
                  safe_set, eqv_table):
         """Create a new installability tester
 
-        universe is a dict mapping package tuples to their
+        universe is a dict mapping package ids to their
         dependencies and conflicts.
 
-        revuniverse is a set of all packages with reverse relations
+        revuniverse is a table containing all packages with reverse
+        relations mapping them to their reverse relations.
 
-        testing is a (mutable) set of package tuples that determines
+        testing is a (mutable) set of package ids that determines
         which of the packages in universe are currently in testing.
 
-        broken is a (mutable) set of package tuples that are known to
+        broken is a (mutable) set of package ids that are known to
         be uninstallable.
 
         essentials is a set of packages with "Essential: yes".
@@ -42,7 +43,7 @@ class InstallabilityTester(object):
         either have no dependencies or only depends on other "safe"
         packages.
 
-        Package tuple: (pkg_name, pkg_version, pkg_arch)
+        Package id: (pkg_name, pkg_version, pkg_arch)
           - NB: arch:all packages are "re-mapped" to given architecture.
             (simplifies caches and dependency checking)
         """
-- 
2.5.1

From 289807463ea42e9c8267aab45ea29bff402e077f Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Wed, 9 Sep 2015 17:39:09 +0200
Subject: [PATCH 02/16] Rewrite computation and testing of "affected" packages

Rely on the InstallabilityTester to locate all of the affected
packages plus their transitive reverse dependencies.  As the
InstallabilityTester is suite agnostic, the set of affected packages
now includes (versions of) packages not in testing; these are filtered
out during the check.
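
The filtering is a version check in the updated check step; in short:

    pkgdata = binaries[parch][0][name]
    if version != pkgdata[VERSION]:
        # Not the version in testing right now, ignore
        continue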

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py               | 93 ++++++++++++++++++++++--------------------------
 britney_util.py          | 51 ++++++++------------------
 installability/tester.py | 24 +++++++++++++
 3 files changed, 82 insertions(+), 86 deletions(-)

diff --git a/britney.py b/britney.py
index c0acebf..5884817 100755
--- a/britney.py
+++ b/britney.py
@@ -2044,7 +2044,6 @@ class Britney(object):
         # local copies for better performance
         sources = self.sources
         packages_t = self.binaries['testing']
-        get_reverse_tree = partial(compute_reverse_tree, packages_t)
         inst_tester = self._inst_tester
         eqv_set = set()
 
@@ -2078,8 +2077,8 @@ class Britney(object):
                             eqv_set.add(key)
 
                 # remove all the binaries which aren't being smooth updated
-                for rm_tuple in rms:
-                    binary, version, parch = rm_tuple
+                for rm_pkg_id in rms:
+                    binary, version, parch = rm_pkg_id
                     p = binary + "/" + parch
                     binaries_t_a, provides_t_a = packages_t[parch]
                     pkey = (binary, parch)
@@ -2090,7 +2089,9 @@ class Britney(object):
                     if pkey not in eqv_set:
                         # all the reverse dependencies are affected by
                         # the change
-                        affected.update(get_reverse_tree(binary, parch))
+
+                        affected.update(inst_tester.reverse_dependencies_of(rm_pkg_id))
+                        affected.update(inst_tester.negative_dependencies_of(rm_pkg_id))
 
                     # remove the provided virtual packages
                     for j in pkg_data[PROVIDES]:
@@ -2116,8 +2117,10 @@ class Britney(object):
         elif item.package in packages_t[item.architecture][0]:
             binaries_t_a = packages_t[item.architecture][0]
             undo['binaries'][item.package + "/" + item.architecture] = binaries_t_a[item.package]
-            affected.update(get_reverse_tree(item.package, item.architecture))
             version = binaries_t_a[item.package][VERSION]
+            pkg_id = (item.package, version, item.architecture)
+            affected.add(pkg_id)
+            affected.update(inst_tester.reverse_dependencies_of(pkg_id))
             del binaries_t_a[item.package]
             inst_tester.remove_testing_binary(item.package, version, item.architecture)
 
@@ -2127,15 +2130,16 @@ class Britney(object):
             source = sources[item.suite][item.package]
             packages_s = self.binaries[item.suite]
 
-            for binary, version, parch in updates:
+            for updated_pkg_id in updates:
+                binary, new_version, parch = updated_pkg_id
                 p = "%s/%s" % (binary, parch)
                 key = (binary, parch)
                 binaries_t_a, provides_t_a = packages_t[parch]
                 equivalent_replacement = key in eqv_set
 
                 # obviously, added/modified packages are affected
-                if not equivalent_replacement and key not in affected:
-                    affected.add(key)
+                if not equivalent_replacement and updated_pkg_id not in affected:
+                    affected.add(updated_pkg_id)
                 # if the binary already exists in testing, it is currently
                 # built by another source package. we therefore remove the
                 # version built by the other source package, after marking
@@ -2144,15 +2148,14 @@ class Britney(object):
                     old_pkg_data = binaries_t_a[binary]
                     # save the old binary package
                     undo['binaries'][p] = old_pkg_data
+                    old_version = old_pkg_data[VERSION]
                     if not equivalent_replacement:
+                        old_pkg_id = (binary, old_version, parch)
                         # all the reverse dependencies are affected by
                         # the change
-                        affected.update(get_reverse_tree(binary, parch))
-                        # all the reverse conflicts and their
-                        # dependency tree are affected by the change
-                        for j in old_pkg_data[RCONFLICTS]:
-                            affected.update(get_reverse_tree(j, parch))
-                    old_version = old_pkg_data[VERSION]
+                        affected.update(inst_tester.reverse_dependencies_of(old_pkg_id))
+                        # all the reverse conflicts
+                        affected.update(inst_tester.negative_dependencies_of(old_pkg_id))
                     inst_tester.remove_testing_binary(binary, old_version, parch)
                 elif hint_undo:
                     # the binary isn't in testing, but it may have been at
@@ -2165,16 +2168,14 @@ class Britney(object):
                     # reverse dependencies built from this source can be
                     # ignored as their reverse trees are already handled
                     # by this function
-                    # XXX: and the reverse conflict tree?
                     for (tundo, tpkg) in hint_undo:
                         if p in tundo['binaries']:
-                            for rdep in tundo['binaries'][p][RDEPENDS]:
-                                if rdep in binaries_t_a and rdep not in source[BINARIES]:
-                                    affected.update(get_reverse_tree(rdep, parch))
+                            pv = tundo['binaries'][p][VERSION]
+                            tpkg_id = (binary, pv, parch)
+                            affected.update(inst_tester.reverse_dependencies_of(tpkg_id))
 
                 # add/update the binary package from the source suite
                 new_pkg_data = packages_s[parch][0][binary]
-                new_version = new_pkg_data[VERSION]
                 binaries_t_a[binary] = new_pkg_data
                 inst_tester.add_testing_binary(binary, new_version, parch)
                 # register new provided packages
@@ -2188,7 +2189,9 @@ class Britney(object):
                     provides_t_a[j].append(binary)
                 if not equivalent_replacement:
                     # all the reverse dependencies are affected by the change
-                    affected.update(get_reverse_tree(binary, parch))
+                    affected.add(updated_pkg_id)
+                    affected.update(inst_tester.reverse_dependencies_of(updated_pkg_id))
+                    affected.update(inst_tester.negative_dependencies_of(updated_pkg_id))
 
             # register reverse dependencies and conflicts for the new binary packages
             if item.architecture == 'source':
@@ -2202,6 +2205,8 @@ class Britney(object):
             if item.architecture == 'source':
                 sources['testing'][item.package] = sources[item.suite][item.package]
 
+        # Also include the transitive rdeps of the packages found so far
+        compute_reverse_tree(inst_tester, affected)
         # return the package name, the suite, the list of affected packages and the undo dictionary
         return (affected, undo)
 
@@ -2211,38 +2216,26 @@ class Britney(object):
         to_check = []
 
         # broken packages (first round)
-        for p in (x[0] for x in affected if x[1] == arch):
-            if p not in binaries[arch][0]:
+        for pkg_id in (x for x in affected if x[2] == arch):
+            name, version, parch = pkg_id
+            if name not in binaries[parch][0]:
                 continue
-            pkgdata = binaries[arch][0][p]
-            version = pkgdata[VERSION]
-            parch = pkgdata[ARCHITECTURE]
+            pkgdata = binaries[parch][0][name]
+            if version != pkgdata[VERSION]:
+                # Not the version in testing right now, ignore
+                continue
+            actual_arch = pkgdata[ARCHITECTURE]
             nuninst_arch = None
             # only check arch:all packages if requested
-            if check_archall or parch != 'all':
-                nuninst_arch = nuninst[arch]
-            elif parch == 'all':
-                nuninst[arch].discard(p)
-            self._installability_test(p, version, arch, broken, to_check, nuninst_arch)
-
-        # broken packages (second round, reverse dependencies of the first round)
-        while to_check:
-            j = to_check.pop(0)
-            if j not in binaries[arch][0]: continue
-            for p in binaries[arch][0][j][RDEPENDS]:
-                if p in broken or p not in binaries[arch][0]:
-                    continue
-                pkgdata = binaries[arch][0][p]
-                version = pkgdata[VERSION]
-                parch = pkgdata[ARCHITECTURE]
-                nuninst_arch = None
-                # only check arch:all packages if requested
-                if check_archall or parch != 'all':
-                    nuninst_arch = nuninst[arch]
-                elif parch == 'all':
-                    nuninst[arch].discard(p)
-                self._installability_test(p, version, arch, broken, to_check, nuninst_arch)
+            if check_archall or actual_arch != 'all':
+                nuninst_arch = nuninst[parch]
+            elif actual_arch == 'all':
+                nuninst[parch].discard(name)
+            self._installability_test(name, version, parch, broken, to_check, nuninst_arch)
 
+        # We have always overshot the affected set, so to_check does not
+        # contain anything new.
+        assert affected.issuperset(to_check)
 
     def iter_packages_hint(self, hinted_packages, lundo=None):
         """Iter on hinted list of actions and apply them in one go
@@ -2986,12 +2979,12 @@ class Britney(object):
             # not installable
             if pkg_name not in broken:
                 broken.add(pkg_name)
-                to_check.append(pkg_name)
+                to_check.append((pkg_name, pkg_version, pkg_arch))
             if nuninst_arch is not None and pkg_name not in nuninst_arch:
                 nuninst_arch.add(pkg_name)
         else:
             if pkg_name in broken:
-                to_check.append(pkg_name)
+                to_check.append((pkg_name, pkg_version, pkg_arch))
                 broken.remove(pkg_name)
             if nuninst_arch is not None and pkg_name in nuninst_arch:
                 nuninst_arch.remove(pkg_name)
diff --git a/britney_util.py b/britney_util.py
index 6915126..3103bdb 100644
--- a/britney_util.py
+++ b/britney_util.py
@@ -259,45 +259,24 @@ def register_reverses(packages, provides, check_doubles=True, iterator=None,
                                 packages[i][RCONFLICTS].append(pkg)
 
 
-def compute_reverse_tree(packages_s, pkg, arch,
-                     set=set, flatten=chain.from_iterable,
-                     RDEPENDS=RDEPENDS):
-    """Calculate the full dependency tree for the given package
+def compute_reverse_tree(inst_tester, affected):
+    """Calculate the full dependency tree for a set of packages
 
-    This method returns the full dependency tree for the package
-    "pkg", inside the "arch" architecture for a given suite flattened
-    as an iterable.  The first argument "packages_s" is the binary
-    package table for that given suite (e.g. Britney().binaries["testing"]).
+    This method returns the full reverse dependency tree for a given
+    set of packages.  The first argument is an instance of the
+    InstallabilityTester and the second argument is a set of package
+    ids (as defined in the constructor of the InstallabilityTester).
 
-    The tree (or graph) is returned as an iterable of (package, arch)
-    tuples and the iterable will contain ("pkg", "arch") if it is
-    available on that architecture.
-
-    If "pkg" is not available on that architecture in that suite,
-    this returns an empty iterable.
-
-    The method does not promise any ordering of the returned
-    elements and the iterable is not reusable.
-
-    The flatten=... and the "X=X" parameters are optimizations to
-    avoid "load global" in the loops.
+    The set of affected packages will be updated in place and must
+    therefore be mutable.
     """
-    binaries = packages_s[arch][0]
-    if pkg not in binaries:
-        return frozenset()
-    rev_deps = set(binaries[pkg][RDEPENDS])
-    seen = set([pkg])
-
-    binfilt = ifilter_only(binaries)
-    revfilt = ifilter_except(seen)
-
-    while rev_deps:
-        # mark all of the current iteration of packages as affected
-        seen |= rev_deps
-        # generate the next iteration, which is the reverse-dependencies of
-        # the current iteration
-        rev_deps = set(revfilt(flatten( binaries[x][RDEPENDS] for x in binfilt(rev_deps) )))
-    return zip(seen, repeat(arch))
+    remain = list(affected)
+    while remain:
+        pkg_id = remain.pop()
+        new_pkg_ids = inst_tester.reverse_dependencies_of(pkg_id) - affected
+        affected.update(new_pkg_ids)
+        remain.extend(new_pkg_ids)
+    return None
 
 
 def write_nuninst(filename, nuninst):
diff --git a/installability/tester.py b/installability/tester.py
index d4810c1..eeb3059 100644
--- a/installability/tester.py
+++ b/installability/tester.py
@@ -115,6 +115,30 @@ class InstallabilityTester(object):
         eqv_table = self._eqv_table
         return p1 in eqv_table and p2 in eqv_table[p1]
 
+    def reverse_dependencies_of(self, pkg_id):
+        """Returns the set of reverse dependencies of a given package
+
+        :param pkg_id: The package id as defined in the constructor.
+        :return: A set containing the package ids of all the reverse
+        dependencies of the input package.  The result is suite agnostic.
+        """
+        revuniverse = self._revuniverse
+        if pkg_id not in revuniverse:
+            return frozenset()
+        return revuniverse[pkg_id][0]
+
+    def negative_dependencies_of(self, pkg_id):
+        """Returns the set of negative dependencies of a given package
+
+        Note that there is no "reverse_negative_dependencies_of" method,
+        since negative dependencies have no "direction" unlike positive
+        dependencies.
+
+        :param pkg_id: The package id as defined in the constructor.
+        :return: A set containing the package ids of all the negative
+        dependencies of the input package.  The result is suite agnostic.
+        """
+        return self._universe[pkg_id][1]
 
     def add_testing_binary(self, pkg_name, pkg_version, pkg_arch):
         """Add a binary package to "testing"
-- 
2.5.1

From 5123bc23172ae408c5500ae8dccf079f0b113457 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Wed, 9 Sep 2015 18:22:08 +0200
Subject: [PATCH 03/16] Remove the now unused RCONFLICTS field

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py      |  3 +--
 britney_util.py | 19 ++-----------------
 consts.py       |  3 +--
 3 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/britney.py b/britney.py
index 5884817..eda9fbe 100755
--- a/britney.py
+++ b/britney.py
@@ -208,7 +208,7 @@ from britney_util import (old_libraries_format, same_source, undo_changes,
                           clone_nuninst)
 from consts import (VERSION, SECTION, BINARIES, MAINTAINER, FAKESRC,
                    SOURCE, SOURCEVER, ARCHITECTURE, DEPENDS, CONFLICTS,
-                   PROVIDES, RDEPENDS, RCONFLICTS, MULTIARCH, ESSENTIAL)
+                   PROVIDES, RDEPENDS, MULTIARCH, ESSENTIAL)
 
 __author__ = 'Fabio Tranchitella and the Debian Release Team'
 __version__ = '2.0'
@@ -673,7 +673,6 @@ class Britney(object):
                     ', '.join(final_conflicts_list) or None,
                     get_field('Provides'),
                     [],
-                    [],
                     ess,
                    ]
 
diff --git a/britney_util.py b/britney_util.py
index 3103bdb..ad1dbe9 100644
--- a/britney_util.py
+++ b/britney_util.py
@@ -33,7 +33,7 @@ import yaml
 from migrationitem import MigrationItem, UnversionnedMigrationItem
 
 from consts import (VERSION, BINARIES, PROVIDES, DEPENDS, CONFLICTS,
-                    RDEPENDS, RCONFLICTS, ARCHITECTURE, SECTION,
+                    RDEPENDS, ARCHITECTURE, SECTION,
                     SOURCE, SOURCEVER, MAINTAINER, MULTIARCH,
                     ESSENTIAL)
 
@@ -204,8 +204,7 @@ def old_libraries_format(libs):
 
 def register_reverses(packages, provides, check_doubles=True, iterator=None,
                       parse_depends=apt_pkg.parse_depends,
-                      DEPENDS=DEPENDS, CONFLICTS=CONFLICTS,
-                      RDEPENDS=RDEPENDS, RCONFLICTS=RCONFLICTS):
+                      DEPENDS=DEPENDS, RDEPENDS=RDEPENDS):
     """Register reverse dependencies and conflicts for a given
     sequence of packages
 
@@ -243,20 +242,6 @@ def register_reverses(packages, provides, check_doubles=True, iterator=None,
                         if i not in packages: continue
                         if not check_doubles or pkg not in packages[i][RDEPENDS]:
                             packages[i][RDEPENDS].append(pkg)
-        # register the list of the conflicts for the conflicting packages
-        if pkg_data[CONFLICTS]:
-            for p in parse_depends(pkg_data[CONFLICTS], False):
-                for a in p:
-                    con = a[0]
-                    # register real packages
-                    if con in packages and (not check_doubles or pkg not in packages[con][RCONFLICTS]):
-                        packages[con][RCONFLICTS].append(pkg)
-                    # also register packages which provide the package (if any)
-                    if con in provides:
-                        for i in provides[con]:
-                            if i not in packages: continue
-                            if not check_doubles or pkg not in packages[i][RCONFLICTS]:
-                                packages[i][RCONFLICTS].append(pkg)
 
 
 def compute_reverse_tree(inst_tester, affected):
diff --git a/consts.py b/consts.py
index c72bb8b..3353520 100644
--- a/consts.py
+++ b/consts.py
@@ -34,5 +34,4 @@ DEPENDS = 6
 CONFLICTS = 7
 PROVIDES = 8
 RDEPENDS = 9
-RCONFLICTS = 10
-ESSENTIAL = 11
+ESSENTIAL = 10
-- 
2.5.1

From 3d5d328738a3520a7385ac1694a899c8d4cc89ed Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Wed, 9 Sep 2015 20:30:09 +0200
Subject: [PATCH 04/16] Use inst_tester to compute smooth updatable binaries

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py               | 85 ++++++++++++++++++++++--------------------------
 installability/tester.py | 17 ++++++++++
 2 files changed, 56 insertions(+), 46 deletions(-)

diff --git a/britney.py b/britney.py
index eda9fbe..98d7656 100755
--- a/britney.py
+++ b/britney.py
@@ -1887,10 +1887,11 @@ class Britney(object):
         # local copies for better performances
         sources = self.sources
         binaries_t = self.binaries['testing']
+        inst_tester = self._inst_tester
 
         adds = set()
         rms = set()
-        smoothbins = {}
+        smoothbins = set()
 
         # remove all binary packages (if the source already exists)
         if migration_architecture == 'source' or not is_removal:
@@ -1898,7 +1899,7 @@ class Britney(object):
                 source_data = sources['testing'][source_name]
 
                 bins = []
-                check = {}
+                check = set()
                 # remove all the binaries
 
                 # first, build a list of eligible binaries
@@ -1911,11 +1912,11 @@ class Britney(object):
                     if (not include_hijacked
                         and binaries_t[parch][0][binary][SOURCE] != source_name):
                         continue
+                    version = binaries_t[parch][0][binary][VERSION]
+                    bins.append((binary, version, parch))
 
-                    bins.append(p)
-
-                for p in bins:
-                    binary, parch = p.split("/")
+                for pkg_id in bins:
+                    binary, _, parch = pkg_id
                     # if a smooth update is possible for the package, skip it
                     if allow_smooth_updates and suite == 'unstable' and \
                        binary not in self.binaries[suite][parch][0] and \
@@ -1927,52 +1928,44 @@ class Britney(object):
                         # a smooth update.  if not, it may still be a valid
                         # candidate if one of its r-deps is itself a candidate,
                         # so note it for checking later
-                        bin_data = binaries_t[parch][0][binary]
-                        rdeps = bin_data[RDEPENDS]
-
-                        # the list of reverse-dependencies may be outdated
-                        # if, for example, we're processing a hint and
-                        # a new version of one of the apparent reverse-dependencies
-                        # migrated earlier in the hint.  walk the list to make
-                        # sure that at least one of the entries is still
-                        # valid
-                        rrdeps = [x for x in rdeps if x not in [y.split("/")[0] for y in bins]]
-                        if rrdeps:
-                            for dep in rrdeps:
-                                if dep in binaries_t[parch][0]:
-                                    bin = binaries_t[parch][0][dep]
-                                    deps = []
-                                    # If the package is being removed
-                                    # together with dep, then it is
-                                    # not a reason to smooth update
-                                    # the binary
-                                    t = (dep, bin[VERSION], parch)
-                                    if t in removals:
-                                        continue
-
-                                    if bin[DEPENDS] is not None:
-                                        deps.extend(apt_pkg.parse_depends(bin[DEPENDS], False))
-                                    if any(binary == entry[0] for deplist in deps for entry in deplist):
-                                        smoothbins[p] = (binary, bin_data[VERSION], parch)
+                        rdeps = set(inst_tester.reverse_dependencies_of(pkg_id))
+                        # We ignore all binaries listed in "removals" as we
+                        # assume they will leave at the same time as the
+                        # given package.
+                        rdeps.difference_update(removals, bins)
+
+                        smooth_update_it = False
+                        if inst_tester.any_of_these_are_in_testing(rdeps):
+                            combined = set(smoothbins)
+                            combined.add(pkg_id)
+                            for rdep in rdeps:
+                                for dep_clause in inst_tester.dependencies_of(rdep):
+                                    if dep_clause <= combined:
+                                        smooth_update_it = True
                                         break
+
+                        if smooth_update_it:
+                            smoothbins = combined
                         else:
-                            check[p] = (binary, bin_data[VERSION], parch)
+                            check.add(pkg_id)
 
                 # check whether we should perform a smooth update for
                 # packages which are candidates but do not have r-deps
                 # outside of the current source
-                for p in check:
-                    ptuple = check[p]
-                    binary, _, parch = ptuple
-                    rdeps = [ bin for bin in binaries_t[parch][0][binary][RDEPENDS] \
-                              if bin in [y[0] for y in smoothbins.values()] ]
-                    if rdeps:
-                        smoothbins[p] = ptuple
+                while 1:
+                    found_any = False
+                    for pkg_id in check:
+                        rdeps = inst_tester.reverse_dependencies_of(pkg_id)
+                        if not rdeps.isdisjoint(smoothbins):
+                            smoothbins.add(pkg_id)
+                            found_any = True
+                    if not found_any:
+                        break
+                    check = [x for x in check if x not in smoothbins]
 
                 # remove all the binaries which aren't being smooth updated
-                for p in ( bin for bin in bins if bin not in smoothbins ):
-                    binary, parch = p.split("/")
-                    version = binaries_t[parch][0][binary][VERSION]
+                for pkg_id in (pkg_id for pkg_id in bins if pkg_id not in smoothbins):
+                    binary, version, parch = pkg_id
                     # if this is a binary migration from *pu, only the arch:any
                     # packages will be present. ideally dak would also populate
                     # the arch-indep packages, but as that's not the case we
@@ -1983,7 +1976,7 @@ class Britney(object):
                          binaries_t[parch][0][binary][ARCHITECTURE] == 'all':
                         continue
                     else:
-                        rms.add((binary, version, parch))
+                        rms.add(pkg_id)
 
         # single binary removal; used for clearing up after smooth
         # updates but not supported as a manual hint
@@ -2015,7 +2008,7 @@ class Britney(object):
 
                 adds.add((binary, version, parch))
 
-        return (adds, rms, set(smoothbins.values()))
+        return (adds, rms, smoothbins)
 
     def doop_source(self, item, hint_undo=None, removals=frozenset()):
         """Apply a change to the testing distribution as requested by `pkg`
diff --git a/installability/tester.py b/installability/tester.py
index eeb3059..f578960 100644
--- a/installability/tester.py
+++ b/installability/tester.py
@@ -140,6 +140,23 @@ class InstallabilityTester(object):
         """
         return self._universe[pkg_id][1]
 
+    def dependencies_of(self, pkg_id):
+        """Returns the set of dependencies of a given package
+
+        :param pkg_id: The package id as defined in the constructor.
+        :return: A set containing the package ids of all the dependencies
+        of the input package.  The result is suite agnostic.
+        """
+        return self._universe[pkg_id][0]
+
+    def any_of_these_are_in_testing(self, pkgs):
+        """Test if at least one package of a given set is in testing
+
+        :param pkgs: A set of package ids (as defined in the constructor)
+        :return: True if any of the packages in pkgs are currently in testing
+        """
+        return not self._testing.isdisjoint(pkgs)
+
     def add_testing_binary(self, pkg_name, pkg_version, pkg_arch):
         """Add a binary package to "testing"
 
-- 
2.5.1

From 7736b53c60f846c11f5a0ee41766036cc36fc3b7 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Wed, 9 Sep 2015 20:31:07 +0200
Subject: [PATCH 05/16] Drop now unused register_reverses and RDEPENDS

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py      | 16 ++--------------
 britney_util.py | 45 +--------------------------------------------
 consts.py       |  3 +--
 3 files changed, 4 insertions(+), 60 deletions(-)

diff --git a/britney.py b/britney.py
index 98d7656..098921d 100755
--- a/britney.py
+++ b/britney.py
@@ -200,7 +200,7 @@ from excuse import Excuse
 from migrationitem import MigrationItem
 from hints import HintCollection
 from britney_util import (old_libraries_format, same_source, undo_changes,
-                          register_reverses, compute_reverse_tree,
+                          compute_reverse_tree,
                           read_nuninst, write_nuninst, write_heidi,
                           eval_uninst, newly_uninst, make_migrationitem,
                           write_excuses, write_heidi_delta, write_controlfiles,
@@ -208,7 +208,7 @@ from britney_util import (old_libraries_format, same_source, undo_changes,
                           clone_nuninst)
 from consts import (VERSION, SECTION, BINARIES, MAINTAINER, FAKESRC,
                    SOURCE, SOURCEVER, ARCHITECTURE, DEPENDS, CONFLICTS,
-                   PROVIDES, RDEPENDS, MULTIARCH, ESSENTIAL)
+                   PROVIDES, MULTIARCH, ESSENTIAL)
 
 __author__ = 'Fabio Tranchitella and the Debian Release Team'
 __version__ = '2.0'
@@ -672,7 +672,6 @@ class Britney(object):
                     deps,
                     ', '.join(final_conflicts_list) or None,
                     get_field('Provides'),
-                    [],
                     ess,
                    ]
 
@@ -711,9 +710,6 @@ class Britney(object):
             # add the resulting dictionary to the package list
             packages[pkg] = dpkg
 
-        # loop again on the list of packages to register reverse dependencies and conflicts
-        register_reverses(packages, provides, check_doubles=False)
-
         # return a tuple with the list of real and virtual packages
         return (packages, provides)
 
@@ -2185,14 +2181,6 @@ class Britney(object):
                     affected.update(inst_tester.reverse_dependencies_of(updated_pkg_id))
                     affected.update(inst_tester.negative_dependencies_of(updated_pkg_id))
 
-            # register reverse dependencies and conflicts for the new binary packages
-            if item.architecture == 'source':
-                pkg_iter = (p.split("/")[0] for p in source[BINARIES])
-            else:
-                ext = "/" + item.architecture
-                pkg_iter = (p.split("/")[0] for p in source[BINARIES] if p.endswith(ext))
-            register_reverses(binaries_t_a, provides_t_a, iterator=pkg_iter)
-
             # add/update the source package
             if item.architecture == 'source':
                 sources['testing'][item.package] = sources[item.suite][item.package]
diff --git a/britney_util.py b/britney_util.py
index ad1dbe9..bd34b2f 100644
--- a/britney_util.py
+++ b/britney_util.py
@@ -33,7 +33,7 @@ import yaml
 from migrationitem import MigrationItem, UnversionnedMigrationItem
 
 from consts import (VERSION, BINARIES, PROVIDES, DEPENDS, CONFLICTS,
-                    RDEPENDS, ARCHITECTURE, SECTION,
+                    ARCHITECTURE, SECTION,
                     SOURCE, SOURCEVER, MAINTAINER, MULTIARCH,
                     ESSENTIAL)
 
@@ -201,49 +201,6 @@ def old_libraries_format(libs):
     return "\n".join("  " + k + ": " + " ".join(libraries[k]) for k in libraries) + "\n"
 
 
-
-def register_reverses(packages, provides, check_doubles=True, iterator=None,
-                      parse_depends=apt_pkg.parse_depends,
-                      DEPENDS=DEPENDS, RDEPENDS=RDEPENDS):
-    """Register reverse dependencies and conflicts for a given
-    sequence of packages
-
-    This method registers the reverse dependencies and conflicts for a
-    given sequence of packages.  "packages" is a table of real
-    packages and "provides" is a table of virtual packages.
-
-    iterator is the sequence of packages for which the reverse
-    relations should be updated.
-
-    The "X=X" parameters are optimizations to avoid "load global" in
-    the loops.
-    """
-    if iterator is None:
-        iterator = packages.keys()
-    else:
-        iterator = ifilter_only(packages, iterator)
-
-    for pkg in iterator:
-        # register the list of the dependencies for the depending packages
-        dependencies = []
-        pkg_data = packages[pkg]
-        if pkg_data[DEPENDS]:
-            dependencies.extend(parse_depends(pkg_data[DEPENDS], False))
-        # go through the list
-        for p in dependencies:
-            for a in p:
-                dep = a[0]
-                # register real packages
-                if dep in packages and (not check_doubles or pkg not in packages[dep][RDEPENDS]):
-                    packages[dep][RDEPENDS].append(pkg)
-                # also register packages which provide the package (if any)
-                if dep in provides:
-                    for i in provides[dep]:
-                        if i not in packages: continue
-                        if not check_doubles or pkg not in packages[i][RDEPENDS]:
-                            packages[i][RDEPENDS].append(pkg)
-
-
 def compute_reverse_tree(inst_tester, affected):
     """Calculate the full dependency tree for a set of packages
 
diff --git a/consts.py b/consts.py
index 3353520..de5d05b 100644
--- a/consts.py
+++ b/consts.py
@@ -33,5 +33,4 @@ MULTIARCH = 5
 DEPENDS = 6
 CONFLICTS = 7
 PROVIDES = 8
-RDEPENDS = 9
-ESSENTIAL = 10
+ESSENTIAL = 9
-- 
2.5.1

From 2914a8a51b156d70d06fbb41095b5491ee773029 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Mon, 7 Sep 2015 21:04:02 +0200
Subject: [PATCH 06/16] britney.py: Verify nuninst before updating control
 files

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/britney.py b/britney.py
index 098921d..f945352 100755
--- a/britney.py
+++ b/britney.py
@@ -2498,6 +2498,31 @@ class Britney(object):
             undo_changes(lundo, self._inst_tester, self.sources, self.binaries)
 
 
+    def assert_nuninst_is_correct(self):
+        self.__log("> Update complete - Verifying non-installability counters", type="I")
+
+        cached_nuninst = self.nuninst_orig
+        self._inst_tester.compute_testing_installability()
+        computed_nuninst = self.get_nuninst(build=True)
+        if cached_nuninst != computed_nuninst:
+            self.__log("==================== NUNINST OUT OF SYNC =========================", type="E")
+            for arch in self.options.architectures:
+                expected_nuninst = set(cached_nuninst[arch])
+                actual_nuninst = set(computed_nuninst[arch])
+                false_negatives = actual_nuninst - expected_nuninst
+                false_positives = expected_nuninst - actual_nuninst
+                any_output = actual_nuninst
+                if false_negatives:
+                    self.__log(" %s - unnoticed nuninst: %s" % (arch, str(false_negatives)), type="E")
+                if false_positives:
+                    self.__log(" %s - invalid nuninst: %s" % (arch, str(false_positives)), type="E")
+                self.__log(" %s - actual nuninst: %s" % (arch, str(actual_nuninst)), type="I")
+                self.__log("==================== NUNINST OUT OF SYNC =========================", type="E")
+            raise AssertionError("NUNINST OUT OF SYNC")
+
+        self.__log("> All non-installability counters are ok", type="I")
+
+
     def upgrade_testing(self):
         """Upgrade testing using the unstable packages
 
@@ -2579,7 +2604,7 @@ class Britney(object):
         if len(removals) > 0:
             self.output_write("Removing obsolete source packages from testing (%d):\n" % (len(removals)))
             self.do_all(actions=removals)
-                                                                                                                                     
+
         # smooth updates
         if self.options.smooth_updates:
             self.__log("> Removing old packages left in testing from smooth updates", type="I")
@@ -2595,6 +2620,8 @@ class Britney(object):
         self.output_write("List of old libraries in testing (%d):\n%s" % \
              (len(removals), old_libraries_format(removals)))
 
+        self.assert_nuninst_is_correct()
+
         # output files
         if not self.options.dry_run:
             # re-write control files
-- 
2.5.1

From a9ee64470e9ad34c84f6a09d741398fc55849efa Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Wed, 9 Sep 2015 20:44:13 +0200
Subject: [PATCH 07/16] Always use pkg ids in the InstallabilityTester API

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py               | 23 ++++++++++----------
 britney_util.py          |  6 +++---
 installability/tester.py | 55 ++++++++++++++++++++++--------------------------
 3 files changed, 40 insertions(+), 44 deletions(-)

diff --git a/britney.py b/britney.py
index f945352..f74fd81 100755
--- a/britney.py
+++ b/britney.py
@@ -1785,7 +1785,8 @@ class Britney(object):
             nuninst[arch] = set()
             for pkg_name in binaries[arch][0]:
                 pkgdata = binaries[arch][0][pkg_name]
-                r = inst_tester.is_installable(pkg_name, pkgdata[VERSION], arch)
+                pkg_id = (pkg_name, pkgdata[VERSION], arch)
+                r = inst_tester.is_installable(pkg_id)
                 if not r:
                     nuninst[arch].add(pkg_name)
 
@@ -2091,7 +2092,7 @@ class Britney(object):
                             del provides_t_a[j]
                     # finally, remove the binary package
                     del binaries_t_a[binary]
-                    inst_tester.remove_testing_binary(binary, version, parch)
+                    inst_tester.remove_testing_binary(rm_pkg_id)
                 # remove the source package
                 if item.architecture == 'source':
                     undo['sources'][item.package] = source
@@ -2110,7 +2111,7 @@ class Britney(object):
             affected.add(pkg_id)
             affected.update(inst_tester.reverse_dependencies_of(pkg_id))
             del binaries_t_a[item.package]
-            inst_tester.remove_testing_binary(item.package, version, item.architecture)
+            inst_tester.remove_testing_binary(pkg_id)
 
 
         # add the new binary packages (if we are not removing)
@@ -2137,14 +2138,14 @@ class Britney(object):
                     # save the old binary package
                     undo['binaries'][p] = old_pkg_data
                     old_version = old_pkg_data[VERSION]
+                    old_pkg_id = (binary, old_version, parch)
                     if not equivalent_replacement:
-                        old_pkg_id = (binary, old_version, parch)
                         # all the reverse dependencies are affected by
                         # the change
                         affected.update(inst_tester.reverse_dependencies_of(old_pkg_id))
                         # all the reverse conflicts
                         affected.update(inst_tester.negative_dependencies_of(old_pkg_id))
-                    inst_tester.remove_testing_binary(binary, old_version, parch)
+                    inst_tester.remove_testing_binary(old_pkg_id)
                 elif hint_undo:
                     # the binary isn't in testing, but it may have been at
                     # the start of the current hint and have been removed
@@ -2165,7 +2166,7 @@ class Britney(object):
                 # add/update the binary package from the source suite
                 new_pkg_data = packages_s[parch][0][binary]
                 binaries_t_a[binary] = new_pkg_data
-                inst_tester.add_testing_binary(binary, new_version, parch)
+                inst_tester.add_testing_binary(updated_pkg_id)
                 # register new provided packages
                 for j in new_pkg_data[PROVIDES]:
                     key = j + "/" + parch
@@ -2211,7 +2212,7 @@ class Britney(object):
                 nuninst_arch = nuninst[parch]
             elif actual_arch == 'all':
                 nuninst[parch].discard(name)
-            self._installability_test(name, version, parch, broken, to_check, nuninst_arch)
+            self._installability_test(name, pkg_id, broken, to_check, nuninst_arch)
 
         # We have always overshot the affected set, so to_check does not
         # contain anything new.
@@ -2968,7 +2969,7 @@ class Britney(object):
             for stat in self._inst_tester.stats.stats():
                 self.__log('>   %s' % stat, type="I")
 
-    def _installability_test(self, pkg_name, pkg_version, pkg_arch, broken, to_check, nuninst_arch):
+    def _installability_test(self, pkg_name, pkg_id, broken, to_check, nuninst_arch):
         """Test for installability of a package on an architecture
 
         (pkg_name, pkg_version, pkg_arch) is the package to check.
@@ -2981,17 +2982,17 @@ class Britney(object):
         If nuninst_arch is not None then it is also updated in the same
         way as broken is.
         """
-        r = self._inst_tester.is_installable(pkg_name, pkg_version, pkg_arch)
+        r = self._inst_tester.is_installable(pkg_id)
         if not r:
             # not installable
             if pkg_name not in broken:
                 broken.add(pkg_name)
-                to_check.append((pkg_name, pkg_version, pkg_arch))
+                to_check.append(pkg_id)
             if nuninst_arch is not None and pkg_name not in nuninst_arch:
                 nuninst_arch.add(pkg_name)
         else:
             if pkg_name in broken:
-                to_check.append((pkg_name, pkg_version, pkg_arch))
+                to_check.append(pkg_id)
                 broken.remove(pkg_name)
             if nuninst_arch is not None and pkg_name in nuninst_arch:
                 nuninst_arch.remove(pkg_name)
diff --git a/britney_util.py b/britney_util.py
index bd34b2f..8a21a23 100644
--- a/britney_util.py
+++ b/britney_util.py
@@ -154,7 +154,7 @@ def undo_changes(lundo, inst_tester, sources, binaries,
                 if item.architecture in ['source', arch]:
                     version = binaries["testing"][arch][0][binary][VERSION]
                     del binaries["testing"][arch][0][binary]
-                    inst_tester.remove_testing_binary(binary, version, arch)
+                    inst_tester.remove_testing_binary((binary, version, arch))
 
 
     # STEP 3
@@ -170,10 +170,10 @@ def undo_changes(lundo, inst_tester, sources, binaries,
                 binaries_t_a = binaries['testing'][arch][0]
                 if p in binaries_t_a:
                     rmpkgdata = binaries_t_a[p]
-                    inst_tester.remove_testing_binary(binary, rmpkgdata[VERSION], arch)
+                    inst_tester.remove_testing_binary((binary, rmpkgdata[VERSION], arch))
                 pkgdata = undo['binaries'][p]
                 binaries_t_a[binary] = pkgdata
-                inst_tester.add_testing_binary(binary, pkgdata[VERSION], arch)
+                inst_tester.add_testing_binary((binary, pkgdata[VERSION], arch))
 
     # STEP 4
     # undo all changes to virtual packages
diff --git a/installability/tester.py b/installability/tester.py
index f578960..ad629b2 100644
--- a/installability/tester.py
+++ b/installability/tester.py
@@ -157,22 +157,20 @@ class InstallabilityTester(object):
         """
         return not self._testing.isdisjoint(pkgs)
 
-    def add_testing_binary(self, pkg_name, pkg_version, pkg_arch):
+    def add_testing_binary(self, pkg_id):
         """Add a binary package to "testing"
 
         If the package is not known, this method will throw a
         KeyError.
         """
 
-        t = (pkg_name, pkg_version, pkg_arch)
+        if pkg_id not in self._universe:
+            raise KeyError(str(pkg_id))
 
-        if t not in self._universe:
-            raise KeyError(str(t))
-
-        if t in self._broken:
-            self._testing.add(t)
-        elif t not in self._testing:
-            self._testing.add(t)
+        if pkg_id in self._broken:
+            self._testing.add(pkg_id)
+        elif pkg_id not in self._testing:
+            self._testing.add(pkg_id)
             if self._cache_inst:
                 self._stats.cache_drops += 1
             self._cache_inst = set()
@@ -180,44 +178,42 @@ class InstallabilityTester(object):
                 # Re-add broken packages as some of them may now be installable
                 self._testing |= self._cache_broken
                 self._cache_broken = set()
-            if t in self._essentials and t[2] in self._cache_ess:
+            if pkg_id in self._essentials and pkg_id[2] in self._cache_ess:
                 # Adds new essential => "pseudo-essential" set needs to be
                 # recomputed
-                del self._cache_ess[t[2]]
+                del self._cache_ess[pkg_id[2]]
 
         return True
 
-    def remove_testing_binary(self, pkg_name, pkg_version, pkg_arch):
+    def remove_testing_binary(self, pkg_id):
         """Remove a binary from "testing"
 
         If the package is not known, this method will throw a
         KeyError.
         """
 
-        t = (pkg_name, pkg_version, pkg_arch)
-
-        if t not in self._universe:
-            raise KeyError(str(t))
+        if pkg_id not in self._universe:
+            raise KeyError(str(pkg_id))
 
-        self._cache_broken.discard(t)
+        self._cache_broken.discard(pkg_id)
 
-        if t in self._testing:
-            self._testing.remove(t)
-            if t[2] in self._cache_ess and t in self._cache_ess[t[2]][0]:
+        if pkg_id in self._testing:
+            self._testing.remove(pkg_id)
+            if pkg_id[2] in self._cache_ess and pkg_id in self._cache_ess[pkg_id[2]][0]:
                 # Removes a package from the "pseudo-essential set"
-                del self._cache_ess[t[2]]
+                del self._cache_ess[pkg_id[2]]
 
-            if t not in self._revuniverse:
+            if pkg_id not in self._revuniverse:
                 # no reverse relations - safe
                 return True
-            if t not in self._broken and t in self._cache_inst:
+            if pkg_id not in self._broken and pkg_id in self._cache_inst:
                 # It is in our cache (and not guaranteed to be broken) - throw out the cache
                 self._cache_inst = set()
                 self._stats.cache_drops += 1
 
         return True
 
-    def is_installable(self, pkg_name, pkg_version, pkg_arch):
+    def is_installable(self, pkg_id):
         """Test if a package is installable in this package set
 
         The package is assumed to be in "testing" and only packages in
@@ -228,21 +224,20 @@ class InstallabilityTester(object):
         """
 
         self._stats.is_installable_calls += 1
-        t = (pkg_name, pkg_version, pkg_arch)
 
-        if t not in self._universe:
-            raise KeyError(str(t))
+        if pkg_id not in self._universe:
+            raise KeyError(str(pkg_id))
 
-        if t not in self._testing or t in self._broken:
+        if pkg_id not in self._testing or pkg_id in self._broken:
             self._stats.cache_hits += 1
             return False
 
-        if t in self._cache_inst:
+        if pkg_id in self._cache_inst:
             self._stats.cache_hits += 1
             return True
 
         self._stats.cache_misses += 1
-        return self._check_inst(t)
+        return self._check_inst(pkg_id)
 
 
     def _check_inst(self, t, musts=None, never=None, choices=None):
-- 
2.5.1

From 49a7323ecb1c1b3f68ade59e728cb40bbc258b22 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Tue, 8 Sep 2015 20:48:56 +0200
Subject: [PATCH 08/16] installability/builder.py: Reuse local variable

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 installability/builder.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/installability/builder.py b/installability/builder.py
index 2998d16..ee8b4f1 100644
--- a/installability/builder.py
+++ b/installability/builder.py
@@ -45,9 +45,9 @@ class _RelationBuilder(object):
         However, they must be added before the "build()" method is
         called.
         """
-        clause = self._itbuilder._intern_set(or_clause)
-        binary = self._binary
         itbuilder = self._itbuilder
+        clause = itbuilder._intern_set(or_clause)
+        binary = self._binary
         okay = False
         for dep_tuple in clause:
             okay = True
@@ -57,7 +57,7 @@ class _RelationBuilder(object):
 
         self._new_deps.add(clause)
         if not okay:
-            self._itbuilder._broken.add(binary)
+            itbuilder._broken.add(binary)
 
 
     def add_breaks(self, broken_binary):
-- 
2.5.1

From 681a7b8cd7f227838f9acad2f0cf7b293dbe9a5a Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Tue, 8 Sep 2015 21:44:41 +0200
Subject: [PATCH 09/16] Log when compiling Installability tester

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/britney.py b/britney.py
index f74fd81..c381466 100755
--- a/britney.py
+++ b/britney.py
@@ -292,6 +292,7 @@ class Britney(object):
 
             self._check_mismatches(arch)
 
+        self.__log("Compiling Installability tester", type="I")
         self._build_installability_tester(self.options.architectures)
 
         if not self.options.nuninst_cache:
-- 
2.5.1

From 6db0ed5ac80b882c1a67ad26c97c7f80a5c6d19b Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Tue, 8 Sep 2015 21:45:53 +0200
Subject: [PATCH 10/16] britney: Move table lookup out of
 get_dependency_solvers

The callers of get_dependency_solvers need to do those table lookups
anyway.  Moving the lookup out makes it possible to reuse the results.
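
In short, callers now fetch the package table once and pass it in, so
the lookup can be shared between calls:

    # before
    packages = get_dependency_solvers(block, arch, 'testing')
    # after
    package_t_a = self.binaries['testing'][arch]
    packages = get_dependency_solvers(block, package_t_a)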

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/britney.py b/britney.py
index c381466..cbaadfe 100755
--- a/britney.py
+++ b/britney.py
@@ -501,12 +501,13 @@ class Britney(object):
                             sat = set()
 
                             for dep_dist in binaries:
-                                pkgs = solvers(block, arch, dep_dist)
+                                dep_packages_s_a = binaries[dep_dist][arch]
+                                pkgs = solvers(block, dep_packages_s_a)
                                 for p in pkgs:
                                     # version and arch is already interned, but solvers use
                                     # the package name extracted from the field and it is therefore
                                     # not interned.
-                                    pdata = binaries[dep_dist][arch][0][p]
+                                    pdata = dep_packages_s_a[0][p]
                                     pt = (sys.intern(p), pdata[VERSION], arch)
                                     if dep:
                                         sat.add(pt)
@@ -957,23 +958,19 @@ class Britney(object):
     # Utility methods for package analysis
     # ------------------------------------
 
-    def get_dependency_solvers(self, block, arch, distribution):
+    def get_dependency_solvers(self, block, packages_s_a):
         """Find the packages which satisfy a dependency block
 
         This method returns the list of packages which satisfy a dependency
-        block (as returned by apt_pkg.parse_depends) for the given architecture
-        and distribution.
+        block (as returned by apt_pkg.parse_depends) in a package table
+        for a given suite and architecture (a la self.binaries[suite][arch]).
 
         It returns a tuple with two items: the first is a boolean which is
         True if the dependency is satisfied, the second is the list of the
         solving packages.
         """
-
         packages = []
 
-        # local copies for better performance
-        binaries = self.binaries[distribution][arch]
-
         # for every package, version and operation in the block
         for name, version, op in block:
             if ":" in name:
@@ -982,8 +979,8 @@ class Britney(object):
                 archqual = None
 
             # look for the package in unstable
-            if name in binaries[0]:
-                package = binaries[0][name]
+            if name in packages_s_a[0]:
+                package = packages_s_a[0][name]
                 # check the versioned dependency and architecture qualifier
                 # (if present)
                 if (op == '' and version == '') or apt_pkg.check_dep(package[VERSION], op, version):
@@ -991,8 +988,8 @@ class Britney(object):
                         packages.append(name)
 
             # look for the package in the virtual packages list and loop on them
-            for prov in binaries[1].get(name, []):
-                if prov not in binaries[0]: continue
+            for prov in packages_s_a[1].get(name, []):
+                if prov not in packages_s_a[0]: continue
                 # A provides only satisfies:
                 # - an unversioned dependency (per Policy Manual §7.5)
                 # - a dependency without an architecture qualifier
@@ -1013,7 +1010,9 @@ class Britney(object):
         as parameter.
         """
         # retrieve the binary package from the specified suite and arch
-        binary_u = self.binaries[suite][arch][0][pkg]
+        package_s_a = self.binaries[suite][arch]
+        package_t_a = self.binaries['testing'][arch]
+        binary_u = package_s_a[0][pkg]
 
         # local copies for better performance
         parse_depends = apt_pkg.parse_depends
@@ -1027,16 +1026,17 @@ class Britney(object):
         # for every dependency block (formed as conjunction of disjunction)
         for block, block_txt in zip(parse_depends(deps, False), deps.split(',')):
             # if the block is satisfied in testing, then skip the block
-            packages = get_dependency_solvers(block, arch, 'testing')
+            packages = get_dependency_solvers(block, package_t_a)
             if packages:
                 for p in packages:
-                    if p not in self.binaries[suite][arch][0]: continue
-                    excuse.add_sane_dep(self.binaries[suite][arch][0][p][SOURCE])
+                    if p not in package_s_a[0]:
+                        continue
+                    excuse.add_sane_dep(package_s_a[0][p][SOURCE])
                 continue
 
-            # check if the block can be satisfied in unstable, and list the solving packages
-            packages = get_dependency_solvers(block, arch, suite)
-            packages = [self.binaries[suite][arch][0][p][SOURCE] for p in packages]
+            # check if the block can be satisfied in the source suite, and list the solving packages
+            packages = get_dependency_solvers(block, package_s_a)
+            packages = [package_s_a[0][p][SOURCE] for p in packages]
 
             # if the dependency can be satisfied by the same source package, skip the block:
             # obviously both binary packages will enter testing together
-- 
2.5.1
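
For readers skimming the series, the refactoring in patch 10 boils down
to hoisting a repeated dict lookup out of the helper and into its
callers.  A minimal sketch of the pattern with hypothetical names (not
britney's actual data model, where the table is a tuple of packages and
provides):

  binaries = {'unstable': {'amd64': {'foo': '1.0', 'bar': '2.0'}}}
  block = [('foo', '', ''), ('baz', '', '')]

  def solvers(block, packages_s_a):
      # the caller passes the table directly, so one lookup serves many calls
      return [name for name, version, op in block if name in packages_s_a]

  packages_s_a = binaries['unstable']['amd64']  # looked up once ...
  for _ in range(3):                            # ... reused for every block
      assert solvers(block, packages_s_a) == ['foo']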

From 13d4d4801bd912bfa468d8c8bc876ccc4fe238de Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Wed, 27 May 2015 22:23:03 +0200
Subject: [PATCH 11/16] britney: Factor out a try_migration method

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 93 +++++++++++++++++++++++++++++++++++---------------------------
 1 file changed, 52 insertions(+), 41 deletions(-)

diff --git a/britney.py b/britney.py
index cbaadfe..41b8329 100755
--- a/britney.py
+++ b/britney.py
@@ -2264,6 +2264,55 @@ class Britney(object):
 
         return nuninst
 
+    def try_migration(self, actions, nuninst_now, lundo=None, automatic_revert=True):
+        is_accepted = True
+        affected_architectures = set()
+        item = actions
+        packages_t = self.binaries['testing']
+
+        nobreakall_arches = self.options.nobreakall_arches
+        new_arches = self.options.new_arches
+        break_arches = self.options.break_arches
+        arch = None
+
+        # apply the changes
+        affected, undo = self.doop_source(item, lundo)
+        single_undo = (undo, item)
+
+        # Copy nuninst_comp - we have to deep clone affected
+        # architectures.
+
+        # NB: We do this *after* updating testing as we have to filter out
+        # removed binaries.  Otherwise, uninstallable binaries that were
+        # removed by the item would still be counted.
+        if item.architecture == 'source':
+            affected_architectures = set(self.options.architectures)
+        else:
+            affected_architectures.add(item.architecture)
+        nuninst_after = clone_nuninst(nuninst_now, packages_t, affected_architectures)
+
+        # check the affected packages on all the architectures
+        for arch in affected_architectures:
+            check_archall = arch in nobreakall_arches
+
+            self._check_packages(packages_t, arch, affected, check_archall, nuninst_after)
+
+            # if the uninstallability counter is worse than before, break the loop
+            if automatic_revert and len(nuninst_after[arch]) > len(nuninst_now[arch]):
+                # ... except for a few special cases
+                if (item.architecture != 'source' and arch not in new_arches) or \
+                   (arch not in break_arches):
+                    is_accepted = False
+                    break
+
+        # check if the action improved the uninstallability counters
+        if not is_accepted and automatic_revert:
+            undo_list = [single_undo]
+            undo_changes(undo_list, self._inst_tester, self.sources, self.binaries)
+
+        return (is_accepted, nuninst_after, single_undo, arch)
+
+
 
     def iter_packages(self, packages, selected, nuninst=None, lundo=None):
         """Iter on the list of actions and apply them one-by-one
@@ -2285,14 +2334,7 @@ class Britney(object):
             nuninst_comp = self.nuninst_orig
 
         # local copies for better performance
-        binaries = self.binaries['testing']
-        sources = self.sources
-        architectures = self.options.architectures
-        nobreakall_arches = self.options.nobreakall_arches
-        new_arches = self.options.new_arches
-        break_arches = self.options.break_arches
         dependencies = self.dependencies
-        check_packages = partial(self._check_packages, binaries)
 
         self.output_write("recur: [] %s %d/%d\n" % (",".join(x.uvname for x in selected), len(packages), len(extra)))
 
@@ -2320,41 +2362,12 @@ class Britney(object):
 
             self.output_write("trying: %s\n" % (item.uvname))
 
-            better = True
-
-            # apply the changes
-            affected, undo = self.doop_source(item, lundo)
-
-            # Copy nuninst_comp - we have to deep clone affected
-            # architectures.
-
-            # NB: We do this *after* updating testing as we have to filter out
-            # removed binaries.  Otherwise, uninstallable binaries that were
-            # removed by the item would still be counted.
-            if item.architecture == 'source':
-                affected_architectures = architectures
-            else:
-                affected_architectures = [item.architecture]
-            nuninst = clone_nuninst(nuninst_comp, binaries, affected_architectures)
-
-
-            # check the affected packages on all the architectures
-            for arch in affected_architectures:
-                check_archall = arch in nobreakall_arches
-
-                check_packages(arch, affected, check_archall, nuninst)
-
-                # if the uninstallability counter is worse than before, break the loop
-                if ((item.architecture != 'source' and arch not in new_arches) or \
-                    (arch not in break_arches)) and len(nuninst[arch]) > len(nuninst_comp[arch]):
-                    better = False
-                    break
-
+            (better, nuninst, undo_item, arch) = self.try_migration(item, nuninst_comp, lundo=lundo)
 
             # check if the action improved the uninstallability counters
             if better:
                 if lundo is not None:
-                    lundo.append((undo, item))
+                    lundo.append(undo_item)
                 selected.append(item)
                 packages.extend(extra)
                 extra = []
@@ -2368,6 +2381,7 @@ class Britney(object):
                     self.output_write("  most: (%d) .. %s\n" % (len(selected), " ".join(x.uvname for x in selected[-20:])))
                 nuninst_comp = nuninst
             else:
+                # NB: try_migration already reverted this for us, so just print the results and move on
                 self.output_write("skipped: %s (%d <- %d)\n" % (item.uvname, len(extra), len(packages)))
                 self.output_write("    got: %s\n" % (self.eval_nuninst(nuninst, item.architecture != 'source' and nuninst_comp or None)))
                 self.output_write("    * %s: %s\n" % (arch, ", ".join(sorted(b for b in nuninst[arch] if b not in nuninst_comp[arch]))))
@@ -2375,9 +2389,6 @@ class Britney(object):
                 extra.append(item)
                 if not mark_passed:
                     skipped.append(item)
-                single_undo = [(undo, item)]
-                # (local-scope) binaries is actually self.binaries["testing"] so we cannot use it here.
-                undo_changes(single_undo, self._inst_tester, sources, self.binaries)
 
 
         self.output_write(" finish: [%s]\n" % ",".join( x.uvname for x in selected ))
-- 
2.5.1
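
A note on the new helper's contract: try_migration applies the item,
rechecks the uninstallability counters for the affected architectures
and, when the result is rejected, reverts the change itself; the caller
only records the undo information on success.  A self-contained toy
model of that contract (hypothetical state, not britney's undo format):

  state = {'uninstallable': 3}

  def try_migration(state, delta, automatic_revert=True):
      undo = dict(state)                # snapshot for the revert
      state['uninstallable'] += delta   # "apply" the migration
      accepted = state['uninstallable'] <= undo['uninstallable']
      if not accepted and automatic_revert:
          state.update(undo)            # rejected: roll back ourselves
      return accepted, undo

  ok, _ = try_migration(state, +2)      # a regression is auto-reverted
  assert not ok and state['uninstallable'] == 3
  ok, _ = try_migration(state, -1)      # an improvement is kept
  assert ok and state['uninstallable'] == 2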

From 65f74c226d2cc4fb34a7775881a4f01a813cd354 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Thu, 28 May 2015 08:29:50 +0200
Subject: [PATCH 12/16] Make try_migration support multiple actions

This made iter_packages_hint a thin wrapper around try_migration, so
it was inlined into its only caller "do_all".

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 104 +++++++++++++++++++++++++++----------------------------------
 1 file changed, 46 insertions(+), 58 deletions(-)

diff --git a/britney.py b/britney.py
index 41b8329..f1a1e60 100755
--- a/britney.py
+++ b/britney.py
@@ -2219,51 +2219,6 @@ class Britney(object):
         # contain anything new.
         assert affected.issuperset(to_check)
 
-    def iter_packages_hint(self, hinted_packages, lundo=None):
-        """Iter on hinted list of actions and apply them in one go
-
-        This method applies the changes from "hinted_packages" to
-        testing and computes the uninstallability counters after te
-        actions are performed.
-
-        The method returns the new uninstallability counters.
-        """
-
-        removals = set()
-        all_affected = set()
-        nobreakall_arches = self.options.nobreakall_arches
-        packages_t = self.binaries['testing']
-        check_packages = partial(self._check_packages, packages_t)
-
-
-        for item in hinted_packages:
-            _, rms, _ = self._compute_groups(item.package, item.suite,
-                                             item.architecture,
-                                             item.is_removal,
-                                             allow_smooth_updates=False)
-            removals.update(rms)
-
-        for item in hinted_packages:
-            affected, undo = self.doop_source(item,
-                                              removals=removals)
-            all_affected.update(affected)
-            if lundo is not None:
-                lundo.append((undo,item))
-
-        # deep copy nuninst (in case the hint is undone)
-        # NB: We do this *after* updating testing as we have to filter out
-        # removed binaries.  Otherwise, uninstallable binaries that were
-        # removed by the hint would still be counted.
-        nuninst = clone_nuninst(self.nuninst_orig, packages_t,
-                                self.options.architectures)
-
-        for arch in self.options.architectures:
-            check_archall = arch in nobreakall_arches
-
-            check_packages(arch, all_affected, check_archall, nuninst)
-
-        return nuninst
-
     def try_migration(self, actions, nuninst_now, lundo=None, automatic_revert=True):
         is_accepted = True
         affected_architectures = set()
@@ -2275,9 +2230,35 @@ class Britney(object):
         break_arches = self.options.break_arches
         arch = None
 
-        # apply the changes
-        affected, undo = self.doop_source(item, lundo)
-        single_undo = (undo, item)
+        if len(actions) == 1:
+            item = actions[0]
+            # apply the changes
+            affected, undo = self.doop_source(item, lundo)
+            undo_list = [(undo, item)]
+            if item.architecture == 'source':
+                affected_architectures = set(self.options.architectures)
+            else:
+                affected_architectures.add(item.architecture)
+        else:
+            undo_list = []
+            removals = set()
+            affected = set()
+            for item in actions:
+                _, rms, _ = self._compute_groups(item.package, item.suite,
+                                                 item.architecture,
+                                                 item.is_removal,
+                                                 allow_smooth_updates=False)
+                removals.update(rms)
+                affected_architectures.add(item.architecture)
+
+            if 'source' in affected_architectures:
+                affected_architectures = set(self.options.architectures)
+
+            for item in actions:
+                item_affected, undo = self.doop_source(item,
+                                                       removals=removals)
+                affected.update(item_affected)
+                undo_list.append((undo, item))
 
         # Copy nuninst_comp - we have to deep clone affected
         # architectures.
@@ -2285,10 +2266,7 @@ class Britney(object):
         # NB: We do this *after* updating testing as we have to filter out
         # removed binaries.  Otherwise, uninstallable binaries that were
         # removed by the item would still be counted.
-        if item.architecture == 'source':
-            affected_architectures = set(self.options.architectures)
-        else:
-            affected_architectures.add(item.architecture)
+
         nuninst_after = clone_nuninst(nuninst_now, packages_t, affected_architectures)
 
         # check the affected packages on all the architectures
@@ -2307,10 +2285,10 @@ class Britney(object):
 
         # check if the action improved the uninstallability counters
         if not is_accepted and automatic_revert:
-            undo_list = [single_undo]
-            undo_changes(undo_list, self._inst_tester, self.sources, self.binaries)
+            undo_copy = list(reversed(undo_list))
+            undo_changes(undo_copy, self._inst_tester, self.sources, self.binaries)
 
-        return (is_accepted, nuninst_after, single_undo, arch)
+        return (is_accepted, nuninst_after, undo_list, arch)
 
 
 
@@ -2362,12 +2340,12 @@ class Britney(object):
 
             self.output_write("trying: %s\n" % (item.uvname))
 
-            (better, nuninst, undo_item, arch) = self.try_migration(item, nuninst_comp, lundo=lundo)
+            (better, nuninst, undo_list, arch) = self.try_migration([item], nuninst_comp, lundo=lundo)
 
             # check if the action improved the uninstallability counters
             if better:
                 if lundo is not None:
-                    lundo.append(undo_item)
+                    lundo.extend(undo_list)
                 selected.append(item)
                 packages.extend(extra)
                 extra = []
@@ -2445,7 +2423,17 @@ class Britney(object):
 
         if init:
             # init => a hint (e.g. "easy") - so do the hint run
-            nuninst_end = self.iter_packages_hint(selected, lundo=lundo)
+            (better, nuninst_end, undo_list, _) = self.try_migration(selected,
+                                                                     self.nuninst_orig,
+                                                                     lundo=lundo,
+                                                                     automatic_revert=False)
+            if force:
+                # Force implies "unconditionally better"
+                better = True
+
+            if lundo is not None:
+                lundo.extend(undo_list)
+
             if recurse:
                 # Ensure upgrade_me and selected do not overlap, if we
                 # follow-up with a recurse ("hint"-hint).
-- 
2.5.1
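
One detail worth calling out: once several actions are applied as a
unit, the revert replays the undo entries in reverse order of
application (note the list(reversed(undo_list)) above).  A toy example
of why the order matters (hypothetical undo format, not britney's):

  state = {'pkg': 1}
  undo_list = []

  for new_version in (2, 3):
      undo_list.append(('pkg', state['pkg']))  # remember the overwritten value
      state['pkg'] = new_version

  for key, old in reversed(undo_list):         # newest change is undone first
      state[key] = old

  assert state == {'pkg': 1}   # replaying in forward order would leave 2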

From 326cc0d98b4f84dc72e52c6a04d3c96ed721963a Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Fri, 11 May 2012 16:31:50 +0200
Subject: [PATCH 13/16] Rewrite the main run to use the solver and allow
 combined migrations

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 130 ++++++++++++++++++++++---------------------------------------
 1 file changed, 46 insertions(+), 84 deletions(-)

diff --git a/britney.py b/britney.py
index f1a1e60..b368759 100755
--- a/britney.py
+++ b/britney.py
@@ -2233,7 +2233,7 @@ class Britney(object):
         if len(actions) == 1:
             item = actions[0]
             # apply the changes
-            affected, undo = self.doop_source(item, lundo)
+            affected, undo = self.doop_source(item, hint_undo=lundo)
             undo_list = [(undo, item)]
             if item.architecture == 'source':
                 affected_architectures = set(self.options.architectures)
@@ -2256,6 +2256,7 @@ class Britney(object):
 
             for item in actions:
                 item_affected, undo = self.doop_source(item,
+                                                       hint_undo=lundo,
                                                        removals=removals)
                 affected.update(item_affected)
                 undo_list.append((undo, item))
@@ -2301,82 +2302,66 @@ class Britney(object):
         final result is successful, otherwise (None, None).
         """
         extra = []
-        deferred = []
-        skipped = []
-        mark_passed = False
-        position = len(packages)
+        groups = set()
+        for y in sorted((y for y in packages), key=attrgetter('uvname')):
+            updates, rms, _ = self._compute_groups(y.package, y.suite, y.architecture, y.is_removal)
+            groups.add((y, frozenset(updates), frozenset(rms)))
 
+        if selected is None:
+            selected = []
         if nuninst:
-            nuninst_comp = nuninst
+            nuninst_orig = nuninst
         else:
-            nuninst_comp = self.nuninst_orig
-
-        # local copies for better performance
-        dependencies = self.dependencies
-
+            nuninst_orig = self.nuninst_orig
+        nuninst_last_accepted = nuninst_orig
+        maybe_reschuled = []
+        worklist = self._inst_tester.solve_groups(groups)
+        worklist.reverse()
         self.output_write("recur: [] %s %d/%d\n" % (",".join(x.uvname for x in selected), len(packages), len(extra)))
 
-        # loop on the packages (or better, actions)
-        while packages:
-            item = packages.pop(0)
-
-            # this is the marker for the first loop
-            if not mark_passed and position < 0:
-                mark_passed = True
-                packages.extend(deferred)
-                del deferred
-            else: position -= 1
-
-            # defer packages if their dependency has been already skipped
-            if not mark_passed:
-                defer = False
-                for p in dependencies.get(item, []):
-                    if p in skipped:
-                        deferred.append(item)
-                        skipped.append(item)
-                        defer = True
-                        break
-                if defer: continue
-
-            self.output_write("trying: %s\n" % (item.uvname))
-
-            (better, nuninst, undo_list, arch) = self.try_migration([item], nuninst_comp, lundo=lundo)
-
-            # check if the action improved the uninstallability counters
-            if better:
+        while worklist:
+            comp = worklist.pop()
+            comp_name = ' '.join(item.uvname for item in comp)
+            self.output_write("trying: %s\n" % comp_name)
+            accepted, nuninst_after, comp_undo, failed_arch = self.try_migration(comp, nuninst_last_accepted, lundo)
+            if accepted:
+                nuninst_last_accepted = nuninst_after
+                selected.extend(comp)
                 if lundo is not None:
-                    lundo.extend(undo_list)
-                selected.append(item)
-                packages.extend(extra)
-                extra = []
-                self.output_write("accepted: %s\n" % (item.uvname))
-                self.output_write("   ori: %s\n" % (self.eval_nuninst(self.nuninst_orig)))
-                self.output_write("   pre: %s\n" % (self.eval_nuninst(nuninst_comp)))
-                self.output_write("   now: %s\n" % (self.eval_nuninst(nuninst, nuninst_comp)))
+                    lundo.extend(comp_undo)
+                self.output_write("accepted: %s\n" % comp_name)
+                self.output_write("   ori: %s\n" % (self.eval_nuninst(nuninst_orig)))
+                self.output_write("   pre: %s\n" % (self.eval_nuninst(nuninst_last_accepted)))
+                self.output_write("   now: %s\n" % (self.eval_nuninst(nuninst_after)))
                 if len(selected) <= 20:
-                    self.output_write("   all: %s\n" % (" ".join( x.uvname for x in selected )))
+                    self.output_write("   all: %s\n" % (" ".join(x.uvname for x in selected)))
                 else:
                     self.output_write("  most: (%d) .. %s\n" % (len(selected), " ".join(x.uvname for x in selected[-20:])))
-                nuninst_comp = nuninst
             else:
+                broken = sorted(b for b in nuninst_after[failed_arch]
+                                if b not in nuninst_last_accepted[failed_arch])
+                compare_nuninst = None
+                if any(item for item in comp if item.architecture != 'source'):
+                    compare_nuninst = nuninst_last_accepted
                 # NB: try_migration already reverted this for us, so just print the results and move on
-                self.output_write("skipped: %s (%d <- %d)\n" % (item.uvname, len(extra), len(packages)))
-                self.output_write("    got: %s\n" % (self.eval_nuninst(nuninst, item.architecture != 'source' and nuninst_comp or None)))
-                self.output_write("    * %s: %s\n" % (arch, ", ".join(sorted(b for b in nuninst[arch] if b not in nuninst_comp[arch]))))
-
-                extra.append(item)
-                if not mark_passed:
-                    skipped.append(item)
+                self.output_write("skipped: %s (%d, %d)\n" % (comp_name, len(maybe_reschuled), len(worklist)))
+                self.output_write("    got: %s\n" % (self.eval_nuninst(nuninst_after, compare_nuninst)))
+                self.output_write("    * %s: %s\n" % (failed_arch, ", ".join(broken)))
 
+                if len(comp) > 1:
+                    self.output_write("    - splitting the component into single items and retrying them\n")
+                    worklist.extend([item] for item in comp)
+                else:
+                    extra.extend(comp)
 
         self.output_write(" finish: [%s]\n" % ",".join( x.uvname for x in selected ))
         self.output_write("endloop: %s\n" % (self.eval_nuninst(self.nuninst_orig)))
-        self.output_write("    now: %s\n" % (self.eval_nuninst(nuninst_comp)))
+        self.output_write("    now: %s\n" % (self.eval_nuninst(nuninst_last_accepted)))
         self.output_write(eval_uninst(self.options.architectures,
-                                      newly_uninst(self.nuninst_orig, nuninst_comp)))
+                                      newly_uninst(self.nuninst_orig, nuninst_last_accepted)))
         self.output_write("\n")
 
-        return (nuninst_comp, extra)
+        return (nuninst_last_accepted, extra)
 
 
     def do_all(self, hinttype=None, init=None, actions=None):
@@ -2807,11 +2792,10 @@ class Britney(object):
         self.upgrade_me = [ make_migrationitem(x, self.sources) for x in upgrade_me ]
         self.upgrade_me.extend(make_migrationitem(x, self.sources) for x in removals)
 
-
     def auto_hinter(self):
         """Auto-generate "easy" hints.
 
-        This method attempts to generate "easy" hints for sets of packages which    
+        This method attempts to generate "easy" hints for sets of packages which
         must migrate together. Beginning with a package which does not depend on
         any other package (in terms of excuses), a list of dependencies and
         reverse dependencies is recursively created.
@@ -2824,7 +2808,7 @@ class Britney(object):
         excuses relationships. If they build a circular dependency, which we already
         know as not-working with the standard do_all algorithm, try to `easy` them.
         """
-        self.__log("> Processing hints from the auto hinter [Partial-ordering]",
+        self.__log("> Processing hints from the auto hinter",
                    type="I")
 
         # consider only excuses which are valid candidates
@@ -2833,28 +2817,6 @@ class Britney(object):
         excuses_deps = dict((name, set(excuse.deps)) for name, excuse in excuses.items())
         sources_t = self.sources['testing']
 
-        groups = set()
-        for y in sorted((y for y in self.upgrade_me if y.uvname in excuses), key=attrgetter('uvname')):
-            if y.is_removal and y.package not in sources_t:
-                # Already removed
-                continue
-            if not y.is_removal:
-                excuse = excuses[y.uvname]
-                if y.architecture == 'source' and y.uvname in sources_t and sources_t[y.uvname][VERSION] == excuse.ver[1]:
-                    # Already migrated
-                    continue
-            adds, rms, _ = self._compute_groups(y.package, y.suite,
-                                                y.architecture, y.is_removal,
-                                                include_hijacked=True)
-            groups.add((y, frozenset(adds), frozenset(rms)))
-
-        for comp in self._inst_tester.solve_groups(groups):
-            if len(comp) > 1:
-                self.do_hint("easy", "autohinter", comp)
-
-        self.__log("> Processing hints from the auto hinter [Original]",
-                   type="I")
-
         def find_related(e, hint, circular_first=False):
             if e not in excuses:
                 return False
-- 
2.5.1
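
The control flow introduced here is a worklist with component
splitting: try a whole component, and if it regresses, push its members
back as single items.  A minimal sketch of that loop (hypothetical
acceptance predicate, not britney's nuninst check):

  def process(components, acceptable):
      selected, rescheduled = [], []
      worklist = list(reversed(components))
      while worklist:
          comp = worklist.pop()
          if acceptable(comp):
              selected.extend(comp)
          elif len(comp) > 1:
              # split the component into single items and retry them
              worklist.extend([item] for item in comp)
          else:
              rescheduled.append(comp[0])
      return selected, rescheduled

  def ok(comp):
      # 'bad' only migrates on its own here, so the pair is split and retried
      return 'bad' not in comp or comp == ['bad']

  selected, rescheduled = process([['good', 'bad']], ok)
  assert sorted(selected) == ['bad', 'good'] and rescheduled == []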

From 37c179f5f3d8a82b4ba02a9542e42e0eef2b3394 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Sat, 30 May 2015 12:35:20 +0200
Subject: [PATCH 14/16] solver: Emit single items together and early

Technically, we simply sort by the "hint-size", but for the most part
this has the effect of emitting single items earlier.  Note this is
/not/ a perfect solution, but it is a simple heuristic exploiting the
common case.

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 installability/solver.py | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/installability/solver.py b/installability/solver.py
index 9e1f1d8..9e6e1b7 100644
--- a/installability/solver.py
+++ b/installability/solver.py
@@ -17,6 +17,7 @@
 from __future__ import print_function
 
 import os
+from collections import deque
 
 from installability.tester import InstallabilityTester
 from britney_util import (ifilter_only, iter_except)
@@ -54,7 +55,7 @@ class InstallabilitySolver(InstallabilityTester):
         revuniverse = self._revuniverse
         result = []
         emitted = set()
-        check = set()
+        queue = deque()
         order = {}
         ptable = {}
         key2item = {}
@@ -226,22 +227,26 @@ class InstallabilitySolver(InstallabilityTester):
         if debug_solver:
             print("N: -- PARTIAL ORDER --")
 
+        initial_round = []
         for com in sorted(order):
             if debug_solver and order[com]['before']:
                 print("N: %s <= %s" % (com, str(sorted(order[com]['before']))))
             if not order[com]['after']:
                 # This component can be scheduled immediately, add it
-                # to "check"
-                check.add(com)
+                # to the queue
+                initial_round.append(com)
             elif debug_solver:
                 print("N: %s >= %s" % (com, str(sorted(order[com]['after']))))
 
+        queue.extend(sorted(initial_round, key=len))
+        del initial_round
+
         if debug_solver:
             print("N: -- END PARTIAL ORDER --")
             print("N: -- LINEARIZED ORDER --")
 
-        for cur in iter_except(check.pop, KeyError):
-            if order[cur]['after'] <= emitted:
+        for cur in iter_except(queue.popleft, IndexError):
+            if order[cur]['after'] <= emitted and cur not in emitted:
                 # This item is ready to be emitted right now
                 if debug_solver:
                     print("N: %s -- %s" % (cur, sorted(scc[cur])))
@@ -249,10 +254,10 @@ class InstallabilitySolver(InstallabilityTester):
                 result.append([key2item[x] for x in scc[cur]])
                 if order[cur]['before']:
                     # There are components that come after this one.
-                    # Add it to "check":
+                    # Add it to queue:
                     # - if it is ready, it will be emitted.
                     # - else, it will be dropped and re-added later.
-                    check.update(order[cur]['before'] - emitted)
+                    queue.extend(sorted(order[cur]['before'] - emitted, key=len))
 
         if debug_solver:
             print("N: -- END LINEARIZED ORDER --")
-- 
2.5.1
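
The heuristic itself is tiny: both the initial round and every
follow-up insertion are sorted by component size before entering the
FIFO queue, so single items tend to be emitted first.  Illustrated with
hypothetical component tuples:

  from collections import deque

  ready = [('a', 'b', 'c'), ('d',), ('e', 'f'), ('g',)]

  queue = deque()
  queue.extend(sorted(ready, key=len))   # smallest "hint-size" first

  assert [len(c) for c in queue] == [1, 1, 2, 3]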

From bbc6c34ca569c611db2406b66cc537a641ebf4aa Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Mon, 7 Sep 2015 18:07:37 +0200
Subject: [PATCH 15/16] britney.py: Make iter_packages repeat until no more
 actions work

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 98 +++++++++++++++++++++++++++++++++-----------------------------
 1 file changed, 53 insertions(+), 45 deletions(-)

diff --git a/britney.py b/britney.py
index b368759..804f24c 100755
--- a/britney.py
+++ b/britney.py
@@ -2291,21 +2291,22 @@ class Britney(object):
 
         return (is_accepted, nuninst_after, undo_list, arch)
 
-
-
     def iter_packages(self, packages, selected, nuninst=None, lundo=None):
         """Iter on the list of actions and apply them one-by-one
 
         This method applies the changes from `packages` to testing, checking the uninstallability
         counters for every action performed. If the action does not improve them, it is reverted.
         The method returns the new uninstallability counters and the remaining actions if the
-        final result is successful, otherwise (None, None).
+        final result is successful, otherwise (None, []).
         """
-        extra = []
-        groups = set()
+        group_info = {}
+        maybe_rescheduled = packages
+        changed = True
+
         for y in sorted((y for y in packages), key=attrgetter('uvname')):
             updates, rms, _ = self._compute_groups(y.package, y.suite, y.architecture, y.is_removal)
-            groups.add((y, frozenset(updates), frozenset(rms)))
+            result = (y, frozenset(updates), frozenset(rms))
+            group_info[y] = result
 
         if selected is None:
             selected = []
@@ -2313,46 +2314,53 @@ class Britney(object):
             nuninst_orig = nuninst
         else:
             nuninst_orig = self.nuninst_orig
+
         nuninst_last_accepted = nuninst_orig
-        maybe_reschuled = []
-        worklist = self._inst_tester.solve_groups(groups)
-        worklist.reverse()
-        self.output_write("recur: [] %s %d/%d\n" % (",".join(x.uvname for x in selected), len(packages), len(extra)))
-
-        while worklist:
-            comp = worklist.pop()
-            comp_name = ' '.join(item.uvname for item in comp)
-            self.output_write("trying: %s\n" % comp_name)
-            accepted, nuninst_after, comp_undo, failed_arch = self.try_migration(comp, nuninst_last_accepted, lundo)
-            if accepted:
-                nuninst_last_accepted = nuninst_after
-                selected.extend(comp)
-                if lundo is not None:
-                    lundo.extend(comp_undo)
-                self.output_write("accepted: %s\n" % comp_name)
-                self.output_write("   ori: %s\n" % (self.eval_nuninst(nuninst_orig)))
-                self.output_write("   pre: %s\n" % (self.eval_nuninst(nuninst_last_accepted)))
-                self.output_write("   now: %s\n" % (self.eval_nuninst(nuninst_after)))
-                if len(selected) <= 20:
-                    self.output_write("   all: %s\n" % (" ".join(x.uvname for x in selected)))
-                else:
-                    self.output_write("  most: (%d) .. %s\n" % (len(selected), " ".join(x.uvname for x in selected[-20:])))
-            else:
-                broken = sorted(b for b in nuninst_after[failed_arch]
-                                if b not in nuninst_last_accepted[failed_arch])
-                compare_nuninst = None
-                if any(item for item in comp if item.architecture != 'source'):
-                    compare_nuninst = nuninst_last_accepted
-                # NB: try_migration already reverted this for us, so just print the results and move on
-                self.output_write("skipped: %s (%d, %d)\n" % (comp_name, len(maybe_reschuled), len(worklist)))
-                self.output_write("    got: %s\n" % (self.eval_nuninst(nuninst_after, compare_nuninst)))
-                self.output_write("    * %s: %s\n" % (failed_arch, ", ".join(broken)))
-
-                if len(comp) > 1:
-                    self.output_write("    - splitting the component into single items and retrying them\n")
-                    worklist.extend([item] for item in comp)
+
+        self.output_write("recur: [] %s %d/0\n" % (",".join(x.uvname for x in selected), len(packages)))
+        while changed:
+            changed = False
+            groups = {group_info[x] for x in maybe_rescheduled}
+            worklist = self._inst_tester.solve_groups(groups)
+            maybe_rescheduled = []
+
+            worklist.reverse()
+
+            while worklist:
+                comp = worklist.pop()
+                comp_name = ' '.join(item.uvname for item in comp)
+                self.output_write("trying: %s\n" % comp_name)
+                accepted, nuninst_after, comp_undo, failed_arch = self.try_migration(comp, nuninst_last_accepted, lundo)
+                if accepted:
+                    nuninst_last_accepted = nuninst_after
+                    selected.extend(comp)
+                    changed = True
+                    if lundo is not None:
+                        lundo.extend(comp_undo)
+                    self.output_write("accepted: %s\n" % comp_name)
+                    self.output_write("   ori: %s\n" % (self.eval_nuninst(nuninst_orig)))
+                    self.output_write("   pre: %s\n" % (self.eval_nuninst(nuninst_last_accepted)))
+                    self.output_write("   now: %s\n" % (self.eval_nuninst(nuninst_after)))
+                    if len(selected) <= 20:
+                        self.output_write("   all: %s\n" % (" ".join(x.uvname for x in selected)))
+                    else:
+                        self.output_write("  most: (%d) .. %s\n" % (len(selected), " ".join(x.uvname for x in selected[-20:])))
                 else:
-                    extra.extend(comp)
+                    broken = sorted(b for b in nuninst_after[failed_arch]
+                                    if b not in nuninst_last_accepted[failed_arch])
+                    compare_nuninst = None
+                    if any(item for item in comp if item.architecture != 'source'):
+                        compare_nuninst = nuninst_last_accepted
+                    # NB: try_migration already reverted this for us, so just print the results and move on
+                    self.output_write("skipped: %s (%d, %d)\n" % (comp_name, len(maybe_rescheduled), len(worklist)))
+                    self.output_write("    got: %s\n" % (self.eval_nuninst(nuninst_after, compare_nuninst)))
+                    self.output_write("    * %s: %s\n" % (failed_arch, ", ".join(broken)))
+
+                    if len(comp) > 1:
+                        self.output_write("    - splitting the component into single items and retrying them\n")
+                        worklist.extend([item] for item in comp)
+                    else:
+                        maybe_rescheduled.append(comp[0])
 
         self.output_write(" finish: [%s]\n" % ",".join( x.uvname for x in selected ))
         self.output_write("endloop: %s\n" % (self.eval_nuninst(self.nuninst_orig)))
@@ -2361,7 +2369,7 @@ class Britney(object):
                                       newly_uninst(self.nuninst_orig, nuninst_last_accepted)))
         self.output_write("\n")
 
-        return (nuninst_last_accepted, extra)
+        return (nuninst_last_accepted, maybe_rescheduled)
 
 
     def do_all(self, hinttype=None, init=None, actions=None):
-- 
2.5.1
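
The shape of the new outer loop is a fixed-point iteration: keep
re-running the not-yet-accepted items until a full pass accepts
nothing, so an item that failed only because another item had not
migrated yet gets a second chance without the old defer/skip
bookkeeping.  As a toy (hypothetical dependency check, not britney's):

  deps = {'a': [], 'b': ['a'], 'c': ['b']}
  maybe_rescheduled = ['c', 'b', 'a']   # worst-case order
  accepted = set()

  changed = True
  while changed:
      changed = False
      worklist, maybe_rescheduled = maybe_rescheduled, []
      for item in worklist:
          if all(d in accepted for d in deps[item]):
              accepted.add(item)        # this pass made progress
              changed = True
          else:
              maybe_rescheduled.append(item)

  assert accepted == {'a', 'b', 'c'} and maybe_rescheduled == []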

From 9c3d0afaf5147fb76c6b33ebeae7501dc9973c07 Mon Sep 17 00:00:00 2001
From: Niels Thykier <niels@thykier.net>
Date: Mon, 7 Sep 2015 18:08:28 +0200
Subject: [PATCH 16/16] britney.py: Remove sort_actions, we no longer need it

Signed-off-by: Niels Thykier <niels@thykier.net>
---
 britney.py | 43 -------------------------------------------
 1 file changed, 43 deletions(-)

diff --git a/britney.py b/britney.py
index 804f24c..b75dd2d 100755
--- a/britney.py
+++ b/britney.py
@@ -2481,7 +2481,6 @@ class Britney(object):
             if not actions:
                 if recurse:
                     self.upgrade_me = extra
-                    self.sort_actions()
                 else:
                     self.upgrade_me = [x for x in self.upgrade_me if x not in set(selected)]
         else:
@@ -2759,47 +2758,6 @@ class Britney(object):
         self.do_all(hinttype, _pkgvers)
         return True
 
-    def sort_actions(self):
-        """Sort actions in a smart way
-
-        This method sorts the list of actions in a smart way. In detail, it uses
-        as the base sort the number of days the excuse is old, then reorders packages
-        so the ones with most reverse dependencies are at the end of the loop.
-        If an action depends on another one, it is put after it.
-        """
-        uvnames = frozenset(y.uvname for y in self.upgrade_me)
-        excuses = [e for e in self.excuses if e.name in uvnames]
-        removals = []
-        upgrade_me = []
-
-        for e in excuses:
-            # We order removals and regular migrations differently, so
-            # split them out early.
-            if e.name[0] == '-':
-                removals.append(e.name)
-            else:
-                upgrade_me.append(e.name)
-
-        for e in excuses:
-            # put the item (regular migration) in a good position
-            # checking its dependencies
-            pos = []
-            udeps = [upgrade_me.index(x) for x in e.deps if x in upgrade_me and x != e.name]
-            if udeps:
-                pos.append(max(udeps))
-            sdeps = [upgrade_me.index(x) for x in e.sane_deps if x in upgrade_me and x != e.name]
-            if sdeps:
-                pos.append(min(sdeps))
-            if not pos:
-                continue
-            upgrade_me.remove(e.name)
-            upgrade_me.insert(max(pos)+1, e.name)
-            self.dependencies[e.name] = e.deps
-
-        # replace the list of actions with the new one
-        self.upgrade_me = [ make_migrationitem(x, self.sources) for x in upgrade_me ]
-        self.upgrade_me.extend(make_migrationitem(x, self.sources) for x in removals)
-
     def auto_hinter(self):
         """Auto-generate "easy" hints.
 
@@ -2920,7 +2878,6 @@ class Britney(object):
         # if no actions are provided, build the excuses and sort them
         elif not self.options.actions:
             self.write_excuses()
-            self.sort_actions()
         # otherwise, use the actions provided by the command line
         else:
             self.upgrade_me = self.options.actions.split()
-- 
2.5.1

diff --git a/live-2011-12-13/expected b/live-2011-12-13/expected
index c492b51..3913f86 100644
--- a/live-2011-12-13/expected
+++ b/live-2011-12-13/expected
@@ -31743,9 +31743,11 @@ tcm 2.20+TSQD-4.2 amd64 graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 amd64 net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 amd64 net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 amd64 net
@@ -65728,9 +65730,11 @@ tcm 2.20+TSQD-4.2 armel graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 armel net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 armel net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 armel net
@@ -100511,9 +100515,11 @@ tcm 2.20+TSQD-4.2 i386 graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 i386 net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 i386 net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 i386 net
@@ -133788,9 +133794,11 @@ tcm 2.20+TSQD-4.2 ia64 graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 ia64 net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 ia64 net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 ia64 net
@@ -165986,8 +165994,10 @@ tclxapian 1.2.7-1 kfreebsd-amd64 libs
 tclxml 3.2-1 kfreebsd-amd64 devel
 tcm-doc 2.20+TSQD-4.2 all doc
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 kfreebsd-amd64 net
@@ -197948,8 +197958,10 @@ tclxapian 1.2.7-1 kfreebsd-i386 libs
 tclxml 3.2-1 kfreebsd-i386 devel
 tcm-doc 2.20+TSQD-4.2 all doc
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 kfreebsd-i386 net
@@ -231433,9 +231445,11 @@ tcm 2.20+TSQD-4.2 mips graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 mips net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 mips net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 mips net
@@ -265155,9 +265169,11 @@ tcm 2.20+TSQD-4.2 mipsel graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 mipsel net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 mipsel net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 mipsel net
@@ -299317,9 +299333,11 @@ tcm 2.20+TSQD-4.2 powerpc graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 powerpc net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 powerpc net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 powerpc net
@@ -333127,9 +333145,11 @@ tcm 2.20+TSQD-4.2 s390 graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 s390 net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 s390 net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 s390 net
@@ -366956,9 +366976,11 @@ tcm 2.20+TSQD-4.2 sparc graphics
 tcm-doc 2.20+TSQD-4.2 all doc
 tcng 10b-3 sparc net
 tcos 0.89.86 all net
+tcos-configurator 1.22 all gnome
 tcos-core 0.89.86 sparc net
 tcos-standalone 0.89.86 all net
 tcos-tftp-dhcp 0.89.86 all net
+tcosconfig 0.3.30 all gnome
 tcosmonitor 0.2.43 all gnome
 tcosmonitor-common 0.2.43 all gnome
 tcpd 7.6.q-21 sparc net
@@ -384754,6 +384776,8 @@ tclxml 3.2-1 source devel
 tcm 2.20+TSQD-4.2 source graphics
 tcng 10b-3 source net
 tcos 0.89.86 source net
+tcos-configurator 1.22 source gnome
+tcosconfig 0.3.30 source gnome
 tcosmonitor 0.2.43 source gnome
 tcp-wrappers 7.6.q-21 source net
 tcpdump 4.1.1-2 source net
diff --git a/live-2012-01-04/expected b/live-2012-01-04/expected
index 75c401d..e78f489 100644
--- a/live-2012-01-04/expected
+++ b/live-2012-01-04/expected
@@ -13581,6 +13581,9 @@ libghc-magic-prof 1.0.8-7+b2 amd64 haskell
 libghc-markov-chain-dev 0.0.3.1-2+b3 amd64 haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b3 amd64 haskell
+libghc-math-functions-dev 0.1.0.0-1 amd64 haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 amd64 haskell
 libghc-maths-dev 0.4.1-1 amd64 haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 amd64 haskell
@@ -15897,6 +15900,7 @@ libhwloc-doc 1.3.1-1 all doc
 libhwloc4 1.3.1-1 amd64 libs
 libhx-dev 3.12.1-1 amd64 libdevel
 libhx-doc 3.12.1-1 all doc
+libhx27 3.11-1 amd64 libs
 libhx28 3.12.1-1 amd64 libs
 libhx509-5-heimdal 1.5.dfsg.1-3 amd64 libs
 libhyantes-dev 1.2.1-2 amd64 libdevel
@@ -48348,6 +48352,9 @@ libghc-magic-prof 1.0.8-7+b1 armel haskell
 libghc-markov-chain-dev 0.0.3.1-2+b2 armel haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b2 armel haskell
+libghc-math-functions-dev 0.1.0.0-1 armel haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 armel haskell
 libghc-maths-dev 0.4.1-1 armel haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 armel haskell
@@ -83073,6 +83080,9 @@ libghc-magic-prof 1.0.8-7+b1 i386 haskell
 libghc-markov-chain-dev 0.0.3.1-2+b2 i386 haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b2 i386 haskell
+libghc-math-functions-dev 0.1.0.0-1 i386 haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 i386 haskell
 libghc-maths-dev 0.4.1-1 i386 haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 i386 haskell
@@ -85392,6 +85402,7 @@ libhwloc-doc 1.3.1-1 all doc
 libhwloc4 1.3.1-1 i386 libs
 libhx-dev 3.12.1-1 i386 libdevel
 libhx-doc 3.12.1-1 all doc
+libhx27 3.11-1 i386 libs
 libhx28 3.12.1-1 i386 libs
 libhx509-5-heimdal 1.5.dfsg.1-3 i386 libs
 libhyantes-dev 1.2.1-2 i386 libdevel
@@ -117404,6 +117415,7 @@ libghc-ltk-doc 0.10.0.4-1 all doc
 libghc-maccatcher-doc 2.1.4-1 all doc
 libghc-magic-doc 1.0.8-7 all doc
 libghc-markov-chain-doc 0.0.3.1-2 all doc
+libghc-math-functions-doc 0.1.0.0-1 all doc
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maybet-doc 0.1.2-2 all doc
 libghc-mbox-doc 0.1-1 all doc
@@ -150641,6 +150653,9 @@ libghc-magic-prof 1.0.8-7+b2 kfreebsd-amd64 haskell
 libghc-markov-chain-dev 0.0.3.1-2+b3 kfreebsd-amd64 haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b3 kfreebsd-amd64 haskell
+libghc-math-functions-dev 0.1.0.0-1 kfreebsd-amd64 haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 kfreebsd-amd64 haskell
 libghc-maths-dev 0.4.1-1 kfreebsd-amd64 haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 kfreebsd-amd64 haskell
@@ -183033,6 +183048,9 @@ libghc-magic-prof 1.0.8-7+b1 kfreebsd-i386 haskell
 libghc-markov-chain-dev 0.0.3.1-2+b2 kfreebsd-i386 haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b2 kfreebsd-i386 haskell
+libghc-math-functions-dev 0.1.0.0-1 kfreebsd-i386 haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 kfreebsd-i386 haskell
 libghc-maths-dev 0.4.1-1 kfreebsd-i386 haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 kfreebsd-i386 haskell
@@ -216028,6 +216046,9 @@ libghc-magic-prof 1.0.8-7+b1 mips haskell
 libghc-markov-chain-dev 0.0.3.1-2+b1 mips haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b1 mips haskell
+libghc-math-functions-dev 0.1.0.0-1 mips haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 mips haskell
 libghc-maths-dev 0.4.1-1 mips haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 mips haskell
@@ -250112,6 +250133,9 @@ libghc-magic-prof 1.0.8-7+b1 mipsel haskell
 libghc-markov-chain-dev 0.0.3.1-2+b2 mipsel haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b2 mipsel haskell
+libghc-math-functions-dev 0.1.0.0-1 mipsel haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 mipsel haskell
 libghc-maths-dev 0.4.1-1 mipsel haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 mipsel haskell
@@ -254681,10 +254705,12 @@ libn32z1 1:1.2.3.4.dfsg-3 mipsel libs
 libn32z1-dev 1:1.2.3.4.dfsg-3 mipsel libdevel
 libnabrit-dbg 0.3.0-2 mipsel debug
 libnabrit-dev 0.3.0-2 all libdevel
+libnabrit1 0.2.91-1 mipsel libs
 libnabrit2 0.3.0-2 mipsel libs
 libnachocalendar-java 0.23-5 all java
 libnacore-dev 0.3.0-2 all libdevel
 libnacore-doc 0.3.0-2 all doc
+libnacore3 0.2.91-2 mipsel libs
 libnacore4 0.3.0-2 mipsel libs
 libnaga-java 2.1-2 all java
 libnagios-object-perl 0.21.16-1 all perl
@@ -284423,6 +284449,9 @@ libghc-magic-prof 1.0.8-7+b1 powerpc haskell
 libghc-markov-chain-dev 0.0.3.1-2+b2 powerpc haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b2 powerpc haskell
+libghc-math-functions-dev 0.1.0.0-1 powerpc haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 powerpc haskell
 libghc-maths-dev 0.4.1-1 powerpc haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 powerpc haskell
@@ -318773,6 +318802,9 @@ libghc-magic-prof 1.0.8-7+b1 s390 haskell
 libghc-markov-chain-dev 0.0.3.1-2+b2 s390 haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b2 s390 haskell
+libghc-math-functions-dev 0.1.0.0-1 s390 haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 s390 haskell
 libghc-maths-dev 0.4.1-1 s390 haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 s390 haskell
@@ -352970,6 +353002,9 @@ libghc-magic-prof 1.0.8-7+b1 sparc haskell
 libghc-markov-chain-dev 0.0.3.1-2+b2 sparc haskell
 libghc-markov-chain-doc 0.0.3.1-2 all doc
 libghc-markov-chain-prof 0.0.3.1-2+b2 sparc haskell
+libghc-math-functions-dev 0.1.0.0-1 sparc haskell
+libghc-math-functions-doc 0.1.0.0-1 all doc
+libghc-math-functions-prof 0.1.0.0-1 sparc haskell
 libghc-maths-dev 0.4.1-1 sparc haskell
 libghc-maths-doc 0.4.1-1 all doc
 libghc-maths-prof 0.4.1-1 sparc haskell
@@ -378725,6 +378760,7 @@ haskell-logict 0.5.0-1 source haskell
 haskell-ltk 0.10.0.4-1 source haskell
 haskell-maccatcher 2.1.4-1 source haskell
 haskell-markov-chain 0.0.3.1-2 source haskell
+haskell-math-functions 0.1.0.0-1 source haskell
 haskell-maths 0.4.1-1 source haskell
 haskell-maybet 0.1.2-2 source haskell
 haskell-mbox 0.1-1 source haskell
