
Bug#614249: marked as done (britney2: insufficient removal checks)



Your message dated Wed, 09 Mar 2011 18:39:48 +0000
with message-id <1299695989.20673.18.camel@hathi.jungle.funky-badger.org>
and subject line Re: Bug#614249: britney2: insufficient removal checks
has caused the Debian Bug report #614249,
regarding britney2: insufficient removal checks
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
614249: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=614249
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
User: release.debian.org@packages.debian.org
Usertag: britney
Tag: patch

[Executive summary: britney2's attempt to avoid repeatedly checking the
installability of the entire package set means it can introduce new
uninstallability when removing packages]

Hi,

britney1 and britney2 use different methods to determine whether a
package can be removed from testing, which recently led to britney2
removing a package when it shouldn't have.  A patch which appears to fix
this is attached; the remainder of this mail discusses what happened and
why I believe the patch is correct.

To set the scene... a package P depends on functionality which used to
be provided by packages O1 and O2 and is now provided by package N.  P
depends on "N | O1, N | O2"; N conflicts with both O1 and O2 (and
provides O2, but that's not relevant here as P's dependency on O2 is
versioned).  Another package, Q, depends on P and O1.  O2 has been
removed from unstable, so britney is trying to remove it from testing.

britney1's approach here is very simple:
  a) store uninstallable package count
  b) try removing package
  c) recalculate uninstallable count; compare to the earlier count
  d) if no new uninstallability has occurred, the package may be removed
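britney1's count-and-compare loop can be sketched as follows; the
`count_uninstallable` helper and the dict-based package universe are
hypothetical stand-ins for illustration, not britney1's actual data
structures.

```python
# A minimal sketch of britney1's approach (steps a-d above), assuming a
# hypothetical count_uninstallable(universe) helper and a dict universe
# mapping package names to their dependency lists.
def try_removal_b1(universe, package, count_uninstallable):
    """Remove `package` only if doing so creates no new uninstallability."""
    before = count_uninstallable(universe)   # a) store the current count
    removed = universe.pop(package)          # b) try removing the package
    after = count_uninstallable(universe)    # c) recalculate the count
    if after > before:                       # d) compare with the old count
        universe[package] = removed          # new breakage: undo the removal
        return False
    return True
```

Simple and robust, but every call re-examines the whole package set,
which is exactly the cost britney2 tries to avoid.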

britney2 tries to avoid the relatively expensive step of recalculating
installability regularly by being cleverer:
  a) store a list of the r-depends of the to-be-removed package
  b) for each package in that list, note whether its installability
status has changed
  c) for each package in the list where the status has changed
    - iterate its r-depends checking their installability, repeating
step c until no status changes are found
  d) if no new uninstallability has occurred, the package may be removed
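The targeted recheck above can be sketched roughly as follows;
`rdepends` and `is_installable` are hypothetical helpers standing in for
britney2's internals, and the sketch assumes everything in testing was
installable beforehand, so a status change means newly broken.

```python
# A rough sketch of britney2's targeted recheck (steps a-c above).
# rdepends(pkg) yields direct reverse dependencies; is_installable() is a
# single-package check; both are assumed helpers, not britney2's real API.
def removal_breaks(universe, package, rdepends, is_installable):
    """Return the packages broken by removing `package`, rechecking only
    reverse dependencies and rippling outward from status changes."""
    del universe[package]
    broken, seen = set(), set()
    frontier = list(rdepends(package))         # a) direct reverse dependencies
    while frontier:
        pkg = frontier.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        if not is_installable(universe, pkg):  # b) has its status changed?
            broken.add(pkg)
            frontier.extend(rdepends(pkg))     # c) only *broken* pkgs ripple
    return broken
```

The flaw is visible in the loop: a reverse dependency that remains
installable (as P does, via N) never has its own r-depends queued, so
breakage further out, like Q's, goes undetected.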

In our example scenario, britney2 removes O2 and checks the status of
its r-dependency, P; P can use N to satisfy the alternative dependencies
which involve O1 and O2, so it is still installable and the removal is
declared possible despite it rendering Q uninstallable.

My patch augments step a by storing a recursive list of r-depends for
later consideration; in this case that means Q is checked in step b and
discovered to be uninstallable.
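The recursive r-depends list from step a can be built with a standard
breadth-first walk; the `rdepends` mapping below is a hypothetical
stand-in for britney2's RDEPENDS data, used here only for illustration.

```python
from collections import deque

# Collect the *transitive* reverse dependencies of a package before it is
# removed, so indirect breakage (Q -> P -> O2) is also rechecked.
def transitive_rdepends(rdepends, package):
    """Breadth-first walk over reverse dependencies, guarding against cycles."""
    seen = set()
    queue = deque(rdepends.get(package, []))
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(rdepends.get(pkg, []))
    return seen
```

With Q in the affected set, its post-removal installability is checked
in step b and the removal is correctly refused.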

The patch does add some overhead to each migration operation, as more
packages need to be checked, but if this proves to be a problem it could
be mitigated by making the recursive check conditional on the operation
in question being a removal rather than an upgrade or addition of a
package.

Comments / questions / cluesticks welcome.

Regards,

Adam
diff -adNru /srv/release.debian.org/britney/code/b2/britney.py ../b2/britney.py
--- /srv/release.debian.org/britney/code/b2/britney.py	2011-02-11 08:06:24.000000000 +0000
+++ ../b2/britney.py	2011-02-20 16:40:28.000000000 +0000
@@ -2034,9 +2034,19 @@
                     # save the old binary for undo
                     undo['binaries'][p] = binaries[parch][0][binary]
                     # all the reverse dependencies are affected by the change
-                    for j in binaries[parch][0][binary][RDEPENDS]:
-                        key = (j, parch)
-                        if key not in affected: affected.append(key)
+                    seen = [ binary ]
+                    rev_deps = [ binaries[parch][0][binary][RDEPENDS] ]
+                    while len(rev_deps) > 0:
+                        rev_deps = rev_deps[0]
+                        for j in rev_deps:
+                            key = (j, parch)
+                            if key not in affected: affected.append(key)
+                        # Store a temporary copy of rev_deps so we can use it after
+                        # calculating the next iteration
+                        _rev_deps = rev_deps
+                        rev_deps = [ binaries[parch][0][x][RDEPENDS] for x in rev_deps \
+                                     if x in binaries[parch][0] and x not in seen ]
+                        seen.extend(_rev_deps)
                     # remove the provided virtual packages
                     for j in binaries[parch][0][binary][PROVIDES]:
                         key = j + "/" + parch


--- End Message ---
--- Begin Message ---
On Tue, 2011-03-08 at 16:53 +0000, Adam D. Barratt wrote:
> On Wed, March 2, 2011 06:40, Adam D. Barratt wrote:
> > Making the necessary changes to correctly loop over all the reverse
> > dependencies isn't particularly involved, but does take quite a bit
> > longer than b1 when checking debianutils; my initial testing last night
> > suggests it's in the order of a few minutes per architecture, producing
> > a combined list of approximately 11,000 cumulative reverse dependencies
> > each time.
> 
> Updated patch attached, which takes a few seconds to build the list across
> all architectures, producing the same results as the previous version
> (just far more efficiently).
> 
> I'm planning on pushing this to master in the near future together with a
> few other patches I've accumulated recently.

I pushed the patches last night and we appear to have survived two b2
runs without the fabric of the universe coming apart, so I think we can
consider this done.

Regards,

Adam



--- End Message ---
