
Re: experimental does not like Haskell



Hello Johannes,

2015-08-28 14:53 GMT+02:00 Johannes Schauer <josch@debian.org>:
> Quoting Hector Oron (2015-08-28 12:55:13)
>> It is rather tricky to check that, as you need quite a long list of
>> packages from experimental, then one needs to verify the build-dep
>> chain is correct, as using other solvers seemed to lead to bogus
>> packages being pulled in. <URL:
>> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=693921#52 >
>>
>> If you have ideas for robust test cases, feel free to post them.

> since multiple packages are able to provide the same package name and since
> there are dependency alternatives (A|B), there exist multiple solutions for
> most dependency satisfaction questions. If all constraints are honored then
> they are all correct.
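
(As a concrete, hypothetical illustration of that point: a build
dependency such as

    Build-Depends: debhelper, mail-transport-agent, bar | baz

can be satisfied in several equally correct ways, since
mail-transport-agent is a virtual package provided by postfix, exim4
and others, and either branch of "bar | baz" may be picked. The names
bar and baz are placeholders here; different solvers are free to make
different but equally valid choices.)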

Right! Currently we officially use different solvers: on the server
side, dose3 is run to check package installability (and it needs to be
optimized to spend less time on its calculations, as you are already
aware from reading the IRC backlog). On the build daemon side, solver
calculations are deferred to sbuild, which allows different solvers:
in the non-experimental case the apt solver is used, while for
experimental the aptitude solver is used. One of the problems we have
is that dose3 and sbuild resolve dependencies differently; it would be
nice if both sides produced the same results. I am not sure whether
dropping B from (A|B) style dependencies would help, as I understood
that is already the case for sbuild.
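
For what it is worth, the resolver that sbuild uses can be selected on
the command line, so all three back ends can be exercised against the
same source package. A rough sketch (option names as I recall them
from the sbuild manual page, so please double-check):

    # apt resolver, the default for non-experimental builds
    sbuild --build-dep-resolver=apt foo_1.0-1.dsc

    # aptitude resolver, currently used for experimental
    sbuild --build-dep-resolver=aptitude foo_1.0-1.dsc

    # aspcud resolver, which optimizes against an explicit criteria string
    sbuild --build-dep-resolver=aspcud foo_1.0-1.dsc

Here foo_1.0-1.dsc is just a placeholder source package.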

> The bug you quote mentions "bogus" packages drawn in by aptitude. Either
> aptitude is not respecting build and binary dependencies correctly in which
> case, these packages are indeed "bogus" (but then this is an aptitude bug), or
> the metadata is incorrectly expressing that these extra packages can be part of
> the solution.
>
> Which of these problems are you concerned about?

On the linked bug report, Sune was apparently worried that the solver
was pulling in more packages than really needed. I suspect that
matches your first case.

> As with this specific "bogus" package problem, aspcud "should" not have this
> problem because it optimizes the solution by the given optimization criteria.
> As far as this optimization criteria is concerned there are no "bogus" packages
> (if we assume that there is no bug in aspcud). In that case, are you concerned
> about the right optimization criteria? Or about dependency relationships that
> could lead to solutions that you might call "bogus"? Or about a bug in aspcud?

I would mainly be worried about the server-side calculations (dose3)
not matching the build daemon calculations (sbuild with apt or
aptitude).
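
To get the server-side view for the same snapshot, the dose3 check can
be run by hand against the archive metadata, roughly like this (exact
option spellings may differ between dose3 versions):

    dose-builddebcheck --deb-native-arch=amd64 --failures --explain \
        Packages Sources

which reports which source packages have unsatisfiable build
dependencies against the given Packages file; Packages and Sources are
placeholders for the snapshot's index files. Comparing that output
with what sbuild actually does on the buildds is exactly where the two
currently diverge.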

> It is extremely hard to compare solutions because in most cases, the solution
> found by apt will be different from the one found by aptitude or dose3.

Exactly! That's probably my main concern; it would be great to
standardize on one single solver to rule them all.

> Would it be sufficient for you if I picked a snapshot, rebuilt all packages in
> experimental with the apt resolver, then rebuilt those that failed with apt
> using the aptitude as well as the aspcud resolver, and then compared the
> results from the aptitude and aspcud resolvers? And by compare I mean: make
> sure that there is no FTBFS with the aspcud resolver where there is none with
> the aptitude resolver?

I am not sure if that is sufficient, but it is at least an interesting
test case to check. I would also like a PPA-like (bikesheds) scenario
to be tested, even the case of multiple chained PPA-likes, but I do
not think we currently have real data for that. Considering Haskell
packages as the initial set also sounds good to me, as those have
quite tight versioned dependencies with extremely long dependency
resolution chains, which exposes the aptitude solver bug.
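
As an untested sketch of such a comparison run (package list, chroot
selection and log handling are placeholders):

    # rebuild every package that FTBFS with the apt resolver,
    # once with aptitude and once with aspcud, keeping the logs
    while read dsc; do
        for resolver in aptitude aspcud; do
            sbuild --dist=experimental \
                   --build-dep-resolver=$resolver "$dsc" \
                   > "logs/$(basename "$dsc" .dsc).$resolver.log" 2>&1 || true
        done
    done < apt-ftbfs-list.txt

Afterwards one can grep the logs for packages that build with aptitude
but fail with aspcud, or the other way around.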

Regards,
-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.

