
Re: Clean run on lintian.d.o complete



On Sat, May 18, 2013 at 1:29 PM, Niels Thykier <niels@thykier.net> wrote:
>
> Hi,

Thanks Niels,

Could you add me as a Lintian maintainer?

>
> Over the past 75 hours, lintian has done a "clean" run on the archive.
> I just started the "catch-up" incremental, but we should be back on cron
> later today or tomorrow.  The choice of "clean" (instead of just a
> "full") was due to commit 6dde1956.  The missing bump occasionally
> caused a lot of warnings to the log[1].
>
> For the people interested in some performance-related information (and
> for reference for the next time we do a full run): we had 21.2k source
> packages and 47.9k binary packages (at the end of the run)[2].  The clean
> run had 65.4k groups to process, split over 128 rounds[3].  The 75
> hours do not include removing the full lab[4].
>
>  * On average, lintian would process 660.4 groups per hour (or 1.3
>    rounds of 512 groups per hour).
>  * In 26 (out of 128) rounds, Lintian exited with code 2.  The rest
>    were either 0 or 1.
>    - By far, most of these errors appear to be cases where the
>      underlying package disappeared.
>  * At least two bugs were triggered in Lintian.
>    - The first one was fixed in 2252f05.
>    - The second one is being filed as a bug.
>
> ~Niels
>
> [1] c/cruft was passing the directory name without the slash to
> file_info, which caused undef, and cruft did not check for it, since it
> "shouldn't happen".  This problem only occurred for binNMUs (or similar
> rebuilds) where the old source package was reused.  Other than the noise
> in the log, it did not affect the results to my knowledge.
>
> [2]
> $ wc -l laboratory/info/*
>    47899 laboratory/info/binary-packages
>        2 laboratory/info/lab-info
>    21282 laboratory/info/source-packages
>      457 laboratory/info/udeb-packages
>    69640 total
>
> (During the run, lintian will purge entries that cannot be processed.
> This usually happens when entries "disappear" from our mirror and leave a
> dangling symlink behind.)
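For illustration, dangling symlinks like that can be listed with GNU find's
-xtype test -- assuming the entries live somewhere below laboratory/, which
is only a guess at the layout:

  $ # -xtype l matches symlinks whose target no longer exists
  $ find laboratory/ -xtype l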
>
> [3] This number looks a bit weird, since the number of groups should be
> equal to the number of (source, source-version)-pairs seen at the start
> of the run.  It could be inflated by "old" arch:all packages kept due to
> incomplete builds, and some of the groups were removed during the run.
> That said, I find the 40k groups difference (even over 3½ days)
> questionable.
>
> [4] lintian.d.o was rebooted/crashed at some point.  Removal of the lab
> took at least an hour, with me adding a couple of extra "rm -rf"
> invocations to speed up the removal.  The normal lab removal would
> probably have taken at least 3 hours.
>
> The 75 hours are based on two date calls wrapping the harness call,
> which printed:
>   Wed May 15 06:14:31 UTC 2013
>   Sat May 18 09:37:22 UTC 2013
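In other words, roughly the following, where run-full-harness is only a
stand-in for the actual harness invocation:

  $ date -u; run-full-harness; date -u

with the elapsed time being the difference between the two printed timestamps.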

Could we have some stats on the time spent per tag emitted (by
category)?  What are the most CPU-intensive checks?

Bastien

