
Re: Low Average Bug Counts



On Fri, Dec 07, 2001 at 11:20:52AM -0800, Grant Bowman wrote:

> Christian Kurz and I came up with some other factors that are measurable
> and may need to be taken into account to increase the validity of the
> data.  Just the average number of open, unarchived bugs isn't very
> accurate, though I feel it's a good start.
> 
>   * severity (critical, grave, serious, important, normal, minor,
>     wishlist, fixed) of bug reports 
>   * status (open, forwarded, pending, fixed, done) of bug reports
>   * age of bug
>   * responsiveness of user (scan log for From: & Date: and compute)
>   * responsiveness of maintainer (scan log and compute)
> 
> In project management there are formulas for prioritization and
> scheduling that may be applicable.  I can pull out the information I
> have.
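
The factors quoted above could, for illustration, be folded into a single
weighted score per bug.  Everything below is a hypothetical sketch: the
weights, the scoring formula, and the idea of scaling by age are my own
assumptions, not anything debbugs actually computes.

```python
# Hypothetical weighted bug score combining severity, status, and age.
# All weights are illustrative assumptions, not debbugs behavior.
from datetime import datetime, timezone

SEVERITY_WEIGHT = {
    "critical": 8, "grave": 7, "serious": 6, "important": 4,
    "normal": 2, "minor": 1, "wishlist": 0.5, "fixed": 0,
}
STATUS_WEIGHT = {
    "open": 1.0, "forwarded": 0.8, "pending": 0.5, "fixed": 0.2, "done": 0.0,
}

def bug_score(severity, status, opened, now=None):
    """Score one bug; more severe, still-open, older bugs score higher."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - opened).days
    # Doubling the score for each year a bug stays open is an arbitrary choice.
    return SEVERITY_WEIGHT[severity] * STATUS_WEIGHT[status] * (1 + age_days / 365)

def maintainer_score(bugs):
    """Average score over a maintainer's (severity, status, opened) tuples."""
    return sum(bug_score(*b) for b in bugs) / len(bugs) if bugs else 0.0
```

The responsiveness factors would need the per-message data discussed below
and so aren't modeled here.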

This is just the kind of reporting I had in mind when I prototyped a backend
for debbugs which stored all of the messages and bug data in the database.
The From and Date fields were to be extracted, and some attempt could be
made to guess whether the message was sent by the maintainer, the submitter,
or someone else.  However, there were questions about the reliability of
PostgreSQL, concerns about overcomplexity, and other non-technical reasons
why it wasn't feasible.
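
The From/Date extraction described above can be sketched with Python's
stdlib email parser.  This is only an illustration of the guessing step; the
function name and the idea of passing known maintainer and submitter
addresses in are my assumptions, not part of the prototype.

```python
# Sketch: extract From/Date from one message in a bug log and guess
# whether it came from the maintainer, the submitter, or someone else.
from email import message_from_string
from email.utils import parseaddr, parsedate_to_datetime

def classify_message(raw_message, maintainer_addr, submitter_addr):
    """Return (sender_role, date) for a single raw RFC 2822 message."""
    msg = message_from_string(raw_message)
    _, addr = parseaddr(msg.get("From", ""))
    date = parsedate_to_datetime(msg["Date"]) if msg["Date"] else None
    if addr == maintainer_addr:
        role = "maintainer"
    elif addr == submitter_addr:
        role = "submitter"
    else:
        role = "other"
    return role, date
```

Sorting the (role, date) pairs per bug would then let you compute response
times for the maintainer and the submitter separately.  Matching on the bare
address is naive; real maintainers post from several addresses, which is one
reason any such guess stays a guess.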

While I like the idea of giving hard-working developers a pat on the back,
I'm not sure that it's a good idea to rate them so mechanically.  It's even
less valid than judging real-world system performance based on artificial
benchmarks.  I think that we might do better to consider a user- and
developer-driven system, maybe more along the lines of the netfilter
scoreboard:

http://netfilter.samba.org/scoreboard.html

-- 
 - mdz
