
Re: Self-assessment of the quality of the maintenance work



On Mon, Dec 22, 2008 at 09:25:32AM +0100, Raphael Hertzog wrote:
> On Sun, 21 Dec 2008, Mark Brown wrote:

> > I'm not sure that the e-mail bit of this really adds anything - the

> I don't understand why you say that:
> - email is the primary way to get in touch with a maintainer
> - making sure that the maintainer responds to an email query is
>   the traditional way used by the MIA team/DAM to see if a maintainer is
>   still active

Right, but on the other hand the MIA and DAM teams tend to be careful to
try to avoid making work for people who are clearly actively doing
things.  My concern with sending out the e-mails to all and sundry is
that it's taking things too far in the initial stages of the process and
that adding that further down the line when the system is established
would be a better approach.

> > information reported is only going to be as good as the people filling
> > it in make it and there's little motivation to make much effort with
> > the data.

> Can you expand ?

The data being entered on the form has essentially no utility for the
person filling it in, which means that a lot of people are just going to
fill it in as quickly as possible.  This will degrade the quality of the
information produced, particularly when people are resubmitting the
second time around (since repetitiveness reduces the amount of thought
required).  If people don't feel that there's some importance beyond the
act of filling in the form, there's a real possibility that many of them
will only care that the form has been submitted, not what was on it.
More targeted, human-generated requests tend not to have this problem so
much, since the human element provides a cue that the data will be used.

That may change once there is a reasonable data set and people come up
with good ways to use it which show the value of the information being
collected, but before that has happened it might be safer to start off
with a combination of opt-in and more targeted mails.  This would avoid
irritating people at the outset and should help increase the quality of
the data for people to analyse.

Speaking personally, if I saw such an e-mail there would be a very good
chance that I would just delete it without filling in the form (or
probably without even reading the mail properly), since it sounds very
much like the sort of "why do you participate in free software"
questionnaire that social science students are fond of sending round
from time to time.  I've seen enough of those that mass-mailed surveys
tend to lose my attention rapidly.

> > Something that used existing metrics to try to determine if
> > the package was actively maintained before sending the mail might be
> > more useful there and would avoid making work for people who are clearly
> > active.

> I wish to collect more information than "active"/"not active" so I think
> that the answers of the active ones are as interesting as those of the
> less-active ones. 

It's not that much more information than that, really.

> And as already said, "active"/"not active" is really specific to each
> package and someone with more than a few packages is likely to be active
> on some packages and less-active on some others. What would you suggest in
> this case ?

I was thinking of per-package metrics here, though there may also be
some interesting cross-package metrics which could provide a good guess
about the role someone has on each of their packages.

-- 
"You grabbed my hand and we fell into it, like a daydream - or a fever."

