
Re: Support for insecure applications



Hi,

On Feb/12/2021, Brian May wrote:

[...]

If this is off-topic for the list, feel free to reply to me only or
redirect me to another mailing list, and apologies for the noise.

> But I am not sure that treating all software as equal, when it obviously
> isn't, is a good thing for our users.


> Yes, users can look up our security trackers, not sure how much this
> helps though. A lot of these open security issues aren't necessarily
> serious issues that warrant concern.
> 
> Any ideas, comments?

Some months ago I was thinking along the same lines. I was comparing
the source code of certain packages (10 packages, more or less) while
searching for certain bugs, and I saw how packages that might look
similar from the outside varied a lot in quality, maintenance, etc. The
problem is that, from a user's point of view, it would be hard to know
which one is better maintained.

I think that when a package is distributed in Debian, some users might
expect a certain level of quality simply because the package is in
Debian.

When I was discussing this with a friend, I wondered whether Debian
could make some metrics available and visible to users, contextualised
against similar (per-functionality) packages:

-popularity
-number of recent upstream updates
-number of contributors
-usage of a version control system
-test coverage
-continuous integration
-upstream activity (issues, PRs, etc., the more the better; stars,
forks, etc. on GitHub or similar places?)
-translations? (the more there are, the more popular the software is?)
-warnings from the compilers?
-static code analysers?
-documentation?
-CVEs?

So, when a user chooses a package, they would have more information on
which to base their decision.
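To illustrate how such metrics might be combined automatically, here is a minimal sketch. Everything in it is a made-up assumption: the metric names, the normalised values, and the weights are purely illustrative, not an existing Debian tool or data source.

```python
# Hypothetical sketch: combine per-package metrics (each already
# normalised to the 0.0-1.0 range) into one comparable score.
# Metric names, values, and weights below are invented for illustration.

def quality_score(metrics, weights):
    """Weighted average of the metrics named in `weights`.

    Metrics missing from a package count as 0.0.
    """
    total_weight = sum(weights.values())
    return sum(weights[name] * metrics.get(name, 0.0)
               for name in weights) / total_weight

# Two imaginary packages offering similar functionality:
pkg_a = {"popularity": 0.9, "test_coverage": 0.4,
         "ci": 1.0, "recent_updates": 0.7}
pkg_b = {"popularity": 0.3, "test_coverage": 0.8,
         "ci": 0.0, "recent_updates": 0.2}

# Weights are a policy choice; these are arbitrary examples.
weights = {"popularity": 2.0, "test_coverage": 3.0,
           "ci": 1.0, "recent_updates": 2.0}

print("pkg_a:", quality_score(pkg_a, weights))
print("pkg_b:", quality_score(pkg_b, weights))
```

The hard part, of course, is not the arithmetic but collecting honest, comparable input values across very different upstreams.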

I was trying to think of metrics that could be automated (to a certain
extent) to avoid flamewars between reviewers and upstream, but that
might still indicate software quality (hard to measure!). Another
option would be something like scientific publications: reviewers
reading the code and rating it on different aspects, but I agree that
might be a never-ending story, unless there is a clear cut-off.

The Journal of Open Source Software (https://joss.theoj.org/) has
review criteria:
https://joss.readthedocs.io/en/latest/review_criteria.html ,
https://joss.readthedocs.io/en/latest/review_checklist.html but some of
them are still subjective, IMHO.

May I ask: how do people choose (security-wise or in general) between
packages for a certain task? Could this be automated? Part of my own
decision process is described above, plus looking at dependencies and
sometimes at the source code.

Cheers,

-- 
Carles Pina i Estany
https://carles.pina.cat
