
Re: Dealing with renamed source packages during CVE triaging



Antoine Beaupré <anarcat@orangeseeds.org> writes:

> I've finalized a prototype during my research on this problem, which I
> have detailed on GitLab, as it's really code that should be merged. It
> would also benefit from wider attention considering it affects more than
> LTS now. Anyways, the MR is here:
>
> https://salsa.debian.org/security-tracker-team/security-tracker/merge_requests/4
>
> Comments are welcome there or here.
>
> For what it's worth, I reused Lamby's crude parser because I wanted to
> get the prototype out the door. I am also uncertain that a full parser
> can create the CVE/list file as is reliably without introducing
> inconsistent diffs...
>
> I also drifted into the core datastructures of the security tracker, and
> wondered if it would be better to split up our large CVE/list file now
> that we're using git. I had mixed results. For those interested, it is
> documented here:
>
> https://salsa.debian.org/security-tracker-team/security-tracker/issues/2

So if I understand correctly, the parts that aren't done yet are:

1. Tagging with <removed>/<unfixed> instead of <undetermined>.
2. Not processing old entries that we don't care about anymore.
3. Resolving the general issue regarding CVE/list, and whether it should be
split up.

For these:

1. We need to be able to tell whether the package still exists in a given
distribution. This information is not available from the security-tracker
database; we would need to fetch it with online JSON calls, for each and
every package we look at. That is likely to be very slow, although
incremental processing might help (?).
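
For example, something along these lines is what I had in mind; a rough,
untested sketch, assuming the sources.debian.org JSON API (the exact
endpoint and field names would need checking):

import json
import urllib.error
import urllib.request


def package_in_suite(package, suite):
    # Ask sources.debian.org which suites still carry this source package.
    url = 'https://sources.debian.org/api/src/%s/' % package
    try:
        with urllib.request.urlopen(url) as response:
            data = json.loads(response.read().decode('utf-8'))
    except urllib.error.HTTPError:
        # 404 or similar: assume the package is gone from the archive
        return False
    # each version entry is expected to carry the suites it appears in
    return any(suite in version.get('suites', [])
               for version in data.get('versions', []))


print(package_in_suite('openssl', 'stretch'))

That is one HTTP request per package we look at, which is where the
slowness would come from; some caching or batching would probably be
needed.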

2. For incremental updates, coming up with a definition of old entries
that is easy to check seems to be the stumbling point here, particularly
as entries in CVE/list are not necessarily created in order, and old CVEs
might still be very relevant.

Maybe we need to create/update a list of all CVEs we have processed
before?  Would this work, or is there some problem I haven't thought of?

Ideally, for this to work properly, we would also need to ensure that all
entries are updated in one run, as one run would be all we get, not
multiple runs as can be the case now.
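
Concretely, what I mean by such a list is roughly this; a minimal sketch,
with a made-up state file path, that records every CVE id already handled
and skips it on later runs:

STATE_FILE = 'data/CVE/processed.list'  # hypothetical location


def load_processed():
    # The set of CVE ids handled in previous runs.
    try:
        with open(STATE_FILE) as f:
            return set(line.strip() for line in f if line.strip())
    except FileNotFoundError:
        return set()


def save_processed(processed):
    with open(STATE_FILE, 'w') as f:
        for cve_id in sorted(processed):
            f.write(cve_id + '\n')


def process_incrementally(cve_ids, process):
    processed = load_processed()
    for cve_id in cve_ids:
        if cve_id in processed:
            continue          # old entry, already handled in an earlier run
        process(cve_id)       # whatever per-entry work we need to do
        processed.add(cve_id)
    save_processed(processed)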

3. I have not noticed git operations being slow, but then again I don't
often update this file. As a potential compromise, maybe instead of one
file per CVE, one file per year?
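
For the sake of discussion, the split itself could be as simple as keying
on the year embedded in the CVE id; a rough sketch only, which ignores
lamby's parser and leaves open how TEMP- and reserved entries would be
handled:

import re
from collections import defaultdict


def split_by_year(path='data/CVE/list'):
    # Group each entry (header line plus its indented annotations)
    # under the year from its CVE id, then write one file per year.
    buckets = defaultdict(list)
    year = 'unknown'
    with open(path) as f:
        for line in f:
            match = re.match(r'CVE-(\d{4})-', line)
            if match:
                year = match.group(1)
            buckets[year].append(line)
    for year, lines in buckets.items():
        with open('%s.%s' % (path, year), 'w') as out:
            out.writelines(lines)
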
-- 
Brian May <bam@debian.org>

