An early report this month, as I've run out of work hours earlier than usual.
GnuPG & Enigmail
To get Enigmail working properly with the Thunderbird upload from last
week, we need GnuPG 2.1 in jessie. I [backported GnuPG 2.1] to Debian
jessie directly, using work already done to backport the required
libraries from jessie-backports.
It was [proposed] to ship the libraries as private libraries or to
statically link GnuPG itself. I believe this is the wrong approach, and
besides, I'm unsure how it would work in practice, so I recommend
going forward with the libraries backport. I provided a [summary] of
the conversation to try to bring it to a conclusion.
Once Spamassassin 3.4.2 was [accepted] in the latest stable point
release, I went back to work on the jessie upgrade I [proposed last
month] and uploaded the resulting package as [DLA-1578-1].
I worked on a few sticky parts of the security tracker.
After an internal discussion about work procedures, a friend pointed me
at the [don't lick the cookie] article, which I found really
interesting. The basic idea is that our procedure for work distribution
is based on "claims", which means some packages remain claimed for
extended periods of time.
For some packages it makes sense: the kernel updates, for example, have
been consistently and diligently performed by Ben Hutchings for as long
as I can remember, and I would be very hesitant to claim that package
myself. In that case it makes sense for the package to remain claimed
for a long time.
But for some other packages, it's just an oversight: you claim the
package, work on it for a while, then get distracted by more urgent
work. It happens all the time, to everyone. The problem is then that the
work is stalled and, in the meantime, other people looking for work are
faced with a long list of claimed packages.
In theory, we are allowed to "unclaim" a package that's been idle for
too long, but as the article describes, there's a huge "emotional cost"
associated with making such a move.
So I looked at automating this process with a script to [unclaim packages
automatically]. It was originally rejected by the security team,
which might have confused the script implementation with a separate
[proposal] to add a cronjob on the security tracker servers to
automate the process there.
After some tweaking and bugfixing, I believe the script is ready for use,
and our new LTS coordinator will give it a try in what I would describe
as a "manually triggered automatic process" while we figure out whether
the process will work for us.
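As a minimal sketch of what such an unclaim pass could look like, here
is a hedged example assuming entries of the form
`package (Name, YYYY-MM-DD)`; the real claim format in the security
tracker and the actual script differ in the details:

```python
import datetime
import re

# hypothetical claim line format: "package (Name, YYYY-MM-DD)"
CLAIM_RE = re.compile(
    r"^(?P<pkg>\S+)\s+\((?P<who>[^,)]+)(?:,\s*(?P<date>\d{4}-\d{2}-\d{2}))?\)"
)
MAX_AGE = datetime.timedelta(days=30)

def unclaim_stale(lines, today):
    """Return the lines with claims older than MAX_AGE stripped.

    Claims without a date, and lines that are not claims at all,
    are passed through untouched.
    """
    out = []
    for line in lines:
        m = CLAIM_RE.match(line)
        if m and m.group("date"):
            claimed = datetime.date.fromisoformat(m.group("date"))
            if today - claimed > MAX_AGE:
                # unclaim: keep the package, drop the claim annotation
                out.append(m.group("pkg") + "\n")
                continue
        out.append(line)
    return out
```

Running this by hand over the work list, and reviewing the diff before
committing, is exactly the "manually triggered automatic process"
described above.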
Splitting huge files in the repository
I once again looked at splitting the large (17MB and counting)
`data/CVE/list` file in the security tracker. While my [first
attempt] was just trying to improve performance in my own checkouts,
the heaviness of the repository has now been noticed by the Salsa
administrators (bug #908678), as it triggers several performance issues.
And while my first attempt was clearly a bad tradeoff that made
performance worse (by splitting each CVE into its own file), the new
proposal (split by year) actually brings significant performance
improvements. Clones take 11 times less space (145MB vs 1.6GB) and
resolve ten times faster (2 vs 21 minutes, local only).
Running annotate on a single year takes 26 seconds, while running it
over the whole file takes around 10 minutes. This, arguably, is less
impressive: there are, after all, twenty years of history in that
repository, so to be fair, we'd need to run annotate against all of
those. But obviously, earlier years are smaller than the latest, so the
total is also faster (2 minutes). And besides, we don't really need to
run annotate against the *entire* file: when I do this, I usually want
to know who to contact about a comment in the file, which is usually a
recent change anyway.
The conversion itself was an interesting exercise in optimisation. The
original tool was a simple bash script, which split the file in 15
seconds. That is fine if we are ready to lose history in the
repository, but that is probably unacceptable, so I rewrote the script
in Python, which gave a huge performance improvement, processing the
file in less than a second. That still felt a bit slow, so I rewrote it
in Go, which gave another leap in performance, until a colleague noticed
the resulting files were all empty. After fixing that shameful bug, the
performance of the Go implementation actually became worse than the
Python one, something I was quite surprised about, considering Python is
not known for its fast startup times or raw performance. I have yet to
explain that result.
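For illustration, a minimal Python version of the split could look like
the sketch below. It assumes entries start with an unindented
`CVE-YYYY-...` header followed by indented continuation lines; the real
`data/CVE/list` format has more corner cases, and the actual conversion
also has to deal with rewriting the git history, which is why raw speed
matters.

```python
import os
import re

def split_by_year(src_path, dest_dir):
    """Split a data/CVE/list-style file into one file per year.

    Lines before the first CVE header are skipped; continuation
    lines go to the same file as the header above them.
    """
    out = None
    year = None
    with open(src_path) as src:
        for line in src:
            m = re.match(r"CVE-(\d{4})-", line)
            if m and m.group(1) != year:
                if out is not None:
                    out.close()
                year = m.group(1)
                # append: entries for a year need not be contiguous
                out = open(os.path.join(dest_dir, "list." + year), "a")
            if out is not None:
                out.write(line)
    if out is not None:
        out.close()
```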
Unfortunately, the split proposal doesn't seem to match the workflow of
the security team, which still seems strongly attached to having the
entire history of CVE identifiers in a single file. Instead, a bug
report against git (#913124) was opened in the hope that git could fix
the issue. Considering how git is designed, and how renowned it is for
not dealing well with large files, I have very little hope that
something like that could happen, and I do not see why we are trying to
fit that proverbial round peg into the square hole that is Git.
Other reviews and fixes
While I was working on the security tracker, I also fixed a trivial
issue with the test pipeline, which was [promptly merged]. I also
reviewed a few merge requests, some of which were eventually merged.
I also participated in informal discussions surrounding the DLA issuance
process, to make sure DLAs are reflected on the security website, as part
of bug #859122 which, I was surprised to realize, I opened more than a
year ago. I will continue working on this next month, unless someone
beats me to it. :)
Finally, I took a deep dive in systemd, trying to address the worrisome
security issues that came up recently. Many of those are mitigated by
the way Debian uses systemd: for example, systemd-networkd is not used
by default, so there's no remote root execution (!).
The issues fixed were:
* CVE-2018-1049 - automounter race condition, easy backport
* CVE-2018-15688 - dhcp6 client buffer overflow, trivial backport
* CVE-2018-15686 - deserialization privilege escalation, a more involved
  backport that required changes to logging and error reporting, as
  `log_error_errno` doesn't exist in v215 and is part of a large
  tangle of macros that was unwieldy to backport
Regarding the latter patch, I [asked upstream] if this was the
correct patch to backport, but haven't received an answer yet.
Finally, I also worked on the tmpfiles issues, which were marked as not
affecting wheezy back then, but which *do* affect jessie. This is
CVE-2018-6954, but also CVE-2017-18078, which is actually trivial to
fix. The problem is that upstream first fixed the issue with a [small
PR], and that fix shipped as part of `229-4ubuntu21.8`. Unfortunately,
that fix was found to be incomplete, and a massive rewrite of the
tmpfiles handling was done in a [much larger PR]. Because that touches
many more parts of the file, it was much more difficult to backport. I
ended up giving up: it will probably be easier to simply backport the
entire `tmpfiles.c` from upstream, removing the parts that are not
currently supported, than to backport each of the 26 upstream commits
into the jessie release.
So after uploading a [test package], which provided a welcome backup
to the mess I had introduced in my source package, I uploaded the test
package unchanged to jessie and announced it as [DLA-1580-1].
Only what can be refuted is scientific. What cannot be refuted belongs
to magic or mysticism.
 - Karl Popper