On Sat, Sep 20, 2025, 16:43 Jonas Smedegaard <
dr@jones.dk> wrote:
Quoting Alexander Kjäll (2025-09-20 16:08:26)
> Two quick thoughts from my mobile:
>
> Since transitioning through NEW takes so long, I typically upload as much
> of the dependency tree as possible. But NEW is not a FIFO, so sometimes
> things pop out that are not buildable. A similar problem happens if something
> sits in NEW for a long time and its dependencies are upgraded in the meantime.
Do you then upload to experimental? And if not, why?
I have not used experimental, as so far I haven't seen the need. I have no objections to using it if it helps someone.
The processing time through NEW is exactly one of the situations I
consider a known breakage, when not all dependencies are already
available in unstable.
> And regarding running tests in debug: debug builds have additional checks
> for numeric over-/underflows, something that easily happens when switching
> between 64- and 32-bit architectures. I believe we catch more bugs if we
> don't run the tests with --release.
Interesting! Can you point to documentation for this? I would like to
understand that better.
It has been my understanding that debug mode builds without optimization
and therefore fails to catch optimization-related errors.
This is also true, I think.
These would be two different classes of errors; I don't know how to prioritise between them, or if we can have both.
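For readers following along, a minimal sketch (not from the thread; the values are illustrative) of the behaviour being discussed: plain integer arithmetic panics on overflow in debug builds but silently wraps in --release (unless `overflow-checks = true` is set in the Cargo profile), and 32-/64-bit differences show up when a value that fits in 64 bits is narrowed:

```rust
fn main() {
    let a: u32 = u32::MAX;

    // In a debug build, `a + 1` would panic with "attempt to add with
    // overflow"; in --release it would wrap to 0. checked_add makes the
    // overflow visible in both modes:
    assert_eq!(a.checked_add(1), None);

    // cfg!(debug_assertions) reports which mode this was compiled in,
    // which is what gates the implicit overflow checks.
    println!("debug assertions enabled: {}", cfg!(debug_assertions));

    // usize is 32 bits on 32-bit targets and 64 bits on 64-bit ones, so
    // narrowing a value like this can truncate on one architecture only.
    // try_from surfaces that as an error instead of silently truncating
    // the way an `as` cast would:
    let big: u64 = 1 << 33;
    assert!(u32::try_from(big).is_err());
}
```

Running the test suite once without `--release` exercises the first class of checks; optimization-related miscompilations, as Jonas notes, would only show up in a release build.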