
Re: Why is ggml not migrating?



On Tue, Dec 09, 2025 at 11:04:17AM +0100, Christian Kastner wrote:
> Hi Adrian,

Hi Christian,

> let me start by re-iterating that I'd be on board to switching back to a
> src:ggml-cuda if this cooperates more with our tooling/infra/practices.
> 
> My counter-arguments below aim only to ensure that we cover all the
> bases. It's not even that much about ggml -- I think we're touching an
> interesting problem in general here.

Nothing here is new; packages like starpu and slurm-wlm already have
additional -contrib source packages for Nvidia reasons.

> On 2025-12-09 07:04, Adrian Bunk wrote:
> > I am not a member of the RT, but having binaries from the same source 
> > package built in two different ways in the archive sounds just wrong.
> 
> The build profile argument hinges on three ideas, namely that
> 
>   (1) only main is part of Debian proper, and ggml has exactly one
>       policy-conform process for ggml in main, so all good there
> 
>   (2) because the other binary goes into contrib, a second build process
>       is needed anyway

It's not relevant here, but the actual reason is a build dependency
that is not in main.

Otherwise building main+contrib from the same source package works fine
(e.g. src:openh264).
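
As a sketch of that split (a hypothetical debian/control, not the actual
openh264 packaging), one source package can feed both archive areas via
the Section field of each binary package:

```
Source: example
Section: libs
Priority: optional

Package: libexample0
Architecture: any
Section: libs
Description: free part, lands in main

Package: example-plugin-nonfree
Architecture: any
Section: contrib/libs
Description: part needing non-free bits, lands in contrib
```

This works as long as everything in Build-Depends is itself in main.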

>   (3) because of the nature of this contrib package, this second build
>       process will always be manual.
> 
> Because of (3), I assumed that the same source/two different builds
> approach wouldn't be much of an issue.
> 
> A contributor will always have to read debian/README.Source to update
> the package, and I didn't see much difference in whether this doc says
> "run debian/cudabuild.sh, then build" or "build with build profile".
> 
> > I am someone who does sometimes over 100 NMUs for RC bugs in a single 
> > week - after an NMU the CUDA packages would just silently disappear?
> 
> That might actually be the best outcome, as a consequence of (3) above.
> 
> Because if you NMU src:ggml, you'd also need to NMU src:ggml-cuda. And
> the only way you can NMU src:ggml-cuda is via debian/README.Source, and
> whether that points to debian/cudabuild.sh or --profiles='pkg.ggml.cuda'
> is just a detail.

This is not true.

~99% of my NMUs for RC bugs do not touch the upstream tarball;
it's either editing debian/rules or adding to debian/patches.
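
A sketch of what such an NMU typically touches (the patch path and bug
number are placeholders):

```
# Inside the unpacked source package: add the upstream fix as a
# quilt patch, document it, rebuild -- the tarball stays untouched.
quilt import ../upstream-fix.patch
dch --nmu "Add patch to fix FTBFS (Closes: #000000)"
dpkg-buildpackage -b
```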

For example, I recently did a lot of "FTBFS with CMake 4" fixing, which
most of the time meant either bumping the version in
cmake_minimum_required or doing the equivalent in debian/rules with
-DCMAKE_POLICY_VERSION_MINIMUM=3.5
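
As a sketch, the debian/rules variant of that fix is a one-liner (the
override is illustrative; the flag itself is CMake 4's escape hatch for
projects declaring a too-old minimum version):

```
# debian/rules fragment (sketch): make CMake 4 accept the package's
# old cmake_minimum_required() declaration instead of failing.
override_dh_auto_configure:
	dh_auto_configure -- -DCMAKE_POLICY_VERSION_MINIMUM=3.5
```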

>...
> > The security tracker would also not notice if there are binary packages
> > that have not been rebuilt, leaving users vulnerable to a fixed CVE.
>
> On the other hand, two sources packages mean that the trackers for each
> need to be maintained separately.
>...

Better maintaining them separately than giving users a false sense of
security.

contrib and non-free are not supported by the Security team, which makes
it quite likely that only the packages in main will be fixed as part of
a DSA.

You as maintainer can provide security fixes for contrib, but this is 
not something our users can rely on.

>...
> Assuming the NMU is for something bad, it might be better to have the
> CUDA backend removed instead of stale.
> 
> > A variant of that would be if someone other than you prepares a security
> > update for stable.
> 
> Here, I'd definitely prefer removal if the second build doesn't happen.

During a DSA, no one might notice the hidden CUDA packages; I am not sure
anything would flag, before the next point release day, that these hidden
packages are missing.

And again, for something like CVE-2025-53630, a new patch would get added
to debian/patches, and no one reads debian/README.Source in that case.

Also, if you would prefer removal when only the packages in main get
a security fix, this implies that the CUDA packages should not be in 
stable releases at all.

> > A similar issue is also present for downstream distributions like 
> > Ubuntu or Raspbian, that might have different rules regarding what
> > is autobuildable for them.
> 
> > A successful "dpkg-buildpackage -b" producing all binary packages is
> > a pretty fundamental assumption in various places.
> 
> I agree that these are good counter-arguments to assumption (3) above.
> 
> The simple local rebuild alone was already a concern to me. But as per
> (1) above, the result in main is correct and policy-conform, so I didn't
> see a problem there. And I felt that contrib/non-free, by their nature,
> offered more lee-way.

(1) is not true.

Policy section 2.2.3 says that packages in contrib must meet all policy
requirements.

>...
> As a final argument in favor build profiles, it felt to me like it was
> the intuitively correct approach to handle optional plugins, as is the
> case here with ggml. Note that ggml has a build profile to disable HIP
> builds (for AMD GPUs) because that takes substantial resources, and
> would be irrelevant in an NVIDIA GPU-based context.

Profiles are fine for enabling developers and users to build a package in
a different configuration, but they shouldn't be used for the builds in
the archive.
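
For context, this is roughly what such a profile looks like (using the
pkg.ggml.cuda name from above; the build dependency name is
illustrative):

```
# debian/control fragment (sketch): the CUDA build dependency is only
# pulled in, and the CUDA binary package only built, when the profile
# is active.
Build-Depends: debhelper-compat (= 13),
               nvidia-cuda-toolkit <pkg.ggml.cuda>

Package: libggml-cuda
Build-Profiles: <pkg.ggml.cuda>
```

Activated locally with "dpkg-buildpackage -b
--build-profiles=pkg.ggml.cuda", which is exactly the kind of manual
step that buildds and NMUers will not perform.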

> But infra/tooling/practices trump my personal intuition, of course.
> 
> Best,
> Christian
>...

cu
Adrian

