
Re: Want some advice for arch and gpu depends



On Sat, 2025-06-21 at 08:16 +0900, 千代航平 wrote:
> Hi, 
> 
> From now on, I need to start packaging things with architecture-dependent and GPU/CPU-dependent builds; vllm also has this kind of dependency.
> As a first step, I think xgrammar might be good practice.  https://github.com/mlc-ai/xgrammar/blob/main/pyproject.toml

mlx is Apple's library and we can ignore it.
Triton is only available on amd64, and I'm not sure the package is in good
shape to support xgrammar: https://tracker.debian.org/pkg/triton
The triton dependency is optional anyway, so we can revisit it later.
The rest of the dependencies are already in the archive.

Architecture-specific build dependencies can be declared with architecture restrictions, as described in Policy here:
https://www.debian.org/doc/debian-policy/ch-relationships.html#relationships-between-source-and-binary-packages-build-depends-build-depends-indep-build-depends-arch-build-conflicts-build-conflicts-indep-build-conflicts-arch
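For illustration only (this is not xgrammar's actual control file, and the package names are placeholders), an architecture-restricted build dependency in debian/control looks like this:

    Build-Depends: debhelper-compat (= 13),
                   dh-python,
                   python3-all-dev,
                   python3-torch,
                   nvidia-cuda-toolkit [amd64 arm64]

The bracketed architecture list means the dependency is only required when building on those architectures and is ignored everywhere else.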

> If you can show me some good examples for this task, it would be very helpful.

The Deep Learning team has many examples of this. For example, src:pytorch
builds the CPU-only version of pytorch by default:
https://salsa.debian.org/deeplearning-team/pytorch/-/blob/master/debian/control?ref_type=heads
and a script converts the same source in-place into src:pytorch-cuda:
https://salsa.debian.org/deeplearning-team/pytorch/-/blob/master/debian/cudabuild.sh?ref_type=heads
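The core idea of such an in-place conversion is small: rename the source package, switch the binary package names, and let debian/rules enable the CUDA toolchain. A rough sketch of the pattern (this is not the actual cudabuild.sh, only an illustration; the flag file and package names are assumptions):

    #!/bin/sh
    # Convert the CPU-only packaging into the CUDA variant in place.
    set -e
    # Rename the source and binary packages in debian/control.
    sed -i 's/^Source: pytorch$/Source: pytorch-cuda/' debian/control
    sed -i 's/python3-torch/python3-torch-cuda/g'      debian/control
    # Hypothetical flag file that debian/rules could check to turn on CUDA.
    touch debian/cuda.enabled
    # Record the variant in the changelog with a version suffix.
    dch --local +cuda "Automated conversion to the CUDA build."

The real script in the repository above is the authoritative reference; this only shows the shape of the technique.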

The same method is used for src:gloo{,-cuda}, tensorpipe{,-cuda}, ggml{,-cuda}, etc:
https://salsa.debian.org/deeplearning-team/gloo
https://salsa.debian.org/deeplearning-team/tensorpipe
https://salsa.debian.org/deeplearning-team/ggml

In summary, for a package that can be built in either CPU-only mode or CUDA mode,
we prepare two source packages from the same repository.

Since xgrammar depends on pytorch, a very IMPORTANT note: generally, when you
build a package against the CPU version of a library, its dependency resolves to the CPU
version and cannot be satisfied by the CUDA version instead. That is not the case for the
real pytorch package, because I have overridden the dependency template for pytorch:

When a Python package depends on "torch", the dependency resolves to
"python3-torch | python3-torch-cuda | python3-torch-rocm":
https://salsa.debian.org/deeplearning-team/pytorch/-/blob/master/debian/python3-torch.pydist?ref_type=heads
This allows substitution among the CPU/CUDA/ROCm versions.
See the pybuild and dh_python3 documentation for details.
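For reference, a dh_python3 pydist override is a one-line mapping from a Python distribution name to the Debian dependency that dh_python3 should emit. The linked file above is the real one; an illustrative line of that shape would be:

    torch python3-torch | python3-torch-cuda | python3-torch-rocm

With an override like that, any package built with dh_python3 that declares a dependency on "torch" gets the alternation instead of a hard dependency on python3-torch alone.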

You only need to focus on the CPU and CUDA versions.
The ROCm version is handled by Cordell :-)

If a package has a deeper dependency on pytorch beyond the Python API,
the overridden shared object dependency template can help with that:
https://salsa.debian.org/deeplearning-team/pytorch/-/blob/master/debian/libtorch2.6.shlibs.in?ref_type=heads
See dh_shlibdeps and dpkg-shlibdeps for details.
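A shlibs file maps a shared library SONAME to the package dependency that dpkg-shlibdeps generates for anything linking against it. An illustrative entry (the package names here are assumptions, not copied from the template above) that lets the CPU and accelerator builds of libtorch substitute for each other could look like:

    libtorch 2.6 libtorch2.6 | libtorch2.6-cuda | libtorch2.6-rocm

The format is "library-name soname-version dependencies"; whatever the template in the repository expands to is what actually ships.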

I took a glance at xgrammar; it seems to rely only on the Python API
of pytorch. That means it can be treated as a normal Python package with
some compiled extensions. Let me know if my guess is wrong.
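If that guess holds, the packaging skeleton is the usual pybuild one. A minimal debian/rules for a Python package with compiled extensions (an assumption about how xgrammar might eventually be packaged, not a prescription) would be roughly:

    #!/usr/bin/make -f
    # Standard pybuild-based rules file.
    export PYBUILD_NAME = xgrammar

    %:
    	dh $@ --with python3 --buildsystem=pybuild

plus Build-Depends on dh-python, python3-all-dev, pybuild-plugin-pyproject, and python3-torch so the extension builds against the CPU pytorch.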

> Also, I added a link to the ITP and fixed autopkgtest for the Rust crates: https://salsa.debian.org/k1000dai/gsoc-status/-/issues . Please have a look if you have some time!

Will do so!

