Bits from /me: Difficulties in Deep Learning Framework Packaging
This message is neither good news nor a request for help. I'm writing
to share some observations about deep learning framework packaging,
after re-evaluating the status of TensorFlow's latest build
system. These conclusions are drawn from failures rather than
successes. That said, they should be helpful to future maintainers
of similar packages, and you will probably recognize in them some of
my original motivations for DUPR and SIMDebian.
In Debian's context, maintainers have to face three obstacles:
1. License. Unfortunately, the de facto dominant performance library is
cuDNN. I'd say no serious user would use a deep learning framework without
cuDNN or TPU acceleration, and maintaining a pile of contrib or non-free
packages is not a good experience in Debian. Packaging for cuDNN is available
under Salsa:nvidia-team, but the plan to upload it was aborted
because its license looks too scary.
2. ISA baseline. If you remember SIMDebian, or some of my motivations
for DUPR, it is easy to understand how the absence of SIMD
code hurts critical computational performance. People provided
helpful suggestions on this point, including ld.so tricks and GCC
features that allow run-time code selection according to CPU
capability. The ld.so trick would bloat the resulting .deb packages,
but it is the most applicable solution. In contrast, patching a million
lines of TensorFlow code to enable the "function attributes" feature
is probably impossible for a volunteer.
3. Build system. Look at the build systems of TensorFlow and PyTorch:
they are volatile due to the fast pace of development. Specifically,
TensorFlow's build system, bazel, is very hard to package for Debian,
and a significant amount of patching is still required to prevent
bazel from downloading ~3.0 GiB of ??? before building TensorFlow.
PyTorch's setup.py + cmake + shell build system also requires some patching.
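For the download problem specifically, one mitigation is Bazel's
--distdir option, which makes it look for pre-fetched external archives
in a local directory before touching the network. A sketch under my own
assumptions (the directory path and build target are illustrative, not
taken from the actual Debian packaging):

```shell
# Hypothetical sketch: pre-seed Bazel's external downloads so the build
# can run offline. --distdir tells Bazel to look for archives (matched
# by file name and checksum) in this directory before trying any URL.
mkdir -p "$HOME/bazel-distdir"
# ...copy the pre-fetched tarballs (e.g. shipped alongside the Debian
# source package) into $HOME/bazel-distdir here...

# Guarded so the sketch is harmless on machines without Bazel:
if command -v bazel >/dev/null 2>&1; then
    bazel build --distdir="$HOME/bazel-distdir" //tensorflow:libtensorflow.so
fi
```

Even so, someone still has to collect, verify, and keep updating that
pile of archives for every new upstream release, which is the real cost.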
So I recommend that any future contributor who is about to deal with deep
learning packages carefully assess the three aspects above. To some
extent I envy other distros such as Arch and Gentoo, since they
have already made great progress in this field.
Some time ago (maybe several months?) I told the Debian Science team
that I was aborting deep learning framework development. Today Paul Liu
poked me and asked about the status of src:tensorflow (in experimental).
I spent several hours re-evaluating the situation, and finally decided
to fully give up and write down the points above, because I'm not
willing to undertake the workload any more. At the same time, I filed
orphan bugs against tensorflow and several of its dependencies, except
for src:nsync, which contains a neat set of cmake files. I plan to
convert those orphan bugs into RM bugs after a year if no one touches them.
I do research with neural networks and use these frameworks
frequently. Anaconda and pip are already good enough for me, so DUPR
is the best choice for me if I want some .deb packages.
This time I'm really giving up all related efforts, and shall never
touch them again. I don't feel regret, even though these points are
tightly connected to some of my Debian activities. That said, I'm
still willing to offer personal opinions about related packaging
work, machine learning datasets, pretrained neural networks, etc.
Well, this result looks bad. Let's hope for a sunrise.
 Please take extra care in computational performance.
 (non-free) https://developer.nvidia.com/cudnn
 Business groups, researchers.
 Google's computation acceleration hardware.
 man ld.so -> search for "hardware capabilities"
 info gcc "Function Attributes";
See Guillem's recent reply to "SIMDebian: ..." (email@example.com)
 I don't know what they are. They are more than build-deps.
 They are the top-2 frameworks.
 What a relief.
 My ongoing work on intel-mkl / BLAS / LAPACK is unrelated.
I still have strong interest in many other aspects of Debian development.