
Ownership transfer of some packages from Science Team to Deep Learning Team



Hi Science Team,

I want to transfer the Salsa repository ownership of the following
packages from the Debian Science Team to the Debian Deep Learning Team [1]:

 1. caffe (educational deep learning framework)
 2. onednn (Intel's CPU/SYCL-based deep learning acceleration library)

Maybe src:tensorflow should be transferred to the Deep Learning Team as
well, but I'm in no position to make that decision: I'm no longer the
maintainer of that repository, even though I was its creator. I'll leave
the decision to the de-facto maintainers.

The maintainer address of the Debian Deep Learning Team is
<debian-ai@lists.debian.org>. In other words, we are no longer reusing
the old Alioth Science Team mailing address as the address of the Deep
Learning Team. You may want to subscribe to the new mailing list!
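
For illustration, a package moving to the team would carry the new
address in its debian/control Maintainer field, roughly like this (the
Vcs fields are a sketch of the expected Salsa locations, not an actual
diff from any package):

```
Maintainer: Debian Deep Learning Team <debian-ai@lists.debian.org>
Vcs-Git: https://salsa.debian.org/deeplearning-team/caffe.git
Vcs-Browser: https://salsa.debian.org/deeplearning-team/caffe
```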

In addition, I have granted the Debian Science Team maintainer access
to all Deep Learning Team repositories on Salsa. That is, Science Team
members can treat the Deep Learning Team repositories just like those
in the Science Team.

We are also enforcing ML-Policy within the Deep Learning Team. Putting
packages under the Deep Learning Team is recommended whenever they are
covered by ML-Policy.

Could any Science Team owner help me transfer the repositories
mentioned above?

By the way, in case you haven't heard, some good news: the pytorch
package stack has passed NEW, and I'm going to migrate it from
experimental to unstable soon. I expect buster+1 to ship PyTorch 1.7.0.

I need help with the arm64 build of pytorch, if you think the arm64
build is worthwhile (I know nothing about the arm-specific dependency
libraries, and I have no arm64 hardware).

The CUDA build of pytorch is also something I'm not quite willing to
touch. Even though I use the CUDA version of pytorch through Anaconda
for research every day, the NVIDIA cuDNN license always makes me
uncomfortable. I'm open to any contribution toward pytorch-cuda, but
note that I will not maintain it. (pytorch-cuda has to enter the
contrib section.)
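
For context, a pytorch-cuda source package would land in contrib
because it build-depends on non-free bits such as nvidia-cuda-toolkit.
A hypothetical control stanza, only as a sketch and not a worked-out
packaging plan:

```
Source: pytorch-cuda
Section: contrib/science
Build-Depends: debhelper-compat (= 13), nvidia-cuda-toolkit
```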

  AMD is terrible at making their ROCm bits tidy and well-documented.
  -- pytorch-rocm (main section) is still a distant target.

  Intel's SYCL is still not merged into LLVM.
  -- pytorch-sycl (opencl, main section) is also a distant target.

  There are some developers working on Vulkan support in pytorch.
  Let's wait and see whether this becomes another reliable path to
  hardware acceleration. pytorch-vulkan should be able to enter the
  main section as well.

Ah, so much work remains to be done.

[1] https://salsa.debian.org/deeplearning-team/
    (we also have a new mailing list, debian-ai@lists.debian.org, in
    case you aren't aware of it)
