
Re: plan of deep learning team for next stable release?



On 11/29/20 4:01 PM, Mo Zhou wrote:
> Agreed. Now that the cpu-only version of pytorch has landed in the
> archive, the -cuda version should not be hard to prepare.

Great!

>> Our buildds currently don't have the necessary hardware. I believe we
>> should strive to change that. The funds are there (as recently addressed
>> by the DPL) and accelerated computing is developing into a key area that
>> Debian cannot miss out on, in my opinion.
> 
> We could make good use of our funds, but I'm not sure whether running
> non-free software on non-free hardware for our specific purpose would be
> accepted by the community. Maybe I can carefully raise a question on
> -devel once we have reached some kind of consensus.

That's what I had in mind as well. (I think debian-science should also
be CCed, as most consumers of accelerated computing are bound to be on
that list.)

>> With regards to software/configuration, I think the way to go would be
>> to request new build profiles (one per flavor), so that B-Ds only get
>> installed where necessary, and packages only get built where possible.
> 
> Sounds like changes in apt/dpkg will be required. I'd recommend the
> approach used for caffe; see src:caffe and src:caffe-cuda (removed from
> unstable).

I'll take a look, thanks.

>> The elephant in the room is, of course, CUDA. It's non-free so that will
>> irk a lot of people, but it's also the de facto standard, and I don't
>> see what alternative we have. People needing accelerated computing today
>> will rather leave behind Debian than CUDA.
> 
> This is indeed a good question to raise again. Without CUDA acceleration
> I will not be able to use the pytorch compiled for Debian in my own
> research work, and the use case for the cpu-only version is undoubtedly
> limited.

Like anyone else needing pytorch.

The simple fact of the matter is that Debian users who own an Nvidia
card and need pytorch (or pure CUDA, as I do)
  (1) will already have the Nvidia driver and CUDA installed anyway, and
  (2) will then simply pip install or conda install pytorch, rather than
      use the non-CUDA version from our archive.

So if the Project rejects extending support for CUDA, not only would
that have absolutely zero effect on CUDA use, it would actually also
torpedo our own version of pytorch. That's hardly sensible.
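
For illustration, here is a minimal sketch (assuming a Debian pytorch
package reports its CUDA support the same way the upstream wheels do) of
the check such users run before deciding whether our package is of any
use to them:

    # Rough sketch: verify whether a pytorch build can actually use a GPU.
    # Works the same for upstream wheels or a hypothetical -cuda package.
    import torch

    print(torch.__version__)          # upstream cpu wheels report e.g. "1.7.0+cpu"
    print(torch.version.cuda)         # None on a cpu-only build
    print(torch.cuda.is_available())  # False without the Nvidia driver and CUDA
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))

On a cpu-only build, torch.version.cuda is None and no GPU is ever
visible, which is precisely why those users fall back to pip or conda.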

