
Re: Rethink about who we (Debian) are in rapid dev cycle of deep learning



Thank you for your work in this area, and wise thoughts, as always!

"M. Zhou" <lumin@debian.org> writes:

> My conclusion is: "Users with special demands can take care of themselves,
> as we are unable to go far on our own." In terms of GPU computing, Debian
> is providing a great system as a foundation for development and applications.
>
> […]
>
> Based on my interpretation, it means Debian might step aside from the world of
> AI applications to fully exercise software freedom. It's a pity, but Debian's
> major role in the whole thing is a solid system.

I understand how you reach these conclusions, both from the POV of
hardware driver non-freedom and from the POV of the toxic candy problem
of trained models. And while I agree with your conclusions, I do worry
about the prospect of the lines blurring.

It's not unreasonable to expect that AI models will become standard
components of certain classes of software relatively soon. No matter our
position on the matter, I suspect it will affect lots of "non-special",
"ordinary" software sooner rather than later. That is not to say that
this should change our position – it is just to say that I think we
should worry.

What do we do if/when an image compression scheme involving a deep
learning model becomes popular? What do we do if/when every new FOSS
game ships with an RL agent that takes 80 GPU-weeks of training to
reproduce (and upstream supports NVIDIA only)? What do we do when every
new text editor comes with an autocompleter based on some generative
model that upstream trained on an unclearly licensed scraping of a
gazillion webpages?


-- Gard
