
Re: Bits from /me: A humble draft policy on "deep learning v.s. freedom"



On 2019-05-24 15:59, Paul Wise wrote:
> On Fri, May 24, 2019 at 1:58 AM Sam Hartman wrote:
> 
>> So for deep learning models we would require that they be retrainable
>> and typically require that we have retrained them.
> 
> I don't think it is currently feasible for Debian to retrain the
> models.

Infeasible, for sure.

> I don't think we have any buildds with GPUs yet.

Depending on the non-free NVIDIA driver is inevitable.
AMD GPUs and OpenCL are not sane choices here.

> I don't know
> about the driver situation but for example I doubt any deep learning
> folks using the nvidia hardware mentioned in deeplearning-policy are
> using the libre nouveau drivers.

No need to doubt: Nouveau will never support CUDA well,
unless NVIDIA someday rethinks everything.

Some high-end Xeon CPUs can train models as well,
and a well-optimized linear algebra library
(e.g. MKL or OpenBLAS) helps a lot. But apart from
some toy networks, CPU training generally takes at
least 10x longer to finish.
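
To make that concrete, here is a minimal sketch (assuming NumPy is
linked against the BLAS you care about) that times one large matrix
multiplication, the kind of operation that dominates training cost,
so you can compare an MKL or OpenBLAS build on a CPU:

import time
import numpy as np

np.show_config()  # prints which BLAS/LAPACK NumPy was built against

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

# Rough throughput estimate: a matmul costs about 2*n^3 floating-point ops.
gflops = 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed:.2f} s, ~{gflops:.1f} GFLOP/s")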

> The driver situation for TPUs might
> be better though?

I don't know any of the software details about TPUs.

> Either way I think a cross-community effort for
> retraining and reproducibility of models would be better than Debian
> having to do any retraining.

Sounds like a good way to go, but not today.
Let's take the lazy-evaluation approach at this
point, and see how this subject evolves and how
other FOSS communities think about it.
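
For what it's worth, here is a minimal sketch of the seed-pinning part
that such a retraining/reproducibility recipe would need, using PyTorch
purely as an illustrative framework (not something the draft policy
prescribes):

import os
import random
import numpy as np
import torch

def fix_seeds(seed=0):
    # Pin every RNG a training run touches so a retrain is repeatable
    # (given the same hardware, drivers and library versions).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds the CUDA RNGs
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed for deterministic cuBLAS
    torch.use_deterministic_algorithms(True)

fix_seeds()
# ... then build the model, load the pinned dataset, and train as usual ...

Even with all of that, bit-for-bit reproducibility across different GPUs
or BLAS builds is not guaranteed, which is part of why a cross-community
effort makes sense.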

