
Re: Bits from /me: A humble draft policy on "deep learning v.s. freedom"



>>>>> "Andy" == Andy Simpkins <andy@koipond.org.uk> writes:

    Andy> wouldn't put that in main.  It is my belief that we consider
    Andy> training data sets as 'source' in much the same way....  /Andy

I agree that we consider training data sets as source.

We require the binaries we ship to be buildable from source.
We typically, but not always, require that someone we trust rebuild
those binaries (I'm sure I can go find some PDFs that are not rebuilt).
So for deep learning models we would require that they be retrainable
and typically require that we have retrained them.
Reproducibility is nice, but it is not a requirement at this time.
Reproducibility makes it easy for us to convince ourselves that the
model can be retrained from the training data set.
It's the best and simplest way to do so, but it's not the only way.
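
To illustrate (a minimal sketch only; train_model and the file layout
here are hypothetical placeholders, not any existing Debian tooling):
if the training procedure is fully reproducible, convincing ourselves
that a model is rebuildable from its training data reduces to retraining
and comparing digests.

import hashlib
import random


def train_model(data_path, seed):
    # Placeholder trainer: in reality this would run the package's
    # documented training procedure with all randomness pinned to `seed`.
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(10)]  # stand-in for learned parameters
    return repr(weights).encode()


def digest(blob):
    return hashlib.sha256(blob).hexdigest()


shipped = train_model("training-data/", seed=0)  # pretend this is the shipped model
rebuilt = train_model("training-data/", seed=0)  # independent retraining from source

# Matching digests show the model is rebuildable from its training data;
# if they differ, retrainability has to be demonstrated some other way.
print(digest(shipped) == digest(rebuilt))

Without bit-for-bit reproducibility, we'd fall back on inspecting the
training procedure and the retrained model by other means.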

--Sam

