
Re: Non-LLM example where we do not in practice use original training data



On Tue, May 06, 2025 at 08:36:50AM -0700, Russ Allbery wrote:
> Well, first, I continue to object to the idea that a model can be
> DFSG-free if it's trained on non-DFSG-free data. I think that makes it
> definitionally non-free. (I have read Aigars's arguments to the contrary
> and do not find them at all persuasive.)

We appear to have plenty of pre-trained models in main right now that
were apparently trained on non-DFSG-free data, which strikes me as a
violation of our current "preferred form of modification" rule.
