Thorsten Glaser <tg@debian.org> writes:

> Counter-Proposal -- Interpretation of DFSG on (AI) Models (v3)
> ==============================================================

I don't know whether further seconds are needed for each new version
of a proposal, but for clarity I'll second this version too.

>> I realized that I have one additional generic concern: You claim
>> that models are a derivative work of their training input.
>
> Yes. This is easily shown, for example by looking at how they work,
> https://explainextended.com/2023/12/31/happy-new-year-15/ explained
> this well, and in papers like “Extracting Training Data from ChatGPT”.
> It is a sort of lossy compression that has been shown to be
> sufficiently un-lossy that recognisable “training data” can be
> recalled, and the operators’ “fix” was to add filters to the
> prompts, not to make recall impossible, because they cannot.

I don't think this question is legally settled or socially agreed
upon, and I think it will remain an area of conflict for many years.
I also don't mind this text in your proposal, because I happen to
agree with it. But I think it would be possible to disagree on that
point (and many people do) and still agree with the rest of your
proposal about what Debian should do in this situation.

/Simon
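
As a minimal sketch of the recall effect described in the quoted
text above (assuming nothing beyond the Python standard library; a
character-level n-gram model stands in for a neural network here, so
this is an analogy for the memorization behaviour, not how an LLM is
implemented): a small statistical model fit on a short corpus will
reproduce a fragment of its training text verbatim when prompted
with a prefix of it.

  # A character-level n-gram model as a stand-in for a neural LM:
  # it "compresses" the corpus into context -> follower statistics,
  # yet a training sentence can be recalled verbatim from a prompt.
  from collections import defaultdict

  ORDER = 8  # context length in characters

  def train(corpus: str) -> dict:
      """Map each ORDER-character context to the characters seen after it."""
      model = defaultdict(list)
      for i in range(len(corpus) - ORDER):
          model[corpus[i:i + ORDER]].append(corpus[i + ORDER])
      return model

  def generate(model: dict, prompt: str, length: int = 80) -> str:
      """Greedily continue a prompt using the trained model."""
      out = prompt
      for _ in range(length):
          followers = model.get(out[-ORDER:])
          if not followers:
              break
          # Pick the most frequent continuation (greedy decoding).
          out += max(set(followers), key=followers.count)
      return out

  training_data = (
      "The quick brown fox jumps over the lazy dog. "
      "Debian is a free operating system developed by volunteers. "
  )

  model = train(training_data)
  # Prompting with a fragment of the training set recalls the rest:
  print(generate(model, "Debian is"))
  # -> Debian is a free operating system developed by volunteers.

Real extraction attacks, such as the one in the paper cited above,
recover training data from neural models at far larger scale, but
the underlying point is the same: the model retains enough of its
input that parts of it can be reproduced verbatim.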