On 2024-10-20 20:09, M. Zhou wrote:
> Allowing "Open Source AI" to hide their training data is nothing but
> setting up a "data barrier" protecting the monopoly, preventing
> anyone other than the first party from reproducing or replicating an AI.
> If passed, the OSI will be making a historic mistake towards the FOSS
> ecosystem. I had lots of comments in their forum regarding this [4].
>
> Instead of "sign to endorse", I sign to strongly oppose it.
I fully concur.
Are [3, 4] the only options we have to voice this concern?
Those are the feedback channels mentioned in the announcement, yep. I'm not aware of other channels.
Although I'm confident that most Debian Project Members would also
oppose this, and an official statement by the Project would be notable, I
don't think there is enough time before October 28th to organize something.
(Funnily enough, this *is* something that would be worth a GR, I think).
In the absence of a GR (something I can't propose as a non-DD) I've been wondering about ways to communicate the problem meaningfully.
Here's a thought experiment:
Consider an AI system available to all 50 residents of Quxbarnia, 5 of whom are elected politicians and 5 of whom are business leaders, that decides whether daily transit-delay information should be sent or withheld for people in each of those two categories. It could be trained using the following method: 'dd if=/dev/urandom of=training.dat bs=1 count=4096', and the complete source code for the training program, plus an API to query/integrate with the model, is provided.
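(For the avoidance of doubt, here is the "training" step spelled out as a runnable sketch; the filename training.dat is just the placeholder from the example above:)

```shell
# Fill training.dat with 4096 bytes of random data --
# the entirety of the hypothetical "training data".
dd if=/dev/urandom of=training.dat bs=1 count=4096

# Confirm the resulting file is exactly 4096 bytes.
wc -c < training.dat
```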
The creators/authors of the system, Quuxcorp (please note the double-u), who operate the official hosted instance of the AI available to all Quxbarnians, are based in a neighbouring country.
Would the system I describe qualify as Open Source AI under the rc1 text, or are adjustments to it required?