
Re: A policy on use of AI-generated content in Debian



On Thu May 2, 2024 at 9:21 PM -03, Tiago Bortoletto Vaz wrote:
> Right, note that they acknowledged this policy is a work in progress. Not
> perfect, but 'something needed to be done, quickly'. It's hard to find a
> balance here, but I kind of share this sense of urgency.
>
> [...]
>
> This point resonates with problems we might be facing already, for instance
> in the NM process and also in DebConf submissions (there's no point in going
> into details here because so far we can't prove anything, and even if we
> could, of course we wouldn't bring any of those involved into the public
> arena). So I'm actually more concerned about LLMs being mindlessly applied
> in our communication processes (NM, bts, debconf, irc, planet, wiki,
> website, debian.net stuff, etc) than someone using AI-assisted code in our
> infra, at least for now.
>

Hi Tiago,

It seems you have more context than the rest of us, which gives you this
sense of urgency; others who don't have that same information can't share
it.

If I had to guess from the little context you shared, I would say someone
is going through the NM process using an LLM: answering questions with it
and passing all their communications through it.

In that case, there's even less point in making a policy about it, in my
opinion. As you stated, you can't prove anything, and ultimately it would
land in the hands of the people approving submissions or NMs to judge
whether the person is qualified. And you can't block LLM-generated
communications when you can't even prove the content is LLM-generated.
How would such a policy be enforced?

And I doubt a statement would do much either. What would it communicate?
"Communications produced by LLMs are troublesome"? I don't see much
substance for a statement of that sort.

OTOH, an LLM-assisted rewrite of your own content may help non-native
English speakers write better and communicate more effectively. Hence,
saying "communications produced by LLMs are troublesome" would be
troublesome itself: how can you, as a receiver, tell whether the content
is the sender's own or someone else's?

Some may say "a statement could at least serve as a pointer: 'these are
our expectations regarding the use of AI'", but ultimately it is in the
hands of those judging to filter submissions out or not. And if those
judging can't even prove whether AI was used, what's the point?

I can't see the point of "something needs to be done" without clear
reasoning about what that something is expected to achieve.

--Jose

