
Re: A policy on use of AI-generated content in Debian



Stefano Zacchiroli <zack@debian.org> writes:

> (1) You are free to use AI tools to *improve* your content, but not to
>     create it from scratch for you.

>     This point is particularly important for non-native English speakers,
>     who can benefit a lot more than natives from tool support for tasks
>     like proofreading/editing. I suspect the Debian community might be
>     particularly sensible to this argument. (And note that on this one
>     the barrier between ChatGPT-based proofreading and other grammar/
>     style checkers will become more and more blurry in the future.)

This underlines a key point to me, which is that "AI" is a marketing term,
not a technical classification.  Even LLMs, a more technical
classification, can be designed to do different things, and I expect
hybrid models to become more widespread as the limitations of trying to do
literally everything via an LLM become more apparent.

Grammar checkers, automated translation, and autocorrect are all useful
tools in their appropriate place.  Some people have moral concerns about
how they're constructed and other people don't.  I'm not sure we'll have a
consensus on that.  So far, at least, there don't seem to be the sort of
legal challenges for those types of applications that there are for the
"write completely new text based on a prompt" tyle of LLM.

Just on a personal note, I do want to make a plea to non-native English
speakers: please don't feel like you need to replace your prose with
something generated by an LLM.

I don't want to understate the benefits of grammar checking, translation,
and other tools, and I don't want to underestimate the frustration and
difficulties in communicating in a non-native language.  I think ethical
tools to assist with that are great.  But I would much rather puzzle out
odd or less-than-fluent English, extend assumptions of good will, and work
through the occasional misunderstanding, if that means I can interact with
a real human voice.

I know, I know, supposedly this is all getting better, but so much of the
text produced by ChatGPT and similar tools today sounds like a McKinsey
consultant trying to sell war crimes to a marketing executive.  Yes, it's
precisely grammatical and well-structured English.  It's also sociopathic,
completely soulless, and almost impossible to concentrate on because it's
full of the slippery phrases and opaque verbosity of a politician
trying to distract from some major scandal.  I want to talk to
you, another human being, not to an LLM trained to sound like a corporate
web site.

-- 
Russ Allbery (rra@debian.org)              <https://www.eyrie.org/~eagle/>
