
Re: Proposal -- Interpretation of DFSG on Artificial Intelligence (AI) Models



On Tue, 13 May 2025 at 18:53, Russ Allbery <rra@debian.org> wrote:
>
> Aigars Mahinovs <aigarius@gmail.com> writes:
>
> > This was in response to Russ articulating that: "I don't work on free
> > software because I want to make something easier for Google's LLM. I
> > work on free software because I want to give freedom and control to
> > human beings."
>
> > The false assumption here being that making "something easier for LLMs"
> > will only benefit Google (who are nowhere near top in terms of AI
> > development, btw) and not "human beings", which quite obviously fails to
> > take into account any freedom and control that an LLM *does* in fact give
> > its users, who are also human beings.
>
> Aigars, it would be a lot easier to have this conversation with you if you
> pay somewhat closer attention to what other people are really arguing.

I did. And what you wrote is exactly what I responded to.

-----------

> Russ Allbery <rra@debian.org> wrote:
> Matthias Urlichs <matthias@urlichs.de> writes:
>
> > The problem is that all those missing factors are destined to go
> > un-missing — and then what? We can't base our rules on biological
> > exceptionalism.
>
> Why not? The entirety of law, politics, and civilization is designed by

Here you are expressing support for the biological exceptionalism
approach to copyright and creation, as discussed earlier in the thread -
that only a human may learn from a copyrighted work and produce a
non-copyrighted new work, and a machine can never do that. Or, in moral
terms, that only a human is able to create something new and useful
(after learning from all those that came before them), but a machine
cannot.

> humans, for humans. Free software is a movement of humans that attempts to
> provide other humans with specific freedoms and guarantees around the

Reinforcing human-centredness here. Ignoring, for example, commercial
use of software by companies, which explicitly must be allowed by the
Debian Social Contract.

> software they use. I don't work on free software because I want to make
> something easier for Google's LLM. I work on free software because I want
> to give freedom and control to human beings.

Juxtaposing "Google's LLM" vs human beings.

> We're the ones building the system. Why should we not design the system
> for us, to help us, to make our lives better?

Us - humans, us - developers, us - Debian, us - ???

> The LLMs are by and large the creations of corporations because they have
> collective resources that dwarf the resources of nearly all individual
> humans. Where this line of reasoning goes in practice is to (further)
> create a legal system that treats corporations and their tools as the most
> important actors and humans as secondary material for corporations to
> consume. We already have too much of that.

"LLMs = corporate tools and are bad". Reinforcing the line that rules
*should* be made against "Google's LLMs".

> We *absolutely* should base our rules on what's best for human beings, not
> corporate constructs. That is the entire point of the free software
> movement.

Again - "human beings" vs "corporate constructs". And we should be
making rules to benefit one and not the other. And we know from the
previous lines that by "corporate constructs" you mean "Google's LLMs".

-----------

So where in all of this was any consideration given to the human beings
who are users of those LLMs, the human beings whose freedoms are enabled
by the LLMs? Was that left as an "exercise for the reader"?
Your whole email message was a juxtaposition of "human beings" vs
"LLMs" and was cheered on as such. It somehow even led one person to
call for deleting "non-free". Presumably because real human beings
only need main, and non-free is a corporate tool?
If that was a wrong interpretation of your email, maybe it would have
been more constructive to respond to that?

> first you launched into extended tours of current legal thinking about
> this for people who could not possibly care less what the law says because
> their arguments were ethical and moral and law is not a reliable guide to

We are talking on a mailing list. More people read it than just the
one person being replied to. And many of them do care about the legal
situation.
As for ethical and moral arguments ... I have seen very few of those
on this list. Mostly accusations of ignoring some abstract morals,
but never any that *actually*, *explicitly* detail the argumentation.

> either, and now you're trying to pick a fight with me over the message
> where I was *actively agreeing* with your motives.

In what world is saying that we should be making rules to make things
harder for Google's LLMs "actively agreeing with my motives"?

In the rest of the email I am directly responding to you spending a
*lot* of time making assumptions and allegations about what my
positions, my motives, and my thinking might be, and how I have been
"duped into" some kind of "techno-populism" that is "seriously hurting
the lives of people".

It would be a lot easier to have a conversation with you if you would
spend more time articulating and detailing *your own* position,
instead of guessing at the positions of others (and then talking
down to those positions). Ideally in the terms that actually matter to
you.

If your key objections to having LLMs in Debian are based on morals and
ethics - then why don't you formulate *what* those objections
*actually* are. What actual and specific consequences are you
expecting to happen if LLMs are considered to be free according to the
OSI definition? How do you see that as different from the current
situation, and from the situation that has already existed for decades?
Why do you prefer setting up rules that would only allow functionally
useless public domain LLMs into Debian? Does Debian have a chance
to steer LLM development towards models that could actually provide
useful freedoms to their users and to developers of derivatives? And
why should we choose not to do so?

These are not abstract or rhetorical questions. If you are choosing to
represent an ethical and moral position, then these kinds of questions
are foundational to such a position. It is a *much* more expansive and
complex thing to argue than a relatively straightforward legal
definition. But it also cannot really be skipped or assumed that we all
agree on the answers to the above.
-- 
Best regards,
    Aigars Mahinovs
