
Re: Bits from /me: A humble draft policy on "deep learning v.s. freedom"



Hi Adam,

On 2019-05-24 10:19, Adam Borowski wrote:
> 
> I'm not so sure this model would be unacceptable.  It's no different than
> a game's image being a photo of a tree in your garden -- not reproducible by
> anyone but you (or someone you invite).  Or, a wordlist frequency produced
> by analyzing results of a google search.
> 
> At some point, the work becomes an entity on its own rather than the result
> of processing some dataset.
> 
> A more ridiculous argument: the input is a project requirement sheet, the
> neural network being four pieces of wetware, working for 3 months.  Do you
> insist on _this_ being reproducible, or would you accept the product as free
> software?  Sufficiently advanced artificial intelligence might be not that
> different.

This is exactly difficult question #3. The definition of ToxicCandy
was written with future cases in mind, and we currently lack a
concrete example of this kind. (A ToxicCandy model that nobody plans
to upload to the archive is not a relevant case.)

Let me first make the definition of the safest area clear. Only then
should we try to explore the more complicated cases ...

