It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, is it OK?
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed?
LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing whether it can predict a number of tokens that follow. It's a big stretch to say that they're reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
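To make that concrete, here is a minimal sketch of what those memorisation probes look like. The `generate` stub is a hypothetical stand-in for whatever model API is being tested, and the character counts are arbitrary illustration values, not parameters from any actual study:

```python
import difflib

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; returns a continuation."""
    return "..."  # replace with an actual model/API call

def memorisation_probe(work: str, prefix_chars: int = 2000, probe_chars: int = 500) -> float:
    """Prompt the model with a large chunk of the work, then measure how
    closely its continuation matches the text that actually follows."""
    prefix = work[:prefix_chars]
    ground_truth = work[prefix_chars:prefix_chars + probe_chars]
    continuation = generate(prefix)[:probe_chars]
    # Character-level similarity; 1.0 would mean verbatim reproduction.
    return difflib.SequenceMatcher(None, continuation, ground_truth).ratio()
```

Note that the probe only "works" at all because the prompt already contains a significant portion of the copyrighted work.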
It's also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should Y Combinator be held responsible?
They're probably training them to refuse, but fundamentally the models are obviously too small to memorise most content; they can only do it when there are many copies in the training set. Quotation is a waste of parameters better spent on generalisation.
The other thing is that approximately all of the training set is copyrighted, because copyright is the default, even for comments on forums like the one you're reading now.
The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
You can also ask people to repeat a text and some will fail.
My point is that even if some LLMs (probably mostly older ones) fail at this, that doesn't mean the majority of future ones will. Especially since benchmarks indicate they are becoming smarter over time.
It is entirely unreasonable to prevent a general-purpose model from being distributed for the largely frivolous reason that some copyrighted works could maybe be approximated using it. We don't make metallurgy illegal because it's possible to make guns with metal.
When a model that has this capability is being distributed, copyright infringement is not happening. It is happening when a person _uses_ the model to reproduce a copyrighted work without the appropriate license. This is not meaningfully different to the distinction between my ISP selling me internet access and me using said internet access to download copyrighted material. If the copyright holders want to pursue people who are actually doing copyright infringement, they should have to sue the people who are actually doing copyright infringement and they shouldn't have broad power to shut down anything and everything that could be construed as maybe being capable of helping copyright infringement.
Copyright protections aren't valuable enough to society to destroy everything else in society just to make enforcing copyright easier. In fact, considering how it is actually enforced today, it's not hard to argue that the impact of copyright on modern society is a net negative.
If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.
Xerox already went through that lawsuit and won, which is why photocopiers still exist. The tool isn't in the wrong for being told to print out the copyrighted works. The user still had to make the conscious decision to copy that particular work. Hence, still the user's fault.
With a photocopier, you take the copyrighted work to the machine. With an LLM, you don't upload the data first; it is already in the machine. If you could get LLMs without training data (however that would work) and the user had to provide the data, then it would be OK.
You don't "upload" data to an LLM, but that's already been explained multiple times, and evidently it didn't soak in.
LLMs extract semantic information from their training data and store it at extremely low precision in latent space. To the extent original works can be recovered from them, those works were nothing intrinsically special to begin with. At best such works simply milk our existing culture by recapitulating ancient archetypes, a la Harry Potter or Star Wars.
If the copyright cartels choose to fight AI, the copyright cartels will and must lose. This isn't Napster Part 2: Electric Boogaloo. There is too much at stake this time.
One of the reasons the New York Times didn't supply the prompts in their lawsuit is because it takes an enormous amount of effort to get LLMs to produce copyrighted works. In particular, you have to actually hand LLMs copyrighted works in the prompt to get them to continue it.
It's not like users are accidentally producing copies of Harry Potter.
Helpfully, the law already disagrees. The Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. The Xerox machine (and every other printer sold today) literally leaves a paper trail to trace copies back to it.
You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.
According to the law in some jurisdictions, it is (notably in most EU Member States, and several others worldwide).
In those places, fees (a "reprographic levy") are actually included in the price of the appliance and the needed supplies, or public operators may need to pay additionally based on usage. That money goes towards funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.
Xerox is in no way singled out and discriminated against. (Yes, I know using "Xerox" generically is an Americanism.)
If I've copied someone else's copyrighted work on my Xerox machine, then give it to you, you can't reproduce the work I copied. If I leave a copy of it in the scanner when I give it to you, that's another story. The issue here isn't the ability of an LLM to produce it when I provide it with the copyrighted work as an input, it's whether or not there's an input baked-in at the time of distribution that gives it the ability to continue producing it even if the person who receives it doesn't have access to the work to provide it in the first place.
To be clear, I don't have any particular insight on whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.
You can train an LLM on completely clean data, creative-commons and legally licensed text, and at inference time someone will just put a whole article or chapter into the model and have full access to regenerate it however they like.
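To illustrate with a toy sketch (here `generate` is a hypothetical stand-in for any instruction-following model, including one trained purely on licensed data):

```python
def reproduce_from_context(generate, chapter: str) -> str:
    """The model never saw `chapter` during training, yet it can output it
    verbatim (or transformed) because the user supplied it in the prompt."""
    prompt = (
        "Here is a chapter:\n"
        f"{chapter}\n\n"
        "Now output it again, word for word."
    )
    return generate(prompt)
```

The reproduction capability here comes from the user's input, not from anything baked into the weights.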
Re-quoting the section the parent comment included from this agreement:
> > GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
It sounds to me like an LLM like the one you describe would be covered if the people distributing it put a clause in the license saying that users can't do that.
Yes, technically it is covered. But in practice nobody knows what counts as infringing in the non-literal infringement case. It all depends on the judge and context. Was this idea sufficiently original, or was it a necessity or a generic pattern? Each level of abstraction can get protection from copyright. You only find out when you sue or get sued.
I find non-literal copyright doctrines (total concept and feel; abstraction-filtration-comparison, or AFC) to be a perverse way to interpret "protected expression" as "protected abstraction". It is a betrayal of future creative activities to prop up past ones.