The hero we need, but not the hero we deserve.
The issue is that every CS master's student and AI researcher knows how to build a SOTA LLM. But only a few companies have the resources.
The process:
(1) steal as much data from the internet as possible (data is everything)
(2) raise incomprehensible amounts of money
(3) find a location where you can take over the energy grid for training
(4) put a black box around it so nobody can see the weights
(5) charge users $$$ to use it
(6) retrain models with user session data (opt-in by default)
(7) peek at how users are using it, (maybe) change policies to stop them from using it that way, and (maybe) rapidly develop features for that use case.
(Sorry, that last one is jaded and not fair; it's just included to give you a picture of what could be happening with this sort of tech.)
The entire premise of the product is "built on the backs of anyone and everyone who has ever published a work."
Do any products exist which are not built on uncompensated work of other people in the past?
Generally speaking, societies do better when knowledge is shared and not hoarded.
Hoarding knowledge via legal constructs is great at concentrating wealth to the hoarder at the expense of everyone else.
We should restore copyright to its original term lengths.
I agree with the stance of Anthropic et al that these models should be built with all possible information.
I agree with the stance of the FSF that the resulting models should be as freely usable/available as possible.
These companies do even better because we're not allowed to share the knowledge (read: illegally copy protected works) but they are.
It would be nice if members of the class could vote to force a case to trial. For the typical token settlement amount, I’m sure many would rather have the precedent-setting case instead.
But whether you can actually be compelled to do that isn't well tested in court. Challenging whether the GPL is enforceable in that way leads you down the path of arguing that you had no valid license at all, and for past GPL offenders that would have been the worse outcome. AI companies could change that.
This is true when talking about the infringement of the copyrights of others. But when discussing the infringement of GPL copyleft, making a potentially infringing artifact publicly available likely satisfies the license conditions.
The evil is that this case was settled, and before being settled was decided in a way contrary to all previous copyright decisions. The courts decided that rap records had to clear every single sample, thereby basically destroying the art form, but now you can literally feed every book into a blender, piece another book together out of the pieces, and sell it.
Hip-hop when it peaked with the Bomb Squad was such a frenetic mix of so many recognizable, unrecognizable, and transformed sources that it doesn't resemble anything that was made after the decisions against Biz Markie and De La Soul. Afterwards, you just licensed one song, slightly cut it up, and rapped over it. It was just a new way to sell old shit to young people unfamiliar with it.
Now you can literally just train a machine on the same stuff, and it's legal. A machine transformation was elevated over human creativity, simply because rich people wanted it.
Ignoring the fact that the statement doesn't talk about FSF code in the training data at all, [0] are you sure about that? From the start of the last of the statement's three paragraphs:
Obviously, the right thing to do is protect computing freedom: share complete training inputs with every user of the LLM, together with the complete model, training configuration settings, and the accompanying software source code. Therefore, we urge Anthropic and other LLM developers that train models using huge datasets downloaded from the Internet to provide these LLMs to their users in freedom.
This seems to me to be consistent with the FSF's stance of "You told the computer how to do it. The right thing to do is to give the humans operating that computer the software, input data, and instructions that they need to do it, too.".[0] In fact, it talks about the inclusion of a book published under the terms of the GNU FDL, [1] which requires distribution of modified copies of a covered work to -themselves- be covered by the GNU FDL.
Saying nothing is an option. It is very possible (and the FSF has done it) to put yourself into a weaker position by saying something.
You don’t have to lie, but you don’t have to volunteer, unprompted, that you don’t have a hand to play, either.
And then there is the chilling effect. If the FSF can't enforce their license, who is going to sue to overturn the precedent? Large companies, publishers, and governments have mostly all done deals with the devil now. Is Joe Blow, random developer, going to get a strip-mall lawyer and overturn this? Seems unlikely.
First, unless you can point to regurgitation of memorized code, you're not able to make an argument about distribution or replication. This is part of the problem most publishers are having with prose text and LLMs. Modern LLMs don't memorize Harry Potter the way GPT-3 did. The memorization older models showed came from problems in the training data: Harry Potter, and people writing about Harry Potter, were extraordinarily over-represented. It's similar to how, with Stable Diffusion, you could prompt for anything in the region of "Van Gogh's Starry Night" and get it, since it was in the training data 50-100 different ways. You can't reliably do this with Opus or GPT-5. If they're not redistributing the code verbatim, they're not in violation of the license. One could argue that the models produce "derivative works", but...
The derivative-works argument is inapt. The point of it is to disrupt someone's end-run around the license by saying that building on top of GPL code is not enough to non-GPL it. We imagine this will still work for LLMs because of the GPL's virality: I can't enclose a critical GPL module in non-GPL code and not release the GPL code. But the models aren't DOING THAT. They're not reaching for XYZ GPL'd project to build with. They're vibing out a sparsely connected network of information about literally trillions of lines of software. What comes out is a mishmash of code from here and there, and it only coincidentally resembles GPL code, when it does. To make this argument work, you need a theory of how LLMs are trained and operate that supports it. Regardless of whether such a theory exists, in court you'd need to show that your theory was better than the company's expert witness's theory. Good luck.
Second, infringement would need discovery to uncover and would be contingent on user input. This is why the NYT sued for deleted user prompts to ChatGPT--the plaintiffs can't show in public that the content is infringing, so they need to seek discovery to find evidence. That's only going to work in cases where you survive a motion to dismiss--which is EXACTLY where a few of these suits have failed. You need to show first that you can succeed on the merits, then you proceed. That will cut down many of these challenges since they just can't show the actual infringement.
Third, and I think this is the most important: the license protections here are enforced by *copyright*. For copyright it very much matters whether something is lifted verbatim or modified. It is not like patent protection: for copyright, things like clean-room design have been shown to matter to real courts on real matters. In further contrast to patents, copyright doesn't care if the outcome is close. That's very much a concern for patents. If I patent a gizmo and you produce a gizmo that operates through nearly identical mechanisms to those I patented, then you can be sued; they don't need to be exact. If I write a novel about a boy wizard with glasses who takes a train to a school in Scotland and you write a novel about a boy wizard with glasses who takes a boat to a school in Inishmurray, I can't sue you for copyright infringement. You need to copy the words I wrote and distribute them for it to rise to a violation.
If you try any modern LLM, you will find that you can. Easily [0], reliably [1], consistently [2]. All these examples are with models released in 2025/26.
[0] https://arxiv.org/html/2601.02671
What is the real copyright risk of there being an arcane procedure to sometimes recover most of a text? So far it’s nothing. Which is what I’m saying. Pragmatically this is a loser of an argument in a court room. It is too easy for the chain of reasoning to be disrupted and even undisrupted the argument for model maker liability is attenuated.
I have, on many occasions, gotten an LLM to do just this. It's not particularly hard. In the most recent case, Google's search-bar LLM happily regurgitated a DigitalOcean article as if it were its own output. Searching for some strings in the comments located the original page, and it was a 95% match between origin and output.
> The memorization older models showed came from problems in the training data,
And what proof do you have that they "fixed" this? And what was the fix?
> harry potter and people writing about harry potter
I'm not sure that's how you get GPT to reproduce upwards of 85% of Harry Potter novels.
> Second, infringement would need discovery to uncover and would be contingent on user input.
That's not at all how copyright infringement works. That would be if you wanted to prove malice and get triple damages. Copyright infringement is an exceptionally simple violation of the law. You either copied, or you did not.
> For copyright it very much matters if something is lifted verbatim vs modified.
Transformation is a valid defense for _some_ uses. It is not for commercial uses. Using LLM generated code for commercial purposes is a hazard.
We have yet to see a single judgment come down against a model maker for distributing the gist of content. We have yet to see a single judgment come down against a model maker for infringement at all.
Copyright is just an inapt tool here. It’s not going to do the job. It is not as though big interests have not tried to use this tool. It just doesn’t reflect what’s actually happening and it’s going to lose again and again.
We can imagine a theoretical legal regime where what is done with large language models counts as copyright infringement, we just don’t live in a world where that regime holds.
> "Therefore, we urge Anthropic and other LLM developers that train models using huge datasets downloaded from the Internet to provide these LLMs to their users in freedom"
They don't have the rights to distribute the training data.
The rephrased¹ title "FSF Threatens Anthropic over Infringed Copyright: Share Your LLMs Free" certainly doesn’t dramatise enough how odious an act it can be.
¹ Original title is "The FSF doesn't usually sue for copyright infringement, but when we do, we settle for freedom"
This is the reason why AI companies won't let anyone inspect which content was in the training set. It turns out the suspicions of many copyright holders (including the FSF) were true (of course).
Anthropic and others will never admit it, which is why they wanted to settle rather than risk going to trial. AI boosters will obviously continue to gaslight copyright holders into believing nonsense like "It only scraped the links, so the AI didn't directly train on your content!", or "AI can't see like humans; it only sees numbers, binary, or digits", or "The AI didn't reproduce exactly 100% of the content, just like humans do when tracing from memory!".
They will not share the data-set used to train Claude, even if it was trained on AGPLv3 code.
(Edit: In the event of it being changed to match the actual article title, the current subject line for this thread is " FSF Threatens Anthropic over Infringed Copyright: Share Your LLMs Freel")
FSF licenses contain attribution and copyleft clauses. It's "do whatever you want with it provided that you X, Y and Z". Just taking the first part without the second part is a breach of the license.
It's like renting a car without paying and then claiming "well you said I can drive around with it for the rest of the day, so where is the harm?" while conveniently ignoring the payment clause.
You may be confusing this with a "public domain" license.
I used to be on the FSF board of directors. I have provided legal testimony regarding copyleft licenses. I am excruciatingly aware of the difference between a copyleft license and the public domain.
Then why did you say "no harm was caused"? Clearly the harm of "using our copylefted work to create proprietary software" was caused. Do you just mean economic harm? If so, I think that's where the parent comment's confusion originates.
The restrictions fall not only on verbatim distribution but on derivative works too. I am not aware whether model outputs are settled to be or not to be (hehe) derivative works in a court of law, but that question is at the very least very much valid.
> the district court ruled that using the books to train LLMs was fair use but left for trial the question of whether downloading them for this purpose was legal.
The pipeline is something like: download material -> store material -> train models on material -> store models trained on material -> serve output generated from models.
These questions focus on the inputs to the model training; the question I have raised focuses on the outputs of the model. If [certain] outputs are considered derivative works of input material, then we have a cascade of questions about which parts of the pipeline are covered by the license requirements. Even if the upstream parts of this simplified pipeline are considered legal, it does not imply that the rest of the pipeline is compliant.
Or is the LLM going to regurgitate the same content with zero attribution, and shift all the traffic away from the original work?
When viewed in this frame, it is obvious that the work is derivative and then some.
Licences like AGPL also don't have redistribution as their only restriction.
Arguably, the use of the code in the Stack Overflow question and answer is fair use.
The problem occurs not when someone reads the Q&A with the improperly licensed code, but rather when they copy that code verbatim into their own non-GPL product and distribute it without adherence to the GPL.
It's the last step - some human distributing the improperly licensed software that is the violation of the GPL.
This same chain of what is allowed and what is not is equally applicable to LLMs. Providing examples from GPL licensed material to answer a question isn't a license violation. The human copying that code (from any source) and pasting it into their own software is a license violation.
---
Some while back I had a discussion with a Swiss developer about the indefinite article used before "hobbit" in a text game. They used "an hobbit" and in the discussion of fixing it, I quoted the first line of The Hobbit. "In a hole in the ground there lived a hobbit." That cleared it up and my use of it in that (and this) discussion is fair use.
If someone listening to that conversation (or reading this one) thought that the bit I quoted would be great on a T-shirt, and then printed that up and distributed it, that would be a copyright violation.
Google's use of thumbnails for images was found to be fair use. https://en.wikipedia.org/wiki/Perfect_10,_Inc._v._Amazon.com...
The Ninth Circuit did, however, overturn the district court's decision that Google's thumbnail images were unauthorized and infringing copies of Perfect 10's original images. Google claimed that these images constituted fair use, and the circuit court agreed. This was because they were "highly transformative."
If I were to then take those thumbnails from a Google image search and distribute them as an icon library, I would be guilty of copyright infringement. I believe that Stack Overflow, Google Images, and LLM models and their output constitute examples of transformative fair use. What someone does with that output is where copyright infringement happens.
My claim isn't that AI vendors are blameless but rather that in the issue of copyright and license adherence it is the human in the process that is the one who has agency and needs to follow copyright (and for AI agents that were unleashed without oversight, it is the human that spun them up or unleashed them).
> If what you do with a copyrighted work is covered by fair use it doesn't matter what the license says - you can do it anyway.
How is it that contracts can prohibit trial by jury but can't prohibit fair use of copyrighted work? Is there a list of things a contract is and isn't allowed to prohibit, with explanations/reasons for them?
It's also relevant that copyright (and fair use) is federal law, contracts are state law and federal law preempts state law.
"Sam Williams and Richard Stallman's Free as in freedom: Richard Stallman's crusade for free software"
"GNU Free Documentation License (GNU FDL). This is a free license allowing use of the work for any purpose without payment."
I'm not familiar with this license or how it compares to their software licenses, but it sounds closer to a public domain license.
> 4. MODIFICATIONS
> You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Etc etc.
In short, it is a copyleft license. You must also license derivative works under this license.
Just FYI, the GNU FDL is (unsurprisingly) available for free online, so if you want to know what it says, you can read it!
Right. I can publish the work in whole without asking permission. That’s unrestricted duplication.
However, as I read it, an LLM spitting out snippets from the text is not "duplicating" the work. That would fall under modifications. From the license:
> A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
I read that pretty clearly as: any work containing text from a GNU FDL document is a modification, not a duplication.
1) Obtaining the copyrighted works used for training. Anthropic did this without asking for the copyright holders' permission, which would be a copyright violation for any work that isn't under a license that grants permission to duplicate. The GFDL does, so no issue here.
2) Training the model. The case held that this was fair use, so no issue here.
3) Whether the output is a derivative work. If so, then you get to figure out how the GFDL applies to the output, but to the best of my knowledge the case didn't ask this question, so we don't know.
If I took a book and cut it up into individual words (or partial words even), and then used some of the words with words from every other book to write a new book, it'd be hard to argue that I'm really "distributing the first book", even if the subject of my book is the same as the first one.
This really just highlights how the law is a long way behind what's achievable with modern computing power.
Which is all to say that the law is actually really bad at determining what is right and wrong, and our moral compasses should not defer to the law. Unfortunately, moral compasses are often skewed by money, like how normal compasses are skewed by magnets.
By your description of the law, this SVG file is not infringing on Disney’s copyright, since it’s a program that when run creates an infringing document (the rasterized pixels of Mickey Mouse) but is not an infringing document itself.
I really don’t think my “I wrote a program in the SVG language” defense would hold up in court. But I wonder how many levels of abstraction are needed before it’s legal. If I write the mickey-mouse-generator in Python, does that make it legal? If it generates a variety of randomized images of Mickey Mouse, is that legal? If it uses statistical analysis of many drawings of Mickey to generate an average Mickey Mouse, is that legal? Does it have to generate different characters if asked before it is legal? Can that be an if statement, or does it have to use statistical calculations to decide what character I want?
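The program-versus-document distinction can be made concrete with a toy sketch (the function names and the plain circle are my own illustration, not anything from the thread): a few lines of Python are unambiguously a program, yet their only job is to emit an SVG document, and one more level of abstraction turns the same program into a generator of a whole family of randomized documents.

```python
import random

def render_svg(cx: int, cy: int, r: int) -> str:
    """A 'program' whose only output is a document: an SVG with one circle."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
        f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="black"/>'
        "</svg>"
    )

def render_random_svg(seed: int) -> str:
    """One level of abstraction up: a seeded generator that emits a
    different (but reproducible) document for each seed."""
    rng = random.Random(seed)
    return render_svg(rng.randint(20, 80), rng.randint(20, 80), rng.randint(5, 20))
```

At every level, the human decides what the program is for; the question in the comment above is at which level, if any, the law stops treating the output as a copy.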
Wikipedia used to be under the FDL, and they lobbied the FSF to allow an escape hatch to Commons for a few months, because the FDL was so annoying.
"The FSF doesn't usually sue for copyright infringement, but when we do, we settle for freedom"
and this sentence at the end
" We are a small organization with limited resources and we have to pick our battles, but if the FSF were to participate in a lawsuit such as Bartz v. Anthropic and find our copyright and license violated, we would certainly request user freedom as compensation."
could be seen as "threatening".
Not a nothing burger, but not totally insignificant either.
> We are a small organization with limited resources and we have to pick our battles, but if the FSF were to participate in a lawsuit such as Bartz v. Anthropic and find our copyright and license violated, we would certainly request user freedom as compensation.
Sounds more like “we can’t and won’t sue, but this is the kind of compensation that we think would be appropriate”
The FSF doesn't usually sue for copyright infringement, but when we do, we settle for freedom
"Yeah we can't prosecute this person for stealing your car, because you haven't considered how they're going to get to work"