Our foreparents fought for the right to implement work-alikes of corporate software packages, even when the so-called owners objected. Now we're ready to throw it all away and let intellectual property owners gain far more control.
The implications will not end up being anti-large-corporation or pro-sharing. If you can prevent someone from reimplementing a spec, building a client that speaks your API, or building a work-alike, it will be the large corporations that exercise this power, as usual.
Our "foreparents" weren't competing with corporations with unlimited access to generative AI trained on their work. The times, they're-a-changin'.
You're rehashing the argument made in one of the articles which this piece criticizes and directly addresses, while ignoring the entirety of what was written before the conclusion that you quoted.
If anyone finds themselves agreeing with the comment I'm responding to, please, do yourself a favor and read the linked article.
I would do no justice to it by reiterating its points here.
So: once it's not "hard" any more, does IP even make sense at all? Why grant monopoly rights to something that required little to no investment in the first place? Even with vestigial IP law - let's say, patents - they just become input parameters: the AI works around the patents like any other constraint.
I think it still does: IIRC, the current legal situation is that AI output does not qualify for IP protection (at least not without substantial later human modification). IP protection is reserved solely for human work.
And I'm fine with that: if a person put in the work, they should have protections so their stuff can't be ripped off for free by all the wealthy major corporations that find some use for it. Otherwise: who cares about the LLMs.
Then fix that instead of blowing it up. Because IP law is also literally the only thing that protects the little guy's work in many cases.
Arguments like yours are kinda unfathomably incomplete to me, almost like they're the remnants of some propaganda campaign. It's constructed to appeal to the defense of the little guy, but the actual effect would be to disempower him and further empower the wealthy major corporations with "big enough warchest[s]."
I mean, one thing I think the RIAA would love is to stop paying royalties to every artist ever. And the only thing they'd be worried about is an even bigger fish (like Amazon, Apple, or Spotify) no longer paying royalties to them. But as you said, they have a big enough war chest that they probably could force a deal somehow. All the artists without a war chest? Left out in the cold.
I beg to differ. So far, AI output has not entitled the person writing the prompt to IP protection – but my objection is not directed at the "so far"; it's directed at your omission of "the person creating the prompt". If an AI outputs copyrighted material from its training data, that material is still copyrighted. AI is not a magical copyright-removal machine.
What this means in practice is that (currently), all output of an LLM is legally considered to not be copyrightable (to the extent that it's an original work). If it happens to regurgitate an existing copyrighted work, though, is that infringement? I'm not sure we have a legal precedent on that question yet.
It is entirely possible, however, that human beings will not be the primary drivers of progress on those problems.
With AI, a similar process is happening - publicly available information becomes enclosed by the model owners. We will probably get a "vestigial" intellectual property in the form of model ownership, and everyone will pay a rent to use it. In fact, companies might start to gatekeep all the information to only their own LLM flavor, which you will be required to use to get to the information. For example, product documentation and datasheets will be only available by talking to their AI.
I guess the state of play will be that for new drugs, the original manufacturer will already have done that and ensured that literally anything that could be found as a workaround is included in the scope of the patent. But I feel like it will not be possible to keep that watertight.
That also seems relevant for this whole discussion, actually -- if a work can't be copyrighted it certainly can't have a changed license, or any license at all. (I guess it's effectively public domain to the extent that it's public at all?)
"Lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator."
Not eligible for copyright protection does not mean it can be copyrighted "under the human creator's name". It means there is no creative work at all. No copyright.
Patents came along when farmers started making city goods, threatening guilds' secrets. Copyright came when the printing press made copying and translating the Bible easy and accessible to all. (Trademark admittedly does not fit this view, but doesn't seem all that damaging either.)
To Protect The Arts, and To Time Limit Trade Secrets were just the Protect The Children of old times, a way to confuse people who didn't look too hard at actual consequences.
This means that the future of IP depends on what lets the powers that be pull up the ladder behind them. Long term I'd expect e.g. copyright expansion and harder enforcement, just because cloning by AI gets easy enough to threaten the status quo.
Isn’t trademark the only thing keeping a certain cartoon mouse out of the public domain, despite the fact that his earliest animations are out of copyright? Not sure if you’d consider that damaging, or if anyone has yet tested the boundaries of the House of Mouse’s patience here.
A company spends a decade and billions of dollars to develop a groundbreaking drug and patents it.
I think of a cool new character called "Mr Poop" and publish a short story about him with an hour of work.
Both of us get the exact same protection under the law (yes yes I know copyright vs patent etc., but ultimately they are all about IP protection).
Company incorporates GPL code in their product? Never once have courts decided to uphold copyright. HP did that many times. Microsoft got caught doing it. And yet the GPL was never applied to their products. Every time there was an excuse. An inconsistent excuse.
Schoolkid downloads a movie? 30,000 USD per infraction PLUS armed police officer goes in and enforces removal of any movies.
Or take the very subject here. AI training WAS NOT considered fair use when OpenAI violated copyright to train. Same with Anthropic, Google, Microsoft, ... They incorporated Harry Potter and the Linux kernel in ChatGPT, in the model itself. Undeniable. Literally. So even if you accept that it's changed now, OpenAI should still be forced to redistribute the training set, code, and everything needed to run the model for everything they did up to 2020. Needless to say ... courts refused to apply that.
So just apply "the law", right. Courts' judgement of using AI to "remove GPL"? Approved. Using AI to "make the next Disney-style movie"? SEND IN THE ARMY! Whether one or the other violates the law according to rational people? Whatever excuse to avoid that discussion is good enough.
In all of these cases an AI model is taking a copyrighted source, reading it, jumbling the bytes and storing it in its memory as vectors.
Later a query reads these vectors and outputs them in a form which may or may not be similar to the original.
I don't know of any rulings on the context window, but it's certainly possible judges would rule that it does not qualify as transformative.
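For what it's worth, the first step of that process is lossless tokenization; the lossy "jumbling" only happens when training uses the tokens to nudge billions of shared weights. A minimal sketch of just the tokenization step, using the tiktoken library (the sample string is made up):

```python
# Tokenization is lossless: text -> token IDs -> text round-trips exactly.
# The lossy "jumbling into vectors" happens later, during training, when
# these IDs are used to adjust shared weights rather than stored anywhere.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("def detect(byte_str): ...")  # source text -> token IDs
print(ids)              # a short list of integers
print(enc.decode(ids))  # exact round-trip at this stage
```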
I'm not sure there should be, but I think there is.
AI 1: reads the source, creates a spec + acceptance criteria
AI 2: implements from the spec
AI 1 is in the position of the maintainer who facilitated the license swap.
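A minimal sketch of that two-agent setup (the `ask()` helper and the prompts are hypothetical stand-ins, not any real vendor's API):

```python
# Hypothetical two-agent "license swap" pipeline. ask() stands in for any
# LLM call; wire it to the client of your choice.
def ask(agent: str, prompt: str) -> str:
    raise NotImplementedError(f"connect {agent} to an LLM client here")

def relicense(original_source: str) -> str:
    # AI 1 sees the original and emits only a spec plus acceptance criteria.
    spec = ask("AI 1", "Write a behavioral spec and acceptance tests for:\n"
                       + original_source)
    # AI 2 sees only the spec, never the original source.
    return ask("AI 2", "Implement this spec from scratch:\n" + spec)
```

The open question is whether the spec itself is a derivative work; if it is, AI 2's output inherits the problem, which is why AI 1 sits in the maintainer's position.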
> That question is this: does legal mean legitimate?
Just because something is legal does not mean it's the moral thing to do.
Is it legitimate for millions of people to exploit and expound on knowledge that was perhaps not legitimate to use to begin with? Well, they already did; who's to judge the commons now?
https://en.wikipedia.org/wiki/Software_patents_under_the_Eur...
But also software patents and design patents are totally different things.
The original implementation would still have the upper hand here. OTOH, if I as a nobody create something cool, there's nothing stopping a huge corporation from "reimplementing" (=stealing) it and using their huge advertising budget to completely overshadow me.
And that's how they like it.
The test is if the output is transformative enough, and that is what NYT vs OpenAI etc. are arguing.
But agreed that we're waiting for a court case to confirm that. Although really, the main questions for any court cases are not going to be around the principle of fair use itself or whether training is transformative enough (it obviously is), but rather on the specifics:
1) Was any copyrighted material acquired legally (not applicable here), and
2) Is the LLM always providing a unique expression (e.g. not regurgitating books or libraries verbatim)
And in this particular case, they confirmed that the new implementation is 98.7% unique.
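(I don't know how that 98.7% figure was computed; one plausible methodology is a character-level diff ratio over the two source trees. A sketch, with hypothetical directory names:)

```python
# Sketch: measure how much of the new tree literally matches the old one
# using difflib. "Percent unique" would then be 1 minus this ratio. The
# directory names are hypothetical and the real methodology is unknown.
import difflib
from pathlib import Path

def match_ratio(old_dir: str, new_dir: str) -> float:
    old = "\n".join(p.read_text() for p in sorted(Path(old_dir).rglob("*.py")))
    new = "\n".join(p.read_text() for p in sorted(Path(new_dir).rglob("*.py")))
    return difflib.SequenceMatcher(None, old, new).ratio()

print(1 - match_ratio("chardet-6.0", "chardet-7.0"))  # "percent unique"
```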
Training an LLM inherently requires making a copy of the work. Even the initial act of loading it from the internet and copying it into memory to then train the LLM is a copy that can be governed by its license and copyright law.
But that's not relevant here. Because the copyleft license does not prohibit that (and it's not even clear that any license can prohibit it, as courts may confirm it's fair use, as most people are currently assuming). That's why I noted under (1) that it's not applicable here.
LLM training involves ingesting works (in a potentially transformative process) and partially reproducing them - that's a generally restricted action when it comes to licensing.
Sure, but that's not what LLMs generally do, and it's certainly not what they're intended to do.
The LLM companies, and many other people, argue that training falls under fair use. One element of fair use is whether the purpose/character is sufficiently transformative, and transforming texts into weights without even a remote 1-1 correspondence is the transformation.
And this is why LLM companies ensure that partial reproduction doesn't happen during LLM usage, using a kind of copyrighted-text filter as a last check in case anything unintentionally gets through. (And it doesn't even tend to occur in the first place, except when the LLM is trained on many copies of the same text.)
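Mechanically, such a last-pass filter can be as simple as an n-gram overlap check against an index of protected text (a sketch, assuming a 12-word window; real vendors' filters are not public):

```python
# Sketch of a verbatim-reproduction filter: flag any model output that
# shares a long-enough word n-gram with an indexed protected corpus.
def ngrams(text: str, n: int = 12) -> set[str]:
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def build_index(protected_texts: list[str]) -> set[str]:
    index: set[str] = set()
    for text in protected_texts:
        index |= ngrams(text)
    return index

def looks_verbatim(output: str, index: set[str]) -> bool:
    # Any shared 12-word run is treated as a near-verbatim reproduction.
    return not index.isdisjoint(ngrams(output))
```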
Some might hold that we've granted persons certain exemptions, on account of them being persons. We do not have to grant machines the same.
> In copyright terms, it's such an extreme transformative use that copyright no longer applies.
Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim? Sure, it can also produce extremely transformed versions, but is that really relevant if it holds within it enough information for a (near-)verbatim reproduction?
No we don't have to, but so far we do, because that's the most legally consistent. If you want to change that, you're going to need to pass new laws that may wind up radically redefining intellectual property.
> Has the model really performed an extreme transformation if it is able to produce the training data near-verbatim?
Of course it has, if the transformation is extreme, as it appears to be here. If I memorize the lyrics to a bunch of love songs, and then write my own love song where every line is new, nobody's going to successfully sue me just because I can sing a bunch of other songs from memory.
Also, it's not even remotely clear that the LLM can produce the training data near-verbatim. Generally it can't, unless it's something that it's been trained on with high levels of repetition.
> you're going to need to pass new laws that may wind up radically redefining intellectual property
You're correct that this is one route to resolving the situation, but I think it's also reasonable to lean more strongly into the original intent of intellectual property law: defending creative work as a means of sustaining yourself. That would draw a pretty clear distinction between human creativity and reuse on the one hand and LLMs on the other.
But you're missing the other half of copyright law, which is the original intent to promote the public good.
That's why fair use exists, for the public good. And that's why the main legal argument behind LLM training is fair use -- that the resulting product doesn't compete directly with the originals, and is in the public good.
In other words, if you write an autobiography, you're not losing significant sales because people are asking an LLM about your life.
I feel as though, from an information-theoretic standpoint, it can't be possible that an LLM (which is almost certainly <1 TB big) can contain any substantial verbatim portion of its training corpus, which includes audio, images, and videos.
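The back-of-envelope arithmetic bears this out. Assuming (purely for illustration) a trillion-parameter model right at that 1 TB ceiling and a training corpus in the tens of trillions of tokens:

```python
# Rough capacity argument: weight storage available per training token.
# Both figures below are illustrative assumptions, not published specs.
params = 1e12            # hypothetical 1T-parameter model
bytes_per_param = 1.0    # ~1 byte/param, i.e. right at the 1 TB ceiling
training_tokens = 15e12  # assumed corpus size, order of magnitude only

bits_per_token = params * bytes_per_param * 8 / training_tokens
print(f"{bits_per_token:.2f} bits of weight capacity per training token")
# ~0.53 bits/token: well below the several bits per token that even
# aggressively compressed text needs, so the corpus can't fit verbatim.
```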
BTW in 2023 I watched ChatGPT spit out hundreds of lines of F# verbatim from my own GitHub. A lot of people had this experience with GitHub Copilot. "98.7% unique" is still a lot of infringement.
That's not relevant, because you can still sue the person using the LLM and publishing the repository. Legal liability is completely unchanged.
What AI is eroding is copyright. You can reimplement not just a GPL program; you can reverse engineer and reimplement a closed-source program too. People have demonstrated it already; there were stories here on HN about it.
AI is eroding copyright, so there may no longer be a need for the GPL. GNU should stop and rethink its stance, chuck away the GPL as the main tool for fighting evil software corporations, and embrace LLMs as the main weapon.
LLMs - to date - seem to require massive capital expenditures to produce the highest-quality ones, which is a monumental shift in power towards mega corporations and away from the world of open source, where you could do innovative work on your own computer running Linux or FreeBSD or some other open OS.
I don't think that's an exciting idea for the Free Software Foundation.
Perhaps with time we'll be able to run local ones that are 'good enough', but we're not there yet.
There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.
Edit: I guess the conclusion I come to is that LLMs are good for 'getting things done', but the context in which they operate is one where the balance of power is heavily tilted towards capital, and open source is perhaps less interesting to participate in if the machines are just going to slurp it up and people don't have to respect the license or even acknowledge your work.
Yeah, a bit of a conundrum. But I don't think that fighting for copyright can bring any benefits for FOSS now. GNU should bring Stallman back and see whether he can come up with any new ideas and a new strategy. Alternatively, they could try without Stallman. But the point is: they should stop and think again. Maybe they will find a way forward, maybe they won't, but either way they could continue their fight for freedom meaningfully, or they could just stop fighting and find some other things to do. Both options are better than fighting for copyright.
> There's also an ethical/moral question that these things have been trained on millions of hours of people's volunteer work and the benefits of that are going to accrue to the mega corporations.
I want to clarify this statement a bit. LLMs relying on the work of others is not against GNU philosophy as I understand it: algorithms have to be free. Nothing wrong with training LLMs on them or on programs implementing them. Nothing wrong with using these LLMs to write new (free) programs. What is wrong is corporations reaping all the benefits now and locking down new algorithms later.
I think it is important, because copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding by the law is ethical, therefore copyright is ethical), but not for GNU.
IMO the most significant trend in AI. Doesn't get talked about nearly enough. Means the AI is working, I guess.
>GNU should bring Stallman back ... Alternatively they could try without Stallman.
Leave Britney alone >:(
>copyright is deemed to be an ethical thing by many (I think for most people it is just a deduction: abiding by the law is ethical, therefore copyright is ethical)
I've busted out "intellectual property is a crime against humanity" at layfolk to see if that shortcuts through that entire little politico-philosophical minefield. They emote the requisite mild shock when such things as crimes against humanity are mentioned; as well as at someone making such a radical statement which seems to come from no familiar species of echo chamber; and then a moment later they begin to very much look like they see where I'm coming from.
Right now, we can get local models that you can run on consumer hardware, that match capabilities of state of the art models from two years ago. The improvements to model architecture may or may not maintain the same pace in the future, but we will get a local equivalent to Opus 4.6 or whatever other benchmark of "good enough" you have, in the foreseeable future.
When the FSF and GPL were created, I don't think this was really a consideration. They were perfectly happy with requiring Big Iron Unix or an esoteric Lisp Machine to use the software - they just wanted to have the ability to customize and distribute fixes and enhancements to it.
There are near-SOTA LLMs available under permissive licenses. Even running them doesn't require prohibitive hardware expenses unless you insist on realtime use.
What async tasks could a local LLM accomplish on Intel 11th gen CPU with 32GB RAM?
It's nowhere near the order of magnitude of the kind of spending they're sinking into LLMs. The FSF and other groups were reasonably successful at enforcing the GPL while operating on budgets thousands of times smaller than those of AI companies.
Being able to cost-efficiently run frontier models is, I think, not a high-priced endeavor for an org (compared to an individual).
IMO the proposition is a little fishy, but it's not totally without merit and deserves investigation. If we are all worried about our jobs, even those building custom for-sale software, there is likely something there that may obviate the need at least for end-user applications. Again, I'm deeply skeptical, but it is interesting.
Running a proprietary model would make you subject to whatever ToS the LLM companies choose on a particular day, including what you can produce with it, which circles back to the raison d'être of the GPL and GNU.
Until all software copyright is dead and buried, there is no need for copyleft to change tack. Otherwise the rising tide may rise high enough to drown the GPL, but not proprietary software.
Open source is easier to counterfeit/license-launder/re-implement using LLMs because source code is much lower-hanging fruit, and is understood by more people than closed-source assembly.
This was already the case and it just got worse, not better.
Now they've just hoovered up all the free stuff into machines that can mix it up enough to spit it out in a way that doesn't even require attribution, and you have to pay to use their machine.
Before we had RedHat and Ubuntu, who at least were contributing back, now we have Microsoft, Anthropic and OpenAI who are racing to lock the barn door around their new captive sheep. It's just a massive IP laundromat.
Unfortunately, there are cases where you simply can't just "re-implement" something. E.g., because doing so requires access to restricted tools, keys, or proprietary specifications.
"So, I looked for a way to stop that from happening. The method I came up with is called “copyleft.” It's called copyleft because it's sort of like taking copyright and flipping it over. [Laughter] Legally, copyleft works based on copyright. We use the existing copyright law, but we use it to achieve a very different goal."
https://writings.hongminhee.org/2026/03/legal-vs-legitimate/
i.e. mirroring it
> use it to achieve a very different goal."
"very different goal" isn't the same as "fundamentally destroying copyright"
the very different goals include keeping public code public, ensuring proper attribution, preventing companies from just "seizing" it, motivating others to make their code public too, etc.
and even if his goals were not like that, it wouldn't make a difference, as this is what many people try to achieve by using such licenses
this kind of AI usage is very much not in line with these goals,
and in general, making software cloning way cheaper isn't sufficient to fix many of the issues the FOSS movement tried to fix, especially not when looking at the current ecosystems most people interact with (i.e. phones)
---
("sizing"): As in the typical MS embrace, extend and extinguish strategy of first embracing the code then giving it proprietary but available extensions/changes/bug fixes/security patches to then make them no longer available if you don't pay them/play by their rules.
---
Though in the end, using AI as a "fancy, complicated" photocopier for code removes copyright exactly as much as using an ordinary photocopier would. It doesn't matter if you use the photocopier blindfolded and never look at the thing you copied.
For the right goal, he should have called it "rightcopy".
It also grants one major right/feature to the creator, the ability to spread their work while keeping it as open as they intend.
LLMs are one of the primary manifestations of 'evil software corporations' currently.
Is this LLM thing freely available or is it owned and controlled by these companies? Are we going to rent the tools to fight "evil software corporations"?
It’s probably only a matter of time before open models are as good as Claude Code is today.
it's not that simple
yes, the GPL's origins include the idea that "everyone should be able to use it"
but it is also about attributing the original author
and making sure people can't just de facto "seize public goods"
this kind of AI usage removes attribution and often seizes public goods in a way far worse than most companies that simply ignored the license ever did
so today there is more need than ever in the last few decades for GPL-like licenses
Reducing it to "well, you can clone the proprietary software you're forced to use with an LLM" is really missing the soul of the GPL.
A court ordered the first Nosferatu movie to be destroyed because it had too many similarities to Dracula. Despite the fact that the movie makes rather large deviations from the original.
If Claude was indeed asked to reimplement the existing codebase, just in Rust and a bit optimized, that could well be a copyright violation. Just like rephrasing A Song of Ice and Fire a bit, and switching to a different language, doesn't remove its copyright.
Allegedly. There have been several people who doubted this story. So how to find out who is right? Well, just let Claude compare the sources. Coincidentally, Claude Opus 4.6 doesn't just score 75.6% on SWE-bench Verified but also 90.2% on BigLaw Bench.
It's like our copyright lawyer is conveniently also a developer. And possibly identical to the AI that carried out the rewrite/reimplementation in question in the first place.
There is some precedent for this, e.g. Alchemised is a recent best seller that had just enough changed from its Harry Potter fan fiction source in order to avoid copyright infringement: https://en.wikipedia.org/wiki/Alchemised
(I avoided the term “remove copyright” here because the new work is still under copyright, just not Harry Potter - related copyright.)
At the moment it's people that are eroding copyright. E.g. in this case, a person decided to do it.
"AI" didn't have a brain, woke up and suddenly decided to do it.
Realistically nothing to do with AI. Having a gun doesn't mean you randomly shoot.
Unless it is IP of the same big corpos that consumed all content available. Good luck with eroding them.
Generative models (AI) are not really eroding copyright. They are calling its bluff. The very notion of intellectual property depends on a property line: some arbitrary boundary where the property begins and ends. Generative models blur that line, making it impractical to distinguish which property belongs to whom.
Ironically, these models are made by giant monopolistic corporations whose wealth is quite literally a market valuation (stock price) of their copyrights! If generative models ever become good enough to reimplement CUDA, what value will NVIDIA have left?
The reality is that generative models are nowhere near good enough to actually call the bluff. Copyright is still the winning hand, and that is likely to continue, particularly while IP holders are the primary authors of law.
---
This whole situation is missing the forest for the trees. Intellectual Property is bullshit. A system predicated on monopoly power can only result in consolidated wealth driving the consolidation of power; which is precisely what has happened. The words "starving artist" ring every bit as familiar today as any time in history. Copyright has utterly failed the very goals it was explicitly written with.
It isn't the GPL that needs changing. So long as a system of copyright rules the land, copyleft is the best way to participate. What we really need is a cohesive political movement against monopoly power; one that isn't conveniently ignorant of copyright as its most significant source.
> He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch.
From GPL2:
> The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.
Is a project's test suite not considered part of its source code? When I make modifications to a project, its test cases are very much a part of that process.
If the test suite is part of this library's source code, and Claude was fed the test suite or interface definition files, is the output not considered a work based on the library under the terms of LGPL 2.1?
Legally, using the tests to help create the reimplementation is fine.
However, it seems possible you can't redistribute the same tests under the MIT license. So the reimplementation MIT distribution could need to be source code only, not source code plus tests. Or, the tests can be distributed in parallel but still under LGPL, not MIT. It doesn't really matter since compiled software won't be including the tests anyways.
I'm not following your logic there, and I don't see any mention of "transformative" in the license. Can you explain what you mean?
And so, a work being sufficiently transformative is one way in which copyright no longer applies, but that's not the case here specifically. The specific case here is essentially just a clean-room reimplementation (though technically less "clean", but still presumably the same legally). But the end result is still a completely different expression of underlying non-copyrightable ideas.
And in both cases, it doesn't matter what the original license was. If a resulting work is sufficiently transformative or a reimplementation, copyright no longer applies, so the license no longer applies.
The library's test suite and interfaces were apparently used directly, not transformed. If either of those are considered part of the library's source code, as the license's wording seems to suggest, then I think output from their use could be considered a work based on the library as defined in the license.
Google LLC v. Oracle America assumed (though didn't establish) that APIs are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
Test suites are again generally considered copyrightable... but the behavior being tested is not.
So no, it's not considered to be a work based on the library. This seems pretty clear-cut in US law by now.
Also, the LGPL text doesn't say "work based on the library". It says "If you modify a copy of the Library", and this is not a "combined work" either. And the whole point is that this is not a modified copy -- it's a reimplementation.
In theory, a license could be written to prevent its tests from being run against software not derived from the original, i.e. clean-room reimplementations. In practice, it remains dubious whether any court would uphold that. And it would also be trivial to get around: take advantage of fair use to re-express the tests in, e.g., plain English (or any specification language), and then reimplement those back into new test code. Because again, test behaviors are not copyrightable.
Software patents would work as you describe, but not copyright.
Regarding chardet, I'm not sure "I wanted to circumvent the license" is a good way to argue fair use.
This feels sort of like saying "I just blindly threw paint at that canvas on the wall, and it came out in the shape of Mickey Mouse, so it can't be copyright infringement because it was created without the use of my knowledge of Mickey Mouse".
Blanchard is, of course, familiar with the source code, he's been its maintainer for years. The premise is that he prompted Claude to reimplement it, without using his own knowledge of it to direct or steer.
I would argue it's irrelevant whether they looked or didn't look at the code. The same goes for whether he was or wasn't familiar with it.
What matters is that they fed the original code into a tool that they set up to make a copy of it. How that tool works doesn't really matter. Nor does it make a difference if you obfuscate that it's a copy.
If I blindfold myself when making copies of books with a book scanner + printer I'm still engaging in copyright infringement.
If AI is a tool, that should hold.
If it isn't "just" a tool, then it did engage in copyright infringement (as it created the new output side by side with the original) in the same way an employee might do so on command of their boss. Which still makes the boss/company liable for copyright infringement and in general just because you weren't the one who created an infringing product doesn't mean you aren't more or less as liable of distributing it, as if you had done so.
Well, no. They fed the spec (test cases, etc) into a tool which made a new program matching the spec. This is not a copy of the original code.
But also, this feels like arguing over the color of the iceberg while the Titanic sinks. If you have a tool that can make code to spec, what is the value of source code anymore? Even if your app is closed-source, you can just tell Claude to write new code that does the same thing.
Yes...
> and Anthropic fed the code to the tool,
Presumably, as part of the massive amount of open-source code that must have been fed in to train their model.
> so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.
This is meant as irony, right?
So when you clone the behavior of a program like chardet without referencing the original source code except by executing it to make sure your clone produces exactly the same output, you may still be infringing its copyright if that output reflects creative choices made in the design of chardet that aren't fully determined by the functional purpose of the program.
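Concretely, "executing it to make sure your clone produces exactly the same output" is differential testing. A sketch against the real chardet API, where `chardet_clone` is a hypothetical stand-in for the reimplementation:

```python
# Differential test: the clone must agree with the original on the same
# inputs. Agreement shows behavioral equivalence; the legal question is
# whether that behavior embeds chardet's creative design choices.
import chardet
import chardet_clone  # hypothetical reimplementation under test

samples = [
    "ascii only".encode("ascii"),
    "héllo wörld".encode("utf-8"),
    "こんにちは".encode("shift_jis"),
]

for raw in samples:
    original = chardet.detect(raw)["encoding"]
    clone = chardet_clone.detect(raw)["encoding"]
    assert original == clone, (raw, original, clone)
```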
I'm not sure how you square the circle of "it's alright to use the LLM to write code, unless the code is a rewrite of an open source project to change its license".
> I'm not sure how you square the circle of "it's alright to use the LLM to write code
You seem like you're on the cusp of stating the obvious correct conclusion: it isn't.
That's your opinion (since you said "IMO"), not the actual legal definition.
Then onto prompting: 'He fed only the API and (his) test suite to Claude'
This is Google v Oracle all over again - are APIs copyrightable?
About this specific point, it is unclear how much of a defect memorization actually is - there are also reasons to see it as necessary for effective learning. This link explains it well:
https://infinitefaculty.substack.com/p/memorization-vs-gener...
Yes this is the best way to ask the question. If I take a public facing API and reimplement everything, whether it's by human or machine, it should be sufficient. After all, that's what Google did, and it's not like their engineers never read a single line of the Java source code. Even in "clean room" implementations, a human might still have remembered or recalled a previous implementation of some function they had encountered before.
> But how far away from direct and explicit representations do we have to go before copyright no longer applies?
Copyright infringement is a thing humans do. It's not a human.
Just like how the photos taken by a monkey with a camera have no copyright. Human law binds humans.
If we are saying AI is "more than a tool", which seems to be the direction courts are leaning, since they've ruled that AI output without direct human involvement is not copyrightable[0], then the above seems like it would be entirely legal.
Even if the final output doesn't have copyright protection, it might still be a copyright violation. I think a work can reasonably infringe copyright when distributed even if it does not enjoy copyright itself.
Say I know it is legal to make a turn at a red light, and I know a court would uphold that I was in the right, but a police officer will fine me regardless and I would need to actually pursue some legal remedy. I'm unlikely to make the turn regardless of whether it is legal, because it is expensive, if not in money then in time.
Copyright lawsuits are notoriously expensive and long, so even if a court would eventually deem it fine, why take the chance?
Anything you put out can and will be used by whatever giant company wants to use it with no attribution whatsoever.
Doesn’t that massively reduce the incentive to release the source of anything ever?
It's the same question as, if an AI can generate "art", or photographers can capture a scene better than any (realistic) painter, then will people still create art? Obviously yes, and we see it of course after Stable Diffusion was released three years ago, people are still creating.
So ignoring people who are being paid by corporations directly to work on open source, in my experience the vast majority of contributors expect to be able to monetize their work eventually in a way that requires attribution. And out of the small number who don’t expect a monetary return of any kind, a still smaller number don’t expect recognition.
If this weren’t the case you’d see a much larger amount of anonymous contributions. There are people who anonymously donate to charity. The vast majority want some kind of recognition.
Obviously we still see art, but if you greatly reduce the monetary benefit of producing art, you'll see a lot less of it. This is especially true of non-trivial open source software, which, unlike static artwork, requires continual maintenance.
The non-IP protection has largely been the effort involved in replicating an application's behavior, and that effort is dropping precipitously.
In this case, we could theoretically prove that the new chardet is a clean reimplementation. Blanchard can provide all of the prompts necessary to re-implement again, and for the cost of the tokens anyone can reproduce the results.
IANAL, but that analogy wouldn't work because Mickey Mouse is a trademark, so it doesn't matter how it is created.
My understanding was that his claim was that Claude was not looking at the existing source code while writing it.
> He fed only the API and the test suite to Claude and asked it
Difference being Claude looked; so not blind. The equivalent is more like I blindly took a photo of it and then used that to...
Technically did look.
What he claimed, and what was interesting, was that Claude didn't look at the code, only the API and the test suite. The new implementation is all Claude. And the implementation is different enough to be considered original: completely different structure and design, and hey, a 48x improvement in performance! It's just API-compatible with the original. Which, as per the 2021 Google v. Oracle decision, is to be considered fair use.
Who opened the PR? Who co-authored the commits? It's clearly on Github.
> Blanchard was a chardet maintainer for years. Of course he had looked at its code!
So there you have it. If he looked, he co-authored then there's that.
Blanchard is very clear that he didn't write a single line of code. He isn't an author, he isn't a co-author.
Signing GitHub commit doesn't change that.
He used Claude to write it. Difference? The fact that I wrote it in a notepad vs. printed it out = I didn't do it?
> Signing GitHub commit doesn't change that.
That's the equivalent of me saying I didn't kill anyone. The fingerprints on the knife don't change that.
This would make it so relicensing with AI rewrites is essentially impossible unless your goal is to transition the work to be truly public domain.
I think this also helps somewhat with the ethical quandary of these models being trained on public data while contributing nothing of value back to the public, and disincentivize the production of slop for profit.
https://www.carltonfields.com/insights/publications/2025/no-...
> No Copyright Protection for AI-Assisted Creations: Thaler v. Perlmutter
> A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law
Was this an AI summary? Those words were not in the article.
The courts said Thaler could not have copyright because he refused to list himself as an author.
That's not true at all. Anyone could follow these steps:
1. Have the LLM rewrite GPL code.
2. Do not publish that public domain code. You have no obligation to.
3. Make a few tweaks to that code.
4. Publish a compiled binary/use your code to host a service under a proprietary license of your choice.
Sec has a deny by default policy. Eng has a use-more-AI policy. Any code written in-house is accepted by default. You can see where this is going.
We've been using AI to reimplement tooling that security won't approve. The incentives conspired in the worst outcome, yet here we are. If you want a different outcome, you need to create different incentives.
There is a fundamental corpo-cognitive dissonance, to boot. If "AI" is cheap enough and good enough to implement security-relevant software from `git init` repeatedly, why isn't it also cheap enough and good enough to assess and approve the security of third-party software at the pace of internal adoption? Is there some basis to believe LLMs' leverage on production differs from their leverage on analysis of existing code?
But a point that was not made strongly, which highlights this even more, is that this goes in every direction.
If this kind of reimplementation is legal, then I can take any permissive OSS and rebuild it as proprietary. I can take any proprietary software and rebuild it as permissive. I can take any proprietary software and rebuild it as my own proprietary software.
Either the law needs to catch up and prevent this kind of behavior, or we're going to enter an effectively post-copyright world with respect to software. Which ISN'T GOOD, because that will disincentivize any sort of open license at all, and companies will start protecting/obfuscating their APIs like trade secrets.
Companies can take open-source software and make a proprietary reimplementation. You can't take a proprietary software and make an open source GPL version.
I am absolutely certain that if you tried, you would be sued into oblivion. But a big company screwing over open source is not even news anymore. In fact, I (still) believe that the fact that it's considered OK to use LLM code in proprietary projects, even though LLMs were trained on tons of GPL, AGPL, and even unlicensed software, is an example of just that.
Crazy that only now are we seeing a bunch of articles arriving at the same conclusion.
I think copyright should still apply, but if it doesn't, we need new laws - ones which protect all human work, creative or not. Laws should serve and protect people, not algorithms and not corporations "owning" those algorithms.
I put owning in quotes because ownership should go to the people who did the work.
Buying/selling ownership of both companies and people's work should be illegal just like buying/selling whole humans is. Even if it took thousands of years to get here.
Money should not buy certain things because this is the root cause of inequality. Rich people are not getting richer at a faster rate by being more productive than everyone else but by "owning" other people's work and using it as leverage to extract even more from others.
Maybe LLM and mass unemployment of white collar workers will be the wakeup call needed for a reform. Or revolution.
Last time this happened was during the second industrial revolution and that's how communism got popular. We should do better this time because this is the last revolution which might be possible.
When I first read about the chardet situation, I was conflicted but largely sided on the legal permissibility side of things. Uncomfortably I couldn't really fault the vibers; I guess I'm just liberal at heart.
The argument from the commons has really reinforced my belief in the inherent morality of a public good. Something being "impermissible" sounds bad until you realize that otherwise the arrow of public knowledge suddenly points backwards.
Seeing this example play out in real life has had retroactive effects on my previously BSD-aligned brain. Even though the argument itself may have been presented before, I now better understand the morals that underpin the GPL license text.
If he is claiming to have been somehow substantively "enough" involved to make the code copyrightable, then his own familiarity with the previous LGPL implementation makes the new one almost certainly a derivative of the original.
The "clean room rewrite" is just an extreme way to have a bulletproof shield against litigation. Not doing it that way doesn't automatically make all new code he writes derivative solely because he saw how the code worked previously.
And if he was in fact more involved (which he appears to deny), then it's a bit weak to say that someone with huge familiarity with chardet could choose to reimplement chardet without the result being derivative.
1. The cost continues to trend to 0, and _all_ software loses value and becomes immediately replaceable. In this world, proprietary, copyleft and permissive licenses do not matter, as I can simply have my AI reimplement whatever I want and not distribute it at all.
2. The coding cost reduction is all some temporary mirage, to be ended soon by drying VC money/rising inference costs, regulatory barriers, etc. In that world we should be reimplementing everything we can as copyleft while the inferencing is good.
but AI-assisted code has an author, and claiming code is merely AI-assisted even when it is fully AI-built is trivial (if you don't make public that you didn't do anything)
also, some countries have laws which treat it like a tool, in the sense that the one who used it is the author by default, AFAIK
More and more I am drawn to these kinds of ideas lately, perhaps as a kind of ethical sidestep, but still:
- https://wiki.xxiivv.com/site/permacomputing.html
It's not going to solve any general issue here, but the one thing these freaks need that can't be generated by their models is energy, tons of it. So, the one thing I can do as an individual and in my (digital) community is work to be, in a word, self-sustainable. And depending on my company I guess, if I was a CEO I would hope I was wise enough to be thinking on the same lines.
Everyone is making beautiful mountains from paper and wire. I will just be happy to make a small dollhouse of stone; I think it will be worth it. How can we not see at least some small level of hubris otherwise?
1. An LLM recreating a piece of software violates its copyright and is illegal, in which case LLM output can never be legally used because someone somewhere probably has a copyright on some portion of any software that an LLM could write.
2. You read my example as "copying a project without distributing it", vs. "having an LLM write the same functionality just for me"
Nothing was stolen, not even copied, lamest piracy I've heard of.
It also doesn't talk about the far more interesting philosophical question. Does what Blanchard did cover ALL implementations from Claude? What if anyone did exactly what he did, feeding it the test cases and saying "reimplement from scratch"? Ostensibly one would expect the results to be largely similar (technically, under the right conditions, deterministically similar).
Could you then fork the project under your own name and a commercial license? When you use an LLM like this, to do basically what anyone else could ask it to do, how do you attach any license to it? Is it first come, first served?
If an agent is acting mostly on its own, it feels like finding a copy of Harry Potter in the fictional Library of Babel: you didn't write it, you just found it amongst the infinite library. But if you found it first, could you block everyone else who stumbles on a near-identical copy elsewhere in the library? Or does each found copy represent a "reimplementation" that could be individually copyrighted?
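On the "deterministically similar" aside: pinning temperature to zero and fixing a seed gets close, but vendors only promise best-effort determinism, which is exactly what makes the "who found it first" question murky. A sketch against the OpenAI API, with an illustrative prompt:

```python
# Near-deterministic generation: temperature=0 plus a fixed seed. OpenAI
# documents `seed` as best-effort, so two users can still diverge.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    seed=42,
    messages=[{"role": "user",
               "content": "Reimplement this API from these tests: ..."}],
)
print(resp.choices[0].message.content)
```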
Everything for memory safety.
So of course we feel that something wrong has happened even if it's not easy to put one's finger on it.
This whole article is just complaining that other people didn't have the discussion he wanted.
Ronacher even acknowledged that it's a different discussion, and not one they were trying to have at the moment.
If you want to have it, have it. Don't blast others for not having it for you.
> But law only says what conduct it will not prevent—it does not certify that conduct as right. Aggressive tax minimization that never crosses into illegality may still be widely regarded as antisocial. A pharmaceutical company that legally acquires a patent on a long-generic drug and raises the price a hundredfold has not done something legal and therefore fine. Legality is a necessary condition; it is not a sufficient one.
It might even be morally abhorrent to have such a discussion in the first place!
Copyleft could be seen as an attempt to give Free Software an edge in this competition for users, to counter the increased resources that proprietary systems can often draw on. I think success has been mixed. Sure, Linux won on the server. Open source won for libraries downloaded by language-specific package managers. But there’s a long tail of GPL apps that are not really all that appealing, compared to all the proprietary apps available from app stores.
But if reimplementing software is easy, there’s just going to be a lot more competition from both proprietary and open source software. Software that you can download for free that has better features and is more user-friendly is going to have an advantage.
With coding agents, it’s likely that you’ll be able to modify apps to your own needs more easily, too. Perhaps plugin systems and an AI that can write plugins for you will become the norm?
It was due to access.
Think about it; the license says that copies of the work must be reproduced with the copyright notice and licensing clauses intact. Why would anyone obey that, knowing it came from AI?
Countless instances of such licenses were ignored in the training data.
A lego sculpture is copyrighted. Lego blocks are not. The threshold between blocks and sculpture is not well-defined, but if an AI isn't prompted specifically to attempt to mimic an existing work, its output will be safely on the non-copyrighted side of things.
A derivative work is separately copyrightable, but redistribution needs permission from the original author too. Since that usually won't be granted or would be uneconomical, the derivative work can't usually be redistributed.
AI-produced material is inherently not copyrightable, but not because it's a derivative work.
I dispute the idea that token sequences reproduced from the model are not derived works.
I predict, no pun intended, that a time is coming when the idea that it's not a derived work will be challenged in mainstream law.
The slop merchants are getting a free ride for the time being.
As you said, it's lossy. Try it with any other distinctive but non-famous passage, and you won't get a correct prediction for the immediately following clause, much less for multiple sentences or paragraphs.
That's the case even when an LLM correctly identifies which book the prompted text is from. It still won't accurately continue on from some arbitrary passage. By the time you ask it to reproduce hundreds of words, you're into brand new book territory. Even when it's slop content, it's distinct slop.
The exceptions are cases where a significant number of humans would also know a particular quote from memory. Then, chances are, a frontier LLM will too.
You know how else you can reproduce a quote? Search for it on google, and search the resulting top hits; if it's a significant quote, multiple people have probably quoted it -- legally. You can also search a pirate library for the actual book, and search the book for the quote; while illegal, it's very simple to do, so unless you propose to make the free and open internet illegal, I'd suggest that banning LLMs for being "derivative work" creation engines is not so different from destroying the internet.
> I predict, no pun intended, that a time is coming when the idea that it's not a derived work will be challenged in mainstream law.
If judges have any sense whatsoever, LLM generations (without specific prompt crafting to mimic existing works) will be judged to not be derived works and therefore not be violating copyright, in the same sense that you can live and breathe Taylor Swift's music, create new music in the same style, and still not be violating copyright.
More likely, perhaps, is that everything will be so infused with LLM output that copyright ceases to be relevant, or forces copyright law to be rewritten from the ground up.
That’s a weird statement while releasing the new version of the same project. Maybe just release it as a new project, chardet-ai v1.0 or whatever.
The fundamental problem is that once you move outside the realm of law, and of the rule of law in its many facets, as the legitimizing principle, you have to go a whole lot further to be coherent and consistent.
You can’t just leave things floating on a few ambiguous points that you don’t like and that feel “off” to you in some way, not if you’re trying to bring some clarity to your own thoughts, much less others’. You don’t have to land on a conclusion either. By all means chew things over, but once you try to settle, things fall apart if you haven’t done the harder work of replacing the framework of law with another conceptual structure.
You need to at least be asking “to what ends? What purpose is served by the rule?” Otherwise you end up arguing backwards half the time, putting the maintenance of the rule ahead of the purpose it serves, with justifications pulled in from ever further afield as the rule is questioned and edge cases are reached. If you’re asking, essentially, “is the spirit of the rule still there?”, you’ve got to stop and fill in what that spirit is, or you, or people who want to control you or have an agenda, will sweep in with their own language and fill the void to their own ends.
This is an interesting reversal in itself. If you make the specification protected under copyright, then the whole practice of clean room implementations is invalid.
That’s just your subjective opinion, one with which many other people would disagree. I bet Armin Ronacher would agree that an MIT-licensed library is even freer than an LGPL-licensed library. To them, the vector runs from free to freer.
no, it isn't. The point of the GPL is to grant users of the software four basic freedoms (run, study, modify and redistribute). There's no restriction on distribution per se, other than disallowing the removal of these freedoms from other users.
I downloaded both 6.0 and 7.0, and based on only a light comparison of a few key files, nothing would suggest to me that 7.0 was copied from 6.0, especially given a 41x faster implementation. It is a lot more organized and readable in my amateur opinion, and the code is about 1/10th the size.
If we protect API under copyright, it makes it easier to prevent interoperability. We obviously do NOT want that. It would give big companies even more power.
Now in the US, the Supreme Court has declined to review rulings that the output of an LLM is not copyrightable, effectively letting them stand. So even a permissive licence doesn't work for that reimplementation: it should be public domain.
Disclaimer: I am all for copyleft for the code I write, but already without LLMs, one could rewrite a similar project and use the licence they please. LLMs make them faster at that, it's just a fact.
Now I wonder: say I vibe-code a library (so it's public domain in the US), I don't publish that code but I sell it to a customer. Can I prevent them from reselling it? I guess not, since it's public domain?
And as an employee writing code for a company. If I produce public domain code because it is written by an LLM, can I publish it, or can the company prevent me from doing it?
There was an issue where Google did something similar with the JVM, and ultimately it came down to whether or not Oracle owned the copyright to the header files containing the API. It went all the way to the US supreme court, and they ruled in Google's favour; finding that the API wasn't the implementation, and that the amount of shared code was so minimal as to be irrelevant.
They didn't anticipate that in less than half a decade we'd have technology that could _rapidly_ reimplement software given a strong functional definition and contract enforcing test suite.
I like the article's point of legal vs. legitimate here, though; copyright is actually something of a strange animal to use to protect source code, it was just the most convenient pre-existing framework to shove it in.
which is the actual relevant part: they didn't do that dance, AFAIK
AI is a tool; they set it up to make a non-verbatim copy of a program.
Then they fed it the original software (AFAIK).
Which makes it a side-by-side copy, as in: the original source was used as a reference to create the new program. That tends to be seen as a derived work, even if very different.
IMHO they would have to:
1. Create a specification of the software _without looking at the source code_, i.e. by behavior observation (plus an interface description). That is, you give the AI access to running the program, but not to looking at its insides. I really don't think they did this, as even with AI it's a huge pain: you normally can't just brute-force all combinations of inputs, and instead need a model => test => refine loop (which AI can do, but it can take a long time and get stuck, so you want it human-assisted, and the human can't have inside knowledge of the program).
2. Then generate a new program from the specification, and only from it. No git history, no original source code access, no program access, no shared AI state, or anything like that.
Also, for the extra mile of legal-risk avoidance, keep both steps human-assisted and use unrelated third parties without inside knowledge for both.
While this does majorly cut the cost of a clean-room approach, it still isn't cost-free. And it's still a legal minefield if done by a single person, especially one familiar enough with the original to potentially remember specific pieces of code verbatim.
So my understanding was that the original code was specifically not fed into Claude. But was almost certainly part of its training data, which complicates things, but if that's fair use then it's not relevant? If training's not fair use and taints the output, then new-chardet is a derivative of a lot of things, not just old-chardet...
This is all new legal ground. I'm not sure if anyone will go to court over chardet, though, but something that's an actual money-maker or an FSF flagship project like readline, on the other hand, well that's a lot more likely.
My understanding is they did do the dance. From the article: "He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch."
One could still make the argument that using the test suite was a critical contributing factor, but it is not a part of the resulting library. So in my uninformed opinion, it seems to me like the clean room argument does apply.
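For concreteness, "the API and the test suite" means roughly the sketch below: the public signature plus behavioral tests, with the implementation left blank for the model to fill in. (The test values are invented for illustration, not taken from chardet's actual suite.)

    # What the reimplementation starts from: the API surface and behavioral
    # tests, but no implementation. (Illustrative sketch only.)

    def detect(data: bytes) -> dict:
        """Public API: return {'encoding': str or None, 'confidence': float}."""
        raise NotImplementedError  # the part to be reimplemented

    def test_utf8_bom():
        assert detect(b"\xef\xbb\xbfhello")["encoding"] == "UTF-8-SIG"

    def test_empty_input():
        assert detect(b"")["encoding"] is None

Whether handing the model those tests already counts as inside knowledge is exactly the open question.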
I'm glad we can fork things at a point and thumb our noses at those who wish to cash in on others' work.
But what happens with new things? Has the era of software-making (or of creating things at large) ended, so that from now on everything will be re-(gurgitated|implemented|polished) old stuff?
Or does it all go back to proprietary everything... Tower-of-Babel style, where no one talks to anyone?
edit: another view - is open-source from now on only for resume-building? "see-what-i've-built" style
It does feel like open source is about to change. My hunch is that commercial open source (beyond the consultation model) risks disappearing. Though I'd be happy to be proven wrong.
That's what something like AGPL does.
The answer to that, I think, is that the authors wanted to squat on an existing successful project and gain a platform from it. Hence we have a news cycle discussing it.
Nobody cares about a new library built with AI, but squat an existing one with this stuff and you get attention. It's the reputation, the GitHub stars, whatever.
Honestly it's a weird test case for this sort of thing. I don't think you'd see an equivalent in most open source projects.
No, AI does not mean the end of either copyright or copyleft, it means that the laws need to catch up. And they should, and they will.
2) Copyright was the wrong mechanism to use for code from the start; LLMs just exposed the issue. The thing to protect shouldn't be creativity, it should be human work - any kind of work.
The hard part of programming isn't creativity, it's making correct decisions. It's getting the information you need to make them. Figuring out and understanding the problem you're trying to solve, whether it's a complex mathematical problem or a customer's need. And then evaluating solutions until you find the right one. (One constraint being how much time you can spend on it.)
All that work is incredibly valuable, but once the solution exists, it's much easier to copy it without replicating or even understanding the thought process which led to it. But that thought process took time and effort.
The person who did the work deserves credit and compensation.
And he deserves it transitively, if his work is used to build other works - proportional to his contribution. The hard part is quantifying it, of course. But a lot of people these days benefit from throwing their hands up and saying we can't quantify it exactly so let's make it finders keepers. That's exploitation.
3) Both LLM training and inference are derivative works by any reasonable meaning of those words. If LLMs are not derivative works of the training data then why is so much training data needed? Why don't they just build AI from scratch? Because they can't. They just claim they found a legal loophole to exploit other people's work without consent.
I am still hoping the legal people take the time to understand how LLMs work (and how other algorithms, such as synonym replacement or c2rust, work), decide that calling it "AI" doesn't magically remove copyright, and force the huge AI companies to destroy their existing models and train new ones which respect the licenses.
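As a toy illustration of why a mechanical transform doesn't shed copyright: here is a trivial synonym-replacement pass (a made-up sketch, not any real tool). Nobody would argue its output isn't derived from its input, however different the surface text looks.

    # Toy synonym-replacement transform: mechanically rewrites its input.
    # The output is plainly a derivative work of the input, no matter how
    # different it reads. (Illustrative sketch only.)
    import re

    SYNONYMS = {"fast": "quick", "begin": "start", "error": "fault"}

    def synonym_replace(text: str) -> str:
        """Swap each known word for a synonym; all other text passes through."""
        def swap(match):
            return SYNONYMS.get(match.group(0).lower(), match.group(0))
        return re.sub(r"[A-Za-z]+", swap, text)

    print(synonym_replace("Begin with a fast check for each error."))
    # -> start with a quick check for each fault.

The argument above is that adding layers of statistics in between doesn't change that relationship.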
Add something like this to new GPL/BSD/MIT licenses:
'you are forbidden from reimplementing this work with AI'
or just:
'all clones, reimplementations with AI, etc. must still be GPL'
One thing is certain, however: copyleft licenses will disappear. If I can't control the redistribution of my code (through a GPL or similar license), I'll choose to develop it closed source.
If you have software, keep your test suite to yourself: do dev with a test suite, then release under MIT without releasing the tests. Depending on the test suite, handing it over may break clean-room rules, especially for TDD codebases.
So: proprietary, free, or slop-licensed software?
This argument makes no sense. Are they arguing that because Vercel, specifically, had this attitude, it is the attitude necessitated by AI, by reimplementation, and by those who favor more permissive licenses because of it? That certainly doesn't seem an accurate way to summarize what antirez or Ronacher believe. In fact, under the legal and ethical frameworks (respectively) that those two put forward, Vercel has no right to claim that position and no way to enforce it, so it seems very strange to assert that this sort of thing would be the practical result of AI reimplementations. This just points at the hypocrisy of one particular company and assumes it would be the inevitable, universal attitude and result, when there's no evidence to think so.
It's ironic, because antirez literally addresses this specific argument. They completely miss that much of his blog post is about ethical matters, not just legal ones. Specifically, the idea he puts forward is that yes, corporations can do these kinds of rewrites now, but they always had the resources and manpower to do so anyway. What's different now is that individuals can do this kind of rewrite when they never had the ability before, and the vector of such a rewrite can run from permissive to copyleft, or even from decompiled proprietary code to permissive or copyleft. That it hasn't gone that way so far says more about the fact that most people really hate copyleft and find it annoying (it's been losing traction and developer mind share for decades) than about the tactic not working in that direction. I think that's one of the big points of his GNU comparison. It's not just that if it was legal for GNU to do it, it's legal for you to do it with AI. It's not even just the fundamental libertarian ethical axiom (which I mostly agree with) that such a rewrite should remain legal in either direction, because for the rules we enforce with violence in our society there should be a level playing field where we judge the action itself, not whether we like its consequences. It's specifically that if GNU did it once, it can be done again, even in the same direction, and now even more easily with AI.
Honestly, I was confused by the summarization of my blog post as just a legal matter as well. I hope my blog post manages to stay at least a short time on the HN front page so that the actual arguments it contains get a bit more exposure.
I think the industry will realize that it made a huge mistake by leaning on copyright for protection rather than on patents.
I did not study in detail whether copyright "has always been nonsense", but I do agree that some of today's copyright regulations are nonsense (for example, the very long duration of life + 70 years).
The idea that "information wants to be free" was always a lie, meant to transfer value from creators to platform owners. The result of that has been disastrous, and it's long past time to push the pendulum in the other direction.
AI will destroy the current paradigm, completely and utterly, and there's nothing they can do to stop it. It's unclear if they can even slow it, and that's a good thing.
We will be forced to legislate a modern, digital oriented copyright system that's fair and compatible with AI. If producing any software becomes a matter of asking a machine to produce it - if things like AI native operating systems come about, where apps and media are generated on demand, with protocols as backbone, and each device is just generating its own scaffolding around the protocols - then nearly none of modern licensing, copyright, software patents, or IP conventions make any sense whatsoever.
You can't have horse and buggy traffic conventions for airplanes. We're moving in to a whole new paradigm, and maybe we can get legislation that actually benefits society and individuals, instead of propping up massive corporations and making lawyers rich.
If corporations are allowed to launder someone else's work as their own, people will simply stop working and just start endlessly remixing, à la popular music.