The second part here is problematic, but fascinating: "I then started in an empty repository with no access to the old source tree, and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code." Problem - Claude almost certainly was trained on the LGPL/GPL original code. It knows that is how to solve the problem. It's dubious whether Claude can ignore whatever imprints that original code made on its weights. If it COULD do that, that would be a pretty cool innovation in explainable AI. But AFAIK LLMs can't even reliably trace what data influenced the output for a query, see https://iftenney.github.io/projects/tda/, or even fully unlearn a piece of training data.
Is anyone working on this? I'd be very interested to discuss.
Some background - I'm a developer & IP lawyer - my undergrad thesis was "Copyright in the Digital Age" and discussed copyleft & FOSS. Been litigating in federal court since 2010 and training AI models since 2019, and am working on an AI-for-litigation platform. These are evolving issues in US courts.
BTW if you're on enterprise or a paid API plan, Anthropic indemnifies you if its outputs violate copyright. But if you're on free/pro/max, the terms state that YOU agree to indemnify THEM for copyright violation claims.[0]
[0] https://www.anthropic.com/legal/consumer-terms - see para. 11 ("YOU AGREE TO INDEMNIFY AND HOLD HARMLESS THE ANTHROPIC PARTIES FROM AND AGAINST ANY AND ALL LIABILITIES, CLAIMS, DAMAGES, EXPENSES (INCLUDING REASONABLE ATTORNEYS’ FEES AND COSTS), AND OTHER LOSSES ARISING OUT OF … YOUR ACCESS TO, USE OF, OR ALLEGED USE OF THE SERVICES ….")
> I've been the primary maintainer and contributor to this project for >12 years
> I have had extensive exposure to the original codebase: I've been maintaining it for over a decade. A traditional clean-room approach involves a strict separation between people with knowledge of the original and people writing the new implementation, and that separation did not exist here.
> I reviewed, tested, and iterated on every piece of the result using Claude.
> I was deeply involved in designing, reviewing, and iterating on every aspect of it.
The idea is you have some window size, maybe 32 tokens. Hash it into a seed for a pseudo random number generator. Generate random numbers in the range 0..1 for each token in the window. Compare this number against a threshold. Don't count the loss for any tokens with a rng value higher than the threshold.
It learns well enough because you get the gist of reading the meaning of something when the occasional word is missing, especially if you are learning the same thing expressed many ways.
It can't learn verbatim, however. Anything that it fills in will be semantically similar, but different enough to push any direct quoting onto another path after just a few words.
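Roughly, in code (a toy sketch of my reading of the scheme; the exact threshold and hashing details are made up):

    import hashlib
    import numpy as np

    def loss_mask(token_ids, window=32, threshold=0.9):
        # Split the sequence into fixed windows, hash each window into a
        # PRNG seed, draw one value in [0, 1) per token, and zero out the
        # loss for tokens whose draw exceeds the threshold. Masked tokens
        # still feed the forward pass as context; they just aren't learned.
        mask = np.ones(len(token_ids), dtype=np.float32)
        for start in range(0, len(token_ids), window):
            chunk = token_ids[start:start + window]
            seed = int.from_bytes(hashlib.sha256(repr(chunk).encode()).digest()[:8], "big")
            draws = np.random.default_rng(seed).random(len(chunk))
            mask[start:start + window][draws > threshold] = 0.0
        return mask

    # training then uses: loss = (per_token_losses * mask).sum() / mask.sum()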
I think it's more subtle than that. IIUC the tokens were all present for the purpose of computing the output and the score is based on the output. It's only the weight update where some of the tokens get ignored. So the learning is lossy but the inference driving the learning is not.
Rather than a book that's missing words it's more like a person with a minor learning disability that prevents him from recalling anything perfectly.
However it occurs to me that data augmentation could easily break the scheme if care isn't taken.
There was recently https://news.ycombinator.com/item?id=47131225.
How can the user know if the LLM produces anything that violates copyright?
(Of course they shouldn't have trained it on infringing content in the first place, and perhaps used a different model for enterprise, etc.)
So, the Supreme Court has said that AI-produced code cannot be copyrighted. (Am I right?) Then who's to blame if AI produces code, large portions of which already exist, written and copyrighted by humans (or corporations)?
I assume it goes something like this:
A) If you distribute code produced by AI, YOU cannot claim copyright to it.
B) If you distribute code produced by AI, YOU CAN be held liable for distributing it.
Functionally speaking, AI is viewed as any machine tool. Using, say, Photoshop to draw an image doesn't make that image lose copyright, but nor does it imbue the resulting image with copyright. It's the creativity of the human use of the tool (or lack thereof) that creates copyright.
Whether or not AI-generated output a) infringes the copyright of its training data and b) if so, whether it is fair use is not yet settled. There are several pending cases asking this question, and I don't think any of them have reached the appeals court stage yet, much less SCOTUS. But to be honest, there's enough evidence of LLMs regurgitating training inputs verbatim to show they're capable of infringing copyright (and a few cases have already found infringement in such scenarios), and given the 2023 Warhol decision, arguing that they're fair use is a very steep claim indeed.
So the question of LLM training first needs to be settled; then we can talk about whether retelling a whole software package infringes anyone's rights. And even if it does, there are no laws in place to pursue it.
Surely that varies on a case by case basis? With agentic coding the instructions fed in are often incredibly detailed.
Actually, most of the time, it is not.
The Supreme Court has "original jurisdiction" over some types of cases, which means if someone brings such a case to them they have to accept it and rule on it, and they have "discretionary jurisdiction" over many more types of cases, which means if someone brings one of those they can choose whether or not to accept it. AI copyright cases are discretionary-jurisdiction cases.
You generally cannot reliably infer what the Supreme Court thinks of the merits of a case when they decline to accept it, because they are often thinking big picture and longer term.
They might think a particular ruling is needed, but the particular case being appealed is not a good case to make that ruling on. They tend to want cases where the important issue is not tangled up in many other things, and where multiple lower appeals courts have hashed out the arguments pro and con.
When the Supreme Court declines the result is that the law in each part of the country where an appeals court has ruled on the issue is whatever that appeals court ruled. In parts of the country where no appeals court has ruled, it will be decided when an appeal reaches their appeals courts.
If appeals courts in different areas go in different directions, the Supreme Court will then be much more likely to accept an appeal from one of those in order to make the law uniform.
Further, you know that ideas are not protected by copyright. The code comparison here demonstrates a relatively strong case that the expression of the idea is significantly different from that of the original code.
If it were the case that the LLM ingested the code and regurgitated it (as would be the premise of highlighting the training data provenance), that similarity would be much higher. That is not the case.
That said, even if model training is fair use, model output can still be infringing. There would be a strong case, for example, if the end user guides the LLM to create works in a way that copies another work or mimics an author or artist's style. This case clearly isn't that. On the similarity at issue here, I haven't personally compared. I hope you're right.
Can I use one AI agent to write detailed tests based on disassembled Windows, and another to write code that passes those same function-level tests? If so, I'm about to relicense Windows 11 - eat my shorts, ReactOS!
The actual meaning of a "clean room implementation" is that it is derived from an API and not from an implementation (I am simplifying slightly). Whether the reimplementation is actually a "new implementation" is a subjective but empirical question that basically hinges on how similar the new codebase is to the old one. If it's too similar, it's a copy.
What the chardet maintainers have done here is legally very irresponsible. There is no easy way to guarantee that their code is actually MIT and not LGPL without auditing the entire codebase. Any downstream user of the library is at risk of the license switching from underneath them. Ideally, this would burn their reputation as responsible maintainers, and result in someone else taking over the project. In reality, probably it will remain MIT for a couple of years and then suddenly there will be a "supply chain issue" like there was for mimemagic a few years ago.
That's not what the law says [1]. If two people happen to independently create the same thing they each have their own copyright.
If it's highly improbable that two works are independent (eg. the gcc code base), the first author would probably go to court claiming copying, but their case would still fail if the second author could show that their work was independent, no matter how improbable.
[1] https://lawhandbook.sa.gov.au/ch11s13.php?lscsa_prod%5Bpage%...
It is also true that in all the cases that I know about where that has occurred the courts have taken a very, very, very close look at the situation and taken extensive evidence to convince the court that there really wasn't any copying. It was anything but a "get out of jail free" card; it in fact was difficult and expensive, in proportion to the size of the works under question, to prove to the court's satisfaction that the two things really were independent. Moreover, in all the cases I know about, they weren't actually identical, just, really really close.
No rational court could possibly ever come to that conclusion if someone claimed a line-by-line copy of gcc was written by them and they must have independently come up with it. The probability of that is one out of ten to the "doesn't even remotely fit in this universe so forget about it". The bar to overcoming that is simply impossibly high, unlike two songs that happen to have similar harmonies and melodies, given the exponentially more constrained space of "simple song" as compared to a compiler suite.
I know it's a popular misconception that "impossible" = a strict, statistical, mathematical 0, but if you try to use that in real life it turns out to be pretty useless. It also tends to bother people that there isn't a bright shining line between "possible" and "impossible" like there is between "0 and strictly not 0", but all you can really do is deal with it. Wherever the line is, this is literally millions of orders of magnitude on the wrong side of it. Not a factor of millions, a factor of ten to the millions. It's not possible to "accidentally" duplicate a work of that size.
I suppose a different way of stating my position is that some activities that don't look like copying are in fact copying. For instance it would not be required to find a literal copy of the GCC codebase inside of the LLM somehow, in order for the produced work to be a copy. Likewise if I specify that "Harry Potter and the Philosopher's Stone is the text file with hash 165hdm655g7wps576n3mra3880v2yzc5hh5cif1x9mckm2xaf5g4" and then someone else uses a computer to brute force find a hash collision, I suspect this would still be considered a copy.
I think there is a substantial risk that the automatic translation done in this case is, at least in part, copying in the above sense.
It's an interesting case. As I understand it, there is an ongoing debate within the AI research community as to whether neural nets are encoding verbatim blocks of information or creating a model which captures the "essence" or "ideas" behind a work. If they are capturing ideas, which are not copyrightable, it would suggest that LLMs can be used to "launder" copyright. In this case, I get the feeling that, for clarity, we would both say that the work in question (or works derived from it) should not be part of the training set or prompt, emulating a clean room implementation by a human. (Is that a fair comment?)
I've no direct experience here, but I would come down on the side of "LLMs are encoding (copyrightable) verbatim text", because others are reporting that LLMs do regurgitate word-for-word chunks of text. Is this always the case though? Do different AI architectures, or models that are less well fitted, encode ideas rather than quotes?
Edit: It would be an interesting experiment to use two LLMs to emulate a clean room implementation. The first is instructed to "produce a description of this program". The second, having never seen the program, in its prompt or training set, would be prompted to "produce a program based on this description". A human could vet the description produced by the first LLM for cleanliness. Surely someone has tried this, though it might be a challenge to get an LLM that is guaranteed not to have been exposed to a particular code base or its derivatives?
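Something like this sketch, where `describe` and `implement` are placeholders for calls to two independent models:

    def clean_room_port(source_code, describe, implement):
        # Hypothetical two-stage pipeline. Ideally the second model has
        # never seen the original program in its prompt or training data.
        spec = describe(
            "Describe this program's behavior: inputs, outputs, formats, "
            "edge cases. Do NOT include, paraphrase, or allude to any of "
            "its code.\n\n" + source_code
        )
        # A human would vet `spec` here for leaked expression ("cleanliness").
        new_code = implement("Write a program implementing this spec:\n\n" + spec)
        return spec, new_code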
Patent law is different and doesn't rely on information flow in the same way.
"Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different"
However, describing the path you need to get there requires copyright infringement.
I know you were simplifying, and not to take away from your well-made broader point, but an API-derived implementation can still result in problems, as in Google vs Oracle [1]. The Supreme Court found in favor of Google (6-2) along "fair use" lines, but the case dodged setting any precedent on the nature of API copyrightability. I'm unaware if future cases have set any precedent yet, but it just came to mind.
[1]: https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_....
Also, I find it important that here the API is really minimal (compared to the Java std lib), the real value of the library is in the internal detection logic.
I think there is precedent that says exactly this - for example the BIOS rewrites for the IBM PC from people like Phoenix. And it would be trivial to instruct an LLM to prefer to use (say, in assembler) register C over register B wherever that was possible, resulting in different code.
If you somehow actually randomly produce the same code without a reference, it's not a copy and doesn't violate copyright. You're going to get sued and lose, but platonically, you're in the clear. If it's merely somewhat similar, then you're probably in the clear in practice too: it gets very easy very fast to argue that the similarities are structural consequences of the uncopyrightable parts of the functionality.
> The actual meaning of a "clean room implementation" is that it is derived from an API and not from an implementation (I am simplifying slightly).
This is almost the opposite of correct. A clean room implementation's dirty phase produces a specification that is allowed to include uncopyrightable implementation details. It is NOT defined as producing an API, and if you produce an API spec that matches the original too closely, you might have just dirtied your process by including copyrightable parts of the shape of the API in the spec. Google vs Oracle made this more annoying than it used to be.
> Whether the reimplementation is actually a "new implementation" is a subjective but empirical question that basically hinges on how similar the new codebase is to the old one. If it's too similar, it's a copy.
If you follow CRRE, it's not a copy, full stop, even if it's somehow 1:1 identical. It's going to be JUDGED as a copy, because substantial similarity for nontrivial amounts of code means that you almost certainly stepped outside of the clean room process and it no longer functions as a defense, but if you did follow CRRE, then it's platonically not a copy.
> What the chardet maintainers have done here is legally very irresponsible.
I agree with this, but it's probably not as dramatic as you think it is. There was an issue with a free Japanese font/typeface a decade or two ago that was accused of mechanically (rather than manually) copying the outlines of a commercial Japanese font. Typeface outlines aren't copyrightable in the US or Japan, but they are in some parts of Europe, and the exact structure of a given font is copyrightable everywhere (e.g. the vector data or bitmap field for a digital typeface, as opposed to the idea of its shape). What was the outcome of this problem? Distros stopped shipping the font and replaced it with something vaguely compatible. Was the font actually infringing? Probably not, but better safe than sorry.
I don't believe this, and I doubt that the sense of copying in copyright law is so literal. For instance, if I generated the exact text of a novel by looking for hash collisions, or by producing random strings of letters, or by hammering the middle button on my phone's autosuggestion keyboard, I would still have produced a copy and I would not be safe to distribute it. There need not have been any copy anywhere near me for this to happen. Whether it is likely or not depends on the technique used - naive techniques make this very unlikely, but techniques can improve.
It is also true that similarity does not imply copying - if you and I take an identical photograph of the same skyline, I have not copied you and you have not copied me, we have just fixed the same intangible scene into a medium. The true subjective test for copying is probably quite nuanced, I am not sure whether it is triggered in this case, but I don't think "clean room LLMs" are a panacea either.
> dirty phase produces a specification ... it is NOT defined as producing an API
This does not really sound like "the opposite of correct". APIs are usually not copyrightable (the truth is of course more complicated); if you are happy to replace "API" with "uncopyrightable specification" then we can probably agree and move on.
> it's probably not as dramatic as you think it is
In reality I am very cynical and think nothing will come of this, even if there are verbatim snippets in the produced code. People don't really care very much, and copyright cases that aren't predicated on millions of dollars do not survive the court system very long.
It is actually that literal, really.
> For instance, if I generated the exact text of a novel by looking for hash collisions,
This is a copyright violation because you're using the original to construct the copy. It's not a pure RNG.
> or by producing random strings of letters,
This wouldn't be a copyright violation, but nobody would believe you.
> or by hammering the middle button on my phone's autosuggestion keyboard, I would still have produced a copy and I would not be safe to distribute it.
This would probably be a copyright violation.
You probably think that this is hypothetical, but problems like this do actually go to court all the time, especially in the music industry, where people try to enforce copyright on melodies that have the informational uniqueness of an eight-word sentence.
> APIs are usually not copyrightable,
This was commonly believed among developers for a long time, but it turned out to not be true.
> This does not really sound like "the opposite of correct".
The important part is that information about the implementation can absolutely be in the spec without necessarily being copyrightable (and in real world clean room RE, you end up with a LOT of implementation details). You were saying the opposite, that it was a spec of the API as opposed to a spec of the implementation.
What color are your bits? That's all the law cares about.
The first sentence is the title of an essay.
a bunch of people get together, rewrite something while making a pinky promise not to look at the original source code
guaranteeing the premise is basically impossible; it sounds like some legal jester dance done to entertain the already absurd existing copyright laws
Clean room implementations are a jester dance around the judiciary. The whole point is to avoid legal ambiguity.
You are not required to do this by law, you are doing this voluntarily to make potential legal arguments easier.
The alternative is going over the whole codebase in question and arguing basically line by line whether things are derivative or not in front of a judge (which is a lot of work for everyone involved, subjective, and uncertain!).
I've always taken "clean room" to be the kind of manufacturing clean room (sealed/etc). You're given a device and told "make our version". You're allowed to look, poke, etc but you don't get the detailed plans/schematics/etc.
In software, you get the app or API and you can choose how to re-implement.
In open source, yes, it seems like a silly thing and hard to prove.
This is incorrect, and thinking this can get you sued.
https://en.wikipedia.org/wiki/Structure,_sequence_and_organi...
Per your link, the Supreme Court's thinking on "structure, sequence and organization" (Oracle's argument why Google shouldn't even be allowed to faithfully produce a clean-room implementation of an API) has changed since the 1980s out of concern that using it to judge copyright infringement risks handing copyright holders a copyright-length monopoly over how to do a thing:
> enthusiasm for protection of "structure, sequence and organization" peaked in the 1980s [..] This trend [away from "SS&O"] has been driven by fidelity to Section 102(b) and recognition of the danger of conferring a monopoly by copyright over what Congress expressly warned should be conferred only by patent
The Supreme Court specifically recognised Google's need to copy the structure, sequence and organization of Java APIs in order to produce a cleanroom Android runtime library that implemented Java APIs so that existing Java software could work correctly with it.
Similarly, see Oracle v. Rimini Street (https://cdn.ca9.uscourts.gov/datastore/opinions/2024/12/16/2...) where Rimini Street has been producing updates that work with Oracle's products, and Oracle claimed this made them derivative works. The Court of Appeals decided that no, the fact A is written to interoperate with B does not necessarily make A a derivative work of B.
When a developer reimplements a completely new version of code from scratch, working from an understanding only, the new implementation should generally be an improvement on the original source code, not merely its equal.
In today’s world, letting LLMs replicate anything will generate average code as the “good” outcome and generally create equivalent or more bloat anyway unless well managed.
https://www.joelonsoftware.com/2000/04/06/things-you-should-...
> They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.
Finding a middle ground of building a roadmap to refactoring your way forward is often much better.
Appreciate the Joel link, nice to see that kind of stuff again.
With that being said, if it's the same small team that built the first version, there can be a calculated risk to driving a refactor towards a rewrite under the right conditions. I say this because I have been able to do it in these conditions a few times; it still remains very risky. If it's a new or different team later on trying to rewrite, all bets are off anyway.
We have to remember 70% of software projects fail at the best of times, independent of rewrites.
Perhaps the maintainer wants to force the issue?
> Any downstream user of the library is at risk of the license switching from underneath them.
Checking the license of the transitive closure of your dependencies is table stakes for using them.
I doubt it, and I don't see any evidence that's what they're doing. There are probably better ways, if that's what they want.
> Checking the license of the transitive closure of your dependencies is table stakes for using them.
Checking the license of the transitive closure of your dependencies is only feasible when the library authors behave responsibly.
i.e. a re-implementation
which can either
- still be a derived work, i.e. seen as you just obfuscating a copyright violation
- be a new work doing the same
nothing prevents an AI from producing a spec based on an API, API documentation and API usage/fuzzing, and then resetting the AI and using that spec to produce a rewrite
I mean "doing the same" is NOT copyright protection, you need patent law for that. Except even with patent law you need to innovations/concepts not the exact implementation details. Which means that even if there are software patents (theoretically,1) most things done in software wouldn't be patentable (as they are just implementation details, not inventions)
(1): I say theoretically because there is a very long track record of a lot of patents being granted which really should never have been granted. This, combined with the high cost of invalidating patents, has caused a ton of economic damage.
Ted Nelson was years ahead of the future where we really needed his Xanadu to keep track of fractional copyright. Likely if we had such a mechanism, and AI authors respected it then we would be able to say that your work is derived from 3000 other original works and that you added 6 lines of new code.
AI/ML is complex, so as a simpler analogy: If I watch The Simpsons, and I create an amusing infographic of how often Homer says "D'oh!" over time, my infographic would be an original work. AI training follows the same principle.
> AI training follows the same principle.
If you really believe that then we can't have a meaningful conversation about this; that's not even ELI5 territory, that's just disconnected. You should be asking questions, not telling people how it works.
In fact we could make this concrete: use the model as the prediction stage in a compressor, and compress gcc with it. The residual is the extent to which it doesn't contain gcc.
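Concretely, something like this sketch, where `next_token_prob` is a placeholder for whatever returns the model's next-token probability:

    import math

    def residual_bits(next_token_prob, tokens):
        # Under an ideal arithmetic coder, the compressed size of `tokens`
        # is the model's total negative log2-likelihood. A tiny residual
        # relative to the raw size means the model effectively contains
        # the text; a large one means it doesn't.
        bits = 0.0
        for i, tok in enumerate(tokens):
            bits += -math.log2(next_token_prob(tokens[:i], tok))
        return bits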
https://osyuksel.github.io/blog/reconstructing-moby-dick-llm...
I see a test where one model managed to 85% reproduce a paragraph given 3 input paragraphs under 50% of the time.
So it can't even produce 1 paragraph given 3 as input, and it can't even get close half the time.
"Contains Moby Dick" would be something like you give it the first paragraph and it produces the rest of the book. What we have here instead is a statistical model that when given passages can do an okay job at predicting a sentence or two, but otherwise quickly diverges.
Getting close less than half the time given three paragraphs as input still sounds like red-handed copyright infringement to me.
If I sample a copyrighted song in my new track, clip it, slow it down, and decimate the bit rate, a court would not let me off the hook.
It doesn't matter how much context you push into these things. If I feed them 50% of Moby Dick and they produce the next word, and I can repeatedly do that to produce the entire book (I'm pretty sure the number of attempts is wholly irrelevant: we're impossibly far from monkeys on typewriters) then we can prove the statistical model encodes the book. The further we are from that (and the more we can generate with less) then the stronger the case is. It's a pretty strong case!
> If I feed them 50% of Moby Dick and they produce the next word and I can repeatedly do that to produce the entire book... then we can prove the statistical model encodes the book.
It can't because it doesn't. That's what it means to say it diverges.
The "number of attempts" is you cheating. You're giving it the book when you let it try again word by word until it gets the correct answer, and then claiming it produced the book. That's exactly the residual that I said characterizes the extent to which it doesn't know the book. Trivially, no matter how bad the model is, if you give it the residual, it can losslessly compress anything at all.
If you had a simple model that just predicts next word given current word (trained on word pair frequency across all English text, or even all text excluding Moby Dick), and then give it retries until it gets the current word right, it will also quickly produce the book. Because it was your retry policy that encoded the book, not the model. Without that policy, it will get it wrong within a few words, just like these models do.
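A toy version of that retry policy makes the point (`sample_next` is a placeholder for sampling one word from any model at all):

    def reproduce(sample_next, target_words):
        # "Retry until correct" smuggles the book in through the policy:
        # this loop reproduces `target_words` with ANY model that assigns
        # nonzero probability to every word, because the target itself is
        # what decides when to stop retrying.
        out = []
        for word in target_words:
            while sample_next(out) != word:
                pass  # the retries are the residual: we supply the answer
            out.append(word)
        return out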
If you had access to a model's top p selection then I'd bet the book is in there consistently for every token. Is it statistically significant? Might be!
I'm not cheating because the number of attempts is so low it's irrelevant.
If I were to take a copyrighted work and chunk it up into 1000 pieces and encrypt each piece with a unique key, and give you all the pieces and keys, would it still be the copyrighted work? What if I shave off the last bit of each key before I give them to you, so you have a 50% chance of guessing the correct key for each piece? What if I shave two bits? What if it's a million pieces? When does it become transformative or no longer infringing for me to distribute?
The answer might surprise you.
Consider a password consisting of random words, each chosen from a 4k dictionary. Say you choose 10 words. Then your password has log_2(4k)*10 bits of entropy.
Now consider a validator that tells you when you get a word right. Then you can guess one word at a time, and your password strength is log_2(4k*10) bits. Exponentially weaker.
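Putting numbers on it (assuming the 4k dictionary and 10 words):

    import math

    dict_size, words = 4000, 10
    print(math.log2(dict_size) * words)   # ~119.7 bits: guess the whole password at once
    print(math.log2(dict_size * words))   # ~15.3 bits: a validator confirms each word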
You're constructing the second scenario and pretending it's the first.
Also in your 50% probability scenario, each word is 1 bit, and even 50-100 bits is unguessable. A 1000 word key where each word provides 1 bit would be absurdly strong.
I wonder what the results would be if I spent time training a model up from scratch without any such constraints. I'm much too busy with other stuff right now, but that would be an interesting challenge.
These companies just don't want to deal with people complaining that it reproduces something when they don't understand that they're literally giving it the answer.
For a fan fiction episode that is different from all official episodes, you may cross your fingers.
For a remake of one of the episodes with a different camera angle and similar dialog, I expect that you will run into problems.
I beg to differ. Please examine any of my recent codebases on github (same username); I have cleanroom-reimplemented par2 (par2z), bzip2 (bzip2z), rar (rarz), 7zip (z7z), so maybe I am a good test case for this (I haven't announced this anywhere until now, right here, so here we go...)
https://github.com/pmarreck?tab=repositories&type=source
I was most particular about the 7zip reimplementation since it is the most likely to be contentious. Here is my repo with the full spec that was created by the "dirty team" and then worked off of by the LLM with zero access to the original source: https://github.com/pmarreck/7z-cleanroom-spec
Not only are they rewritten in a completely different language, but to my knowledge they are also completely different semantically except where they cannot be to comply with the specification. I invite you and anyone else to compare them to the original source and find overt similarities.
With all of these, I included two-way interoperation tests with the original tooling to ensure compatibility with the spec.
Researchers have shown that an LLM was able to reproduce the verbatim text of the first 4 Harry Potter books with 96% accuracy.
Kinda weird argument; in their research (https://forum.gnoppix.org/t/researchers-extract-up-to-96-of-...) the LLM was explicitly asked to reproduce the book. There are people out there who can do so without LLMs; by this logic everything they write is a copyright infringement, and so is every book they can reproduce.
> Yes if you are solving the exact problem that the original code solved and that original code was labeled as solving that exact problem then that’s very good reason for the LLM to produce that code.
I think you're overestimating LLM ability to generalize.
My understanding of cleanroom is that the person/team programming is supposed to have never seen any of the original code. The agent is more like someone who has read the original code line by line, but doesn't remember all the details - and isn't allowed to check.
Even copyright laws with provisions for machine learning were written when that meant tangential things like ranking algorithms or training of task-specific models that couldn't directly compete with all of their source material.
For code it also completely changes where the human-provided value is. Copyright protects specific expressions of an idea, but we can auto-generate the expressions now (and the LLM indirection messes up what "derived work" means). Protecting the ideas that guided the generation process is a much harder problem (we have patents for that and it's a mess).
It's also a strategic problem for GNU. GNU's goal isn't licensing per se, but giving users freedom to control their software. Licensing was just a clever tool that repurposed the copyright law to make the freedoms GNU wanted somewhat legally enforceable. When it's so easy to launder code's license now, it stops being an effective tool.
GNU's licensing strategy also depended on a scarcity of code (contribute to GCC, because writing a whole compiler from scratch is too hard). That hasn't worked well for a while due to permissive OSS already reducing scarcity, but gen AI is the final nail in the coffin.
(interestingly, asking it to make him some friends gave me more 'original' ideas, but ask it to give him a brother and I can hear the big N's lawyers writing a letter already...)
Is it? I think the law is truly undeveloped when it comes to language models and their output.
As a purely human example, suppose I once long ago read through the source code of GCC. Does this mean that every compiler I write henceforth must be GPL-licensed, even if the code looks nothing like GCC code?
There's obviously some sliding scale. If I happen to commit lines that exactly replicate GCC then the presumption will be that I copied the work, even if the copying was unconscious. On the other hand, if I've learned from GCC and code with that knowledge, then there's no copyright-attaching copy going on.
We could analogize this to LLMs: instructions to copy a work would certainly be a copy, but an ostensibly independent replication would be a copy only if the work product had significant similarities to the original beyond the minimum necessary for function.
However, this is intuitively uncomfortable. Mechanical translation of a training corpus to model weights doesn't really feel like "learning," and an LLM can't even pinky-promise to not copy. It might still be the most reasonable legal outcome nonetheless.
Copyright laws are predicated on the idea that valuable content is expensive and time consuming to create.
Ideas are not protected by copyright, expression of ideas is.
You can't legally copy a creative work, but you can describe the idea of the work to an AI and get a new expression of it in a fraction of the time it took for the original creator to express their idea.
The whole premise of copyright is that ideas aren't the hard part, the work of bringing that idea to fruition is, but that may no longer be true!
That individual artists are still defending this system is baffling to me.
I think that's maybe a misunderstanding. GNU wants everyone to be able to use their computers for the purposes they want, and software is the focus because software was the bottleneck. A world where software is free for anyone to create is a GNU utopia, not a problem.
Obviously the bigger problem for GNU isn't software, which was pretty nicely commoditized already by the FOSS-ate-the-world era of two decades ago; it's restricted hardware, something that AI doesn't (yet?) speak to.
Also, the mentioned SCOTUS decision is concerned with authorship of generative AI products. That's very different from this case. Here we're talking about a tool that transformed source code and somehow magically got rid of copyright due to this transformation? Imagine the consequences to the US copyright industry if that were actually possible.
There is an act of copying, and there is whether or not that copying was permitted under copyright law. If the author of the code said you can copy, then you can. If the original author didn't, but the author of a derivative work, who wasn't allowed to create a derivative work, told you you could copy it, then it's complicated.
And none of it's enforced except in lawsuits. If your work was copied without permission, you have to sue the person who did that, or else nothing happens to them.
(IANAL)
but also probably not fully right
as far as I understand they avoid the decision of whether an AI can produce creative work by saying that neither the AI nor its owner/operator can claim copyright ownership (which makes it de-facto public domain)
this wouldn't change anything wrt. derived work still having the original author's copyright
but it could change things wrt. parts in the derived work which by themselves are not derived
If it is a completely new implementation with completely different internals then it could still be non-LGPL even if produced by a person with in-depth knowledge. Copyright only cares whether you "copied" something, not whether you had "knowledge" or whether it "behaves the same". So as long as it's distinct enough it can still be legally fine. The "full clean room" requirement is about "what is guaranteed to hold up in front of a court", not "what might pass as non-derivative but with legal risk".
How would that work? We still have no legal conclusion on whether code generated by an AI model trained on all publicly available source (irrespective of type of license) is legal or not. IANAL, but IMHO it is totally illegal, as no permission was sought from the authors of the source code the models were trained on. So there is no way to just release the code created by a machine into the public domain without knowing how the model was inspired to come up with the generated code in the first place. Pretty sure it would be considered in the scope of "reverse engineering", and that is not specific only to humans. You can extend it to machines as well.
EDIT: I would go so far as to say the most restrictive license that the model is trained on should be applied to all model-generated code. And a licensing model with the original authors (all Github users who contributed code in some form) should be set up so they are reimbursed by AI companies. In other words, a % of profits must flow back to the community as a whole every time code-related tokens are generated. Even if everyone receives pennies, it doesn't matter. That is fair. It should also extend to artists whose art was used for training.
That license is called "All Rights Reserved", in which case you wouldn't be able to legally use the output for anything.
There are research models out there which are trained only on permissively licensed data (i.e. no "All Rights Reserved" data), but they're, colloquially speaking, dumb as bricks when compared to state-of-the-art.
But I guess the funniest consequence of the "model outputs are a derivative work of their training data" position would be that it'd essentially wipe out (or at the very least force a revert to a pre-AI-era commit of) every open source project which may have included any AI-generated or AI-assisted code, which currently pretty much includes every major open source project out there. And it would also make it impossible to legally train any new models whose training data isn't strictly pre-AI, since otherwise you wouldn't know whether your training data is contaminated or not.
Models whose authors tried to train only on permissively licensed data.
For example https://huggingface.co/bigcode/starcoder2-15b tried to use a permissively licensed dataset, but it filtered only on repository-level license, not file-level. So when searching for "under the terms of the GNU General Public License" on https://huggingface.co/spaces/bigcode/search-v2 back when it was working, you would find it was trained on many files with a GPL header.
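A file-level pass like this sketch is what the repo-level filter missed (the regex and the header cutoff are my own guesses, not what StarCoder2 actually used):

    import re

    GPL_MARKER = re.compile(r"GNU (Lesser |Affero )?General Public License", re.IGNORECASE)

    def keep_file(text: str) -> bool:
        # Scan each file's header for license grants that contradict the
        # repository's declared license, instead of trusting the repo level.
        return not GPL_MARKER.search(text[:2000])  # license headers sit at the top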
That's what the whole copyright and patent regimes are designed to achieve.
It's to encourage the creation of knowledge.
US Constitution, Article I, section 8:
To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;

That's not what I favor because you are inserting a middleman, the Government, into the mix. The Government ALWAYS wants to maximize tax collections AND fully utilize its budget. There is no concept of "savings" in any Government anywhere in the World. And Government spending is ALWAYS wasteful. Tenders floated by Government will ALWAYS go to companies that have senators/ministers/prime ministers/presidents/kings etc as shareholders. In other words, the tax money collected will be redistributed again amongst the top 500 companies. There is no trickle down. Which is why agreements need to be between creators and those who are enjoying the fruits of the creation. What have Governments ever created except for laws that stifle innovation/progress every single time?
https://www.youtube.com/watch?v=Qc7HmhrgTuQ
In all seriousness, without the government you would have no innovation and progress, because it's the public school system, functioning roads, research grants, and a stable and lawful society that allow you to do any kind of innovation.
Apart from that, you have answered a strawman. I said redistribute, not give to the government. I explicitly worded things that way because I don't think we should be having a discussion on policy.
I think we are moving to an economy where the share of profits taken by capital becomes much larger than the one taken by labor. If that happens then laborers will have very little discretionary income to fuel consumption, and even capitalists will end up suffering. We can choose to redistribute now or wait for it to happen naturally; however, that usually happens in a much more violent way, be it hyperinflation, famine, war or revolution.
You said: "let's just tax the hell out of AI companies and redistribute." Only the Government has the power to tax. The question of redistribution does not even arise without first having access to the coffers of the Company, which neither you nor I have. The Government CAN have it, if it wants, by either nationalizing the Company or, as you said, "taxing the hell out of" the Company. Please explain how you would go about taxing and redistributing without involving the Government?
> In all seriousness without the government you would have no innovation and progress, because it's the public school system, functioning roads, research grants a stable and lawful society that allow you to do any kind of innovation.
These fall under the ambit of governance and hence why you have a Government. That's the only power Governments should have. Governments SHOULD NOT be managing private enterprises.
> I think we are moving to an economy where the share of profits taken by capital becomes much larger than the one take from labor. If that happens then laborers will have very little discretionary income to fuel consumption and even capitalists will end up suffering. We can choose to redistribute now or wait for it to happen naturally, however that usually happens in a much more violent way, be it hyperinflation, famine, war or revolution.
Agreed. Which is why I was proposing private agreements in the first place (without involving a third-party like the Government which, more often than not, mismanages funds).
Just because you have a failure of imagination for how government should work, doesn’t mean it can’t work. And stifling innovation is exactly what I want, when that innovation is “steal from everyone so we can invent the torment nexus” or whatever’s going on these days.
> As its name suggests, the Government Pension Fund Global is invested in international financial markets, so the risk is independent from the Norwegian economy. The fund is invested in 8,763 companies in 71 countries (as of 2024).
Basically what I said above. You give your tax dollars to Government and it will invest it into top 500 companies. In the Norway Pension Fund case it is 8,763 companies in 71 countries. None of them are startups/small businesses/creators.
> And stifling innovation is exactly what I want, when that innovation is “steal from everyone so we can invent the torment nexus” or whatever’s going on these days.
You are confusing current lack of laws regulating this space with innovation being evil. Innovation is not evil. The technology per se is not evil. Every innovation brings with it a set of challenges which requires us to think of new legislation. This has ALWAYS been the case for thousands of years of human innovation.
That wouldn't be fair because these models are not only trained on code. A huge chunk of the training data are just "random" webpages scraped off the Internet. How do you propose those people are compensated in such a scheme? How do you even know who contributed, and how much, and to whom to even direct the money?
I think the only "fair" model would be to essentially require models trained on data that you didn't explicitly license to be released as open weights under a permissive license (possibly with a slight delay to allow you to recoup costs). That is: if you want to gobble up the whole Internet to train your model without asking for permission then you're free to do so, but you need to release the resulting model so that the whole humanity can benefit from it, instead of monopolizing it behind an API paywall like e.g. OpenAI or Anthropic does.
Those big LLM companies harvest everyone's data en-masse without permission, train their models on it, and then not only they don't release jack squat, but have the gall to put up malicious explicit roadblocks (hiding CoT traces, banning competitors, etc.) so that no one else can do it to them, and when people try they call it an "attack"[1]. This is what people should be angry about.
[1] -- https://www.anthropic.com/news/detecting-and-preventing-dist...
well, assuming all data that is itself not permissively licensed is excluded
AI can't claim ownership, and humans can't either, as they haven't produced it. If it is guaranteed that no one can claim ownership, it is often seen as being in the public domain.(1)
In general it is irrelevant what the copyright of the AI training data is. At least in the US, judges have been relatively clear about that. (Except if the AI reproduces input data close to verbatim. _But in general we aren't speaking about AI being trained on a code base, but an AI using/rewriting it_.)
(1): Which isn't the same as no one knowing who has ownership. It also might be owned by no one in the sense that no one can grant you copyright permission (so the opposite of public domain), but also no one can sue (so de-facto public domain).
The main analogy is this one: you take a massive pile of copyrighted works, cut them up into small sections and toss the whole thing in a centrifuge; then, when prompted to produce a work, you use a statistical method to pull pieces of those copyrighted works back out of the centrifuge. Sometimes you may find that you are pulling pieces out in the order in which they went in, which after a certain number of tokens becomes a copyright violation.
This suggests there are some obvious ways in which AI companies can protect themselves from claims of infringement but as far as I'm aware not a single one has protections in place to ensure that they do not materially reproduce any fraction of the input texts other than that they recognize prompts asking it to do so.
So it won't produce the lyrics of 'Let it be'. But they'll be happy to write you mountains of prose that strongly resembles some of the inputs.
The fact that they are not doing that tells you all you really need to know: they know that everything that their bots spit out is technically derived from copyrighted works. They also have armies of lawyers and technical arguments to claim the opposite.
sure,
but that is completely unrelated to this discussion
which is about AI using code as input to produce similar code as output
not about AI being trained on code
> not about AI being trained on code
The two are very directly connected.
The LLM would not be able to do what it does without being trained, and it was trained on copyrighted works of others. Giving it a piece of code for a rewrite is a clear case of transformation, no matter what, but now it also rests on a mountain of other copyrighted code.
So now you're doubly in the wrong, you are willfully using AI to violate copyright. AI does not create original works, period.
it isn't clear how/if an LLM is different from the brain, but we all have been trained by looking at copyrighted source code at some time.
It's very clear: the one is a box full of electronics, the other is part of the central nervous system of a human being.
> but we all have been trained by looking at copyrighted source code at some time.
That may be so, but not usually the copyrighted source code that we are trying to reproduce. And that's the bit that matters.
You can attempt to whitewash it but at its core it is copyright infringement and the creation of derived works.
The single word "training" is here being used to describe two very different processes; what an LLM does with text during training is at basically every step fundamentally distinct from what a human does with text.
Word embedding and gradient descent just aren't anything at all like reading text!
I have a lot of music in my head that I've listened to for decades. I could probably replicate it note-for-note given the right gear and enough time. But that would not make any of my output copyrightable works. But if I doodle for three minutes on the piano, even if it is going to be terrible that is an original work.
Says who? The US ruling the article refers to does not cover this.
It is different in other countries. Even if US law says it is public domain (which is probably not the case) you had better not distribute it internationally. For example, UK law explicitly says a human is the author of machine generated content: https://news.ycombinator.com/item?id=47260110
I think it will depend on HOW the AI arrived at the new code.
If it was using the original source code then it probably is guilty-by-association. But in theory an AI model could also generate a rewrite if fed intermediary data not based on that project.
it depends on the country you are in
but overall in the US judges have mostly consistently ruled it as legal
and this is extremely unlikely to change or be effectively interpreted differently
but where things are more complex is:
- the model containing training data (instead of generic abstractions based on it), determined by whether or not it can be convinced to produce close-to-verbatim output of the training data the discussion is about
- the model producing close-to-verbatim training data
the latter seems to mostly (always?) be seen as a copyright violation, with the issue that the person who commits the violation (i.e. uses the produced output) might not know
the former could mean that not just the output but the model itself can count as a form of database containing copyright-violating content. In which case the model provider has to remove it, which is technically impossible(1)... The pain point with that approach is that it will likely kill public models, while privately kept models will for every case put in a filter and _claim_ to have removed it and likely will get away with it. So while IMHO it should conceptually be a violation, it probably is better if it isn't.
But also, the case the original article refers to is more about models interacting with/using a code base than about them being trained on it.
(1): Impossible for the weights themselves; it is very much removable from knowledge bases used by LLMs (e.g. retrieval).
That horse has bolted. No one knows where all the AI code is any more, and it would no longer be possible to comply with a ruling that no one can use AI-generated code.
There may be some mental and legal gymnastics to make it possible, but it will be made legal because it’s too late to do anything else now.
I think it is down to the community and the culture to draw our red lines and enforce them. If we value open source, we will find a way to prevent its complete collapse through model-assisted copyright laundering. If not, OSS will be slowly enshittified as control of projects slowly flows to the most profit-motivated entities.
I don’t know what happens next, honestly.
A silver lining, if this maintainer ends up being in the right, is that any proprietary software can easily be reverse engineered and stripped of its licensing by any hobbyist with enough free time and Claude tokens.
Personally, I'd welcome a post-copyright software era
I think the main question is when a rewrite is a clean rewrite, via AI. If it is a clean rewrite they can choose any licence.
> chardet , a Python character encoding detector used by requests and many others, has sat in that tension for years: as a port of Mozilla’s C++ code it was bound to the LGPL, making it a gray area for corporate users and a headache for its most famous consumer.
True, but too weak. It ends copyright entirely. If I can do this to a code base, I can do it to a movie, to an album, to a novel, to anything.
As such, we can rest assured that, for better or for worse, this is going to be resolved in favor of this not being enough to strip the copyright off of something, and the chardet/chardet project would be well advised not to try to stand in front of the copyright legal behemoth and defeat it in single combat.
I'm struggling to see where this conclusion came from. To me it sounds like the AI-written work cannot be copyrighted, and so it's kind of like copy-pasting the original code. Copy-pasting the original code doesn't make it public domain. AI-generated code can't be copyrighted, or entered into the public domain, or used for purposes outside of the original code's license. What's the paradox here?
If I train a limerick generator on the contents of Project Gutenberg, no matter how creative its outputs, they’re not copyrightable under this interpretation. And it’s by far the most reasonable interpretation of the law as both intended and written. Entities that are not legal persons cannot have copyright, but legal persons also cannot claim copyright of something made by a nonperson, unless they are the "creative force" behind the work.
I think we haven't even begun to consider all the implications of this, and while people ran with that one case where someone couldn't copyright a generated image, it's not that easy for code. I think there needs to be way more litigation before we can confidently say it's settled.
If "generated" code is not copyrightable, where do draw the line on what generated means? Do macros count? Does code that generates other code count? Protobuf?
If it's the tool that generates the code, again where do we draw the line? Is it just using 3rd party tools? Would training your own count? Would a "random" code gen and picking the winners (by whatever means) count? Does brute-forcing all the space (silly example, but hey, we're in silly space here) count?
Is it just "AI" adjacent that isn't copyrightable? If so how do you define AI? Does autocomplete count? Intellisense? Smarter intellisense?
Are we gonna have to have a trial where there's at least one lawyer making silly comparisons between LLMs and power plugs? Or maybe counting abacuses (abaci?)... "But your honour, it's just random numbers / matrix multiplications...
> If "generated" code is not copyrightable, where do draw the line on what generated means? Do macros count?
Does the output of the macro depend on ingesting someone else's code?
> Does code that generates other code count?
Does the output of the code depend on ingesting someone else's code?
> Protobuf?
Does your protobuf implementation depend on ingesting someone else's code?
> If it's the tool that generates the code, again where do we draw the line?
Does the tool depend on ingestion of someone else's code?
> Is it just using 3rd party tools?
Does the 3rd party tool depend on ingestion of someone else's code?
> Would training your own count?
Does the training ingest someone else's code?
> Would a "random" code gen and pick the winners (by whatever means) count?
Does the random codegen depend on ingesting someone else's code?
> Does brute-forcing all the space (silly example, but hey, we're in silly space here) count?
Does the bruteforce algo depend on ingesting someone else's code?
> Is it just "AI" adjacent that isn't copyrightable?
No, it's the "depends on ingesting someone else's code" that makes it not copyrightable.
> If so how do you define AI?
Doesn't matter whether it is AI or not, the question is are you ingesting someone else's code.
> Does autocomplete count?
Does the specific autocomplete in question depend on ingesting someone else's code?
> Intellisense?
Does the specific Intellisense in question depend on ingesting someone else's code?
> Smarter intellisense?
Does the specific Smarter Intellisense in question depend on ingesting someone else's code?
...
Look, I see where you're going with this - reductio ad absurdum and all - but it seems to me that you're trying to muddy the waters by claiming that either all code generation is allowed or all of it is disallowed.
Let me clear the waters for all the readers - the complaint is not about code generation, it's about ingesting someone else's code, frequently for profit.
All these questions you are asking seem to me to be irrelevant and designed to shift the focus from the ingestion of other people's work to something that no one is arguing against.
> the complaint is not about code generation, it's about ingesting someone else's code, frequently for profit.
Why do you think that is, and what complaint specifically? I was talking about this:
> The Copyright Office reviewed the decision in 2022 and determined that the image doesn't include “human authorship,” disqualifying it from copyright protection
There seems to be no mention of training there. In fact, if you read the appeals court case [1], they don't mention training either:
> We affirm the denial of Dr. Thaler’s copyright application. The Creativity Machine cannot be the recognized author of a copyrighted work because the Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being. Given that holding, we need not address the Copyright Office’s argument that the Constitution itself requires human authorship of all copyrighted material. Nor do we reach Dr. Thaler’s argument that he is the work’s author by virtue of making and using the Creativity Machine because that argument was waived before the agency.
I have no idea where you got the idea that this was about training data. Neither the copyright office nor the appeals court even mention this.
But anyway, since we're here, let's entertain this. So you're saying that training data is the differentiator. OK. So in that case, would training on "your own data" make this ok with you? Would training on "synthetic" data be ok? Would a model that sees no "proprietary" code be ok? Would a hypothetical model trained just on RL with nothing but a compiler and endless compute be ok?
The courts seem to hint that "human authorship" is still required. I see no end to the "... but what about x", as I stated in my first comment. I was honestly asking those questions, because the crux of the case here rests on "human authorship of the piece to be copyrighted", not on anything prior.
[1] - https://fingfx.thomsonreuters.com/gfx/legaldocs/egpblokwqpq/...
> ...
> I have no idea where you got the idea that this was about training data. Neither the copyright office nor the appeals court even mention this.
In both the story and the comments, that's the prevailing complaint. FTFA:
> Their claim that it is a “complete rewrite” is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a “clean room” implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights.
I mean, I know it's passé to read the story, but I still do it, so my comments are on the story, not just the title taken out of context.
> But anyway, since we're here, let's entertain this. So you're saying that training data is the differentiator.
Well, that's the complaint in the story and in the comment section, so it makes sense to address that and that alone.
> OK. So in that case, would training on "your own data" make this ok with you?
Yes.
> Would training on "synthetic" data be ok?
If provenance of "synthetic data" does not depend on some upstream ingesting someone else's work, then yes.
> Would a model that sees no "proprietary" code be ok?
If the model does not depend on someone else's work, then Yes.
> Would a hypothetical model trained just on RL with nothing but a compiler and endless compute be ok?
Yes.
*Note: Let me clarify that "someone else's work" means the work of someone who has not consented to or licensed their work for ingestion and subsequent reproduction under the terms that AI/LLM training does it. If someone licensed you their work to train a model, then have at it.
> > To me it sounds like the AI-written work cannot be copyrighted
I was only commenting on that.
"Ingesting someone else's code" does not seem very useful here - it's hardly quantifiable, nor is "ingestion" the key question I believe.
I think they are rhetorically asking if your position is correct.
It is one thing to argue that the code is a one-to-one copy, but when even the comments are the same, isn't it quite clear it's a copy?
According to the analysis that you referenced:
> JPlag parses Python source into syntactic tokens (function definitions, assignments, control flow, etc.), discarding all variable names, comments, whitespace, and formatting
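For intuition, here's a minimal sketch of that kind of structure-only comparison (not JPlag itself): reduce each file to a stream of AST node types, so renamed variables, comments, and formatting all drop out before comparing.

```python
import ast
from difflib import SequenceMatcher

def structure_tokens(source: str) -> list[str]:
    # Keep only syntactic node types; identifiers, comments, and
    # formatting are discarded by the parser / this projection.
    return [type(node).__name__ for node in ast.walk(ast.parse(source))]

def structural_similarity(src_a: str, src_b: str) -> float:
    # 0.0..1.0 similarity over the two token streams.
    return SequenceMatcher(None, structure_tokens(src_a),
                           structure_tokens(src_b)).ratio()

a = "def total(xs):\n    # sum them up\n    return sum(xs)\n"
b = "def accumulate(values):\n    return sum(values)\n"
print(structural_similarity(a, b))  # ~1.0: same shape, different surface
```

Real plagiarism detectors do considerably more than this (matching local tiles rather than one global diff), but the key point stands: matching at this level survives renames, while identical comments would have to be checked separately, since the parser throws them away.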
See e.g. https://banteg.xyz/posts/crimsonland/ , a single human with the help of LLMs reverse engineered a non-trivial game and rewrote it in another language + graphics lib in 2 weeks.
Did you find it worked reasonably well on any portion of the codebase you could throw at it? For example, if I recall correctly, all of MajorMUD's data file interactions used the embedded Btrieve library which was popular at the time. For that type of specialized low-level library, I'm curious how much effort it would take to get readable code.
LLM ripping off open source code removes that.
I think refusing to publish open source code right now is the safe bet. I know I won't be publishing anything new until this gets definitively resolved, and will limit myself to contributing to a handful of existing open source projects.
It is my understanding that what a GPL license requires is releasing the source code of modifications.
So if we assume that a rewrite using AI retains the GPL license, it only means the rewrite needs to be open source under the GPL too.
It doesn't prevent any unwanted use, or at least that is my understanding. I guess unwanted use in this case could mean not releasing the modifications.
I agree, in theory. In practice, courts will request that the decision-making process be made public. The "we don't know" excuse won't hold; real people also need to tell the truth in court. LLMs may not lie to the court or use the Chewbacca defence.
Also, I am pretty certain you CAN have AI models that explain how they arrived at their decisions. And they can generate valid code too, so anything can be autogenerated here - in theory.
It does matter for the one who implements it.
Finding an LLM that's good enough to do the rewrite while being able to prove it wasn't exposed to the original GPL code is probably impossible.
That’s a complex question that isn’t solved yet. Clearly, regurgitating verbatim LGPL code in large chunks would be unlawful. What’s much less clear is a) how large do those chunks need to be to trigger LGPL violations? A single line? Two? A function? What if it’s trivial? And b) are all outputs of a system which has received LGPL code as an input necessarily derivative?
If I learn how to code in Python exclusively from reading LGPL code, and then go away and write something new, it’s clear that I haven’t committed any violation of copyright under existing law, even if all I’m doing as a human is rearranging tokens I understand from reading LGPL code semantically to achieve new result.
It’s a trying time for software and the legal system. I don’t have the answers, but whether you like them or not, these systems are here to stay, and we need to learn how to live with them.
You could probably use it to output code that is GPL'd though.
When you write code, it is the exact sequence of characters, the expression of the code, that is protected. If you copy it and change some lines, of course it’s still protected. Maybe some way of writing an algorithm is protected. But nothing else (under copyright).
Phase 1: extract requirements from original product (ideally not its code).
Phase 2: implement them without referencing the original product or code.
I wrote a simple "clean room" LLM pipeline, but the requirements just ended up being an exact description of the code, which defeated the purpose.
My aim was to reduce bloat, but my system had the opposite effect! Because it replicated all the incidental crap, and then added even more "enterprisey" crap on top of it.
I am not sure if it's possible to solve it with prompting. Maybe telling it to derive the functionality from the code? I haven't tried that, and not sure how well it would work.
I think this requirements phase probably cannot be automated very effectively.
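For concreteness, the shape of the pipeline I described looks roughly like the sketch below. `llm()` is a placeholder for whatever completion call you use, and the prompts are purely illustrative; as noted above, the hard part is that Phase 1 tends to collapse into a transcription of the code no matter how you phrase it.

```python
# Sketch of a two-phase "clean room" pipeline. llm() is a stand-in for
# any chat-completion client; nothing here is a specific vendor API.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model client here")

def phase1_extract_requirements(original_source: str) -> str:
    # The failure mode: if the output reads like a line-by-line
    # description of the code, the "clean room" is already contaminated.
    return llm(
        "Describe WHAT this program does as black-box requirements: "
        "inputs, outputs, observable behavior. Do not describe its "
        "structure, names, or algorithms.\n\n" + original_source
    )

def phase2_implement(requirements: str) -> str:
    # A fresh context that never sees the original source.
    return llm(
        "Implement a program satisfying these requirements:\n\n"
        + requirements
    )
```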
This is the "Don't think of a pink elephant" fallacy all over again.
e.g.
Team A:
- reads the code
- writes specifications and tests based on the code
- gives those specifications to Team B
Team B:
- reads the specs and the tests
- writes new code based on the above
The thinking being that if Team B never sees the code, then it's "innovative" and you are not "laundering" the code.
On a side note:
what happens in a copyright lawsuit concerning code and how hired experts investigate what happened is described in this AMAZING talk by Dave Beazley: https://www.youtube.com/watch?v=RZ4Sn-Y7AP8
Also, a few years back there was the case of SAP(?), I think, where they did a reimplementation independently via the design documents.
Those two were upheld in litigation and hold up to this day.
This case however is neither a clean room implementation nor relicensable.
A good example, if the author had wanted to be correct, would have been the sudo rewrite, which Ubuntu is doing with their sudo-rs in Rust. Not bug-for-bug compatible, as they have already deviated from some usability choices, but more valid than this.
If you have a company that depends on software, the rest of the business (service, reliability, etc) better be rock solid because you can be guaranteed someone will do a rewrite of your stack.
However, this is solved if somebody trains a model with only code that does not have restrictive licenses. Then, the maintainers of the package in question here could never claim that the clean room implementation derived from their code because their code is known to not be in the training set.
It would probably be expensive to create this model, but I have to agree that especially if someone does manage this, it’s kind of the end of copyleft.
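Mechanically, the corpus-filtering half of that is almost trivial; a naive sketch (hypothetical allowlist, trusting SPDX headers at face value) is below. The genuinely expensive parts are gathering enough permissively licensed code and proving provenance for files with no header, vendored code, and copied snippets.

```python
import re
from pathlib import Path

# Hypothetical allowlist of SPDX identifiers treated as non-restrictive.
PERMISSIVE = {"MIT", "BSD-2-Clause", "BSD-3-Clause", "Apache-2.0", "Unlicense"}

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")

def is_permissive(path: Path) -> bool:
    # Naive: trust a declared SPDX header near the top of the file.
    head = path.read_text(errors="ignore")[:2000]
    match = SPDX_RE.search(head)
    return bool(match) and match.group(1) in PERMISSIVE

def training_files(root: Path) -> list[Path]:
    # Keep only files whose declared license is on the allowlist.
    return [p for p in root.rglob("*.py") if is_permissive(p)]
```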
Is the "clean room" process meaningfully backed by legal precedent?
As an aside, this clean room engineering is one of the plot points of Season 1 of the TV show Halt and Catch Fire where the fictional characters do this with the BIOS image they dumped.
This doesn't prevent any form of automatic copyrighting by production of derivative code or similar. It just prevents anyone from claiming ownership of any parts unique to the derived work.
Think about it: if a natural disaster changes (e.g. water damages) a picture you drew, then a) you can't claim ownership of the naturally produced changes, but b) you still have ownership of the original picture contained in the changed/derived work.
AI shouldn't change that.
Which brings us to another 2 aspects:
1. if you give an AI access to the project's code to rewrite it anew, it _is_ a copyright violation, as it's basically a side-by-side rewrite
2. but if you go the clean room approach but powered by AI then it likely isn't a copyright violation, but also now part of the public domain, i.e. not yours
So yes, doing clean room rewrites has become incredibly cheap.
But no, just because it's AI doesn't make the original code's copyright go away.
And let's be realistic: one of the most relevant parts of many open source projects is that they are openly and jointly maintained. You don't get this with clean room rewrites, AI or not.
Does this mean company X using AI coding to build their app, that they have no copyright over their AI coded app's code?
So, I dislike AI and wish it would disappear, BUT!
The argument is strange here, because ... how can a2mark ensure that AI did NOT do a clean-room conforming rewrite? Because I think in theory AI can do precisely this; you just need to make sure that the model used does that too. And this can be verified, in theory. So I don't fully understand a2mark here. Yes, AI may make use of the original source code, but it could "implement" things on its own. Ultimately this is finite complexity, not infinite complexity. I think a2mark's argument is in theory weak here. And I say this as someone who dislikes AI. The main question is: can computers do a clean rewrite, in principle? And I think the answer is yes. That is not saying that claude did this here, mind you; I really don't know the particulars. But the underlying principle? I don't see why AI could not do this. a2mark may need to reconsider the statement here.
In cases like this it is usually incumbent on the entity claiming the clean-room situation was pure to show their working. For instance how Compaq clean-room cloned the IBM BIOS chip¹ was well documented (the procedures used, records of comms by the teams involved) where some other manufacturers did face costly legal troubles from IBM.
So the question is “is the clean-room claim sufficiently backed up to stand legal tests?” [and moral tests, though the AI world generally doesn't care about failing those]
--------
[1] the one part of their PCs that was not essentially off-the-shelf, so once it could be reliably legally mimicked this created an open IBM PC clone market
> *Context:* The registry maps every supported encoding to its metadata. Era assignments MUST match chardet 6.0.0's `chardet/metadata/charsets.py` at https://raw.githubusercontent.com/chardet/chardet/f0676c0d6a...
> Fetch that file and use it as the authoritative reference for which encodings belong to which era. Do not invent era assignments.
[0] https://github.com/chardet/chardet/issues/327#issuecomment-4...
a2mark has to demonstrate that v7 is "a work containing the v6 or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language", which is different from demanding a clean-room reimplementation.
Theoretically, the existence of a publicly available commit that is half v6 code and half v7 can be used to show that this part of v7 code has been infected by LGPL and must thus infect the rest of v7, but that's IMO going against the spirit of the [L]GPL.
which, minimally, instructs it to directly examine the test suite: `4. High encoding accuracy on the chardet test suite`
You can't just dismiss it then say the claimant has to provide proof.
Plus the argument put forth is that they can re-license the project. It's not a new one made from scratch.
Now if you had 2 entirely distinct humans involved in the process that might work though.
Does that make the new code a derivative of the original test suite (also LGPL)?
If yes, this in a sense allows a path around GPL requirements. Linux's MIT version would be out in the next 1-2 years.
Isn't that what https://github.com/uutils/coreutils is? GNU coreutils spec and test suite, used to produce a rust MIT implementation. (Granted, by humans AFAIK)
1. Generate a specification of what the system does.
2. Pass it to another "clean" system.
3. The second, clean system implements based just on the specification, without any information on the original.
That 3rd step is the hardest, especially for well known projects.
Then the model that is familiar with the code can write specs. The model that does not have knowledge of the project can implement them.
Would that be a proper clean room implementation?
Seems like a pretty evil, profitable product "rewrite any code base with an inconvenient license to your proprietary version, legally".
2. Dumped into a file.
3. claude-code that converts this to tests in the target language, and implements the app that passes the tests.
3 is no longer hard - look at all the reimplementations popping up, from ccc to other rewrites. They all have a well-defined test suite as a common theme. So much so that the tldraw author raised a (joke) issue to remove tests from the project.
AI muddies the water because large models trained on public repos can reproduce GPL snippets verbatim, so prompting with tests that mirror the original risks contamination and a court could find substantial similarity. To reduce risk use black-box fuzzing and property-based tools, have humans review and scrub model outputs, run similarity scans, and budget for legal review before calling anything MIT.
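As a concrete example of the black-box, property-based flavor (assuming the `hypothesis` library and chardet's public `detect()` API; the specific property is illustrative and would probably need loosening for real-world corner cases): generate text, encode it, and assert that whatever encoding the detector reports can at least decode the bytes, without ever consulting the original project's test suite.

```python
# Assumes `pip install chardet hypothesis`.
import chardet
from hypothesis import given, strategies as st

@given(st.text(min_size=20))
def test_detected_encoding_can_decode(text: str) -> None:
    raw = text.encode("utf-8")
    guess = chardet.detect(raw)["encoding"]  # may be None for odd inputs
    if guess is not None:
        # The reported encoding should decode the bytes it was shown.
        raw.decode(guess)

if __name__ == "__main__":
    test_detected_encoding_can_decode()
```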
Our knowledge of what the person or the model actually retains of the original source is entirely incomplete, while the entire premise requires full knowledge that nothing remains.
The thesis I propose is that tests are more akin to facts, or can be stated as facts, and facts are not copyright-able. That's what makes this case interesting.
If "tests" should mean a proper specification let's say some IETF RFC of a protocol, then that would be different.
Mark Pilgrim! Now that's a name I haven't read in a long time.
A lawyer could easily argue that the model itself stores a representation of the original, and thus it can never do a "fresh context".
And to be perfectly honest, LLMs can quote a lot of text verbatim.
We can't speak about a clean room implementation from an LLM, since they are technically capable only of spitting out their training data in different ways, not of any original creation.
Of course in practice it would work exactly in the opposite fashion and AI generated code would be immune even if it copied code verbatim.
I'd assume an LLM trained on the original would also be contaminated.
You cannot copyright the alphabet, but you can copyright the way letters are put together.
Now, with AI the abstraction level goes from individual letters to functions, classes, and maybe even entire files.
You can't copyright those (when written using AI), but you __can__ copyright the way they are put together.
Sort of, but not really. Copyright usually applies to a specific work. You can copyright Harry Potter. But you can't copyright the general class of "Wizard boy goes to wizard school". Copyrights generally can't be applied to classes of works. Only one specific work. (Direct copies - eg made with a photocopier - are still considered the same work.)
Patterns (of all sorts) usually fall under patent law, not copyright law. Patents have some additional requirements - notably including that a patent must be novel and non-obvious. I broadly think software patents are a bad idea. Software is usually obvious. Patents stifle innovation.
Is an AI "copy" a copy like a photocopier would make? Or is it a novel work? It seems more like the latter to me. An AI copy of a program (via a spec) won't be a copy of the original code. It'll be programmed differently. Thats why "clean room reimplementations" are a thing - because doing that process means you can't just copy the code itself. But what do I know, I'm not a lawyer or a judge. I think we'll have to wait for this stuff to shake out before anyone really knows what the rules will end up being.
Weird variants of a lot of this stuff have been tested in court. Eg the Google v Oracle case from a few years ago.
> Software is usually obvious.
Hardware and mechanical designs are usually described in CAD programs nowadays, so it comes pretty close to software; it's just that LLMs are not the right tool to "GenAI" them. But I've seen plenty of these kinds of designs, and I know for sure that they are often no less obvious than a lot of software. Treating software as "obvious therefore not patentable" is not accurate or fair, and is probably not going to help the profession in the AI age. But I agree that patents are bad for innovation.
It is also not fair to claim that an AI-copy is fundamentally different from photocopying.
I mean, in both cases it is like you are picking the worst case interpretation for the field of software engineering.
> I think we'll have to wait for this stuff to shake out before anyone really knows what the rules will end up being.
Yes, but it will help if we think deeply about this stuff ourselves because what law-makers come up with may not be what the profession needs.
If you clean-room copy it, I think it is different. Eg, first get one agent to make a complete spec of what the program does. And a list of all the correctness guarantees it meets. Then feed that spec into another AI model to generate a program which meets that spec.
The second program will not be based on any of the code in the first program. They'll be as different as any two implementations of the same idea are. I don't think the second program should be copyrighted. If it should, why shouldn't one C compiler be able to own a copyright over all C compilers? Why doesn't the first JSON parsing library own JSON parsing? These seem the same to me. I don't see how AI models change anything, other than taking human effort out of the porting process.
Finally, even if your rationale is 99% correct, there is still that 1% that makes the result a mechanistic copy.
And I see no way in which most people would 100% agree with your view.
If you want to protect the idea or the design, get a patent. A patent on one h264 encoder applies to all h264 encoders.
There is a chance the courts or the legislature will decide differently. But until then, we should assume the existing law of the land holds.
I think there are going to be a lot of these types of scenarios where the old way of doing things just doesn't hold.
If not, maybe it should not constitute a valid case in court.
Also, I'm wondering if they are not themselves liable considering they have every copyrighted work in there too.
Presumably there is already a law around why I can't just go borrow a book from my library, type out some 95% regurgitated variant on my laptop, and then try to publish it somewhere?
Edit: I looked it up and the thing that stops you from publishing a bootleg "Harold Potter and the Wizards Rock" is this legal framework around "The Abstractions Test".
> Tired of putting "Portions of this software..." in your documentation? Those maintainers worked for free—why should they get credit? ... Some licenses require you to contribute improvements back. Your shareholders didn't invest in your company so you could help strangers.
And the testimonials from "Definitely Real Corp", "MegaSoft Industries" and "Profit First LLC" are a bit suspicious, as is the fact that most of the links in the footer are not real.
I'm sure most people here would agree patents stifle innovation, but if copyright doesn't work for companies then they will turn to a different tool.
So, you can pilfer the commons ("public") but not stuff unavailable in source form.
If we expand your thought experiment to other forms of expression, say videos on YT or Netflix, then yes.
That's the core issue here. All models are trained on ALL source code that is publicly available irrespective of how it was licensed. It is illegal but every company training LLMs is doing it anyways.
Only (?) in America. In the EU, scraping is legal by default unless explicitly opted out with machine-readable instructions like robots.txt. That covers "training input". For training output, the rule is: "if the output is unrecognizable to the input, the license of the input does not matter" (otherwise, any project X could sue project Y for copyright infringement even if the projects only barely resemble each other). The cases where companies actually got sued were where the output was a direct copy or repetition of the input, even if an LLM was involved.
There is, however, a larger philosophical divide between the US and the EU based on history and religion. The US philosophy is highly individualistic, capitalistic, and considers "first-order principles." Copyright is a "property right": "I own this string of bits, you used them, therefore you owe me" (principle of absolute ownership).
Continental philosophy is more social and considers "second-order / causal effects." Copyright is a "personality right" that exists within a social ecosystem. The focus is on the effect of the action rather than a singular principle like "intellectual property." If the new code provides a secondary benefit to society and doesn't "hurt" the original creator's unique intellectual stamp, the law is inclined to view it as a new work.
In terms of legal sociology, America and Britain are more "individual-property-atomistic" thanks to their Protestant heritage, focusing on the rights of the individual (sola me, and my property, and God). Meanwhile, Europe was, at least to a large part, Catholic (esp. France), which focuses more on works, results, and effects on society to determine morality. While the states are officially secular, the heritage of this echoes in different definitions of what is considered "legal" or "moral", depending on which side of the ocean you are on.
We can debate if this law is moral. Like the GP, I too agree that public data in -> public domain out is what's right for society. Copyright as an artificial concept has gone on for long enough.
I don't think so. It is nowhere near "limited use". The entirety of the source code is ingested for training the model. In other words, it meets the bar of the "heart of the work" being used for training. There are other factors as well, such as not harming the owner's ability to profit from the original work.
Both Meta and Anthropic were vindicated for their use. Only in Anthropic's case was there a fine, for not buying the copies upfront.
> Instead, it was a fair use because all Anthropic did was replace the print copies it had purchased for its central library with more convenient space-saving and searchable digital copies for its central library — without adding new copies, creating new works, or redistributing existing copies. [0]
It was only fair use, where they already had a license to the information at hand.
[0] https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
You're holding out for some grace on this from the wrong venue. The right avenue would be lobbying for new laws to regulate and use LLMs, not try to find shelter in an archaic and increasingly irrelevant bit of legalese.
This is not the first time someone tried to say a machine is the author. The law is quite clear: the machine can't be an author for copyright purposes. Despite all the confused news articles, this does not mean that if Claude writes code for you it is copyright-free. It just means you are the author. Machines being used as tools to generate works is quite common, even autonomously. I'll quote from the opinion here:
In 1974, Congress created the National Commission on New Technological Uses of Copyrighted Works (“CONTU”) to study how copyright law should accommodate “the creation of new works by the application or intervention of such automatic systems or machine reproduction.”
...
This understanding of authorship and computer technology is reflected in CONTU’s final report: On the basis of its investigations and society’s experience with the computer, the Commission believes that there is no reasonable basis for considering that a computer in any way contributes authorship to a work produced through its use. The computer, like a camera or a typewriter, is an inert instrument, capable of functioning only when activated either directly or indirectly by a human. When so activated it is capable of doing only what it is directed to do in the way it is directed to perform.
...
I.e., when you use a computer or any tool, you are still the author.
The court confirms this later:
Contrary to Dr. Thaler’s assumption, adhering to the human-authorship requirement does not impede the protection of works made with artificial intelligence. Thaler Opening Br. 38-39. First, the human authorship requirement does not prohibit copyrighting work that was made by or with the assistance of artificial intelligence. The rule requires only that the author of that work be a human being—the person who created, operated, or used artificial intelligence—and not the machine itself. The Copyright Office, in fact, has allowed the registration of works made by human authors who use artificial intelligence.
There are cases where the use of AI made something uncopyrightable, even when a human was listed as the author, but all of the ones I know of are image-related.
Did you reply to the wrong comment? I was just saying I like the idea of AI-generated anything being public domain, not that it currently is/isn't.
But what about training without having seen any human-written program? Could a model learn from randomly generated programs?
Hm... I mean this is really one for the lawyers, but IMO you would likely successfully be able to argue that the marginal knowledge of general coding from a particular library is likely close to nil.
The hard part here imo would be convincingly arguing that you can wipe out knowledge of the library from the training set, whether through fine tuning or trying to exclude it from the dataset.
> But what about training without having seen any human-written program? Could a model learn from randomly generated programs?
I think the answer at this point is definitely no, but maybe someday. I think it's a more interesting question for art since it's more subjective, if we eventually get to a point where a machine can self-teach itself art from nothing... first of all how, but second of all it would be interesting to see the reaction from people opposed to AI art on the basis of it training off of artists.
Honestly given all I've seen models do, I wouldn't be too surprised if you could somehow distill a (very bad) image generation model off of just an LLM. In a sense this is the end goal of the pelican riding a bicycle (somewhat tongue in cheek), if the LLM can learn to draw anything with SVGs without ever getting visual inputs then it would be very interesting :)
Who is its most famous consumer?
Does this argument make sense? Even before LLMs, a developer could "rewrite this in a different style" and release it under a different license. Why are LLMs a new element in this argument?
Not quite. A cert denial isn’t a merits ruling and doesn’t "solidify" anything as Supreme Court precedent. It simply leaves the DC Circuit decision binding (within that circuit) and the Copyright Office’s human-authorship policy intact, for now.
SCOTUS doesn’t explain cert denials, so why they denied is guesswork. my guess: they’re letting it percolate while the tech matures and we all start to realize how deep this seismic fracture really is.
(For example: what does "ownership" of intellectual "property" even mean, once "authorship" is partly probabilistic/synthetic, and once almost everything humans create is AI assisted? Hard to draw bright lines.)
The more restrictive licences perhaps, though only if the rewriter convinces everyone that they can properly maintain the result. For ancient projects that aren't actively maintained anyway (because they are essentially done at this point) this might make little difference, but for active projects any new features and fixes might result in either manual reimplementation in the rewritten version or the clean-room process being repeated completely for the whole project.
> chardet 7.0 is a ground-up, MIT-licensed rewrite of chardet. Same package name, same public API —
(from the github description)
The “same name” part to me feels somewhat disingenuous. It isn't the same thing so it should have a different name to avoid confusion, even if that name is something very similar to the original like chardet-ng or chardet-ai.
A bit of public domain code can be used in a hidden way in perpetuity.
A bit of code covered by AGPL3 (for instance) (and other GPLs depending on context) can be used for free too, but with the extra requirement that users be given a copy of the code, and derivative works, upon request.
This is why the corps like MIT and similar and won't touch anything remotely like GPL (even LGPL which only covers derivative works of the library not the wider project). The MIT licence can be largely treated as public domain.
With the incentives set up like that, the era of open software cooperation would be ended rapidly.
People who understand and care about the implications of https://xkcd.com/2347/
Which admittedly is not nearly enough of us…
This isn't even limited to "the end of copyleft"; it's the end of all copyright! At least copyright protecting the little guy. If you have deep enough pockets to create LLMs, you can in this potential future use them to wash away anyone's copyright for any work. Why would the GPL be the only target? If it works for the GPL, it surely also works for your photographs, poetry – or hell even proprietary software?
and while particularly diehard believers in democracy may insist that if they kvetch hard enough they can get things they don't like regulated out of existence, they pointedly ignore the elephant in the room. they could succeed beyond their wildest dreams - get the West to implement a moratorium on AI, dismantle every FAGMAN, Mossad every researcher, send Yudkowskyjugend death squads to knock down doors to seize fully semiautomatic assault GPUs, and none of it will make any fucking difference, because China doesn't give a fuck.
Why would anyone waste their time reading what they wrote then?
Hoping the HN community can bring more color to this, there are some members who know about these subjects.
But if that were true, every single LLM is illegal, because they’ve all stolen terabytes of books and code.
But if it’s making the original author unhappy then why do it.
The key leap from GPT-3 to GPT-3.5 (aka ChatGPT) was code-davinci-002, which was trained on GitHub source code after the OpenAI-Microsoft partnership.
Open source code contributed much to LLMs' amazing CoT consistency. Without the open source movement, LLMs would have been developed much later.
I believe this is a misunderstanding of the ruling. The code can’t be copyrighted by a LLM. However, the code could be copyrighted by the person running the LLM.
There is no such thing as the output of an LLM being a 'new' work for copyright purposes; if it were, it would be copyrightable, and it is not. The term of art is 'original work', not 'new'.
The bigger issue will be using tools such as these and then humans passing off the results as their own because they believe that their contribution to the process whitewashes the AI contributions to the point that they rise to the status of original works. "The AI only did little bits" is not a very strong defense though.
If you really want to own the work-product simply don't use AI during the creation. You can use it for reviews, but even then you simply do not copy-and-paste from the AI window to the text you are creating (whether code or ordinary prose isn't really a difference).
I've seen a copyright case hinge on 10 lines of unique code that were enough of a fingerprint to clinch the 'derived work' assessment. Prize quote by the defendant: "We stole it, but not from them".
There is a very blurry line somewhere in the contents of any large LLM: would a model be able to spit out the code that it did if it did not have access to similar samples and to what degree does that output rely on one or more key examples without which it would not be able to solve the problem you've tasked it with?
The lower boundary would be the most minimal training set required to do the job, and then to analyze what the key corresponding bits were from the inputs that cause the output to be non-functional if they were dropped from the training set.
The upper boundary would be where completely non-related works and general information rather than other parties copyrighted works would be sufficient to do the creation.
The easiest way to loophole this is to copyright the prompt, not the work product of the AI, after all you should at least be able to write the prompt. Then others can re-create it too, but that's usually not the case with these AI products, they're made to be exact copies of something that already exists and the prompt will usually reflect that.
That's why I'm a big fan of mandatory disclosure of whether or not AI was used in the production of some piece of text, for one it helps to establish whether or not you should trust it, who is responsible for it and whether the person publishing it has the right to claim authorship.
Using AI as a 'copyright laundromat' is not going to end up well.
Because if this isn't allowed, that makes all of the AI models themselves illegal. They are very much the product of using others' copyrighted stuff and rewriting it.
But of course this will be allowed because copyright was never meant to protect anyone small. And that it's in direct contradiction with what applies to large companies? Courts won't care.
You can do this a lot by saying things like: complete the code "<snippet from gpl licensed code>".
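A crude sketch of such a probe (the `complete()` stub stands in for the model call, and the 50/50 split and scoring are arbitrary choices): feed the model the first half of a known GPL file and measure how closely its continuation tracks the real second half.

```python
from difflib import SequenceMatcher

def complete(prompt: str) -> str:
    # Placeholder for an LLM completion call; not a specific vendor API.
    raise NotImplementedError("wire up your model client here")

def regurgitation_score(gpl_source: str, split: float = 0.5) -> float:
    # Compare the model's continuation of the first half of a file
    # against the file's real second half; 1.0 ~= verbatim recall.
    cut = int(len(gpl_source) * split)
    prefix, true_suffix = gpl_source[:cut], gpl_source[cut:]
    continuation = complete("Complete the code:\n\n" + prefix)
    return SequenceMatcher(None, continuation[:len(true_suffix)],
                           true_suffix).ratio()
```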
And if the models are now GPL-licensed, the problem of relicensing is gone, since the code produced by these models should in theory also be GPL-licensed.
Unfortunately, there is a dumb clause that computer generated code cannot be copyrighted or licensed to begin with.
Can you point to the clause? I have never seen it in any GPL license.
Software in the AI era is not that important.
Copyleft has already won, you can have new code in 40 seconds for $0.70 worth of tokens.