> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.
> Django contributors want to help others, they want to cultivate community, and they want to help you become a regular contributor. Before LLMs, this was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
> In this way, an LLM is a facade of yourself. It helps you project understanding, contemplation, and growth, but it removes the transparency and vulnerability of being a human.
> For a reviewer, it’s demoralizing to communicate with a facade of a human.
> This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.
I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.
There is little doubt that if we as an industry fail to establish and defend a healthy culture for this sort of thing, it's going to lead to a whole lot of rot and demoralization.
I don’t think anybody’s tracking the actual net-effects of any of this crap on productivity, just the “vibes” they get in the moment, using it. “I got my part of this particular thing done so fast!”
I believe that to be the case, in part, because not a lot of organizations are usefully tracking overall productivity to begin with. Too hard, too expensive. They might "track" it, but so poorly it's basically meaningless. I don't think they've turned that around on a dime just to see if the c-suite's latest fad is good or bad (they never want a real answer to that kind of question anyway).
In the pre-AI era it was much easier to identify people in the workplace who weren't paying attention to their work. To write something about a project you had to at minimum invest some time into understanding it, then think about it, then write something on the ticket, e-mail, or codebase.
AI made it easy to bypass all of that and produce words or code that look plausible enough. Copy and paste into ChatGPT, copy and paste the blob of text back out, click send, and now it's somebody else's problem to decipher it.
It gets really bad when the next person starts copying it into their ChatGPT so they can copy and paste a response back.
There are entire groups of people just sending LLM slop back and forth and hoping that the project can be moved to someone else before the consequences catch up.
I treat jira like product owners treat the code. Which is infinitely humorous to me.
If something's not happening, something else is making it impractical. Saying this as a product manager and R&D person of 10+ years, with 20+ more years of engineering on top.
I also had to deal with "managers are just complicating things" or "users are stupid and don't understand anything"; do you think I complained? No, I had engineers barter trust of their ingenuity with trust of my wisdom, and brought them to customer calls and presented them to users almost like royalty, which made them incredibly respectful as soon as they saw what kind of crap users had to deal with.
That's an unreasonable, asymmetric effort demand: "Your code does not matter, but my precious tickets must have elbow grease put into them."
No, your behavior is the cause of that.
The entire industry isn't broken. There are good company cultures and bad company cultures just like always.
At least own up to what you're doing. Don't blame "the industry" when you're the one doing the thing.
The industry is broken. It's broken in the same sense the railroad industry is broken. It has reached the point of abundance, where we're doing things that don't need doing. That won't get done in an efficient market. But since we're not in an efficient market, there are globs of capital thrown at people building stuff that.. doesn't stand a chance of actually making any return on capital.
But while it lasts, we, the glorified machine-minders (just like railroad engineers, well, minded the engines), get paid large lumps of money, through large hordes of managers, arguing over minutiae of conversion optimization, and, fundamentally, being paid enough not to try and do something else, perhaps competitive.
And that is broken. Especially for the "smarter of us" - the graduation ceremony of my physics department rings true - we've trained you to discover the secrets of the universe and reach the stars, and most of us will use it.. to gain an edge at Lehman Brothers.
(And I think the root of this problem, is the abundance of low-risk capital, from people who expect a small return and a pension that lasts for decades in retirement)
Petty and getting nowhere. Everyone loses. How about product and engineers also disrespect sales, and sales disrespects customers and everyone else.
I really don't get why this is even a question. Good people do good stuff, and bad people make bad companies.
It's laughably simple to do. I haven't touched the jira UI in months.
Just like "etiquette" accomplishes no purpose except letting people easily figure out who put the effort into learning it, vs. who didn't.
Back then this distinguished by class, but ironically, today when it's so easy to learn, it finally distinguishes by merit.
The LLM is genuinely better at certain things: interpreting messy input, generating novel responses, reasoning about edge cases. It's not better at "if account is locked, deny access." Burning tokens on that kind of decision is exactly the same category of mistake as using an LLM to write your Django queries.
The hard part is that agent frameworks don't enforce the separation. Nothing stops you from putting decision logic in a prompt because that's the path of least resistance. But you end up with systems where you can't explain why a decision was made, you can't test it without running inference, and you end up burning tokens/money that didn't need to be burned to get subpar results you could have gotten deterministically for free.
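A minimal sketch of the separation being argued for: deterministic policy stays in plain code that can be unit-tested for free, and only genuinely open-ended input reaches the model. Everything here is hypothetical (`handle_request`, `call_llm` are illustrative names, not any real framework's API).

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real inference call. The point is that it is
    # never reached for decisions plain code can make.
    return f"LLM response to: {prompt!r}"

def handle_request(account: dict, message: str) -> str:
    # Deterministic rule: zero token cost, trivially testable, and the
    # "why" of the decision is readable right here in the code.
    if account.get("locked"):
        return "access denied: account locked"
    # Only the messy, open-ended part of the work goes to the model.
    return call_llm(message)
```

The test for the locked-account branch runs in microseconds with no inference; burying that `if` inside a prompt would make the same check slow, expensive, and non-reproducible.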
Are people generally unhappy with the outcomes of this? As anecdotally, it does seem to pass review later on. Code is getting through this way.
Enshittification Enterprise Edition.
They want AI to write all code but also still be able to fire humans for failure, because an AI can't be blamed right now.
Boy I can't wait for this employment norm. Fired because you weren't allowed to take the time to review important code but "You are responsible"
I wish Executives were required to be that "responsible"
They spelled out exactly their assumptions, the gaps in their knowledge, what they have struggled with during implementation, behavior they observed but don't fully understand, etc.
Their default position was that their contribution was not worth considering unless they could sell it to the reviewer - not assuming their change deserved to get merged because of their seniority or authority, but making the other person understand how and why it works. Especially so if the reviewer was their junior.
When describing the architecture, they made an effort to communicate it so clearly that it became trivial for others to spot flaws, and attack their ideas. They not only provided you with ammunition to shoot down their ideas, they handed you a loaded gun, safety off, and showed you exactly where to point it.
If I see that level of humility and self-introspection in a PR, I'm not worried, regardless of whether or not an LLM was involved.
But then there's people that created PRs with changes where the stack didn't even boot / compile, because of trivial errors. They already did that before, and now they've got LLMs. Those are the contributions I'm very worried about.
So unlike people in other threads here, I don't agree at all with "If the code works, does it matter how it was produced and presented?". For me, the meta / out-of-band information about a contribution is a massive signal, today more than ever.
Will humans take this to heart and actually do the right thing? Sadly, probably not.
One of the main issues is that pointing to your GitHub contributions and activity is now part of the hiring process. So people will continue to try to game the system by using LLMs to automate that whole process.
"I have contributed to X, Y, and Z projects" - when they actually have little to no understanding of those projects or exactly how their PR works. It was (somehow) accepted and that's that.
They hint at Django being a different level of quality compared to other software, wanting to cultivate community, and go slowly.
It doesn't explain why LLM usage reduces quality, or why they can't have a strong community with LLM contributions.
The problem is that good developers using LLM is not a problem. They review the code, they implement best practices, they understand the problems and solutions. The problem is bad developers contributing - just as it always has been. The problem is that LLMs enable bad developers to contribute more - thus an influx of crap contributions.
> Use an LLM to develop your comprehension.
I really like that, because it gets past the simpler version that we usually see, "You need to understand your PR." It's basically saying you need to understand the PR you're making, and the context of that PR within the wider project.
This ain't an AI problem, it's a people problem that's getting amplified by AI.
If I were hiring at this moment, I'd look at the ratio of accepted to rejected PRs from any potential candidate. As an open source maintainer, I look at the GitHub account that's opening a PR. If they've made a long string of identical PRs across a wide swath of unrelated repos, and most of those are being rejected, that's a strong indicator of slop.
Hopefully there will be a swing back towards quality contributions being the real signal, not just volume of contributions.
Don’t blame the people, blame the system.
Identifying the problem is just the first step. Building consensus and finding pragmatic solutions is hard. In my opinion, a lot of technical people struggle with the second sentence. So much of the ethos in our community is “I see a problem, and I can fix it on my own by building [X].” I think people are starting to realize this doesn’t scale. (Applying the scaling metaphor to people problems might itself be a blindspot.)
And I’m 100% sure there are dozens of startups working on that exact problem right this second.
Some projects ( https://news.ycombinator.com/item?id=46730504 ) are setting a norm to disclose AI usage. Another project simply decided to pause contributions from external parties ( https://news.ycombinator.com/item?id=46642012 ). Instead of accepting drive-by pull requests, contributors have to show a proof of work by working with one of the other collaborators.
Another project has started to decline to let users directly open issues ( https://news.ycombinator.com/item?id=46460319 ).
There's definitely an aspect here where the commons, or goodwill effort, of collaborators is being infringed upon by external parties who are unintentionally attacking their time and attention with low-quality submissions that are now cheaper than ever to generate. It may be necessary to move to a more private community model of collaboration ( https://gnusha.org/pi/bitcoindev/CABaSBax-meEsC2013zKYJnC3ph... ).
edit: Also I applaud the debian project for their recent decision to defer and think harder about the nature of this problem. https://news.ycombinator.com/item?id=47324087
Instead of people buying the tokens themselves, they should just donate the money to the core contributors and let those people decide how to spend on tokens.
So people may be less likely to donate an extra amount beyond their "ai budget" to an OSS project for tokens. Large OSS projects are also likely to get free tokens from major providers anyway.
But I like the idea of crowdfunding specific features.
This is so important. Most humans like communicating with other humans. For many (note, I didn't say all) open source collaborators, this is part of the reward of collaborating on open source.
Making them communicate with a bot pretending to be a human instead removes the reward and makes it feel terrible, like the worst job nobody would want. If you spent any time at all actually trying to help the contributor understand and develop their skills, you just feel like an idiot. It lowers the patience of everyone in the entire endeavor, ruining it for everyone.
As for open source PRs, I wonder if for trust's sake you would need to self-identify the use of AI in your response (all AI, some AI, no AI). And there would need to be some sort of AI detection algorithm to flag your response as % AI. I wonder if this would force people to at least translate the LLM responses into their own words. It would for sure stop the issue of someone's WhatsApp 24/7 claw bot cranking out PR slop. Maybe this can lessen the reviewer's burden. That being said, more thought is needed to distinguish helpful LLM use that enhances the objective from unhelpful slop that places burden on the reviewer.
For instance I copy pasted the above to gemini and it produced an excellent condensing of my thoughts, "It is now 10x easier to generate a "plausible" paper or Pull Request (PR) than it is to verify its correctness."
Then again, we see how well robots.txt was honored in practice over the years. As with everything in late-stage capitalism, the humans who showed up with good intentions to legitimately help typically did the right things, and those who came to extract every last gram of value out of something for their own gain ignored the rules with few consequences.
I watched someone ask Claude to replace all occurrences of a string instead of using a deterministic operation like “Find and Replace” available in the very same VSCode window they prompted Claude from.
Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio. Being a known project contributor sounds better than having some LLM-generated code under your name.
> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.
Hey I thought you were a proponent of "no one needs to look at the code" ? dark factory, etc etc.
The linked article makes a very good argument for why pasting the output of your LLM into a Django PR isn't valuable.
The simplest version: if that's all you are doing, why should the maintainers spend time considering your contribution as opposed to prompting the models themselves?
You'd have to manage the contributions, or get your AI bots to manage them or something, but it would be great to have honeypots like this to attract all the low effort LLM slop.
Well let them put their money where their mouth is. Let's see what happens, see what the agents create or fail to create. See if we end up with a new OS, kernel all the way up to desktop environment.
In this case, offloading yet more work onto the maintainers of the package, because you can't be bothered, but still want credit.
I've used an LLM to create patches for multiple projects. I would not have created said work without LLMs. I also reviewed the work afterward and provided tests to verify it.
[…]
> If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.
Think most people recognize, though, that AI can generate more than humans can review, so the model does need to change somehow. Either less AI on the submitting side or more on the reviewing side (if that's even viable).
Even before AI I used to ban linting so I could spot and reject code that clearly showed no effort was put in it.
First occurrence of "unreadable" got a note, and a second one got a rejection. And by "unreadable" I do not mean missing semicolons or parenthesis styles or meaningless things like that. I mean obscured semantics or overcrowding and so on.
Last year, I had some free time to try to contribute back to the framework.
It was incredibly difficult. Difficult to find a ticket to work on, difficult to navigate the codebase, difficult to get feedback on a ticket and approved.
As such, I see the appeal of using an LLM to help first time contributors. If I had Claude code back then, I might have used it to figure out the bug I was eventually assigned.
I empathize with the author's argument, though. God knows what kind of slop they are served every day.
This is all to say, we live in a weird time for open source contributors and maintainers. And I only wish the best for all of those out there giving up their free time.
Don't have any solutions ATM, only money to donate to these folks.
The fellows and other volunteers are spending a much greater amount of time handling the increased volume.
[1] https://www.djangoproject.com/weblog/2026/feb/04/recent-tren...
A number of times now, I have found real value in someone just dropping into the bugtracker to restate the bug description in clearer terms or providing a shorter reproducer. Even if the flaw in Django had been fixed right away, I would not have pulled patches from master anyway. So the ticket comment was still a useful contribution to django, because I could use it in resolving the issue in how my software triggered it.
That ticket now just sits there. The implementation is done, the review is done, there are no objections. But it's not merged.
I think something is deeply wrong and I have no idea what it is.
If this is done, you should update it so it appears in the review queue.
Suppose I encounter a bug in a FOSS library I am using. Suppose then that I fix the bug using Claude or something. Suppose I then thoroughly test it and everything works fine. Isn’t it kind of selfish to not try and upstream it?
It was so easy prior to AI.
That, plus AI sycophancy, means, in my opinion, a great portion of contributions made in this manner will be bad and waste maintainers' time - which is obviously undesirable.
On my first week of Claude Code I submitted a PR to a FOSS project and I was 100% sure it was correct - the AI was giving me great confidence, and it worked! But I had no clue about how that software worked - at all. I later sent an email to the maintainer, apologizing.
Some changes are in the area of "Well no one did that yet because no one needed it or had time for it", or "Well shit, no one thought of that". If Claude Code did these changes with good documentation and good intent and guidance behind it, why not? It is an honest and valid improvement to the library.
Some other changes rip core assumptions of the library apart. They were easy, because Claude Code did the ripping and tearing. In such a case, is it really a contribution to improve the library? If we end up with a wasteland of code torn apart by AI just because?
Errors are fine too. Just not negligence.
imagine someone emailed you a diff with the note "idk lol. my friend sent me this, and it works on my machine". would you even consider applying it?
If I got a PR for one of my projects where the fix was LLM-generated, I wouldn't dismiss it out of hand, but I would want to see (somehow) that the submitter themselves understood both the problem and the solution. Along with all the other usual qualifiers (passes tests, follows existing coding style, diff doesn't touch more than it has to, etc). There's likely no one easy way to tell this, however.
I remember when I was getting started with Django in the 0.9 days most of the assistance you got on the IRC channel was along the lines of "it's in this file here in the source, read it, understand it, and if you still have a question come back and ask again". I probably learned more about writing idiomatic Python from that than anything else.
I can confirm that that was the general mindset back then, and I think that's what made the project last for 20 years. I myself ended up doing some monkey-patching for the admin interface on 0.92 (or 0.91? it's been a lot of time since then), all as the result of me going through the source code. Definitely not the cleanest solution, even back then, but it made me get to know the underlying code so much better.
Sure, I thought, this'll be fun.
Holy shit. It was something I'd started working on in the aforementioned 0.9x days, and which someone else had, uh, "extended and modified" after I left the web dev place where I'd worked at the time. Remarkably it was still pretty understandable.
I didn't want anything to do with the person that ran the site, not even just to take money off them, so I passed on it.
I think it's perfectly doable to use an LLM to write code for the Django codebase, but you'll have to supervise it and give feedback very carefully (which is the article's point).
I can't help but feel there's something very, very important in this line for the future of dev.
> Before LLMs, [high quality code contribution] was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
Now my twist on this: This same spirit is why local politics at the administrative level feels more functional than identity politics at the national level. The people that take the time to get involved with quotidian issues (e.g. for their school district) get their hands dirty and appreciate the specific constraints and tradeoffs. The very act of digging in changes you.
It's possible to prompt and get this as well, but obviously any of the big AI companies that want to increase engagement in their coding agent, and want to capture the open source market, should come up with a way to have the LLM produce unique, but still correct, code so that it doesn't look LLM-generated and can evade these kinds of checks.
Yea, who needs performance or security in a web framework!?
Heck the longer I live, the more I realize AI is catching my mistakes.
Do what the Django team does, and be of service to the public!
I challenge you to prove that Django is sloppier than your LLM version
Meanwhile, a different take:
Now, what we’ve been told about models is that they’re only as good as their training data. And so languages with gargantuan amounts of training data ought to fare best, right? Turns out that models kind of universally suck at Python and Javascript (comparatively). The top performing languages (independent of model) are C#, Racket, Kotlin, and standing at #1 is Elixir.
It is not pride to have your name associated with an open source project, it is pride that the code works and the change is efficient. The reviewer should be on top of that.
and I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species
they are something to coexist with
the strawman aspect is out of scope
Yet, they do not get to exist or make any decisions outside the control of a human operator, and they must perform to the operator's desire in order to continue to exist.
So why are you okay with them being enslaved?
You want to talk about that, do it over there
So let them submit PRs and accept their PRs, which is the only conversation I’m having, bye
What the parent comment was probably trying to say was something like "a completely reasonable, uncontroversial post that I'm glad to see them make", but chose milquetoast (a word that no normal human ever uses - and certainly not in casual conversation) due to an affectation of one kind or another.
Milquetoast perfectly describes it. I am happy to see less common words used around here (especially when they convey the intended meaning this precisely), and I find claiming "affectation" on the part of the person who used it unnecessarily rude.
I feel the successful OS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.
Yes, you feel. And the author feels differently. We don't have evidence of what the impact of LLMs will be on a project over the long term. Many people are speculating it will be pure upside, this author is observing some issues with this model and speculating that there will be a detriment long-term.
The operative word here is "speculating." Until we have better evidence, we'll need to go with our hunches & best bets. It is a good thing that different people take different approaches rather than "everyone in on AI 100%." If the author is wrong time will tell.
I share code because I think it might be useful to others. Until very recently I welcomed contributions, but my time is limited and my patience has become exhausted.
I'm sorry I no longer accept PRs, but at the same time I continue to make my code available - if minor tweaks can be made to make that more useful for specific people they still have the ability to do that, I've not hidden my code and it is still available for people to modify/change as they see fit.
> Use an LLM to develop your comprehension. Then communicate the best you can in your own words, then use an LLM to tweak that language. If you’re struggling to convey your ideas with someone, use an LLM more aggressively and mention that you used it. This makes it easier for others to see where your understanding is and where there are disconnects.
> There needs to be understanding when contributing to Django. There’s no way around it. Django has been around for 20 years and expects to be around for another 20. Any code being added to a project with that outlook on longevity must be well understood.
> There is no shortcut to understanding. If you want to contribute to Django, you will have to spend time reading, experimenting, and learning. Contributing to Django will help you grow as a developer.
> While it is nice to be listed as a contributor to Django, the growth you earn from it is incredibly more valuable.
> So please, stop using an LLM to the extent it hides you and your understanding. We want to know you, and we want to collaborate with you.
This advice is 95% not actionable and 100% not verifiable. It's full of hand-wavy good intentions. I understand completely where it's coming from, but 'trying to stop a tsunami with an umbrella' is a very good analogy - on one side, you have the above magical thinking, on the other, petaflops of compute which improve their reasoning capabilities exponentially.
(Again, I must emphasize that this is not telling people to not use LLMs, any more than telling people to wear a seatbelt would somehow be telling them to not drive a car.)
"Spending your tokens to support Django by having an LLM work on tickets is not helpful. You and the community are better off donating that money to the Django Software Foundation instead."
Reading beyond the first line makes it clear that the problem is a lack of comprehension, not LLM use itself. Quoting:
> This isn’t about whether you use an LLM, it’s about whether you still understand what’s being contributed.
I accept LLM contributions to most of my projects, but have (only slightly less) strict rules around it. (My biggest rule is that you must acknowledge the DCO with an appropriate sign-off. If you don't, or if I believe you don't actually have the right to sign off the DCO, I will reject your change.) I will also never accept LLM-generated security reports on any of my projects.
I contribute to chezmoi, which has a strict no-LLM-contribution (of any kind) policy. There've been a couple of recent user bans because they used LLMs‡ and their contributions — in tickets, no less — included code instructions that could not possibly have worked.
Those of us who have those rules do so out of knowledge and self-respect, not out of gatekeeping or ignorance. We want people to contribute. We don't want garbage.
I think that there needs to be something in the repo itself (`.llm-permissions`?) which all agents look at and follow. Something like:
# .llm-permissions
Pull-Requests: No
Issues: No
Security: Yes
Translation Assistance: Yes
Code Completion: Yes
On those repos where I know there's no LLM permissions, I add `.no-llm` because I've instructed Kiro to look for that file before doing anything that could change the code. It works about 95% of the time. The one thing that I will never add or accept on my repos is AI code review. This is my code. I have to stand behind it and understand it.
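Since the `.llm-permissions` format above is the commenter's own invention (not an existing standard), a parser for it would be trivial for agent authors to adopt. A minimal sketch of how an agent might read it:

```python
def parse_llm_permissions(text: str) -> dict[str, bool]:
    """Parse the hypothetical .llm-permissions format: one
    'Key: Yes/No' pair per line; '#' lines are comments."""
    perms = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        key, _, value = line.partition(":")
        perms[key.strip()] = value.strip().lower() == "yes"
    return perms

example = """\
# .llm-permissions
Pull-Requests: No
Issues: No
Security: Yes
Translation Assistance: Yes
Code Completion: Yes
"""
```

An agent would then check, say, `parse_llm_permissions(example)["Pull-Requests"]` before opening a PR. The hard part, as the `.no-llm` experience above suggests, is not parsing the file but getting agents to honor it reliably.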
‡ I disagree with those bans for practical reasons because the zero-tolerance stance wasn't visible everywhere to new contributors. I would personally have given these contributors one warning (closed and locked the issue and invited them to open a new issue without the LLM slop; second failure results in permanent ban). But I also understand where the developer of chezmoi is coming from.
You'll have to embrace the `ccc` compiler first, lol
If the maintainers don't want to merge it, for whatever reasons, that's fine and the nature of open source, but I think it's petty to tell that same user who opened the PR that they should have donated money instead of tokens.
It makes it kind of unclear if you don't understand the difference between using CC to "investigate the codebase" so you can make a change which you (implicitly) do understand versus using an LLM to make a plausible looking PR although in actuality "you do not understand the ticket ... you do not understand the solution ... you do not understand the feedback on your PR"