I can't empathize with the complaint that we've "lost something" at all. We're on the precipice of something incredible. That's not to say there aren't downsides (WOPR almost killed everyone after all), but we're definitely in a golden age of computing.
Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.
And software that’s open and can be modified.
Instead what we have is:
- AI models which are little black boxes, beyond our ability to fully reason about.
- perpetual subscription services for the same software we used to “own”.
- hardware that is completely undocumented to all but a small few who are granted an NDA beforehand
- operating systems that are trying harder and harder to prevent us from running any software they haven’t approved because “security”
- and distributed systems becoming centralised: GitHub, CloudFlare, AWS, and so on and so forth.
The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack to allow us to use natural language as pseudocode. And that is a very special breakthrough, but it's not enough by itself to overlook all the other problems with modern computing.
Maybe an interesting route is using LLMs to flatten/simplify, so we can dig out from under some of the complexity.
If GenAI could only write documentation it would still be a game changer.
And worse, if you are using it for public documentation, sometimes it hallucinates endpoints (I don't want to say too much here, but it happened recently to a widely used B2B SaaS).
I run a bunch of jobs weekly to review docs for inconsistencies and write a plan to fix them. It still needs humans in the loop when the agents don't converge after a few turns, but it's largely automatic (I babysat it for a few months, validating each change).
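A rough sketch of what that setup looks like, with the details genericized: the `agent` CLI, its flags, and the convergence marker below are hypothetical stand-ins, not a real tool.

```python
# Hypothetical weekly docs-review job; `agent`, its flags, and the
# NO_ISSUES_FOUND convergence marker are stand-ins, not a real CLI.
import subprocess

MAX_TURNS = 3

def review(doc_path: str) -> bool:
    """Ask the agent to review one doc; True if it converges in MAX_TURNS."""
    for _ in range(MAX_TURNS):
        result = subprocess.run(
            ["agent", "--prompt",
             f"Review {doc_path} for inconsistencies and write a fix plan."],
            capture_output=True, text=True,
        )
        if "NO_ISSUES_FOUND" in result.stdout:
            return True
    return False  # didn't converge: escalate to a human

for doc in ("README.md", "docs/api.md"):
    if not review(doc):
        print(f"{doc}: needs human review")
```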
In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers about to lose their shirts. I pay for eg Spotify because there's new music that has to happen, but Dropbox?
If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs or else I'm gonna build and host my own.
One question is how AI will factor into this. Will it completely remove the problem? Will local models be capable of finding or fixing every dependency in your 20yo project? Or will they exacerbate things by writing terrible code with black hole dependency trees? We're gonna find out.
All in all I think the end result will be the same. I don't think any of my Go code will survive long term.
Put in a word and see what it means? That's been easy for at least a century. Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgeable, or talk to someone who was. Now it's always available.
Did you have trouble with this part?
And often incorrect! (and occasionally refuses to answer)
Much of the AI antipathy reminds me of Wikipedia in the early-mid 2000s. I remember feeling amazed with it, but also remember a lot of ranting by skeptics about how anyone could put anything on there, and therefore it was unreliable, not to be used, and doomed to fail.
20 years later and everyone understands that Wikipedia may have its shortcomings, and yet it is still the most impressive, useful advancement in human knowledge transfer in a generation.
I think LLMs as a technology are pretty cool, much like crowdsourcing is. We finally have pretty good automatic natural language processing that scales to large corpora. That's big. I also think the part of the software industry that is mostly driving the development, deployment, and ownership of this technology is mostly doing uninspired and shitty things with it.

I have some hope that better orgs and distributed communities will accomplish some cool and maybe even monumental things with them over time, but right now the field is bleak: not because the technology isn't impressive (although somehow, despite how impressive it is, it's still being oversold), but because Silicon Valley is full of rotten institutions with broken incentives, the same ones that brought us social media and subscriptions to software. My hope for the new world a technology will bring about will never rest with corporate aristocracy, but with the more thoughtful institutions and the distributed open source communities that actually build good shit for humanity, time and time again.
The scary applications are the ones where it's not so easy to check correctness...
This is important. I feel like a lot of people are falling into the "stop liking what I don't like" way of thinking. Further, there are a million different ways to apply an AI helper in software development. You can adjust your workflow in whatever way works best for you... or leave it as is.
Then programming may not be the best hobby or career for you.
> ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed
Digital reverse dictionaries / thesauri like https://www.onelook.com/thesaurus/ can take natural language input, and afaict are strictly better at this task than LLMs. (I didn't know these tools existed when I wrote the rest of this comment.)
I briefly investigated LLMs for this purpose, back when I didn't know how to use a thesaurus; but I find thesauruses a lot more useful. (Actually, I'm usually too lazy to crack out a proper thesaurus, so I spend 5 seconds poking around Wiktionary first: that's usually Good Enough™ to find me an answer, when I find an answer I can trust it, and I get the answer faster than waiting for an LLM to finish generating a response.)
There's definitely room to improve upon the traditional "big book of synonyms with double-indirect pointers" thesaurus, but LLMs are an extremely crude solution that I don't think actually is an improvement.
"What's a word that means admitting a large number of uses?"
That seems hard to find in a thesaurus without either versatile or multifarious as a starting point (but those are the end points).
> Best match is versatile which usually means: Capable of many different uses
with "multi-purpose", "adaptable", "flexible" and "multi-use" as the runner-up candidates.
---
Like you, I had no idea that tools like OneLook Thesaurus existed (despite how easy it would be to make one), so here's my attempt to look this up manually.
"Admitting a large number of uses" -> manually abbreviated to "very useful" -> https://en.wiktionary.org/wiki/useful -> dead end. Give up, use a thesaurus.
https://www.wordhippo.com/what-is/another-word-for/very_usef..., sense 2 "Usable in multiple ways", lists:
> useful multipurpose versatile flexible multifunction adaptable all-around all-purpose all-round multiuse multifaceted extremely useful one-size-fits-all universal protean general general-purpose […]
Taking advantage of the fact my passive vocabulary is greater than my active vocabulary: no, no, yes. (I've spuriously rejected "multipurpose" – a decent synonym of "versatile [tool]" – but that doesn't matter.) I'm pretty sure WordHippo is machine-generated from some corpus, and a lot of these words don't mean "very useful", but they're good at playing the SEO game, and I'm lazy. Once we have versatile, we can put that into an actual thesaurus: https://dictionary.cambridge.org/thesaurus/versatile. But none of those really have the same sense as "versatile" in the context I'm thinking of (except perhaps "adaptable"), so if I were writing something, I'd go with "versatile".
Total time taken: 15 seconds. And I'm confident that the answer is correct.
By the way, I'm not finding "multifarious" anywhere. It's not a word I'm familiar with, but that doesn't actually seem to be a proper synonym (according to Wiktionary, at least: https://en.wiktionary.org/wiki/Thesaurus:heterogeneous). There are certainly contexts where you could use this word in place of "versatile" (e.g. "versatile skill-set" → "multifarious skill-set"), but I criticise WordHippo for far less dubious synonym suggestions.
M-W gives an example use of "Today’s Thermomix has become a beast of multifarious functionality. — Matthew Korfhage, Wired News, 21 Nov. 2025 "
WordHippo strikes me as having gone beyond the traditional paper thesaurus, but I can accept that things change and that we can make a much larger thesaurus than we could when we had to collect and print. Thesaurus.com does not offer these results, though, as a reflection of a more traditional one, nor does the M-W thesaurus.
We could easily approach a state of affairs where most of what you see online is AI and almost every "person" you interact with is fake. It's hard to see how someone who supposedly remembers computing in the 80s, when the power of USENET and BBSs to facilitate long-distance, or even international, communication and foster personal relationships (often IRL) was enthralling, could fail to think we've lost something.
But, those days disappeared a long time ago. Probably at least 20-30 years ago.
Some of my best friends IRL today were people I first met "online" in those days... but I haven't met anyone new in a longggg time. Yeah, I'm also much older, but the environment is also very different. The community aspect is long gone.
Now the determinism is gone and computers are gaining the worst qualities of people.
My only sanctuary in life is slipping away from me. And I have to hear people tell me I'm wrong who aren't even sympathetic to how this affects me.
Fundamentally this is the only point I really have on the 'anti-AI' side, but it's a really important one.
I was born in 84 and have been doing software since 97.
There's never been an easier, better, or more accessible time to make literally anything - by far.
Also if you prefer to code by hand literally nobody is stopping you AND even that is easier.
Cause if you wanted to code for console games you literally couldn't in the 90s without a $100k specialized dev machine.
It’s not even close.
This “I’m a victim because my software engineering hobby isn’t profitable anymore” take is honestly baffling.
The analogy I like is it's like driving vs. walking. We were healthier when we walked everywhere, but it's very hard to quit driving and go back even if it's going to be better for you.
During the summer I’ll walk 30-50 miles a week
However I'm not going to walk to work ever, and I'm damn sure not going to walk in the rain or snow if I can avoid it.
Simon doesn't touch on my favorite part of Chris's video though, which is Chris citing his friend Jesse Kriss. This stuck out at me so hard, and is so close to what you are talking about:
> The interesting thing about this is that it's not taking away something that was human and making it a robot. We've been forced to talk to computers in computer language. And this is turning that around.
I don't see (as you say) a personality. But I do see the ability to talk. The esoterica is still here underneath, but computer programmers having this lock on the thing that has eaten the world, being the only machine whisperers around, is over. That depth of knowledge is still there and not going away! But notably too, the LLM will help you wade in, help those not of the esoteric personhood of programmers to dive in & explore.
Total dependence on a service?
They are getting better, but that doesn't mean they're good.
We have a magical pseudo-thinking machine that we can run locally, completely under our control, and instead the goalposts have moved to "but it's not as fast as the proprietary cloud".
It's more cost effective for someone to pay $20 to $100 a month for a Claude subscription compared to buying a 512 GB Mac Studio for $10K. We won't discuss the cost of the NVidia rig.
I mess around with local AI all the time. It's a fun hobby, but the quality is still night and day.
1. It costs $100k in hardware to run Kimi 2.5 in a single session at a decent tokens-per-second rate, and it's still not capable of anything serious.
2. I want whatever you're smoking if you think anyone is going to spend billions training models that can outcompete them and are affordable to run, and then open source them.
But as it stands right now, the most useful LLMs are hosted by companies that are legally obligated to hand over your data if the US government decides it wants it. It's unacceptable.
I feel like we've reached the worst age of computing. Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.
The same company that develops our browsers and our web standards is also actively destroying the internet with AI scrapers. Hobbyists lost the internet to companies and all software got worse for it.
Our most popular desktop operating system doesn't even have an easy way to package and update software for it.
Is there anything you use that isn't? Like the laptop on which you work, the software you use to browse the internet, read email... I've heard comments similar to yours before, and I'm not sure I understand it given everything else: why does this matter for LLMs and not for the phone you use, etc.?
There are more alternatives than ever though. People are still making C64 games today, cheap chips are everywhere. Documentation is abundant... When you layer in AI, it takes away labor costs, meaning that you don't need to make economically viable things, you can make fun things.
I have at least a dozen projects going now that I would have never had time or energy for. Any itch, no matter how geeky and idiosyncratic, is getting scratched by AI.
So what is stopping you other than yourself?
I’ve been programming since 1998 when I was in elementary school. I have the technical skills to write almost anything I want, from productivity applications to operating systems and compilers. The vast availability of free, open source software tools helps a lot, and despite this year’s RAM and SSD prices, hardware is far more capable today at comparatively lower prices than a decade ago and especially when I started programming in 1998. My desktop computer is more capable than Google’s original cluster from 1998.
However, building businesses that can compete against Big Tech is an entirely different matter. Competing against Big Tech means fighting moats, network effects, and intellectual property laws. I can build an awesome mobile app, but when it's time for me to distribute it, I have to deal with app stores unless I build for a niche platform.
Yes, I agree that it’s never been easier to build competing products due to the tools we have today. However, Big Tech is even bigger today than it was in the past.
Look at how even the POSIX ecosystem - once a vibrant cluster of a dozen different commercial and open source operating systems built around a shared open standard - has more or less collapsed into an ironclad monopoly because LXC became a killer app in every sense of the term. It's even starting to encroach on the last standing non-POSIX operating system, Windows, which now needs the ability to run Linux in a tightly integrated virtual machine to be viable for many commercial uses.
There have been wild technological developments but we've lost privacy and autonomy across basically all devices (excepting the people who deliberately choose to forego the most capable devices, and even then there are firmware blobs). We've got the facial recognition and tracking so many sci-fi dystopias have warned us to avoid.
I'm having an easier time accomplishing more difficult technological tasks. But I lament what we have come to. I don't think we are in the Star Trek future and I imagined doing more drugs in a Neuromancer future. It's like a Snow Crash / 1984 corporate government collab out here, it kinda sucks.
I mourned when CRTs came out; I had just started programming. But I quickly learned CRTs were far better.
I mourned when we moved to GUIs; I never liked the move and still do not like dealing with GUIs, but I got used to it.
Went through all kinds of programming methods, too many to remember, but those were easy to ignore and workaround. I view this new AI thing in a similar way. I expect it will blow over and a new bright shiny programming methodology will become a thing to stress over. In the long run, I doubt anything will really change.
If you never tried Claude Code, give it a try. It's very easy to get into. And you'll soon see how powerful it is.
It's remarkable that people who think like this don't have the foresight to see that this technology is not a higher level of abstraction, but a replacement of human intellect. You may be working with it today, but whatever you're doing will eventually be done better by the same technology. This is just a transition period.
Assuming, of course, that the people producing these tools can actually deliver what they're selling, which is very much uncertain. It doesn't change their end goal, however. Nor the fact that working with this new "abstraction" is the most mind numbing activity a person can do.
That's not a higher level of abstraction; it's having someone do the work for you while doing less and less of the thinking as well. Someone might resist that urge and consistently guide the model closely, but that's probably not what the collective range of SWEs who use these models are doing. The ease of using these models, plus our natural reluctance to take on mental stress, will likely ensure that eventually everyone lets LLMs do most or all of the thinking for them. If things really go in that direction and spread, I foresee a collective dumbing down of the general population.
So you're welcome to make the 100,000,000th copy of the same thing that nobody cares about anymore.
even if you can be a prompt engineer (or whatever it's called this week) today
well, with the feedback you're providing: you're training it to do that too
you are LITERALLY training the newly hired outsourced personnel to do your job
but this time you won't be able to get a job anywhere else, because your fellow class traitors are doing exactly the same thing at every other company in the world
This thing is going to erase careers and render skill sets and knowledge cultivated over decades worthless.
Anyone can prompt the same fucking shit now and call it a day.
If you see your job as a "thinking about what code to write (or not)" monkey, then you're safe. I expect most seniors and above to be in this position, and LLMs are absolutely not replacing you here - they can augment you in certain situations.
The perks of a senior is also knowing when not to use an LLM and how they can fail; at this point I feel like I have a pretty good idea of what is safe to outsource to an LLM and what to keep for a human. Offloading the LLM-safe stuff frees up your time to focus on the LLM-unsafe stuff (or just chill and enjoy the free time).
It used to be I didn't mind going through all the meetings, design discussions, debates with PMs, and such because I got to actually code something cool in the end. Now I get to... prompt the AI to code something cool. And that just doesn't feel very satisfying. It's the same reason I didn't want to be a "lead" or "manager", I want to actually be the one doing the thing.
I'm not so confident that it'll only be code monkeys for too long
It seems like the billions so far mostly go to talk of LLMs replacing every office worker, rather than any action to that effect. LLMs still have major (and dangerous) limitations that make this unlikely.
Humans rewire their minds to optimize for the codebase; that is why new programmers take a while to get up to speed. LLMs don't do that, and until they do, they need the entire thing in context.
And the reason we can't do that today is that there isn't enough data in a single codebase to train an LLM to be smart about it, so first we need to solve the problem that LLMs need billions of examples to do a good job. That isn't on the horizon, so we are probably safe for a while.
It is not perfect yet but the tooling here is improving. I do not see a ceiling here. LSPs + memory solve this problem. I run into issues but this is not a big one for me.
Dunning–Kruger is everywhere in the AI grift. People who don't know a field trying to deploy some AI bot that solves the easy 10% of the problem so it looks good on the surface and assumes that just throwing money (which mostly just buys hardware) will solve it.
They aren't "the smartest minds in the world". They are slick salesmen.
AI is getting better at picking up some important context from other code or documentation in a project, but it's still miles away from what it needs to be, and the needed context isn't always present.
But I also have no idea how people are going to think about what code to write when they don't write code. Maybe this is all fine, maybe it's OK, but it does make me quite nervous!
LLMs benefit juniors, they do not replace them. Juniors can learn from LLMs just fine and will actually be more productive with them.
When I was a junior my “LLM” was StackOverflow and the senior guy next to me (who no doubt was tired of my antics), but I would’ve loved to have an actual LLM - it would’ve handled all my stupid questions just fine and freed up senior time for the more architectural questions or those where I wasn’t convinced by the LLM response. Also, at least in my case, I learnt a lot more from reading existing production code than writing it - LLMs don’t change anything there.
For my whole life I’ve been trying to make things—beautiful elegant things.
When I was a child, I found a cracked version of Photoshop and made images which seemed like magic.
When I was in college, I learned to make websites through careful, painstaking effort.
When I was a young professional, I used those skills and others to make websites for hospitals and summer camps and conferences.
Then I learned software development and practiced the slow, methodical process of writing and debugging software.
Now, I get to make beautiful things by speaking, guiding, and directing a system which is capable of handling the drudgery while I think about how to make the system wonderful and functional and beautiful.
It was, for me, never about the code. It was always about making something useful for myself and others. And that has never been easier.
I do find that the developers who focused on "build the right things" mourn less than those who focused on "build things right".
But I do worry. The main question is this: will there be a day when AI knows what "the right things to build" are, and has the "agency" (or the illusion of it) to do it better than an AI+human pair (assuming AI gets faster at the "build things right" phase, which it is not yet)?
My main hope is this: AI has been able to beat humans at chess for a while now, and we still play chess. People earn money from playing chess and teaching chess, chess players are still celebrated, and YouTube influencers still get monetized for analyzing games of celebrity chess players, even though the top human chess player would likely lose to a Stockfish engine running on my iPhone. So maybe there is hope.
I've always been strongly in the first category, but... the issue is that 10x more people will be able to build the right things. And if I build the right thing, it will be easy to copy. The market will get crowded, so distribution will become even harder than it is today. Success will be determined by personal brand, social media presence, social connections.
Always has been. (Meme)
Of course, and if LLMs keep improving at current rates it will happen much faster than people think.
Arguably you don't need junior software engineers anymore. When you also don't need senior software engineers anymore it isn't that much of a jump to not needing project managers, managers in general or even software companies at all anymore.
Most people, in order to protect their own ego, will assume *their* job is safe until the job one rung down from them disappears and then the justified worrying will begin.
People on the "right things to build" track love to point out how bad people are at describing requirements, so assume their job as a subject matter expert and/or customer-facing liaison will be safe, but does it matter how bad people are at describing requirements if iteration is lightning fast with the human element removed?
Yes, maybe someone who needs software and who isn't historically some sort of software designer is going to have to prompt the LLM 250 times to reach what they really want, but that'll eventually still be faster than involving any humans in a single meeting or phone call. And a lot of people just won't really need software as we currently think about it at all, they'll just be passing one-off tasks to the AI.
The real question is what happens when the labor market for non-physical work completely implodes as AI eats it all. Based on current trends I'm going to predict in terms of economics and politics we handle it as poorly as possible leading to violent revolution and possible societal collapse, but I'd love to be wrong.
What makes you think AI already isn't at the same level of quality or higher for "build the right things" as it is for "building things right"?
Video didn't kill the radio star either. In fact the radio star has become more popular than ever in this, the era of the podcast.
Likewise, being a podcaster, or "influencer" in general, is all about charisma and marketing.
So with value destruction for knowledge workers (and perhaps physical workers too once you factor in robotics) we may in fact be moving into a real "attention economy" where all value is related to being a charismatic marketer, which will be good for some people for a while, terrible for the majority, but even for the winners it seems like a limited reprieve. Historically speaking charismatic marketers can only really exist through the patronage of people who mostly aren't themselves charismatic marketers. Without patrons (who have disposable income to share) the charismatic marketers are eventually just as fucked as everyone else.
I share this sentiment. It's really cool that these systems can do 80% of the work. But given what this 80% entails, I don't see a moat around that remaining 20%.
Microsoft / GitHub have no real limitation keeping them from doing better or moving faster; maybe it's big-company mentality, moving slower, fear of taking risks when you have a lot to lose, or the fact that the personal incentive for a product manager at GitHub is much, much lower than that of a co-founder of a seed-stage startup. Copilot was a microscopic line item for Microsoft as a whole, and probably marginal for GitHub too. But for Cursor, this was everything.
This is why we have innovation. If mega-corps didn't promote people to their level of incompetence, if bureaucracy and politics didn't ruin every good thing, if private equity didn't bleed every beloved product to the last penny, we would have no chance for any innovation or entrepreneurship, because these companies have practically unlimited resources.
So my only conclusion from this is - the moat is sometimes just the will to do better, to dare to say, I don't care if someone has a billion dollars to compete with me, I'll still do better.
In other words, don't underestimate the power of big companies to make colossal mistakes and build crappy products. My only worry is that AI won't make the same mistakes, and we'll basically have a handful of companies in the world (the makers of models, the owners of tokens, e.g. OpenAI, Anthropic, Google, Amazon, Meta, xAI). If AI-led product teams manage not to make the modern corporation's mistake of ruining everything good they get their hands on, then maybe software-related entrepreneurship will be dead.
I think humans have the advantage.
> "build the right things" [vs] "build things right"
I think this (frequent) comparison is incorrect. There are times when quality doesn't matter and times when it does. Without that context these discussions are meaningless. If I build my own table, no one really gives a shit about the quality besides me and maybe my friends judging me.
But if I sell it, well then people certainly care[0] and they have every right to.
If I build my own deck at my house people do also care and there's a reason I need to get permits for this, because the danger it can cause to others. It's not a crazy thing to get your deck inspected and that's really all there is to it.
So I don't get these conversations because people are just talking past one another. Look, no one gives a fuck if you poorly vibe code your personal website, or at least it is gonna be the same level as building your own table. But if Ikea starts shipping tables with missing legs (even if it is just 1%) then I sure give a fuck and all the customers have a right to be upset.
I really think a major part of this concern with vibe coding is about something bigger. It is about slop in general. In the software industry we've been getting sloppier and sloppier, and LLMs significantly amplify that. It really doesn't matter if you can vibe code something with no mistakes; what matters is what the businesses do. Let's be honest, they're rushing and don't care about quality, because they have markets cornered and consumers are unable to accurately evaluate products prior to purchase. Those are the textbook conditions for a lemon market. I mean, the companies outsource tech support, so you call and someone picks up whose accent makes you suspicious of their real name being "Steve". After all, it is the fourth "Steve" you've talked to as you get passed around from support person to support person. The same companies contract out coders from poor countries, and you find random comments in another language. That's the way things have been going. More vaporware. More half-baked products.
So yeah, when you have no cake, the half-baked cake is probably better than nothing. At home it also doesn't matter if you're eating a half-baked cake or one that competes with the best bakers in the world. But for everyday people who can't bake their own cakes, what do they do? All they see is a box with a cake in it; one is $1, another is $10, and another is $100. They look the same, but they can't know until they take a bite. You try enough of the $1 cakes, and by the time you give up, the $10 cakes are all gone. By the time you get so frustrated you'll buy the $100 cake, they're gone too.
I don't dislike vibe coding because it is "building things the wrong way" or any of that pretentious notion. I, and I believe most people with a similar opinion, care because "the right things" aren't being built. Most people don't care how things were built, but they sure do care about the result. Really people only start caring about how the sausage is made when they find out that something distasteful is being served and concealed from them. It's why everyone is saying "slop".
So when people make this false dichotomy it just feels like people aren't listening to what's actually being said.
[0] Mind you, it is much easier for an inexperienced person to judge the quality of a table than software. You don't need to be a carpenter to know a table's leg is missing or that it is wobbly but that doesn't always hold true for more sophisticated things like software or even cars. If you haven't guessed already, I'm referencing lemon markets: https://en.wikipedia.org/wiki/The_Market_for_Lemons
But considering that AI will more and more "build things right" by default, it's up to us humans to decide what are the "right things to build".
Once AI knows what the "right things to build" are better than humans do, that is AGI in my book, and also the end of classical capitalism as we know it. Yes, there will still be room for a "human generated" market, like we have today (photography didn't kill painting, but it made it much less of a viable main employment option).
In a way, AI is the great equalizer. In the past the strongest men prevailed; then, when muscles were no longer the main means to assert force, it was the intellect; now it's just sheer want. If you want to do something, now you can. You have no excuses; you just need to believe it's possible, and do it.
As someone else said, agency is eating the world. For now.
> it needs to be "build the right things right", vs "build things right and then discover if they are the right things"
I still think this is a bad comparison, and I hoped my prior comment would address this. Frankly, you're always going to end up in the second situation[0], simply because of two hard truths: 1) you're not omniscient, and 2) even if you were, the environment isn't static.

> But considering that AI will more and more "build things right" by default
And this is something I don't believe. I say a lot more here[1], but you can skip my entire comment and just read what Dijkstra has to say himself. I dislike that we often pigeonhole this LLM coding conversation into one about deterministic vs probabilistic languages. Really, the reason I'm not in favor of LLMs is that I'm not in favor of natural language programming[2]. And the reason I'm not in favor of natural language programming has nothing to do with its probabilistic nature and everything to do with its lack of precision[3]. I'm with Dijkstra because, like him, I believe we invented symbolic formalism for a reason. Like him, I believe that abstraction is incredibly useful and powerful, but it is about the right abstraction for the job.
[0] https://news.ycombinator.com/item?id=46911268
[1] https://news.ycombinator.com/item?id=46928421
[2] At the end of the day, that's what they are. Even if they produce code you're still treating it as a transpiler: turning natural language into code.
[3] Okay, technically it does but that's because probability has to do with this[4] and I'm trying to communicate better and most people aren't going to connect the dots (pun intended) between function mapping and probabilities. The lack of precision is inherently representable through the language of probability but most people aren't familiar with terms like "image" and "pre-image" nor "push-forward" and "pull-back". The pedantic nature of this note is precisely illustrative of my point.
[4] https://www.mathsisfun.com/sets/injective-surjective-bijecti...
If I had an LLM generate a piece of artwork for me, I wouldn't call myself an artist, no matter how many hours I spent conversing with the LLM in order to refine the image. So I wouldn't call myself a coder if my process was to get an LLM to write most/all the code for me. Not saying the output of either doesn't have value, but I am absolutely fine gatekeeping in this way: you are not an artist/coder if this is how you build your product. You're an artistic director, a technical product manager, something of that nature.
That said, I never derived joy from every single second of coding; there were and are plenty of parts to it that I find tedious or frustrating. I do appreciate being able to let an LLM loose on some of those parts.
But sparing use is starting to really only work for hobby projects. I'm not sure I could get away with taking the time to write most of it manually when LLMs might make coworkers more "productive". Even if I can convince myself my code is still "better" than theirs, that's not what companies value.
Then it wasn't your craft.
There are woodworkers on YouTube who use CNC, some who use the best Festool stuff but nothing that moves on its own, and some who only use handtools. Where is the line at which woodworking is not their craft?
You get something that looks like a cabinet because you asked for a cabinet. I don't consider that "woodworking craft", power tools or otherwise.
For the power tool user, "woodworking with hand tools" isn't their craft.
For the CNC user, "woodworking with manual machines" isn't their craft.
There's a market for Ikea. It's put woodworkers out of business, effectively. The only woodworkers that make reasonable wages from their craft are influencers. Their money comes from YouTube ads.
There's no shame in just wanting things without going to the effort of making them.
I am not myself a woodworker, however I have understood that part of what makes it "crafty" is that the woodworker reads grain, adjusts cuts, and accepts that each board is different.
We can try to contrast that to whatever Ikea does with wood and mass production of furniture. I would bet that variation in materials is "noise" that the mass production process is made to "reject" (be insensitive to / be robust to).
But could we imagine an automated woodworking system that takes into account material variation, like wood grain, not in an aggregate sense (like I'm painting Ikea to do), but in an individual sense? That system would be making judgements that are woodworker-like.
The craft lives on. The system is informed by the judgement of the woodworker, and the craftsperson enters an apprenticeship role for the automation... perhaps...
Until you can do RL on the outcome of the furniture. But you still need craft in designing the reward function.
Perhaps.
I love AI tools. I can have AI do the boring parts. I can even have it write polished, usable apps in languages that I don't know.
I miss being able to think so much about architecture, best practices, frameworks/languages, how to improve, etc.
I went into this field for both! What do I do now? I'm screwed.
I guess I started out as a programmer, then went to grad school and learned how to write and communicate my ideas, it has a lot in common with programming, but at a deeper level. Now I’m doing both with AI and it’s a lot of fun. It is just programming at a higher level.
Almost none of the code I wrote in 2015 is still in use today. Probably some percentage of people can point to code that lasted 20 years or longer, but it can’t be a big percentage. When I think of the work of a craft, I think of doing work which is capable of standing up for a long time. A great builder can make a house that can last for a thousand years and a potter can make a bowl that lasts just as long.
I’ve thought of myself as a craftsman of code for a long time but maybe that was just wrong.
It was and is my craft. I've been doing it since grade 5. Like 30 years now.
Writing tight assembly for robot controllers all the way to AI on MRI machines to security for the DoD and now the biggest AI on the planet.
But my craft was not typing. It's coding.
If you're a typist, you're going to mourn the printer. But if you're a writer, you're going to see how it improves your life.
I do believe directing an LLM to write code, and then reviewing and refining that code with the LLM, is a skill that has value -- a ton of value! -- but I do not think it is coding.
It's more like super-technical product management, or like a tech lead pair programming with a junior, but in a sort of mentorship way where they direct and nudge the junior and stay as hands-off as possible.
It's not coding, and once that's the sum total of what you do, you are no longer a coder.
You can get defensive and call this gatekeeping, but I think it's just the new reality. There's no shame in admitting that you've moved to a stage of your life where you build software but your role in it isn't as a coder anymore. Just as there's no shame in moving into management, if that's what you enjoy and are effective at it.
(If presenting credentials is important to you, as you've done, I've been doing this since 1989, when I was 8 years old. I've gone down to embedded devices, up through desktop software, up to large distributed systems. Coding is my passion, and has been for most of my life.)
Even though once upon a time both did.
Claiming that this isn't coding is as absurd as saying that coding is only what you do when you hook up the wires between some vacuum tubes.
The LLM is a very smart compiler. That's all.
Some people want to sit and write assembly. Good for them. But asserting that unless I assemble my own code I'm not a coder is just silly.
Possibly too obscure. I can't tell whether I'm being downvoted by optimists who missed the joke, or by pessimists who got it.
Never once in my life have I seen anything get better. Except for Metal Gear Solid on PSX and Gears of War.
have you used them recently?
terrible, is the word I would use
(as a customer since the 2010s)
Not because the tools are insufficient, it's just that the kind of person that can't even stomach the charmed life of being a programmer will rarely be able to stomach the dull and hard work of actually being creative.
Why should someone be interested in your creations? In what part of your new frictionless life would you have picked up something that sets you apart from a million other vibe-coders?
This strikes me as the opposite of what I experience. When I say I'm "feeling creative", everything comes easy, at least in the context of programming, making music, doing 3D animation, and some other topics. If it's "dull and hard work", it's because I'm not feeling "creative" at all; when "creative mode" is on in my brain, nothing feels dull or hard. Maybe it works differently for others.
If I can do that typing one line at a time, I can do it _way_ faster with AI.
For how long do you think this is sustainable? In the sense of you, or me, or all these other people here being able to earn a living. Six months? A couple of years? The time until the next-but-one Claude release drops?
Does everyone have to just keep re-making themselves for whatever the next new paradigm turns out to be? How many times can a person do that? How many times can you do that?
This new world makes me more effective at it.
And this new world doesn’t prevent me from crafting elegant architectures either.
In the web front-end world I'd be pretty much a newbie. I don't know any of the modern frameworks, everything I've used is legacy and obsolete today. I'd ramp up quicker than a new junior because I understand all the concepts of HTTP and how the web works, but I don't know any of the modern tooling.
Half the people I work with can't do imperative jQuery interfaces. So what I guess. I can't code assembly.
AI will kill that.
Why did you stop? Because, you realize, LLMs are giving up the process of creating for the immediacy of having. It's paying someone to make for you.
Things are more convenient if you live the dream of the LLM, and hire a taskrabbit to run your wood shop. But it's not you that's making.
Me too, but... The ability to code was a filter. With AI, the pool of people who can build beautiful elegant software products expands significantly. Good for the society, bad for me.
It's like a woodworker saying, "Even though I built all those tables using precise craft and practice, it was NEVER ABOUT THE CRAFT OR PRACTICE! It was about building useful things." Or a surgeon talking about saving lives and doing brain surgery, but "it was never about learning surgery, it was about making people get better!"
I mean sure yeah but also not really.
- infrastructure bs, like scaffold me a JS GitHub action that does x and y.
- porting, like take these kernel patches and adjust them from 6.14 to 6.17.
- tools stuff, like here's a workplace shell script that fetches a bunch of tokens for different services, rewrite this from bash to Python (see the sketch after this list)
- fiddly things like dealing with systemd or kubernetes or ansible
- fault analysis, like here's a massive syslog dump or build failure, what's the "real" issue here?
In all these cases I'm very capable of assessing, tweaking, and owning the end result, but having the bot help me with a first draft saves a bunch of drudgery on the front end, which can be especially valuable for the ADHD types where that kind of thing can be a real barrier to getting off the ground.
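To make the bash-to-Python bullet concrete, here's roughly the shape of thing I mean; the service names and endpoints below are entirely made up for illustration:

```python
# Hypothetical rewrite of a token-fetching shell script. The service
# names and endpoints are invented; each is assumed to return
# JSON like {"token": "..."}.
import json
import urllib.request

SERVICES = {
    "ci": "https://ci.example.internal/token",
    "artifacts": "https://artifacts.example.internal/token",
}

def fetch_token(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["token"]

tokens = {name: fetch_token(url) for name, url in SERVICES.items()}
print(json.dumps(tokens, indent=2))
```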
To me, you sound more utilitarian. The philosophy you are presenting is a kind of Ikea philosophy. Utility, mass production, and unique beauty are properties that generally do not cohere, and there's a reason for this. I think the use of LLMs in the production of digital goods is very close to the use of automation lines in the production of physical goods. No matter how you try, some of the human charm, and thus beauty, will inevitably be lost; the number of goods will increase, but they'll all be barely differentiable, soulless replications of more or less the same shallow ideas repeated ad infinitum.
With the code, especially interfaces, the results will be similar -- more standardized palettes, predictable things.
To be fair, this converging force has been at work pretty much forever; e.g. radio/TV led to lots of local accents disappearing. Our world is heavily globalized.
To the skeptics: by all means, don't use AI if you don't want to; it's your choice, your career, your life. But I am not sure that hitching your identity to hating AI is altogether a good idea. It will make you increasingly bitter as these tools improve further and our industry and the wider world slowly shifts to incorporate them.
Frankly, I consider the mourning of The Craft of Software to be just a little myopic. If there are things to worry about with AI they are bigger things, like widespread shifts in the labor force and economic disruption 10 or 20 years from now, or even the consequences of the current investment bubble popping. And there are bigger potential gains in view as well. I want AI to help us advance the frontiers of science and help us get to cures for more diseases and ameliorate human suffering. If a particular way of working in a particular late-20th and early-21st century profession that I happen to be in goes away but we get to those things, so be it. I enjoy coding. I still do it without AI sometimes. It's a pleasant activity to be good at. But I don't kid myself that my feelings about it are all that important in the grand scheme of things.
You order it.
Do painters paint because they just like to see the final picture? Or do they like the process? Yes, painting is an artistic process, not exactly a crafting one. But the point stands.
Woodworkers making nice custom furniture generally enjoy the process.
It's like learning to cook and regularly making your own meals, then shifting to a "new paradigm" of hiring a personal chef to cook for you. Food's getting made either way, but it's not really the same deal.
Food's getting made, but you focus on the truly creative part -- the menu, the concept, the customer experience. You're not boiling pasta or cutting chives for the thousandth time. The same way now you're focusing on architecture and design now instead of writing your 10,000th list comprehension.
It's not that you've stopped doing anything at all, like the other commenter claimed in their personal chef analogy.
For me, the act of sitting down and writing the code is what actually leads to true understanding of the logic, in a similar way to how the only way to understand a mathematical proof is to go through it. Sure, I'm not doing anything useful by showing that the square root of 2 is irrational, but by doing that I gain insights that are otherwise impossible to transfer between two minds.
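(For reference, the proof in question, the kind of argument that only really sinks in when you work through it yourself:)

```latex
% Classic proof that $\sqrt{2}$ is irrational, by contradiction.
Suppose $\sqrt{2} = p/q$ with $p, q$ coprime integers. Squaring gives
$p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even: write $p = 2k$.
Then $4k^2 = 2q^2$, i.e.\ $q^2 = 2k^2$, so $q$ is even too, contradicting
the assumption that $p$ and $q$ are coprime.
```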
I believe that coding was one of the few things (among, for example, writing math proofs, or that weird process of crafting something with your hands where the object you are building becomes intimately evident) that get our brains to a higher level of abstraction than normal mammal "survival" thinking. And it makes me very sad to see it thrown out of the window in the name of a productivity that may not even be real.
For 99% of the functions I've written in my life? Absolutely drudgery. They're barely algorithms. Just bog-standard data transformation. This is what I love having AI replace.
For the other 1% that actually requires original thought, truly clever optimization, and smart naming to make it literate? Yes, I'll still be doing that by hand, although I'll probably be getting the LLM to help scaffold all the unit tests and check for any subtle bugs or edge cases I may have missed.
The point is, LLMs let me spend more time at the higher level of abstraction that is more productive. It's not taking it away!
Luckily for real programmers, AI's not actually very good at generating quality code. It generates the equivalent of Ali Baba code: it lasts for one week and then breaks.
This is going to be the future of programming: low-paid AI clerks to generate the initial software, and then the highly paid programmers who fix all the broken parts.
Last night "I" "made" 3D boids swarm with directional color and perlin noise turbulence. "I" "did" this without knowing how to do the math for any of those things. (My total involvement at the source level was fiddling with the neighbor distance.)
https://jsbin.com/ququzoxete/edit?html,output
Then I turned them into weird proteins
https://jsbin.com/hayominica/edit?html,output
(As a side note, the loss of meaning of "self" and "doing" overlaps weirdly with my meditation practice...)
How many supposed "10x" coders actually produced unreadable code that no one else could maintain? But then the effort to produce that code is lauded while the nightmare maintenance of said code is somehow regarded as unimpressive, despite being massively more difficult?
I worry that we're creating a world where it is becoming easy, even trivial, to be that dysfunctional "10x" coder, and dramatically harder to be the competent maintainer. And the existence of AI tools will reinforce the culture gap rather than reducing it.
> Ultimately if you have a mortgage and a car payment and a family you love, you’re going to make your decision.
Nothing is preventing the author from continuing to write code by hand and enjoy it. The difference is that people won't necessarily pay for it.
The old way was really incredible (and worth mourning), considering in other industries, how many people can only enjoy what they do outside of work.
Now, AI can generate any kind of music anyone wants, eliminating almost all the anonymous studio, commercial, and soundtrack work. If you're really good you can still perform as a headliner, but (this is a guess) 80% of the work for musicians is just gone.
It only sounds like music.
Is painting a passion because others appreciate it? No, it is a passion in itself.
There will always be people appreciating coding by hand as a passion.
My passions - drawing, writing, coding - are worthwhile in themselves, not because other people care about them. Almost noone does.
Personally, I am not stymied by typing nor chores nor driving. For me, typing is like playing a musical instrument: at some point you stop needing to think about how to play and you just play. The interaction and control of the instrument just comes out of your body. At some point in my life, all the "need to do things around the house" just became the things I do, and I'm not bothered by doing them, such that I barely notice doing them. But it's complex: the concept of "chores" is front and center when you're trying to get a teenager to be responsible for taking care of themselves (like having clean clothes, or how the bathroom is safer if it's not a complete mess) and participating in family/household responsibilities (like learning that if you don't make a mess, there's nothing to clean up). Can you really be effective at directing someone/something else without knowing how to do it yourself? Probably for some things, but not all.
For sure.
I idealize a future where people can spend more time doing things they want to do, whatever those avocations might be. Freedom from servitude. I guess some kind of Star Trek / The Culture hybrid dream.
The world we have is so far from that imaginary ideal. Implicit in that ideal would be elimination of inequality, and I'm certain there are massive forces that would oppose that elimination.
"I'm able to put my shirt on so much faster with this shirt-buttoning machine, and I don't spend time tediously buttoning shirts and maybe having to rebutton when I misalign the buttons and buttonholes. You should get one to button your shirts, you're wasting time by not using a buttoning machine".
"I wear t-shirts."
(Obviously a contrived and simplistic example for fun)
I find it unsettling how many people in the comments say that they don't like writing code. Feels alien to me. We went into this field for seemingly very different reasons.
I do use LLMs, and even these past two days I was doing a vibe coding project which was noticeably faster to set up and get to its current state than if I had written it myself. However I feel almost dirty about how little I understand the project. Sure, I know the overall structure, decisions, and plan. But I didn't write any of it, and I don't have the deep understanding of the codebase which I usually have when working on a codebase myself.
This is a complaint someone is making about their job prospects, thinly wrapped in flowery language. I know that for some people (it seems especially prominent in Americans, I've found) their identity is linked to their job. This is a chance to work on that. You can decouple yourself and redefine yourself as a person.
Who knows? Once you're done you may go write some code for fun again.
Things seem to be getting better from December 2022 (chatgpt launch), sure, but is there a ceiling we don't see?
But that's only because self driving cars are still new and incomplete. It's still the transition period.
I already can't buy the car I want with a manual transmission. There are still a few cars that I could get with one, but the number is both already small and getting smaller every year. And none of those few are the one I want, even though it was available previously.
I already can't buy any (new) car that doesn't have a permanent internet connection with data collection and remote control by people that don't own the car even though I pay full cash without even financing, let alone the particular one I want. (I can, for now, at least break the on board internet connection after I buy the car without disabling the whole car, but that is just a trivial software change away, in software I don't get to see or edit.)
It's hardly unreasonable to suggest that in some time you won't be able to avoid having a car that drives itself, and even be legally compelled to let the car drive itself because you can't afford the insurance or legal risk or straight up fines.
And forget customizing or personalizing. That's right out.
It does seem probable based on progress that in 1-2 more model generations there will be little need to hand code in almost any domain. Personally I already don't hand code AT ALL, but there are certainly domains/languages that are under performing right now.
Right now with the changes this week (Opus 4.6 and "teams mode") it already is another step function up in capability.
Teams mode is probably only good for greenfield or "green module" development but I'm watching a team of 5 AI's collaborating and building out an application module by module. This is net new capability for the tool THIS WEEK (Yes I am aware of earlier examples).
I don't understand how people can look at this and then be dismissive of future progress, but human psychology is a rich and non-logical landscape.
Where is all this new software and increased software quality from all this progression?
Your craft is not my craft.
It's entirely possible that, as of now, writing JavaScript and Java frontends (what the author does) can largely be automated with LLMs. I don't know who the author is writing to, but I do not mistake the audience to be "programmers" in general...
If you are making something that exists, or something that is very similar to something that exists, odds are that an LLM can be made to generate code which approximates that thing. The LLM encoding is lossy. How will you adjust the output to recover the loss? What process will you go through mentally to bridge the gap? When does the gap appear? How do you recognize it? In the absolute best case you are given a highly visible error. Perhaps you've even shipped it, and need to provide context about the platform and circumstances to further elucidate. Better hope that platform and circumstance is old-hat.
I mourn the horse masters and stable boys of a century past because of their craft. Years of intuition and experience.
Why do you watch a chess master play, or a live concert, or any form of human creation?
Should we automate parts of our profession? Yes.
Should he mourn the loss of our craft? Also yes.
Two things are true at the same time, this makes people uneasy.
Figuring out how to live in the uncomfortableness of non-absolutes, how to live in a world filled with dualisms, is IMO one of the primary and necessary maturities for surviving and thriving in this reality.
Nothing I've ever built has lasted more than a few years. Either the company went under, or I left and someone else showed up and rewrote it to suit their ideals. Most of us are doing sand art. The tide comes in and it's gone.
Code in and of itself should never have been the goal. I realized that I was thinking of the things I build and the problems I selected to work on from the angle of code quality nearly always. Code quality is important! But so is solving actual problems with it. I personally realized that I was motivated more by the shape of the code I was writing than the actual problems it was written to solve.
Basically the entire way I think about things has changed now. I'm building systems to build systems. That's really fun. Do I sometimes miss the feeling of looking at a piece of code and feeling a sense of satisfaction at how well made it is? Sure. That era of software is done now, sadly. We've exited the craftsman era and entered the IKEA era of software development.
Maybe this says something more about your career decisions than anything else?
I’m putting that on my wall.
Yes, watching an LLM spit out lots of code is for sure mesmerizing. Small tasks usually work ok, the code kinda compiles, so for some scenarios it can work out.. but anyone serious about software development can see what a piece of crap the code is.
LLMs are great tools overall, great to bounce ideas off, great to get shit done. If you have a side project and no time, awesome.. If your boss/company has a shitty culture and you just want to get the task done, great. Got a mundane coding task, hate coding, or your code won't run in a critical environment? please, LLM that shit over 9000..
Remember though, an LLM is just a predictor, a noisy, glorified text predictor. Only when AI stops optimizing for short-term gains, has a built-in long-term memory architecture (similar to humans), AND can produce something at Linux-kernel levels of quality and size, then we can talk..
Overall, I'd say AI tooling has maybe close to doubled the time I spend on PR reviews. More knowledgeable developers do better with these tools, but they also fall for the tooling's false confidence from time to time.
I worry people are spending less time reading documentation or stepping through code to see how it works out of fear that “other people” are more productive.
1. Crafting something beautiful. Figuring out correct abstractions and mapping them naturally to language constructs. Nailing just the right amount of flexibility, scalability and robustness. Writing self-explanatory, idiomatic code that is a pleasure to read. It’s an art.
2. Building useful things. Creating programs that are useful to myself and to others, and watching them bring value to the world. It’s engineering.
These things have utility, but they are also enjoyable in themselves. As best I can tell, your emotional response to coding agents depends on how much you care about these two things.
AI has taken away the joy of crafting beautiful things, and has amplified the joy of building things by more than 10x. Safe bet: It will get to 100x this year.
I am very happy with this tradeoff. Over the years I grew to value building things much more highly. 20yo me would’ve been devastated.
I might be mistaken, but I bet they said the same when Visual Basic came out.
This may be the perspective of some programmers. It doesn't seem to be shared by the majority of software engineers I know and read and listen to.
We now have more opportunity than ever to create more of the things we have wanted to. We are able to spend more time leaning into our abilities of judgement, creativity, specific knowledge, and taste.
Countless programming frustrations are gone. I, and all those I talk to, are having more fun than we have ever had.
I'm still not sure what analogy fits for me. It's closer to product manager/maestro/artist/architect/designer that helps a number of amazing systems create great code.
The modal person just trying to get their job done wasn't a software artisan; they were cutting and pasting from Stack Overflow, using textbook code verbatim, and using free and open-source code in ways that would likely violate the letter and spirit of the license.
If you were using technology or concepts that weren't either foundational or ossified, you found yourself doing development through blog posts. Now, you can at least have a stochastic parrot that has read the entire code and documentation and can talk to it.
For now. Though I suspect the commit history would probably still be pretty telling.
You can use AI to write all your code, but if you want to be a programmer and can't see that the code is pretty mid then you should work on improving your own programming skills.
People have been saying the 6 month thing for years now, and while I do see it improving in breadth, quality/depth still appears to be plateauing.
It's okay if you don't want to be a programmer though; you can be a manager and let AI do an okay job at being your programmer. You'd better be driven to be a good manager, though. If you're not... then AI can do an okay job of replacing you there too.
Musicians mourned synthesizers. Illustrators mourned Photoshop. Typesetters mourned desktop publishing. In every case the people who thrived weren't the ones who refused the new tool or the ones who blindly adopted it. They were the ones who understood that the tool absorbed the mechanical layer while the taste layer became more valuable, not less.
The real shift isn't from hand-coding to AI-coding. It's from "I express intent through syntax" to "I express intent through constraints and review." That's still judgment. Still craft. Just a different substrate.
What we're actually mourning is the loss of effort as a signal of quality. When anyone can generate working code, the differentiator moves upstream to architecture, to knowing what to build, to understanding why one approach fails at scale and another doesn't. Those are harder skills, not easier ones.
Why use a spade? Even those construction workers use the right sized tools. They ain't stupid.
The LLM equivalent would be if the digging and hauling of the dirt happened without any people involved except for the planning of logistics.
You'd sometimes discover a city communication line destroyed in the process; or the dirt hauled on top of a hospital, killing hundreds of orphaned kids with cancer; or kittens mixed into the concrete instead of cement.
And since you clicked "agree" on that Anthropic EULA, you can't sue them for it, so you now hire 5 construction workers to constantly oversee the work.
It's still net positive... for now at least... But far from being "without any people". And it'll likely remain this way for a long time.
I would add a nuance from OP's perspective, sorta: a close friend of mine works in construction, and often comments on how different projects can be. On some, everyone in the entire building supply chain can be really inspired to work on a really interesting project, because of either its usefulness or its craftsmanship (the two of which are related), and on some, everyone is just trying to finish the project as cheaply and quickly as possible.
It’s not that the latter hasn’t existed in tech, but it does appear that there is a way to use LLMs to do more of the latter. It’s not “the end of a craft”, but without a breakthrough (and something to check the profit incentive) it’s also not a path to utopia (like other comments seem to be implying)
Craftsmanship doesn’t die, it evolves, but the space in between can be a bit exhausting as markets fail to understand the difference at first.
> Today, I would say that about 90% of my code is authored by Claude Code. The rest of the time, I’m mostly touching up its work or doing routine tasks that it’s slow at, like refactoring or renaming.
> I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it. Maybe it’s because the initial tribal battle lines have clouded everybody’s judgment, or maybe it’s because we inhabit different worlds where the technology is either better or worse (I still don’t think LLMs are great at UI for example), but there’s just a lot of patently unhelpful discourse out there, and I’m tired of it.
https://nolanlawson.com/2026/01/24/ai-tribalism/
If you're responding to this with angry anti-AI rants (or wild AI hype), might want to go read that post.
I still feel proud when skillfully guiding a set of AI agents to build from my imagination. Especially when it was out of my reach just 6 months ago.
I'm a 49-year-old veteran who started at just 10 years old, and I have continued to find pure passion in it.
But I am still quite annoyed at the slopful nature of the code that is produced when you're not constantly nagging it to do better.
We've RLed it to produce code that works by hook or by crook, putting infinite fallback paths and type casts everywhere rather than checking what the semantics should be.
Sadly I don't know how we RL taste.
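To make that concrete, here's a hypothetical sketch (names and values made up) of the difference between code that "works by hook or by crook" and code that actually checks the semantics:

    # The RLed antipattern: swallow every error and fall back, so it "works".
    def get_port(config: dict) -> int:
        try:
            return int(config.get("port", 8080))
        except Exception:
            return 8080  # a typo'd config value is silently ignored

    # Checking the semantics instead: a missing or malformed port is a real error.
    def get_port_strict(config: dict) -> int:
        port = int(config["port"])  # KeyError/ValueError surface immediately
        if not 0 < port < 65536:
            raise ValueError(f"invalid port: {port}")
        return port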
Perhaps people or machines will finally figure out how to make software which actually works without a need for weekly patching.
The craft was dying long before LLMs. It started in the dotcom era, ZIRP added some beatings, and now LLMs are finishing the job.
This is fine, because like in furniture making, the true craftsmen will be even more valuable (overseeing farm automation, high-end handmade furniture, small organic farms), and the factory-worker masses (ZIRP-enabled tech workers) will move on to more fulfilling work.
With woodworking and farming you get some physical goods as a result. Some John Smith who buys furniture can touch nice cherry paneling, appreciate the joinery and grain. With farming he can taste delicious organic tomatoes and cucumbers, make food with them.
Would this John Smith care at all about how some software is written as long as it does what he wants and it works reliably? I'm not sure.
I find myself, ironically, spending more time typing out great code by hand now. Maybe some energy previously consumed by tedium has been freed up, or maybe the wacky machines brought a bit of the whimsy back into the process for me.
I’m in consulting now and it’s all the same crap. Enterprises want to “unleash AI” so they can fire people. Maximize profits. My nephews who are just starting their careers are blindly using these tools and accepting the PR if it builds. Not if it’s correct.
I’m in awe of what it can do but I also am not impressed with the quality of how it does it.
I’m fortunate to not have any debt so I can float until the world either wises up or the winds of change push me in a new direction.
I liked the satisfaction of building something “right” that was also “useful”. The current state of Opus and Codex can only pretend to do the latter.
What I do mourn is the reliability. We're in this weird limbo where it's like rolling a die for every piece of work. If it comes up 1-5, I would have been better off implementing it myself. If it comes up 6, it'll get it done orders of magnitude faster than doing it by hand. Since the overall speedup is worthwhile, I have to try it every time, even if most of the time it fails. And of course it's a moving target, so I have to keep trying the things that failed yesterday because today's models are more capable.
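With toy numbers (mine vary task to task, so treat this as a sketch of the reasoning, not my actual data), the gamble still pays off in expectation:

    # Toy numbers, purely illustrative: hand-coding a task takes 60 min.
    # A failed agent attempt wastes 10 min before I fall back to hand-coding;
    # a successful attempt finishes the whole task in 5 min.
    p_success = 1 / 6
    hand, failed_overhead, success = 60, 10, 5
    expected = p_success * success + (1 - p_success) * (hand + failed_overhead)
    print(expected)  # ~59.2 min: just under the 60 min baseline, so I keep rolling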
I'll miss it not because the activity becomes obsolete, but because it's much more interesting than sitting till 2am trying to convince LLM to find and fix the bug for me.
We'll still be sitting till 2am.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
I've been hearing this for the last two years. And yet, LLMs, given abstract description of the problem, still write worse code than I do.
Or did you mean type code? Because in that case, yes, I'd agree. They type better.
After ten years of professional coding, LLMs have made my work more fun. Not easier in the sense of being less demanding, but more engaging. I am involved in more decisions, deeper reviews, broader systems, and tighter feedback loops than before. The cognitive load did not disappear. It shifted.
My habits have changed. I stopped grinding algorithm puzzles because they started to feel like practicing celestial navigation in the age of GPS. It is a beautiful skill, but the world has moved on. The fastest path to a solution has always been to absorb existing knowledge. The difference now is that the knowledge base is interactive. It answers back and adapts to my confusion.
Syntax was never the job. Modeling reality was. When generation is free, judgment becomes priceless.
We have lost something, of course. There is less friction now, which means we lose the suffering we often mistook for depth. But I would rather trade that suffering for time spent on design, tradeoffs, and problems that used to be out of reach.
This doesn't feel like a funeral. It feels like the moment we traded a sextant for a GPS. The ocean is just as dangerous and just as vast, but now we can look up at the stars for wonder, rather than just for coordinates.
If you're someone with a background in Computer Science, you should know that we have formal languages for a reason, and that natural language is not as precise as a programming language.
But anyway, we're at peak AI hype; hitting the top of HN is worth more than a reasonable take, because reasonableness doesn't sell after all.
So here we're seeing yet another text about how the world of software was solved by AI and being a developer is an artifact of the past.
Right? At least on HN, there's a critical mass of people loudly ignoring this these days, but no one has explained to me how replacing formal language with an English-language-specialized chatbot - or even multiple independent chatbots (aka "an agent") - is a good tradeoff to make.
English or any other natural language can of course be concise enough, but when brief, they leave much to the imagination. Adding verbosity allows for greater precision, but I also think that that is what formal languages are for, just as you said.
Although, I think it's worth contemplating whether the modern programming languages/environments have been insufficient in other ways. Whether by being too verbose at times, whether the IDEs should be more like databases first and language parsers second, whether we could add recommendations using far simpler, but more strict patterns given a strongly typed language.
My current gripes are having auto imports STILL not working properly in most popular IDEs, or an IDE not finding a referenced entity from a file if it's not currently open... LLMs sometimes help with that, but they are extremely slow in comparison to local cache resolution.
Long term, I think more value will be in directly improving the above, but we shall see. AI will stay around too, of course, but how much relevance it'll have in 10 years' time is anybody's guess. I think it'll become a commodity, the bubble will burst, and we'll only use it when sensible after a while. At least until the next generation of AI architecture arrives.
It's also important to step back and realize that it goes way beyond coding. Coding is just the deepest tooth of the jagged frontier. In 3 years there will be blog posts lamenting the "death of law firms" and the "death of telemedicine". Maybe in 10 years it will be the death of everything. We're all in the same boat, and this boat is taking us to a world where everyone is more empowered, not less. But still, there will be that cutting edge in any field that will require real ingenuity to push forward.
I agree that it's very destabilizing. It's sort of like inflation for expertise. You spend all this time and effort saving up expertise, and then those savings rapidly lose value. At the same time, your ability to acquire new expertise has accelerated (because LLMs are often excellent private tutors), which is analogous to an inflation-adjusted wage increase.
There are a ton of variables. Will hallucinations ever become negligible? My money is on "no" as long as the architecture is basically just transformers. How will compiling training data evolve with time? My money is on "it will get more expensive". How will legislators react? I sure hope not by suppressing competition. As long as markets and VC are functioning properly, it should only become easier to become a founder, so outsized corporate profits will be harder to lock down.
AI has a ways to go before it's senior level, if it ever reaches that level, but I do feel bad for the juniors who survive this and will never have the opportunity to sculpt code by hand.
It's not like all of a sudden I'm working 2-3 hours a day. I'm just getting a lot more done.
My headspace is now firmly in "great, I'm beginning to understand the properties and affordances of this new medium; how do I maximise my value from it?" Hopefully there's more than a few people who share this perspective. I'd love to talk with you about the challenges you experience; I know I have mine, and maybe we have answers to each other's problems :)
I assume that the current set of properties can change; however, it seems like some things are going to be easier than others. For example, multi-modal reasoning still seems to be a challenge, and I'm trying to work out if that's just hard to solve and will take a while, or if we're not far from a good solution.
I mourn a little bit that in 20 years possibly 50% of software jobs will get axed, or that, unless you are an elite/celebrity dev, salaries will stagnate. I mourn that in the future upward mobility, moving up into the upper middle class, will be harder without trying to be an entrepreneur.
Don't train your replacements; better yet, let's stop using them whenever we can.
Yet I feel much more connected with my old code. I really enjoyed actually writing all that code even though it wasn't the best.
If AI tools had existed 5 years ago when I first started working on this codebase, obviously the code quality would've been much higher. However, I feel like I really loved writing my old code, and if given the same opportunity to start over, I would want to rewrite this code myself all over again.
Sure, maybe it takes me a little while to ride across town on my bike, but I can reliably get there and I understand every aspect of the road to my destination. The bazooka-powered jetpack might get me there in seconds, but it also might fly me across state lines, or to Antarctica, or the moon first, belching out clouds of toxic gas along the way.
That's the future for maybe half of programmers.
Remember, it's only been three years since ChatGPT. This is just getting started.
Perhaps I'm a cynic, but I don't know.
This is a 100% honest question. Because whatever your justification for this is, it can probably be applied to AI programmers running at temperature 0.0 as well, just one abstraction level higher.
I'm 100% honestly looking forward to finding a single justification that would not fit both scenarios.
But I've seen this conversation on HN already 100 times.
The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I personally don't agree at all, in the sense I don't think that matters. I've run into compiler bugs, and more library bugs than I can count. The real world is just as messy as LLMs are, and you still need the same testing strategies to guard against errors. Development is always a slightly stochastic process of writing stuff that you eventually get to work on your machine, and then fixing all the bugs that get revealed once it starts running on other people's machines in the wild. LLMs don't write perfect code, and neither do you. Both require iteration and testing.
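The guardrail looks the same no matter who or what wrote the code. A minimal sketch, with the function and test cases invented for illustration:

    # The same tests guard against my bugs, library bugs, and LLM bugs alike.
    def parse_price(s: str) -> int:
        """Parse a price string like '$1,234.56' into integer cents."""
        digits = s.replace("$", "").replace(",", "")
        dollars, _, cents = digits.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    assert parse_price("$1,234.56") == 123456
    assert parse_price("$0.99") == 99
    assert parse_price("$5") == 500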
> The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I don't see this as a frequent answer tbh, but I do frequently see claims that this is the critique. I wrote much more here [0], and honestly I'm on the side of Dijkstra; it doesn't matter if the LLM is deterministic or probabilistic.
It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.
- Dijkstra: On the foolishness of "natural language programming"
His argument has nothing to do with deterministic systems [1] and everything to do with the precision of the language. His argument comes down to "we invented symbolic languages for a good reason".
[0] https://news.ycombinator.com/item?id=46928421
[1] If we want to be more pedantic, we can actually codify his argument more simply by using some mathematical language, but even this will take some interpretation: natural language naturally imposes a one-to-many relationship when processing information.
But the parent argument is pretty bad, in my opinion.
Compiler is your interface.
If you treat an LLM as your interface... well, I wouldn't want to share a codebase with you.
It's not about having abstraction levels above or below (BTW, in 21st century CPUs, the machine code itself is an abstraction over much more complex CPU internals).
It's about writing more correct, efficient, elegant, and maintainable code at whichever abstraction layer you choose.
AI still writes messier, sloppier, buggier, more redundant code than a good programmer can when they care about the craft of writing code.
The end result is worse to those who care about the quality of code.
We mourn, because the quality we paid so much attention to is becoming unimportant compared to the sheer quantity of throwaway code that can be AI-generated.
We're fine dining chefs losing to factory-produced junk food.
UB (undefined behavior) is actually a big deal, and is avoided for a reason.
I couldn't in my life guess what CC would do in response to "implement login form". For all I know, CC's response could depend more on the time of day or Anthropic's electricity bill last month than on the existing code in my app and the specific wording I use.
When I write a program, I understand the architecture of the computer, I understand the assembly, I understand the compiler, and I understand the code. There are things that I don't understand, and as I push to understand them, I am rewarded by being able to do more things. In other words, Understanding is both beautiful and incentivized.
When making something with an LLM, I am disincentivized from actually understanding what is going on, because understanding is very slow, and the whole point of using AI is speed. The only time when I need to really understand something is when something goes wrong, and as the tool improves, this need will shrink. In the normal and intended usage, I only need to express a desire to achieve a result. Now, I can push against the incentives of the system. But for one, most people will not do that at all; and for two, the tools we use inevitably shape us. I don't like the shape into which these tools are forming me - the shape of an incurious, dull, impotent person who can only ask for someone else to make something happen for me. Remember, The Medium Is The Message, and the Medium here is, Ask, and ye shall receive.
The fact that AI use leads to a reduction in Understanding is not only obvious, but also studies have shown the same. People who can't see this are refusing to acknowledge the obvious, in my opinion. They wouldn't disagree that having someone else do your homework for you would mean that you didn't learn anything. But somehow when an LLM tool enters the picture, it's different. They're a manager now instead of a lowly worker. The problem with this thinking is that, in your example, moving from say Assembly to C automates tedium to allow us to reason on a higher level. But LLMs are automating reasoning itself. There is no higher level to move to. The reasoning you do now while using AI is merely a temporary deficiency in the tool. It's not likely that you or I are the .01% of people who can create something truly novel that is not already sufficiently compressed into the model. So enjoy that bit of reasoning while you can, o thou Man of the Gaps.
They say that writing is God's way of showing you how sloppy your thinking is. AI tools discourage one from writing. They encourage us to prompt, read, and critique. But this does not result in the same Understanding as writing does. And so our thinking will be, become, and remain vapid, sloppy, inarticulate, invalid, impotent. Welcome to the future.
> why do you not program in assembly?
There's a balance of levels of abstraction. Abstraction is a great thing. Abstraction can make your programs faster, more flexible, and easier to understand. But abstraction can also make your programs slower, more brittle, and incomprehensible. The point of code is to write specification. That is what code is. The whole reason we use a pedantic and somewhat cryptic schema is that natural language is too abstract. This is the exact reason we created math. It really is even the same reason we created things like "legalese".
Seriously, just try a simple exercise and be adversarial to yourself. Describe how to do something and try to find loopholes. Malicious compliance. It's hard to defend against, and writing that spec becomes extremely verbose, right? Doesn't this actually start to become easier by using coding techniques? Strong definitions? Have we all forgotten the old saying "a computer does exactly what you tell it to, not what you intend to tell it to do"? Vibe coding only adds a level of abstraction to that. It becomes "a computer does what it 'thinks' you are telling it to do, not what you intend to tell it to do". Be honest with yourself: which paradigm is easier to debug?
Natural language is awesome because the abstraction really compresses concepts, but it requires inference of the listener. It requires you to determine what the speaker intends to say rather than what the speaker actually says.
Without that you'd have to be pedantic to even describe something as mundane as making a sandwich[1]. But inference also leads to misunderstandings, and frankly, that is a major factor in why we talk past one another when talking on large global communication systems. Have you never experienced culture shock? Never experienced a case where someone misinterprets you and you realize that their interpretation was entirely reasonable?[2] Doesn't this knowledge also help resolve misunderstandings as you take a step back and recheck assumptions about these inferences?
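To make the contrast concrete with a made-up example: the English spec "keep the recent items" leaves "recent" entirely to the listener's inference, while even a simple typed version has to answer every one of those questions up front:

    # "Recent" relative to which clock? How recent? By which timestamp field?
    # The signature forces every one of those inferences to become explicit.
    from datetime import datetime, timedelta

    def recent(items: list[tuple[str, datetime]],
               cutoff: timedelta = timedelta(days=30),
               now: datetime | None = None) -> list[str]:
        # Even the choice of naive local time is spelled out here, where
        # English would have left it unstated.
        now = now or datetime.now()
        return [name for name, ts in items if now - ts <= cutoff]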
> using temperature 0.0
Because, as you should be able to infer from everything I've said above, the problem isn't actually about randomness in the system. Making the system deterministic has only one realistic outcome: a programming language. You're still left with the computer doing what you tell it to do, but you have made this more abstract. You've only turned it into the PB&J problem [1], and frankly, I'd rather write code than instructions like the ones those kids write. Compared to the natural language the kids are using, code is more concise, easier to understand, more robust, and more flexible. I really think Dijkstra explains things well [0]. (I really do encourage reading the entire thing. It is short and worth the 2 minutes. His remark at the end is especially relevant in our modern world where it is so easy to misunderstand one another...)
The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.
Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.
[0] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
[1] https://www.youtube.com/watch?v=FN2RM-CHkuI
[2] Has this happened to you and you've been too stubborn to realize the interpretation was reasonable?
I think we should move past this quickly. Coding itself is fun, but it is also labour; building something is what is rewarding.
It's not even always a more efficient form of labour. I've experienced many scenarios with AI where prompting it to do the right thing takes longer and requires writing/reading more text compared to writing the code myself.
Many are calling people like me Luddites for mourning this, and I think that I am prepared to wear that label with pride. I own multiple looms and a spinning wheel, so I think I may be in a better position to speculate on how the Luddites felt than most people are nowadays.
And what I see is that the economic realities are what they are - like what happened to cottage industry textile work, making software by hand is no longer the economical option. Or at least, soon enough it won’t be. I can fret about deskilling all I like, but it seems that soon enough these skills won’t be particularly valuable except as a form of entertainment.
Perhaps the coding agents won’t be able to make certain things or use certain techniques. That was the case for textile manufacturing equipment, too. If so then the world at large will simply learn to live without. The techniques will live on, of course, but their practical value will be as an entertainment for enthusiasts and a way for them to recognize one another when we see it in each others’ work.
It's not a terrible future, I suppose, in a long enough view. The world will move on, just like it did after the Industrial Revolution. But, perhaps also like the Industrial Revolution and other similar points in history, not until after we get through another period where a small cadre of wealthy elites who own and control this new equipment use that power to usher in a new era of neofeudalism. Hopefully this time they won't start quite so many wars while they enjoy their power trips.
We're 'simply' moving up the abstraction hierarchy again. Good!
Mechanising the production of code is a good thing. And crafting code as art is a good thing. It is a sign of a wider trend that we feel the need to look at these things as adversaries.
I look forward to the code-as-art countermovement. It's gonna be quite something.
When cameras became mainstream, realism in painting went out of fashion, but this was liberating in a way as it made room for many other visual art styles like Impressionism. The future of programming/computing is going to be interesting.
Like iambateman said: for me it was never about code. Code was a means to an end, and it didn't stop at code. I'm the kind of software engineer who learned frontends, systems, databases, ETLs, etc -- whatever was demanded of me to produce something useful, I learned it and did it. We're now calling that a "product engineer". The "craft" for me was in creating useful things that were reliable and efficient, not particularly how I styled lines, braces, and brackets. I still do that in the age of AI.
All of this emotional spillage feels for naught. The industry is changing as it always has. The only constant I've ever experienced in this industry is change. I realized long ago that when the day comes that I am no longer comfortable with change, that is my best signal that this industry is no longer for me.
As an (ex-)programmer in his late 40s, I couldn't agree more. I'm someone who can be detail-oriented (but, I think also with a mind toward practicality) to the point of obsession, and I think this trait served me extremely well for nearly 25 years in my profession. I no longer think that is the case. And I think this is true for a lot of developers - they liked to stress and obsess over the details of "authorship", but now that programming is veering much more towards "editor", they just don't find the day-to-day work nearly as satisfying. And, at least for me, I believe this while not thinking the change to using generative AI is "bad", but just that it's changed the fundamentals of the profession, and that when something dies it's fine to mourn it.
If anything, I'm extremely lucky that my timing was such that I was able to do good work in a relatively lucrative career where my natural talents were an asset for nearly a quarter of a century. I don't feel that is currently the case regarding programming, so I'm fortunate enough to be able to leave the profession and go into violin making, where my obsession with detail and craft is again a huge asset.
Mourning the passing of one form of abstraction for another is understandable, but somewhat akin to bemoaning the passing of punch card programming. Sure, why not.
I'm using a mix of Gemini, Grok, and GPT to translate some MATLAB into C++. It is kinda okay at its job but not great? I am rapidly reading Accelerated C++ to get to the point where I can throw the LLM out the window. If it was Python or Julia I wouldn't be using an LLM at all because I know those languages. AI is barely better than me at C++, and I'm only halfway through my first ever book on it. What LLMs are these people using?
The code I’m translating isn’t even that complex - it runs analysis on ecg/ppg data to implement this one dude’s new diagnosis algorithm. The hard part was coming up with the algorithm, the code is simple. And the shit the LLM pours out works kinda okay but not really? I have to do hours of fix work on its output. I’m doing all the hard design work myself.
I fucking WISH I could only work on biotech and research and send the code to an LLM. But I can’t because they suck so I gotta learn how computer memory works so my C++ doesn’t eat up all my pc’s memory. What magical LLMs are yall using??? Please send them my way! I want a free llm therapist and a programmer! What world do you live in?? Let me in!
This is the narrow understanding of programming that is the whole point of contention.
I’ve built a number of team-specific tools with LLM agents over the past year that save each of us tens of hours a month.
They don’t scale beyond me and my six coworkers, and were never designed to, but they solve challenges we’d previously worked through manually and allow us to focus on more important tasks.
The code may be non-optimal and won’t become the base of a new startup. I’m fine with that.
It’s also worth noting that your evidence list (increased CVEs, outages, degraded quality) is exclusively about what happens when LLMs are dropped into existing development workflows. That’s a real concern, but it’s a different conversation from whether LLMs create useful software.
My tools weren’t degraded versions of something an engineer would have built better. They’re net-new capability that was never going to get engineering resources in the first place. The counterfactual in my case isn’t “worse software”—it’s “no software.“
User count? Domain? Scope of development?
You have something in mind, obviously.
Anything that proves that LLMs increase software quality. Any software built with an LLM that is actually in production, survives maintenance, doesn't have 100 CVEs, that people actually use.
> The difference in pace of shipping with and without AI assistance is staggering.
Let's back up these statements with some evidence, something quantitative, not just what pre-IPO AI marketing blog posts are telling you.
The evidence shows increased outages, increased vulnerabilities, Windows failing to boot, a Windows taskbar that is still React Native and barely works. And I have spoken to engineers at FANG companies; they are forced to use LLMs, and managers are literally tracking metrics. So where is all this amazing new software, software quality, or increased productivity from them?
Sure, if you have the money, get a carpenter to build your kitchen from solid oak. Most people buy MDF, or even worse, chipboard. IKEA, etc. In fact, not too long ago, I had a carpenter install prefabricated cabinets in a new utility room. The cabinets were pre-assembled, and he installed them on the wall in the right order and did the detailed fittings. He didn’t do a great job, and I could have done better, albeit much slower. I use handsaws simply because I’m afraid of circular saws, but I digress.
A lot of us here are like carpenters before IKEA and prefabricated cabinets, and we are just now facing a new reality. We scream “it is not the same”. It indeed isn’t for us. But the consumers will get better value for money. Not quality, necessarily, but better value.
How about us? We will eventually be kitchen designers (aka engineers, architects), or kitchen installers (aka programmers). And yes, compared to the golden years, those jobs will suck.
But someone, somewhere, will be making bespoke, luxury furniture that only a few can afford. Or maybe we will keep doing it anyway because our daily jobs suck, until we decide to stop. And that is when the craft will die.
The world will just become less technical, as is the case with other industrial goods. Who here even knows how a combustion engine works? Who knows how fabric is made, or even how a sewing machine works? We are very much like the mechanics of yesteryear, before cars became iPads on wheels.
As much as we hate it, we need to accept that coding has peaked. Juniors will be replaced by AI, experts will retire. Innovation will be replaced by processes. And we must accept our place in history.
Speak for yourself. They produce shit code and have terrible judgment. Otherwise we wouldn't need to babysit them so much.
I have to say this reads a bit hollow to me, and perhaps a little bit shallow.
If the content this guy created could be scraped and usefully regurgitated by an LLM, that same hack, before LLMs, could have simply searched, found the content, and profited off of it nonetheless. And probably could have done so without much more thought than that required to use the LLM. The only real difference introduced by the LLM is that the purpose of the scraping is different from that of a search engine.
But let's get rid of the loaded term "hack" and be a little less emotional about the complaint. Really, the author published some works, and presumably did so that people could consume that content: without first knowing who was going to consume it or for what purpose.
It seems to me what the author is really complaining about is that the reward from the consuming party has been displaced from himself to whoever owns the LLM. The outcome of consumption and use hasn't changed... only who got credit for the original work has.
Now I'm not suggesting that this is an invalid complaint, but trying to avoid saying "I posted this for my benefit"... be that commercial (ads?) or even just public recognition... is a bit disingenuous.
If you poured your knowledge, experience, and creativity into some content for others to consume, and someone else took that content as their own... just be forthright about what you really lost, and don't disparage the consumer just because they aren't your "hacks" anymore and middlemen are now reaping your rewards.
So if my corporate overlords will have me talk to the soulless Claude robot all day long in a Severance-style setting, and fix its stupid bugs, but I get to keep my good salary, then I'll shed a small tear for my craft and get back to it. If not... well, then I'll be shedding a lot more tears, I guess.
guy who doesn't realize we still use hammers. This article was embarrassing to read.
My whole life I have been reading other people’s code to accumulate best practices and improve myself. While a lot of developers start with reading documentation, I have always started with reading code.
And where I was previously using the GitHub Code Search to eat up as much example code as I could, I am now using LLMs to speed the whole process up. Enormously. I for one enjoy using it.
That said, I have been in the industry for more than 15 years. And all companies I have been at are full of data silos, tribal knowledge about processes and organically grown infrastructure, that requires careful changes to not break systems you didn’t even know about.
Actually most of my time isn’t put into software development at all. It’s about trying to know the users and colleagues I work with, understand their background and understand how my software supports them in their day to day job.
I think LLMs are very, very impressive, but they have a long way to go to reach empathy.
* People still craft wood furniture from felled trees entirely with hand tools. Some even make money doing it by calling it 'artisanal'. Nothing is stopping anyone from coding in any historical mode they like. Toggle switches, punch cards, paper tape, burning EPROMs, VT100, whatever.
* OP seems to be lamenting he may not be paid as much to expend hours doing "sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM." I've been there. Sometimes I'd feel mild satisfaction on solving a rat-hole problem but more often, it was significant relief. I never much liked that part of coding and began to see it as a failure mode. I found I got bigger bucks - and had more fun - the better I got at avoiding rat-hole problems in the first place.
* My entire journey creating software from ~1983 to ~2020 was about making a thing that solved someone's problem better, cheaper or faster - and, on a good day, we managed all three at once. At various times I ended up doing just about every aspect of it from low-level coding to CEO and back again, sometimes in the same day. Every role in the journey had major challenges. Some were interesting, a few were enjoyable, but most were just "what had to get done" to drag the product I'd dreamt up kicking and screaming into existence.
* From my first teenage hobby project to my first cassette-tape in-a-baggie game to a $200M revenue SaaS for F100, every improvement in coding from getting a floppy disk drive to an assembler with macros to an 80 column display to version control, new languages, libraries, IDEs and LLMs just helped "making the thing exist" be easier, faster and less painful.
* Eventually, to create even harder, bigger and better things I had to add others coding alongside me. Stepping into the player-coach role amplified my ability to bring new things into existence. It wasn't much at first because I had no idea how to manage programmers or projects but I started figuring it out and slowly got better. On a good day, using an LLM to help me "make the thing exist" feels a lot like when I first started being a player-coach. The frustration when it's 'two steps forward, one back' feels like deja vu. Much like current LLMs, my first part-time coding helpers weren't as good as I was and I didn't yet know how to help them do their best work. But it was still a net gain because there were more of them than me.
* The benefits of having more coders helping me really started paying off once I started recruiting coders who were much better programmers than I ever was. Getting there took a little ego adjustment on my part, but what a difference! They had more experience, applied different patterns, knew to avoid problems I'd never seen, and started coming up with some really good ideas. As LLMs get better and I get better at helping them help me - I hope that's where we're headed. It doesn't feel directionally different than the turbo-boost from my first floppy drive, macro-assembler, IDE or profiler, but the impact is already greater, with upside potential that's much higher still - and that's exciting.
But software engineering is the only industry that is built on the notion of rapid change, constant learning, and bootstrapping ourselves to new levels of abstraction so that we don't repeat ourselves and make each next step even more powerful.
Just yesterday we were pair programming with a talented junior AI developer. Today we are treating them as senior ones and can work with several in parallel. Very soon your job will not be pair programming and peer reviewing at all, but teaching a team of specialized coworkers to work on your project. In a year or two we will be assembling factories of such agents that will handle the process from taking your requirements to delivering and maintaining complex software. Our jobs are going to change many more times and much more often than ever.
And yet there will still be people finding solace in hand-crafting their tools, or finding novel algorithms, or adding the creativity aspect into the work of their digital development teams. Like people lovingly restoring their old cars in their garage just for the sake of the process itself.
And everything will be just fine.
Not sure I agree. I think most programming today looks almost exactly the same as it did 40 years ago. You could even have gotten away with never learning a new language. AI feels like the first time a large percentage of us may be forced to fundamentally change the way we work or change careers.
2. the tools still need a lot of direction; I still fight Claude with Opus to do basic things, and the best experiences are when I provide very specific prompts
3. being idealistic in a capitalist system where you have to pay your bills every month is something I could do when my parents paid my bills
These apocalyptic posts about how everything is shit really don't match my reality at all. I use these tools every day to be more productive and improve my code, but they are nowhere close to doing my actual job, which is figuring out WHAT to do. How to do it is mostly irrelevant; once I get to that point I already know what needs to be done, and it doesn't matter if it is me or Opus producing the code.
Also, don't forget the things that AI makes possible. It's a small accomplishment, but I have a World of Warcraft AddOn that I haven't touched in more than 10 years. Of course now, it is utterly broken. I pointed ChatGPT at my old code and asked it to update it to "retail" WoW, and it did it. And it actually worked. That's kind of amazing.
I mourn having to repeatedly hear this never-quite-true promise that an amazing future of perfect code from agentic whatevers will come to fruition, and it's still just six months away. "Oh yes, we know we said it was coming six, twelve, and eighteen months ago, but this time we pinky swear it's just six months away!"
I remember when I first got access to the internet. It was revolutionary. I wanted to be online all the time, playing games, chatting with friends, and discovering new things. It shaped my desire to study computer science and learn to develop software! I could see and experience the value of the internet immediately. Its utility was never "six months away," and I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.
LLM coding doesn't feel revolutionary or exciting like this. It's a mandate from the top. It's my know-nothing boss telling me to "find ways to use AI so we can move faster." It's my boss's know-nothing boss conducting Culture Amp surveys about AI usage, but ignoring the feedback that 95% of Copilot's PR comments are useless noise: "The name of this unit test could be improved." It's waiting for code to be slopped onto my screen, so I can go over it with a fine-toothed comb and find all the bugs—and there are always bugs.
Here's what I hope is six months away: The death of AI hype.
What's much more interesting is looking back 6, 12, 18, or 24 months. 6 months ago was ChatGPT 5, 12 months ago was GPT 4.5, 18 months ago was 4o, and about three years ago the original ChatGPT (on GPT 3.5) was released. If you've been following closely you'll have seen incredible changes between each of them. Not all the way to perfect, because that's not really a reasonable goal, but definite big leaps forward each time. A couple of years ago one-shotting a basic tic-tac-toe game wasn't really possible. Now, though, you can one-shot a fairly complex web app. It won't be perfect, or even good by a lot of measures compared to human-written software, but it will work.
I think the comparison to the internet is a good one. I wrote my first website in 1997, and saw the rapid iteration of websites and browsers back then. It felt amazing, and fast. AI feels the same to me. But given the fact that browsers still aren't good in a lot of ways I think it's fair to say AI will take a similarly long time. That doesn't mean the innovations along the way aren't freaking cool though.
It's pretty obvious the pace of change is slowing down, and there isn't a lot of evidence that shipping a better harness and post-training on using said harness is going to get us to the magical place all these CEOs have promised, where all SWE is automated.
What's happening now is training models for long-running tasks that use tools, taking hours at a time. The latest models like 4.6 and 5.3 are starting to make good on this. If you're not using models that are wired into tools and allowed to iterate for a while, then you're not getting to see the current frontier of abilities.
(E.g., if you're just using models to do general-knowledge Q&A, then sure, there's only so much better you can get at that, and models tapered off there long ago. But the vision is to use agents to perform a substantial fraction of white-collar work; there are well-defined research programmes to get there, and there is steady progress.)
o1 was something like 16-18 months ago. o3 was kinda better, and GPT 5 was considered a flop because it was basically just o3 again.
I’ve used all the latest models in tools like Claude code and codex, and I guess I’m just not seeing the improvement? I’m not even working on anything particularly technically complex, but I still have to constantly babysit these things.
Where are the long-running tasks? Cursor’s browser that didn’t even compile? Claude’s C compiler that had gcc as an oracle and still performs worse than gcc without any optimizations? Yeah I’m completely unimpressed at this point given the promises these people have been making for years now. I’m not surprised that given enough constraints they can kinda sorta dump out some code that resembles something else in their training data.
As to where the long-running tasks are... this is a very cool topic, let me nerd out for a moment! At this point the best examples are in internal lab and third-party evals, for two reasons.
First, it's a rapidly developing capability. Here's a third-party evaluation of the rate of progress: [0].
Second, and perhaps more importantly, the bottleneck to adopting these long-running systems is giving them tools to unblock themselves as they work (eg, access to emails, databases, code, google docs, etc), much as white-collar workers today need to frequently integrate/assimilate data from various sources. It turns out that many of these services have awkward APIs to use in real life, so wiring them up takes a lot of time/setup to take advantage. Internally, the labs can simulate these capabilities, so they see the "peak potential" of the models when they can natively access such systems.
As such, I will make the following falsifiable prediction: Over the next 6-8 months, you will see a flood of new startups/products that help integrate actual company systems with long-running agents, and you will see a pivot from "tool as assistant" to "tool as 1:1 replacement for some employees" that is publicly reported on.
The job market will get flooded with the unemployed (it already is) with fewer jobs to replace the ones that were automated, those remaining jobs will get reduced to minimum wages whenever and wherever possible. 25% of new college grads cannot find employment. Soon young people will be so poor that you'll beg to fight in a war. Give it 5-10 years.
This isn't a hard future to game-theory out; it's not pretty if we maintain this fast track of progress in ML that minimally requires humans. Notice how the ruling class has increased the salaries for certain types of ML engineers; they know what's at stake. These businessmen make decisions based on expected value calculated from complex models; they aren't giving billion-dollar pay packages to engineers because it's trendy. We should use our own mental models to predict where this is going, and prevent it from happening however possible.
The word "Luddite" continues to be applied with contempt to anyone with doubts about technology, especially the nuclear kind. Luddites today are no longer faced with human factory owners and vulnerable machines. As well-known President and unintentional Luddite D. D. Eisenhower prophesied when he left office, there is now a permanent power establishment of admirals, generals and corporate CEO's, up against whom us average poor bastards are completely outclassed, although Ike didn't put it quite that way. We are all supposed to keep tranquil and allow it to go on, even though, because of the data revolution, it becomes every day less possible to fool any of the people any of the time. If our world survives, the next great challenge to watch out for will come - you heard it here first - when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long. Meantime, as Americans, we can take comfort, however minimal and cold, from Lord Byron's mischievously improvised song, in which he, like other observers of the time, saw clear identification between the first Luddites and our own revolutionary origins. It begins: [0]
https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...
Then next month, of course, latest thing becomes last thing, and suddenly it's again obvious that actually it didn't quite work.
It's like running on a treadmill towards a dangling carrot or something. It's simultaneously always here in front of our faces but also not here in actual hand, obviously.
The tools are good and improving. They work for certain things, some of the time, with various need for manual stewarding in the hands of people who really know what they're doing. This is real.
But it remains an absolutely epic leap from here to the idea that writing code per se is a skill nobody needs any more.
More broadly, I don't even really understand what that could possibly mean on a practical level, as code is just instructions for what the software should do. You can express instructions at a higher level, and tooling keeps making that more and more possible (AI and otherwise), but in the end what does it mean to abstract fully away from the detailed instructions? It seems really clear that doing so can never result in software that does what you want in a precise way, rather than some probabilistic approximation which must be continually corrected.
I think the real craft of software such that there is one is constructing systems of deterministic logic flows to make things happen in precisely the way we want them to. Whatever happens to tooling, or what exactly we call code or whatever, that won't change.
> getting software that does what you want
so then we become PMs?
Nobody credible is promising you a perfect future. But, a better future, yes! If you do not see it, then know this. You have your head firmly planted in the sand and are intentionally refusing to see what is coming. You may not like it. You may not want it. But it is coming and you will either have to adapt or become irrelevant.
Does Copilot spit out useless PR comments? 100% yes! Are there tools that are better than Copilot? 100% yes! These tools are not perfect. But even with their imperfections, they are very useful. You have to learn to harness them for their strengths and build processes to address their weaknesses. And yes, all of this requires learning and experimentation. Without that, you will not get good results, and you will complain about these tools not being good.
I heard it will be here in six months. I guess I don't have much time to adapt! :)
Six months ago is when my coding became 100% done by AI. The utility has already been there for a while.
>I didn't have to be compelled to use it—I was eager to use it of my own volition as often as possible.
The difference is that you were a kid then, with an open mind, and now your worldview has hardened into fixed ideas about how the world works and how things should be done.
Yeah, it's weird. I'm fixated on not having bugs in my code. :)
I have encountered a lot of people saying it will be better in six months, and every six months it has been.
I have also seen a few predictions that say 'in a year or two they will be able to do a job completely.' I am sceptical, but I would say such claims are rare. Dario Amodei is about the only prominent voice I have encountered who puts such abilities on a very short timeframe, and even he points to more than a year.
The practical use of AI has certainly increased a lot in the last six months.
So I guess what I'm asking for is more specifics: what do you feel was claimed, by whom, and by how much did they fall short?
Without that supporting evidence, you could simply be annoyed by the failure of claims that exist only in your imagination.
Maybe you’re just older.
> it's still just six months away
Reminds me of another "just around the corner" promise...[0] I think it is one thing for the average person to buy into the promises, but I've yet to understand why that happens here, within our community of programmers. It is one thing for non-experts to fall for obtuse speculative claims, but it is another for experts. I'm excited for autonomous vehicles, but in 2016 it was laughable to think they were around the corner, and only now, nearly 10 years later, does such a feat start to look like it's actually a few years away.
Why do we only evaluate people and claims on their hits and not their misses? That just encourages people to say anything and everything, because eventually one claim will be right. It's 6 months away because eventually it will actually be 6 months away. But is it 6 months away because it is actually 6 months away, or because we want it to be? I thought the vibe coder's motto is "I just care that it works." Honestly, I think that's the problem: everyone cares about whether it works, and that's the primary concern of all sides of the conversation here. So is it 6 months away, or have you convinced yourself it is? You may have good reasons and evidence for believing it, but evidence for a claim is meaningless without weighing it against the evidence that counters it.
[0] https://en.wikipedia.org/wiki/List_of_predictions_for_autono...
I’ve been programming since 1984.
OP basically described my current role with scary precision.
I mostly review the AI’s code, fix the plan before it starts, and nudge it in the right direction.
Each new model version needs less nudging — planning, architecture, security, all of it.
There’s an upside.
There’s something addictive about thinking of something and having it materialize within an hour.
I can run faster and farther than I ever could before.
I’ve rediscovered that I just like building things — imagining them and watching them come alive — even if I’m not laying every brick myself anymore.
But the pace is brutal.
My gut tells me this window, where we still get to meaningfully participate in the process, is short.
That part is sad, and I do mourn it quite a bit.
If you think this is just hype, you’re doing it wrong.
I feel generative AI is being imposed onto society. While it is a time-saving tool for many applications, I also think there are many domains where generative AI needs to be evaluated much more cautiously. However, there seems to be relentless pressure to “move fast and break things,” to adopt technology due to its initial labor-saving benefits without fully evaluating its drawbacks. That’s why I feel generative AI is an imposition.
I also resent the power and control that Big Tech has over society and politics, especially in America where I live. I remember when Google was about indexing the Web, and I first used Facebook when it was a social networking site for college students. These companies became successful because they provided useful services to people. Unfortunately, once these companies gained our trust and became immensely wealthy, they started exploiting their wealth and power. I will never forget how so many Big Tech leaders sat at Trump’s second inauguration, some of whom got better seats than Trump’s own wife and children. I highly resent OpenAI’s cornering of the raw wafer market and the subsequent exorbitant hikes in RAM and SSD prices.
Honestly, I have less of an issue with large language models themselves and more of an issue with how a tiny handful of powerful people get to dictate the terms and conditions of computing for society. I’m a kid who grew up during the personal computing revolution, when computation became available to the general public. I fell for the “computers for the rest of us,” “information at your fingertips” lines. I wanted to make a difference in the world through computing, which is why I pursued a research career and why I teach computer science.
I’ve also sat and watched research industry-wide becoming increasingly driven by short-term business goals rather than by long-term visions driven by the researchers themselves. I’ve seen how “publish-and-perish” became the norm in academia, and I also saw DOGE’s ruthless cuts in research funding. I’ve seen how Big Tech won the hearts and minds of people, only for it to leverage its newfound power and wealth to exploit the very people who made Big Tech powerful and wealthy.
The tech industry has changed, and not for the better. This is what I mourn.
Nothing will prevent you from typing “JavaScript with your hands”, from “holding code in our hands and molding it like clay…”, and all the other metaphors. You can still do all of it.
What certainly will change is the way professional code is produced, and with it, the avenue of earning a very good living by writing software line by line.
I’ll not pretend that I don’t get the point, but it feels like the lamentation of a baker, tailor, shoemaker, or smith, missing the days of old.
And yet, most people prefer a world with affordable bread, clothes, footwear, and consumer goods.
Will the world benefit most from "affordable" software? Maybe yes, maybe no; there are many arguments on both sides. I am more concerned about the impact on the winners and losers: the rich will get richer and more powerful, while the losers will become even more destitute.
Yet, my final point would be: is it better or worse to live in a world in which software is more affordable and accessible?
Except for the community of people who, for whatever reason, had to throw themselves into it and had the critical mass to both distribute and benefit from the passion of it. That has already been eroded by the tech industry co-opting programming in general, and it is only going to diminish.
The people who discovered something because they were forced to do some hard work and then ran with it are going to be steered away from that direction by many.
Food:
A lot of the processed foods that are easily available make us unhealthy and sick. Even vegetables are less nutritious than they were 50 years ago. Mass agriculture also has many environmental externalities.
Consumer goods:
It has become difficult to find things like reliable appliances. I bought a chest freezer. It broke after a year. The repairman said it would cost more to fix than to buy a new one. I asked him if there was a more reliable model and he said no: they all break quickly.
Clothing:
Fast fashion is terrible for the environment. Do we need as many clothes as we have? How quickly do they end up in landfills?
Would we be better off as a society repairing shoes instead of buying new ones every year?
If you are arguing that the standard of living today is lower than in the past, I think that is a very steep uphill battle.
If your worries are about ecology and sustainability, I agree that is a concern we need to address more effectively than we have in the past. Technology will almost certainly be part of that solution, via things like fusion energy. Success is not assured, and we cannot just sit back and say "we live in the best of all possible worlds with a glorious manifest destiny," but I don't think the future is particularly bleak compared to the past.
I worry that humanity has a track record of diving head first into new technologies without worrying about externalities like the environment or job displacement.
I wish we were more thoughtful and focused more on minimizing the downsides of new technologies.
Instead it seems we’re headed full steam towards huge amounts of energy use and job displacement. And the main bonus is rich people get richer.
I’m not sure if having software be cheaper is beneficial. Is it good for malware to be easier to produce? I’d personally choose higher quality software over more software.
I’m not convinced cheaper mass produced clothing has been a net positive. Will AI be a positive? Time will tell. In the short term there are some obvious negatives.
Cars make people unhealthy and lead to city designs that hurt social engagement and affordability, but they are so much more efficient that it's hard not to use them.
And then the obvious stuff about screens/phones/social media.
I wonder if there are some interesting groupings.
No, they cannot. And an AI bro squeezing every talking point into a think piece while pretending to have empathy doesn't change that. You just want an exit, and you want it fast.
> and if you don’t believe me, wait six months
This reads as a joke nowadays.
I often have one programming project I do myself, on the side, and recently I've been using coding agents. Their average ability is no doubt impressive for what they are. But they also make mistakes that not even a recent CS graduate with no experience would ever make (e.g. I asked the agent for its guess as to why a test was failing; it suggested it might be due to a race condition with an operation that is started after the failing assertion). As a lead, if someone on the team is capable of making such a mistake even once, then that person can't really code, regardless of their average performance (just as someone who sometimes lands a plane at the wrong airport, or even crashes without there being a catastrophic condition outside their control, can't really fly, regardless of their average performance). "This is more complicated than we thought and would take longer than we expected" is something you hear a lot, but "sorry, I got confused" is something you never hear. A report by Anthropic last week said, "Claude will work autonomously to solve whatever problem I give it. So it's important that the task verifier is nearly perfect, otherwise Claude will solve the wrong problem." Yeah, that's not something a team lead faces. I wish the agent could work like a team of programmers while I did my familiar job of project lead, but it doesn't.
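To make the race-condition example concrete, here is a minimal sketch (invented names, nothing from my actual project) of the shape of that failing test. The assertion fails while execution is still single-threaded, before the supposedly racing operation has even started, so the agent's explanation is logically impossible:

    import threading

    def test_counter():
        counter = {"n": 0}

        # The assertion fails here, while execution is still single-threaded...
        assert counter["n"] == 1, "counter should have been incremented"

        # ...and only afterwards would this background operation even begin,
        # so a race with this operation cannot explain the failure above.
        t = threading.Thread(target=lambda: counter.update(n=1))
        t.start()
        t.join()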
The models do some things well. I believe that programming is an interesting mix of inductive and deductive thinking (https://pron.github.io/posts/people-dont-write-programs), and the models have the inductive part down. They can certainly understand what a codebase does faster than I can. But their deductive reasoning, especially when it comes to the details, is severely lacking (e.g. I asked the agent to document my code. It very quickly grasped the design and even inferred some important invariants, but when it saw an `assert` in one subroutine it documented it as guarding a certain invariant. The intended invariant was correct, it just wasn't the one the assertion was guarding). So I still (have to) work as a programmer when working with coding assistants, even if in a different way.
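For illustration only (an invented toy, not the code in question), the documentation failure looks like this: the invariant the agent describes is real elsewhere in the codebase, but it is not what this particular assert checks:

    def take_smallest(items: list[int]) -> int:
        # Agent-written doc (plausible but wrong): "the assert below guards
        # the invariant that `items` is always kept sorted."
        # What the assert actually guards: that `items` is non-empty.
        assert items, "items must not be empty"
        return min(items)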
I've read about great successes at using coding agents in "serious" software, but what's common to those cases is that the people using the agents (Mitchell Hashimoto, antirez) are experts in the respective codebase. At the other end of the spectrum, people who aren't programmers can get some cool programs done, but I've yet to see anything produced in this way (by a non programmer) that I would call serious software.
I don't know what the future will bring, but at the moment, the craft isn't dead. When AI can really program, i.e. the experience is really like that of a team lead, I don't think that the death of programming would concern us, because once they get to that point, the agents will also likely be able to replace the team lead. And middle management. And the CTO, the CFO, and the CEO, and most of the users.
It gets hard to compare AI to humans. You can ask the AI to do things you would never ask a human to do, like retry 1000 times until it works, or assign 20 agents to the same problem with slightly different prompts. Or re-do the entire thing with different aesthetics.
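As a sketch of that difference, here is the kind of loop you can ask of an agent but never of a person; `generate` and `verify` are hypothetical stand-ins for an agent call and a test harness, not any real API:

    def solve_with_retries(generate, verify, max_attempts=1000):
        # Try up to max_attempts times, varying the seed/prompt each time,
        # and keep the first candidate that passes verification.
        for attempt in range(max_attempts):
            candidate = generate(seed=attempt)
            if verify(candidate):
                return candidate
        raise RuntimeError("no candidate passed verification")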
I would also point out that the author, and many AI enthusiasts, still make certain optimistic assumptions about the future role of "developer," insisting that the nature of the work will change, but that it will somehow, in large measure, remain. I doubt that. I could easily envision a future where the bulk of software development becomes something akin to googling--just typing the keywords you think are relevant until the black box gives you what you want. And we don't pay people to google, or at least, we don't pay them very much.
I could not be having a better time.
I liked coding! It was fun! But I mourned because I felt like I would never get out 1% of the ideas in my head. I was too slow, and working on shit in my free time just takes so much, is so hard, when there's so little fruitful reward at the end of a weekend.
But I can make incredible systems so fast now. This is the craft I wanted to be doing. I feel incredibly relieved, feel such an enormous weight lifted, that some of the little Inland Empire that lives purely in my head might actually make its way to the rest of the world.
Huge respect for all the sadness and mourning. Yes too to that. But I cannot begin to state how burdened and sad I felt, so unable to get the work done, and it's a total flip, with incredible raw excitement and possibility before me.
That said, software used to reward such obsessive deep following pursuit, such leaning into problems. And I am very worried, long term, what happens to the incredible culture of incredible people working really hard together to build amazing systems.
Oh come on. 95% of the folks were gluing together shitty React components and slathering them with Tailwind classes.
If you really buy all that, you'd be part of the investor class that crashed various video game companies' stocks upon seeing Google put together a rather lame visual stunt and have their AI say, and I quote, because the above-the-fold AI response I never asked for has never been more appropriate to consult…
"The landscape of AI video game generation is experiencing a rapid evolution in 2025-2026, shifting from AI-assisted asset creation to the generation of entire interactive, playable 3D environments from text or image prompts. Leading initiatives like Google DeepMind's Project Genie and Microsoft's Muse are pioneering "world models" that can create, simulate physics, and render games in real-time."
And then you look at what it actually is.
Suuuure you will, unwanted AI google search first response. Suuure you will.
And you still need plans.
Can you write a plan for a sturdy house, then verify that it meets the plan, that your nails went all the way in and in the right places?
You sure can.
Your product person, your directors, your clients might be able to do the same thing; it might look like a house, but it's a fire hazard, or, in the case of most LLM-generated code, a security one.
The problem is that we moved to scrum and agile, where your requirements are pantomime and post-it notes if you're lucky, interpretive dance if you aren't. Your job is figuring out how to turn that into something... and a big part of what YOU as an engineer do is tell other people "no, that's dumb" without hurting their feelings.
If AI coding is going to be successful, then some things need to change: requirements need to make a comeback. GOOD UI needs to make a comeback (your dark pattern around cancellation is now going to be at odds with an agent). Hiding the content behind a login or a paywall won't work any more because, again, end users have access too... the open web is back, and by force. If a person can get in, we have code that can get in now.
There is a LOT of work that needs to get done, more than ever. Stop looking back and start looking forward, because once you get past the hate and the hype, there is a ton of potential to right some of the ills of the last 20 years of tech.
It's not that impressive that Claude wrote a C compiler when GitHub has the code to a bunch of C compilers (some SOTA) just sitting there.
I'm using an LLM to write a compiler in my spare time (for fun) for a "new" language. It feels more like a magical search engine than a coding assistant. It's great for bouncing ideas off, and for searching the internet without the clutter of SEO-optimized sites and ads. It's definitely been useful, just not that useful for code.
Like, I have used some generated code in a very low stakes project (my own Quickshell components) and while it kind of worked, eventually I refactored it myself into 1/3 of the lines it produced and had to squash some bugs.
It's probably good enough for the people who were gluing React components together but it still isn't on the level where I'd put any code it produces into production anywhere I care about.
I don't mourn or miss anything. No more than the previous generation mourned going from assembly to high-level languages.
The reason why programming is so amazing is getting things done. Seeing my ideas have impact.
What's happening is that I'm getting much much faster and better at writing code. And my hands feel better because I don't type the code in anymore.
Things that were a huge pain before are nothing now.
I don't need to stay up at night writing code. I can think. Plan. Execute at a scale that was impossible before. Alone, I'm already delivering things that sat on the engineering roadmap as months' worth of effort.
I can think about abstractions, architecture, math, organizational constraints, product. Not about what some lame compiler thinks about my code.
And if someone far more junior than me can do my job, good: then we've empowered them and I've fallen behind. But that's not at all the case. The principals and faculty who are on the ball are astronomically more productive than juniors.
The "difficult", "opinionated", "overpaid" maniacs are virtually all gone. That's why such a reckless and delusional idea like "we'll just have agents plan, coordinate, and build complete applications and systems" is able to propagate.
The adults were escorted out of the building. Managements' hatred of real craftspeople is manifesting in the most delusional way yet. And this time, they're actually going to destroy their businesses.
I'm here for it. They're begging to get their market share eaten for breakfast.
And this surprises me, because I used to love writing code. Back in my early days I can remember thinking "I can't believe I get paid for this". But now that I'm here I have no desire to go back.
I, for one, welcome our new LLM overlords!
I for one think writing code is the rewarding part. You get to think through a problem and figure out why decision A is better than B. Learning about various domains and solving difficult problems is in itself a reward.
So just tell the LLM about what you're thinking about.
Why do you need to type out a for loop for the millionth time?
Sure, they are still needed for debugging and for sneering at all those juniors and non-programmers who will finally be able to materialise their fantasies, but there is no way back anymore, and like riding horses, you can still do it while owning a car.
You think it's just SWE? It will be accountants, customer service workers, factory workers, medical assistants, basically anyone who doesn't work with their hands directly. And they'll try to solve hands-on work soon too, and alienate those workers as well.
Look at who's in charge: do you think they're going to give us UBI? No, they're going to sign us up to fight wars to help them accumulate resources. Stop supporting this; they're going to make us so poor that young men will beg to fight in a war. It's the same playbook from the first half of the 20th century.
You think I'm paranoid, give it 5 years.
We are at all-time highs in the stock market, and they've laid off 400k SWEs in the last 16 months, while going on podcasts to tell us we're going to have more time to create and do what we love. We have to work to pay our bills. We don't want what's coming, but they're selling us a lie that this will solve all our problems. It will solve the ruling class's problems, and that will be it. You will have no bargaining chips and will be forced to take whatever morsels are given to you.
Your competency will be directly correlated 1:1 with the quantity and quality of tokens that you can afford or are given access to (or loaned??). We're literally at the beginning of a Black Mirror episode, before it gets dark.
People who grew up in the capitalist West have been brainwashed since they were 10 years old that they can be billionaires too. No, you can't: there are 2k-3k of them and 8 billion of us.
These automation tools are the ultimate weapon for the ruling class to strip all the value of your labor from you, and you're embracing that as a miracle. It's not; your life is in the process of being stripped of all meaning.
Good luck to everyone who agrees; we're going to need it. Anyone supporting these companies or helping enhance these models' capabilities: you're a class traitor and a soon-to-be slave.
Required reading: https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...
Do what isn't replaceable. You're being told literally everything is replaceable. Note who's telling you that and follow the money.
I feel bad for this essayist, but can't really spare more than a moment to care about his grief. I got stuff to do, and I am up and doing. If he was in any way competing with the stuff I do? One less adversary.
I would rather bring him into community and enjoy us all creating together… but he's acting against those interests and he's doomering and I have no more time for that.
The fact of the matter is, being able to churn out bash one-liners was objectively worth $100k/year, and now it just isn't anymore. Knowing the C++ STL inside out was also worth $200k/year; now it has very questionable utility.
A lot of livelihoods are getting shaken up as programmers get retroactively turned into the equivalent of librarians, whose job is to mechanically index and fetch cognitive assets to and from a digital archive-brain.
also:
> We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM.
no we won't lol wtf
but also: we will probably still have to do that anyways, but the LLM will help us and hopefully make it take less time
Right now I can think of very few white-collar jobs that I would feel comfortable training 4+ years for (let alone spending money or taking on debt to do so). For almost any 4-year degree you could enroll in today, it is far from guaranteed that it will still have value in four years. That has basically never been true before, even in tech. Blue-collar jobs are clearly safer, but I wouldn't say safe. Robotics is moving fast too.
I really can't imagine the social effects of this reality being positive, absent massive and unprecedented redistribution of the wealth that the productivity of AI enables.