There's way too much money on this hype train now, though, to point out that the emperor isn't wearing any clothes, and way too many people who always did think that "boilerplate spew" (the one thing AI really does well) is a valid form of programming rather than a shortcut to tech debt.
> The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise
How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.
The author tries to answer this:
> That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.
but in a world where writing code by hand (the "struggle") is "artisanal" and "outdated", this process being non-optional (which I agree with) is contradictory.
How juniors and fresh grads do that with an AI designed to hand them whatever answer they need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.
Socrates warned (as recorded by Plato) about what was being lost as philosophy became written rather than oral... and he was right.
We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.
He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.
It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly the skill development will happen wherever the hard work is.
Dr. Steven Skultety & Dr. Gad Saad discussed this in a recent video / podcast.
This link is time stamped to the topic https://youtu.be/7mcQf9E3YRo?t=1058
Quoting my boy Max Stirner who also fking hated these guys
“This war is opened by Socrates, and not until the dying day of the old world does it end in peace.” - The Ego and Its Own, Max Stirner
If I am free as “rational I,” then the rational in me, or reason, is free; and this freedom of reason, or freedom of the thought, was the ideal of the Christian world from of old. They wanted to make thinking – and, as aforesaid, faith is also thinking, as thinking is faith – free; the thinkers, the believers as well as the rational, were to be free; for the rest freedom was impossible. But the freedom of thinkers is the “freedom of the children of God,” and at the same time the most merciless – hierarchy or dominion of the thought; for I succumb to the thought. If thoughts are free, I am their slave; I have no power over them, and am dominated by them. But I want to have the thought, want to be full of thoughts, but at the same time I want to be thoughtless, and, instead of freedom of thought, I preserve for myself thoughtlessness. If the point is to have myself understood and to make communications, then assuredly I can make use only of human means, which are at my command because I am at the same time man. And really I have thoughts only as man; as I, I am at the same time thoughtless. He who cannot get rid of a thought is so far only man, is a thrall of language, this human institution, this treasury of human thoughts. Language or “the word” tyrannizes hardest over us, because it brings up against us a whole army of fixed ideas. Just observe yourself in the act of reflection, right now, and you will find how you make progress only by becoming thoughtless and speechless every moment. You are not thoughtless and speechless merely in (say) sleep, but even in the deepest reflection; yes, precisely then most so. And only by this thoughtlessness, this unrecognized “freedom of thought” or freedom from the thought, are you your own. Only from it do you arrive at putting language to use as your property. If thinking is not my thinking, it is merely a spun-out thought; it is slave work, or the work of a “servant obeying at the word.” For not a thought, but I, am the beginning for my thinking, and therefore I am its goal too, even as its whole course is only a course of my self-enjoyment; for absolute or free thinking, on the other hand, thinking itself is the beginning, and it plagues itself with propounding this beginning as the extremest “abstraction” (such as being). This very abstraction, or this thought, is then spun out further
- The Ego and Its Own, Max Stirner
With any new technology, subsequent drudgery depends on the technology, its concomitant economics, and the imagination of the people using it.
I can live a happy life without struggling for basic needs and without playing golf all day long. If you strip off every obligation from life, then you exist, not live.
Facing challenges, overcoming obstacles, and being with friends and family are what make me happy. When you're rich, most people only care about your money, not the person you are. And I think that's exactly what a happy life is about.
I can imagine I could be perfectly happy with a life full of challenges of that kind, instead of being forced to work at scheduled times (which often means I spend less time with my son than I would like), including days I don't feel like it, and including boring tasks (I love my job, but like almost every job, it also has its paperwork, pointless meetings, etc.), all while knowing I depend on that work to live.
In short, I think we all do need the challenge, the struggle, the successes and the failures, otherwise life would just be boring and pointless. But I don't think we (or at least I) need the obligation component and the high stakes.
What you mention about the rich attracting people focused on money rings true, but it would be moot if AI led us all to lead lives more similar to the rich, which was the point here. (Of course, there's also the issue of whether there is widespread or unequal access to AI, but that's another story...).
That is a bold and frankly unsupportable claim.
Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.
The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.
BUT, BUT! I keep the index.
My favourite quote from Donald Rumsfeld (a very bad human being, but this is still good)
> Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
What I optimise for is to have as many "known unknowns" as possible. I know a concept, process, or tool exists, but I don't understand it or know how to use it. But because I know it exists, I won't reinvent it from scratch when I need it.
Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.
Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)
And we don't need words to think; cognitive problem solving and language processing are separate processes [1]
We will shift the problems we need to think about. Same as always; humanity isn't busy solving how to build stone pyramids anymore. Did we stop thinking? No, we just thought about a different to-do list.
[1] https://www.scientificamerican.com/article/you-dont-need-wor...
Software, on the other hand, is extremely formal: either it works perfectly as intended, it works poorly and keeps breaking in various edge cases, or it just doesn't work (the last two are variants of the same dysfunction; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe one in a trillion, sometimes.
Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual shift in thinking over a few decades or generations.
LLMs are getting impressive, but anybody claiming there is no massive long-term harm to reaching what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case that works great for them long term but definitely can't be transferred to the whole industry, like a one-person indie game dev.
Because the easier path seemingly delivers what's expected of them. Sigh; they may even be required to take the faster path.
I've seen many juniors unable to walk that necessary path before LLMs were a thing.
It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.
"Good judgement comes from experience. Experience comes from bad judgement."
AI just shortens the cycle without needing to type out syntax, so you get even more iterations, faster, and learn the lessons more quickly.
Some do not learn from that experience. They were never going to learn without AI either.
Writing syntax is still an important part of the experience. It is valuable because it requires you to spend time immersed in the nuts and bolts that hold software together. I'd compare it to cooking: if you have an assistant or a machine do everything and you never actually touch a knife or stir a pot, you'll lose your touch. But there is also something valuable about covering more ground and the additional experience that brings.
Well this is true, but that doesn't mean that there isn't any other way to acquire this knowledge. Until now, this way of gaining deeper understanding was simply the most practical one, since you needed to write lots of code when starting out as a software engineer.
But it's just as possible to gain knowledge about useful abstractions and clean code by using AI to do the work. You'll find out after a while which codebases get you stuck and which abstractions give your AI leverage because they take fewer tokens to read and extend.
Studying a senior drafter's "red lines": what they changed in the initial drawing and why, RFI responses, etc. Reverse engineering good work. Failed design studies, etc.
SWE equivalents: PRs, code review, studying high quality codebases (guess what: LLMs are amazing at helping here), pair programming (learning why what the LLM did was wrong, how to improve it, etc), customer support, debugging prod incidents, studying post mortems etc
We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end. We give them specific tasks and studies that develop their domain understanding and taste, actively support and mentor them, and expect them to drive some LLMs on the side to solve simple issues that still need human eyes on it.
Is that generally the case though? I'm about two years into my first job in the industry and that's exactly my experience, and certainly frustrating...
I don't understand all this fear, projected as if people won't have any agency over their own learning just because LLMs make it easier to do certain things. I don't think it's contradictory at all. Half the people here will never have to wrangle the bullshit I dealt with 20 years ago, and I'm sure when I was dealing with it there was another 20 years of bullshit before me lol.
If you vibe code your app with no regard for the underlying code, you will pay the price for it at some point in the future; anybody worth their salt will slow down enough to figure it out the "artisanal" way.
I think this extends to other parts of life, too. I still remember fondly playing a game over and over again back in high school, when I did not have the Internet and had to borrow CDs from my friends. But once I went to university and had access to pretty much every game freely on the intranet, I rarely did that anymore. That's why I always think an abundance of X may not be the best option for me. That probably includes money, too.
Engineers sucked then as much as they suck now
Even in a world where there's a lot of AI-generated code, there can still be people who get enough exposure to doing hard things. Certainly at this point in time, when AI can't really do all those hard things anyway, but even later, once it can.
you are thinking too myopically.
We have people who can still do maths well after the introduction of the calculator. We have people who can still spell after the introduction of spell check.
Juniors only need to train without using AI to gain the skills needed; that's called education. If they choose to rely solely on AI and gimp their own education, that's on them.
I assume by "do maths" you mean doing simple calculations, like adding a bunch of small numbers, in one's head. That's because in many situations it's more convenient to do so, than using a calculator. So the skill is preserved / practiced, because a calculator is too cumbersome to use. The skills of most people settle at the equilibrium where it takes the same effort to take out the calculator and focus on typing, as it would to strain the brain doing it without a calculator.
> We have people who can still spell after the introduction of spell check.
When using spell check to fix your document, you automatically learn to spell. Your skills improve by using the tool. A better analogy to AI would be an email client with a "Fix all and send" button, where you never look at the output of the spell checker.
Both require manual "labor" which leads to learning.
Also worth noting: calculators merely solve intermediary steps, while LLMs are increasingly designed to do full-blown work in one shot. Longer context, deeper thinking, agentic loops.
In practice, what this means is that you can read some subject many times, but you would still struggle to reproduce the content by yourself. That is why, when learning, it is not sufficient to just read the material several times.
Arithmetic is a very, very small subset of math.
I started getting that "I'm reading another AI-written blog post" feeling around 1/3 of the way through, but I don't consider myself super calibrated on this.
Pangram seems pretty confident it's AI (https://www.pangram.com/history/e9f6eb77-86f9-46d0-a6c1-e57c...). But I know these tools aren't perfect. I'd love to hear from the author what their process was in writing this piece!
Related question (I'm trying to work this out for myself):
If you believe using AI to write an email or blog post for you isn't okay, but using AI to write code for you is... what's the difference?
Right now my instinct is something like:
- Code can be verifiably correct (especially w/ good tests) so it's less of a purely-creative act than writing.
- But always, always double-check the tests!
- I still wouldn't submit a PR where I can't vouch for every line of code.
- AI-written documentation and specs are mostly still bad and should be looked down upon. But mostly because the quality, at least today, is poor. (Lots of duplication, lack of a clear understanding of the reader's intent and needs, no thoughtful curation, etc.)
- Be psychologically ready to update these priors as models change.
I'd love to hear from anyone who's thought more about this.
The one thing I can tell you is that Pangram is confidently wrong in this instance. And I now worry about how many people may have relied on such assessments blindly in consequential places (school essays?). Which ties back to the thesis of my piece: where do you rely on AI, and where do you rely on your own intelligence?
On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read. My school’s librarian wrote ambiguously “write this in your own words”. I asked her what she had meant by that. She had thought I’d copied it from somewhere even though it was all my own words. I went on to become the school topper in my final year for English (and one spot shy of being the school topper for Computer Science).
(We obviously live in a more nuanced world than most social media interactions might make you think :P)
> On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read.
My first experience with plagiarism was in first grade, when we were told to write a book report about a subject during our library time. I diligently took my book on the musk ox and copied three pages word-for-word into my notebook as my report. I can't remember when or how we learned this wasn't "right", but I still think back on that and laugh.
The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
This is a list of six things, disguised as an actual paragraph. Of sentence fragments disguised as actual sentences. Etc. Either you wrote this yourself and the AI didn't tell you "this is repetitive and list-y", or... "The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf."
"The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence."
"In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output."
"The ability to explain why something works, not just that it appears to work."
"That process is not optional. It is how engineers acquire and elevate their competency."
"The support system may make you look functional, but it does not make you capable."
"The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive."
"They will need interview loops that test reasoning, not just polished answers."
"The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output."
^ Which of these are your thoughts? They all look like slop to me. Also, the entire framing around "judgment" and "taste" is what LLMs love to parrot about the topic.
There are fair arguments in the post, but I totally agree that "writing is thinking", and I also hold myself to "if you didn't bother to write it, why would I bother to read it?"
Edit: 9 babies → 9 mothers
It's "9 women can't make a baby in one month".
It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!
On paper your CPU can execute at least one instruction per core per cycle, but that's throughput, not latency: if you actually have only one instruction to run, it takes several cycles.
Also, you can get a baby tonight if you steal one from the maternity ward.
The real question is: how do LLMs turn the Mythical Man-Month on its head? If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that 9 women can't make a baby in 1 month, because they're AI, not human, and communicate in a different way?
The pitfall of AI coding is that every shiny tangent that used to be a distraction is now a rabbit hole to be leaped into for an afternoon, if you feel like it. It's like that ancient Chinese curse: may you live in interesting times. Everybody can recreate an MVP of Twitter in a weekend now, when previously that was just a claim a certain type of person made.
> The nearest related Chinese expression translates as "Better to be a dog in times of tranquility than a human in times of chaos."
https://en.wikipedia.org/wiki/May_you_live_in_interesting_ti...
There's a good point in here along the lines of "if you need X in a month, and someone else has something that's 90% of what you want X to be, can you buy it from them before starting any crazy internal death marches instead?"
> The real question is: how do LLMs turn the Mythical Man-Month on its head? If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that 9 women can't make a baby in 1 month, because they're AI, not human, and communicate in a different way?
This is quite possibly only a one-time shift in the baseline, though. Give it a few years and "the fastest way an LLM tool can do it" will be what gets tossed out as an estimate, and stakeholders will still want you to do it in a tenth the time...
As far as I know, all women everywhere start not pregnant
we learn by doing
If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your code abilities will atrophy unless exercised elsewhere.
[1] Depending on the topic and the level of knowledge of it.
2. Don't assume you're the next Mozart. Someone is; statistically, it's not you.
Take juggling, for example - something that was on the HN homepage last week. You can learn everything you need to know about juggling through a post or a book or an educational video. But can you juggle after all that book learning? Not at all: to be able to juggle, one has to spend time practicing, and no amount of reading can meaningfully compress that process.
Muscle memory required for juggling is not a 1:1 correlation to experience, but I feel like it's close enough to it.
I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.
Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".
The problem is that it was coined so early that we are way past the aphorism stage now.
If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.
https://publichealthpolicyjournal.com/mit-study-finds-artifi...
I think it means something like we're trapped in the constraints of the medium. Tweets say more about the environment of twitter than whatever message happened to be sent.
but I think I'm off on that, I'll look this person up and find out!
Firstly, Twitter has an upper bound on the complexity of thoughts it can carry due to its character limit (historically 140, now somewhat longer but still too short).
Secondly, a biased or partial platform constrains and filters the messages that are allowed to be carried on it. This was Chomsky's basic observation in Manufacturing Consent where he discussed his propaganda model and the four "filters" in front of the mass media.
Finally, social media has turned "show business [into] an ordinary daily way of survival. It's called role-playing." [0] The content and messages disseminated by online personas and influencers are not authentic; they do not even originate from a real person, but a "hyperreal" identity (to take language from Baudrillard) [0]:
> You are just an image on the air. When you don't have a physical body, you're a _discarnate being_ [...] and this has been one of the big effects of the electric age. It has deprived people of their public identity.
Emphasis mine. Influencers have been sepia-tinted by the profit orientation of the medium, and their messages do not correspond to a position authentically held. You must now look and act a certain way to appease the algorithm, and by extension the audience.

If nothing else, one should at least recognize that people primarily identify through audiovisual media now, when historically, due to lack of bandwidth, lack of computing and technology, etc., it was far more common for one to represent oneself through literate media - even as recently as IRC. You can come to your own conclusions on the relative merits and differences between textual vs. audiovisual media; I will not waffle on about this at length here.
The medium itself is reshaping the ways people represent, think about, and negotiate their own self-concept and identity. This goes beyond whatever banal tweets (messages) about what McSandwich™ your favourite influencer ate for lunch, and it's this phenomenon that is important and worth examining - not the sandwich.
[0] Marshall McLuhan in Conversation with Mike McManus, 1977. https://www.tvo.org/transcript/155847
For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.
McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings rather than its content. For example, the airplane is "used for" speeding up travel over long distances, but the message of the medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."
You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via Youtube became the norm, web comics stopped being a popular medium for comedy online. It's not that the web comics faded because they got worse; it's that they faded into a niche format because people didn't want to communicate via static images anymore. Or how, once short-form videos on TikTok got big, you saw other platforms shift to copy the paradigm. McLuhan's point is that it's not just the content of those short-form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.
If you want to get a pretty good understanding of it, just read the first chapter of his book Understanding Media. It's short and relatively straightforward.
To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.
The problem with LLMs is that they avoid the ground entirely, making them entirely ignorant to meaning. The only intention an LLM has is to preserve the familiarity of expression.
So yes, this kind of AI will not accomplish any epistemology; unless of course, it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.
I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.
What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.
What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get mold on the fly?
I'm reminded immediately of the Enochian language which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified:
Every Letter signifieth the member of the substance whereof it speaketh.
Every word signifieth the quiddity of the substance.
- John Dee, "A true & faithful relation of what passed for many yeers between Dr. John Dee ... and some spirits," 1659 [0].
The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused, people use the same words to signify different referents entirely, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more; sometimes the word "man" is used to refer to a "woman", etc.).

LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.
Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).
[0] https://archive.org/details/truefaithfulrela00deej/page/92/m...
Non-determinism is what conveniently fills the gap of having no spec.
In fact, turn temperature to 0 and it will be virtually deterministic. That just exacerbates the problem that LLMs, as you rightly point out, have no spec.
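For anyone wondering what that looks like in practice, here's a minimal sketch assuming an OpenAI-style Python client (the model name is just a placeholder): temperature=0 means greedy decoding, so repeated runs of the same prompt come out nearly identical, though providers don't promise bit-for-bit determinism.

```python
# Minimal sketch, assuming the OpenAI Python client and an API key in the env.
# temperature=0 picks the highest-probability token at each step (greedy
# decoding), so the same prompt gives (nearly) the same output on every run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",              # placeholder model name
    temperature=0,                    # virtually deterministic, not guaranteed
    messages=[{"role": "user",
               "content": "Write a function that parses ISO 8601 dates."}],
)
print(response.choices[0].message.content)
```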
But it seems we are heading there. For simple stuff, if I write a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.
So either way, this is what I focus my thinking on right now, something that was always important and is even more so with AI: crystal-clear language describing what the program should do and how.
That requires enough thinking effort.
What makes you think it will work for you?
Unless you review that code carefully, and then we're back to the point about it not saving you any cognitive overhead.
The “with extra steps” is doing a lot of work in that sentence.
That "almost" is doing a lot of heavy lifting here. This is just "make no mistakes" "you're holding it wrong" magical thinking.
In every project, there is always a gap between what you think you want and what you actually need. Part of the build process is working that out. You can't write better specs to solve this, because you don't know what it is yet.
On top of that, you introduce a _second_ gap of pulling a lever and seeing if you get a sip of juice or an electric shock lol. You can't really spec your way out of that one, either, because you're using a non-deterministic process.
So right now, humans are for sure more reliable. But it is changing. There are things where I already trust an LLM more than a random human, or even certain humans I know.
Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI coder) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.
Software engineering is a lot more social and communication-heavy than people think. Part of my job is to _not_ take specs at face value. You learn real quick that what people say they need and what they actually need are often miles apart. That's not arrogance, that's just how humans work.
A good product manager understands the biz needs and the consumer market and I know how to build stuff and what's worked in the past. We figure out what to build together. AIs don't think and can't do this in any effective way.
Also, if you fuck up badly enough that you make your engineers throw out code, you're gonna get fired lol
A human coder can be seen as an abstraction level because they will talk to the PM in product terms, not in code. And the PM will be reviewing the product. What makes this work is the underlying contract that only a small number of iterations will be needed before the product is done, and that each later one should require less of the PM's time.
We've already established that using an LLM tool that way does not work. You can spend a whole month doing back and forth, never looking at the code, and still not have something that can be made to work. And as soon as you look at the code, you've breached the abstraction layer yourself.
A lot of people are using them as such too: the people talking about "my fleets of agents working on 4 different projects" aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 get things consistently backwards at the margins, mess up, make bugs; we wouldn't accept a compiler that did any of this at this scale or level lol
So far, my conclusion is that while LLMs can be a productivity boost, you have to direct them carefully. They don't really care about friction and bad abstractions in your codebase and will happily keep piling cards on top of the crooked house of cards they've generated.
Just like before AI, you need a cycle of building and refactoring running on repeat with careful reviews. Otherwise you will end up with something that even an LLM will have a hard time working in.
There are skills we're losing that are probably OK to lose (e.g. spatial memory and reasoning vs. GPS, mental arithmetic vs. calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still-growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.
I'd argue these are not at all OK to lose. You live in an earthquake zone? You had better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check that a number is roughly where it should be? You should be able to do that in your head.
There might be better examples that support your point more effectively, e.g. cursive writing.
The arguments you make ≤ the values you actually hold ≤ the actions you take in support of those values.
I'm only interested in any such argument to the extent to which you've personally put it into practice. Otherwise, you're living proof of the argument's weakness. (To be fair, it's extremely hard to be internally consistent on this stuff! We all want better for ourselves than we have time and energy for. But that's my point: your fully subconscious emotional calculus will often undercut at least some of your loftier aspirations. Skills that don't matter anymore invariably atrophy due to the opportunity cost of keeping them honed.)
The ones I use certainly are. And with a bit of training you can reason and predict how they will respond to a given input with a large degree of accuracy without being familiar with how the particular compiler under question was implemented.
Not so with the AI tools. At least with the ones I use anyway.
Nevermind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.
Ancient historical reference: https://martiansoftware.com/lab/rundoc
After 5 hours or so of doing this planning, I'm EXHAUSTED. I never was exhausted in this manner from programming alone. Am I learning something new? Feels like management. :)
The strange sorts of errors and reasoning issues LLMs have also require a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…
For instance, in the old world, if you wanted to change an interface, you might have to edit 5 or 6 files to add your new function to the implementations. This is pretty routine and you don't need to concentrate that much if you're used to it, so you can spend that low-effort time thinking about the bigger picture.
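To make that concrete, here's a toy sketch (hypothetical names, with Python's abc module standing in for an interface): adding one method to the interface forces the same mechanical edit in every implementation, which in a real codebase would live in separate files.

```python
# Toy illustration with hypothetical names: adding `refund` to the interface
# means every implementation below (in real life, each in its own file) needs
# a routine, low-effort edit.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

    @abstractmethod
    def refund(self, amount_cents: int) -> bool: ...  # the newly added method

class StripeGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        return True

    def refund(self, amount_cents: int) -> bool:      # routine edit, file 1
        return True

class InvoiceGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        return True

    def refund(self, amount_cents: int) -> bool:      # routine edit, file 2
        return True

if __name__ == "__main__":
    print(StripeGateway().refund(500))  # True
```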
I figured out some patterns in the way it behaves and could put more guard-rails in place so they hopefully won't bite me in the future (spelled out decision trees with specific triggers, standing orders, etc.), but some I can't categorize right now.
You can't figure this out instantly except by reviewing everything the LLM produces, which I am not doing. So the round-trip time is pretty long, but I can trace it back to the intent now because I commit every architecture decision as an ADR, which is where I pour most of my energy. These are part of the repo.
Using these ADRs helped a lot because most of the assumptions of the LLM get surfaced early on, and you restrict the implementation leeway.
But maybe pacing/procrastination might be relief valves?
On the other hand I have been in debates where someone asks ChatGPT to draft a list of possible approaches and pros and cons - and after reading through the list we were all in alignment on the best approach.
The latter I think is a constructive use of AI to elevate thinking, while the former has me thinking it may be time for a career change.
What? I've heard many takes on what AI lacks, but never this one. We had ChatGPT being able to solve an Erdős problem on its own yesterday [0]; how could you explain that if it cannot do logic?
WRT logic, there are multiple occasions of LLMs answering trivial logic puzzles incorrectly. Of course, with each occasion becoming public they are added to training data and overfitted on, but if you embed them in a more subtle way LLMs will fail again.
> “This one is a bit different because people did look at it, and the humans that looked at it just collectively made a slight wrong turn at move one,” says Terence Tao, a mathematician at the University of California, Los Angeles, who has become a prominent scorekeeper for AI’s push into his field. “What’s beginning to emerge is that the problem was maybe easier than expected, and it was like there was some kind of mental block.”
> “There was kind of a standard sequence of moves that everyone who worked on the problem previously started by doing,” Tao says. The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.
> “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.
> More importantly, they already see other potential applications of the AI’s cognitive leap. “We have discovered a new way to think about large numbers and their anatomy,” Tao says. “It’s a nice achievement. I think the jury is still out on the long-term significance.”
You can debate whether the LLM used logic or not. I don't think you can debate that the LLM has in this case elevated human thinking, by leading us to a solution that had eluded world-class mathematicians for 60 years. And a new way to think "about large numbers and their anatomy".
And if it works for Terence Tao and Erdős problems, then I'm certainly not above using AI to help brainstorm solutions for my little app at work.
There are multiple occasions of me answering trivial logic puzzles incorrectly. Is that enough for you to deduce that I "lack" logic?
Humans make mistakes all the time, and indeed we say "To err is human"; why should we expect AI not to?
Or without the ability to use a library from GitHub / their package manager.
It doesn't feel THAT much different to me.
"Engineer" as a term might drift. There are "web developers" that can only use webflow / wordpress.
"Couldn't", or "wouldn't"? Early in my career I'd be happy doing anything basically, not much I "couldn't" do, given enough time. But nowadays, there is a long list of things I wouldn't do, even if I know I could, just because it's not fun.
This is not a binary.
Engineers are accredited and in some countries even come with a title.
This is a pet peeve of mine, so while I understand what you mean, I will challenge you to come up with a strict definition that excludes software engineering!
And since I've had this discussion before, I'll pre-emptively hazard a guess that the argument boils down to "rigor", and point out that a) economic feasibility is a key part of engineering, b) the level of rigor applied to any project is a function of economics, and c) the economics of software projects is a very wide range.
Put another way, statistically most devs work on projects where the blast radius of failure is some minor inconvenience to like, 5 users. We really don't need rigor there, so I can see where you're coming from. But on the other extreme like aviation software, an appropriately extreme level of rigor is applied.
"Structured, mature, legally enforced, physically grounded standards based approach to the construction of repeatable, reliable, verifiable, artifacts under stable (to the degree that matters) external constraints".
Some niche software development (e.g. NASA/JPL coding projects with special rules, practices, MISRA etc) can look like that.
99.9% of the time though, software "engineering" is an ad hoc, mix-and-match, semi-random, always-changing-requirements-and-environments, half-art half-guess process, carried out by unlicensed practitioners, and only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.
Which is to say, engineer the job title is distinct from engineering the activity is distinct from engineer the accreditation.
And they weren't. They were craftsmen and tradesmen, e.g. stonemasons.
Also, software engineering is ahead of a few other disciplines of engineering on rigor in some dimensions. I feel like most software engineers don't understand how good software tools are at change management compared to pretty much anything else. (and that having good change management is a baseline, as opposed to a decent chance of not having any at all).
The definition I always saw used was this one, I think:
> Engineering is the profession in which a knowledge of the mathematical and natural sciences gained by study, experience, and practice is applied with judgment to develop ways to utilize, economically, the materials and forces of nature for the benefit of mankind.
This sounds like it should exclude software design and development. Except it doesn't need to, and it's not really useful to exclude it simply because the definition isn't broad enough. The definition isn't engineering. The definition is trying to describe and encapsulate the reality of engineering. Nuclear and modern electrical engineers frequently never create anything physical in their careers whatsoever. Nuclear engineers manage power generation at facilities that others designed and built, while electrical engineers are frequently just dealing with signal processing. They are not less rigorous in their methodology.
The reality is that engineering is the methodical application of constraints to solve a problem. And it is the methodology that is the valuable aspect. The knowledge is necessary for each discipline, but it is itself fundamentally a prerequisite. There is a reason engineering is a single school of many disciplines.
Meanwhile, the reason that software engineering looks like half-art and half-guess has a lot more to do with software as a non-theoretical field of study only being about 60 years old in practical terms. The fundamental works of the field, like The Art of Computer Programming, haven't even been finished yet.
Whatever happens to software development and operational systems administration in the next 50 years, both roles would almost certainly benefit society by becoming actual professions. Their responsibility to society as a whole has been allowed to be understated, and we're well past the days when a computer bug causing the kinds of deaths and damages we'd see from a civil-engineering failure or automotive design flaw sounds unreasonable. Indeed, that actually sounds fortunate given some of the software catastrophes that have occurred.
That's the subject, the only word that is NOT doing any work there (since both regular and software engineering produce artifacts).
Words that do the heavy work in that phrase are:
structured, mature, legally enforced, standards-based approach - for repeatable, reliable, verifiable - artifacts - under stable external constraints
Software can sometimes appear to touch those.
E.g. there are "standards", like HTML or like ARIA, so it's "standards-based" too! But those standards are loosely enforced, usually not mandated, loosely defined, and ad-hoc implemented with all kinds of diverting.
Or e.g. software can sometimes be repeatable, e.g. reproducible builds (to touch upon one aspect). But that's again left to the implementor, and seldom followed (almost never for most software work, only in niche industries).
In general, software is not engineering (in the strict sense) because it's anything goes, all the above conditions can or cannot be handled (in any random set), the final work is a moving target, and verification is fuzzy, if it even happens.
>The reality is that engineering is the methodical application of constraints to solve a problem.
In that case, following specific constraints to solve a math problem, or to draw an artwork (e.g. using perspective), is also "engineering". That's too loose a term to be of any use.
Even accepting that, the degree of the "methodical" in software "engineering" versus e.g. civil or aviation engineering is orders of magnitude less.
Other than that part (most countries in the world do not have regulations or licensing requirements for most engineering disciplines) I would agree. But I would also point out the set of software projects that meet that definition is much larger than those you listed.
As mentioned, it's a matter of economics, so the rigor scales with the pain it can cause if something goes wrong. Hence any software with a high blast radius is built with that level of rigor, probably even more. There are entire categories (not just individual examples!) of such projects. An obvious category is platforms that run or build other applications: OS kernels, databases, compilers, frameworks, cloud platforms (yes, those 9's are an industry standard), and so on.
Then there are those regulated ones like automotive, aviation and medical software. There is even a case to be made for critical financial software.
Another less obvious category is any large software services company that has on-call engineers, because the high cost of engineers quickly climbs and quality processes quickly get installed, which basically amount to the criteria you listed.
That internal LoB app with 5 users? That level of rigor simply does not make economic sense. Which is probably what you mean by:
> 99.9% of the time though, software "engineering" is an ad hoc, mix-and-match, semi-random, always-changing-requirements-and-environments, half-art half-guess process, carried out by unlicensed practitioners, and only regulated in some minor aspects of its operation (like GDPR, or accessibility requirements), if that.
To that I'll say, as someone whose first site outage as an intern was an actual industrial manufacturing factory (not an AbstractFactoryFactory!) a surprisingly large fraction of projects in other engineering disciplines match that description ;-)
Well, then in those countries those disciplines aren't treated as engineering.
Any country worth its name and with a rule of law would have regulations and licensing requirements for electricians, civil engineers, structural engineers, aviation engineers, chemical engineers, etc.
I mean, they had building rules at the time of Babylon:
https://talk.build/construct-iq/ancient-babylon-and-the-firs...
And even in medieval times, working in certain fields that we'd call engineering today, was legally restricted to specific guilds.
Of course I want the best of the best who are top notch and rigorously trained working on mission critical software.
Even most of the projects I personally have worked on simply did not need "engineering" as such, but other projects where uptime was critical and the cost of failure was high, there was a much higher level of rigor.
It wasn't that much different from SWE - mostly looking up catalogs, connecting certain pre-made pieces together with custom parts and lots of testing of the final plan to make sure there are no collisions and every movement is constrained properly.
95% of the time no load or sizing calculations were necessary - we just oversized everything based on tacit knowledge (the greybeards reviewing the plans) since these machines were not mass produced and choosing somewhat bigger parts was not expensive given that these machines would operate and produce value 24/7 for years.
(I hope the analogy to software engineering is visible!)
What I'm saying is that the level of "engineering rigor" heavily depends on the field where engineers are operating within. Even certain SWE fields (healthcare, finance, aviation etc.) have more regulation and require more rigor than others.
Where I work, there are plenty of non licensed engineers, but we pay a 3rd party agency for regulatory approval. The people who work for that agency are licensed engineers. Their expertise is knowing the regulations backwards and forwards.
Here's what I think is happening within industry. More and more of the work done by people with engineering job titles consists of organizing and arranging things, fitting things together, troubleshooting, dealing with vendors, etc. The reason is the complexity of products. As the number of "things" in a product increases as O(n), the number of relationships increases as O(n^2), so the majority of the work has to do with relationships. A small fraction of engineers engages in traditional quantitative engineering. In my observation, the average age of those people is around 60, with a few in their 70s.
Will you have AI at the cost of a Slack subscription? At the cost of a teammate? Or will it not be available at all, so you'll have to hire Anthropic workers with AI access?
In a way, this is less of a cost issue than the fact that some/many engineers do not seem to be willing or able to host things themselves anymore and will happily outsource every part of their stack to managed services, be it CDN, hosting, databases, etc. I don't know why that's not more alarming than the LLMs.
128 GB of unified memory, an Nvidia chip and an ARM CPU for just around 3k€ net. They easily push ~400 input and ~100 output tokens per second per device on, say, gpt-oss-120b. With two devices in a cluster, that's enough performance for >20 concurrent RAG users or >3 "AI-augmented" developers.
And they don't even pull that much power.
Lots of people use firebase, supabase etc.
Many people's jobs are centered around using Salesforce
It all makes me uncomfortable - I want to be able to work without internet. But it's getting more difficult to do.
I’m sure you can see the difference between a garbage collector and a nondeterministic slop generator
But it feels good to equivocate, so here we are.
Ollama/llamafile/vllm/llama.cpp are free. Qwen/kimi/deepseek are free. Pi.dev/OpenCode are free. If you're using a SaaS AI subscription that's fine, but that's hardly the only option.
is doing a lot of work to avoid engaging with the actual argument.
1) you use it to help write code that you still “own” and fully understand.
2) you use it as an abstraction layer to write and maintain the code for you. The code becomes a compile target in a sense. You would feel like it’s someone else’s code if you were asked to make changes without AI.
I think 2) is fine for things like prototypes, examples, references. Things that are short lived. Where the quality of the code or your understanding of it doesn’t matter.
I think people get into trouble when they fool themselves and others by using 2) for work that requires 1). Because it’s quicker and easier. But it’s a lie. They’re mortgaging the codebase. And I think the atrophy sets in when people do this.
1) Day job 2) Side project
It would be unprofessional to treat the first like the second.
Nobody is going to pay you for your artisanally crafted CSS code or whatever you were coding manually until last year. If you can do it faster/better than the AI, good for you. But it's not a contest and possibly your days of maintaining that lead might be numbered.
In the end, as long as the UI is styled alright, nobody will care that you pieced it together manually for hours and hours. More importantly, people are not going to pay you more for it than they'll pay the next guy getting a similar result in an hour of prompting AIs. They'll want you to move faster and do more.
That's what better tools do, they just cause people to expect more, better, and faster. And their expectations expand until they match the limitations of the new tools.
People seem to have this mental block where somehow the amount of stuff we ship is going to be a constant in the universe and we'll all be out of work and descend into despair. That's something that, in the history of our species inventing tools, has never really happened. I don't see any reason why AI would change that. Sure, there's a lot more we can do now. And it's a lot cheaper now. So we can now have a bit more of our proverbial cake and eat it too. People will push this as far as they can and will want more and more of the good stuff.
And they'll need help getting all that stuff built. One way is a painful process of slowly prompting things together. Most people lack the skills to do that, don't know what to ask for and are in any case busy doing other things. That job, building stuff using tools, is still a job that needs doing. I'm quite busy currently doing that.
Anyway, there are a lot of people producing mediocre software (with or without AI). That's pretty much a constant. I remember people using Visual Basic. Exact same thing. The problem isn't the tools but the people using them. There's a learning curve and most people are still behind that curve.
Yet now suddenly everyone is supposed to want to become a team lead of sorts (ie. the agents becoming your team). I don't want to do that, I treat an AI agent as a pair in a pair programming unit. Nothing more, nothing less. If someone wants to treat it differently, good on them, but they have no place telling what works for thee works for me.
I think a lot of people are getting caught up in the discussion about how we, generally as technologists, are going to use AI. And it is looking like the industry is moving towards what used to be programmers now being team leads or project managers of AI teams.
So it's probably best for you to try to not get involved in those discussions, and when someone says "you" assume they mean "you (generally)"?
Even my colleagues who cheated their way through uni still needed critical thinking to do that and get away with cheating without being caught.
People might hate this but being a good cheat requires a lot of critical thinking.
The only thing worth asking people is: what have you produced? Within this one question is so much detail that any other artifact is moot.
What you'd take is irrelevant if the HR/recruiter doing the initial screening of resumes is looking at an oversupply of candidates with degrees.
Hiring is broken in many ways. Candidates without degrees are faring even worse now at the initial recruiter screening stage due to the poor market.
In my EU country, academic inflation is so bad, due to free education and psyopping everyone onto the path of academia, that not having an MSc is basically a red flag to companies when applying for a SW job; most candidates have one, which means you're expected to have one too if you want to get a job.
It's not really that hard to get a degree in engineering if your only goal is the degree itself.
I do have to say I was appalled by some of the tests I had as an exchange student in the US (I won't name the uni in question, but it's ranked around 60 in US rankings). I remember a computer graphics test where a lot of questions were of the type "Which companies created the consortium maintaining the OpenGL specification?"... it was fully possible to obtain a passing grade just by rote memorization of facts. So I have no trouble believing that in the US it's possible, at some unis, to get a software engineering degree without understanding or critical thinking.
(Take home) projects are easier than ever thanks to AI. In the past, you at least had to track down some person to do the work for you.
I was self-taught since I was 15, so most of these classes were just review for me. I met lots of people that didn't know how to code as seniors (and never ended up getting a job in their field).
Most of the "Software Engineering" curricula I've seen is catered towards "getting a job as a programmer", and is mostly focused on languages, frameworks and outdated processes.
As an engineer in another discipline, there's no engineering there.
I would rank like this: Computer Science > Self Taught > Software Engineering.
I would say that today's graduates are IMO a bit better than a few decades ago, but there are still many graduating who are just not good at writing computer software and don't really have the aptitude for it (or maybe the interest in getting good). That's what happens when the pipeline of people coming in is full of people who want to make money and the institution is mostly a degree factory.
You are, of course, right that the idea that someone could finish a serious engineering degree without being able to think is ridiculous.
So what does that tell me?
Better yet, for about 30% of them, having the LLM slop it out would have yielded better outcomes; having them slop something out themselves nets terrible slop. But at least the LLM's output I can reshape, because even the LLM won't do something that stupid.
--
A lot of students (and developers out there too) are able to follow instructions and pass the test.
A smaller portion of them are able to divide up a task into the "this is what I need to do to accomplish that task".
Even fewer of them are able to work through the process of identifying the cause of a problem they haven't seen before and work through to figure out what the solution for that problem is.
--
... There are also a lot of people out there that aren't even able to fall into the first group without copying and pasting from another source. I've seen the "stack sort" (https://xkcd.com/1185/ https://gkoberger.github.io/stacksort/) at work, professionally. People copying and pasting from Stack Overflow (back in the day) without understanding what they're writing.
Now, they do it with AI. Take the contents of the Jira description, paste it into some text box, submit the new code as a PR, take the feedback from the PR and paste it back into the box, and repeat that a few times. I've seen PRs containing "you're absolutely correct, here are the updates you requested" sent back to me for review again.
This is not a new thing. AI didn't cause it, but AI is exacerbating the issue in professional programming by making the people who are not much more than some meat between one text box and another (yes, I'm being a bit harsh there) and the people who need instructions but don't understand design more "productive", while overwhelming the more senior developers.
... And this also becomes a set of permanent training wheels on developers who might be able to learn more if they had to do it. That applies at all levels. One needs to practice without training wheels and learn from mistakes to get better.
“AI suggested we do it that way”
And we’ve been degrading our systems rapidly for the last several weeks. We’ve decided to pause, reflect, and change how we use AI on tasks that are not dead simple.
The tool works better than Stack Overflow, and I expect it will eventually improve enough that such people become as "productive" as the intelligent and conscientious engineer of today.
That's a very bold claim. As a small example let's look at calculators - I remember a lot of claims that having access to calculators would make people's brains atrophy and they'll never be able to do actual math, but what I'm seeing in myself and most people around me is that we're using calculators (and more mathematical software) to tackle significantly more complex problems than people would be able to do if they rejected calculators.
To be clear, I'm not arguing that kids should be using a calculator from the first day of pre-school, but I do absolutely think that using them later on as augmentation is clearly beneficial.
In the middle ground:
I'm putting together exercises for a C/Systems programming class I'm teaching in the fall.
Partway through this, for some reason [cough procrastination cough], I thought it would be fun to implement them in Scheme. My Scheme was already poor, and what meager skills I had are completely rusty. I used Claude to great effect as a tutor for that, but didn't have it code any of the solutions at all, of course. I could tell I was leveling up fast as I coded the things up.
Gotta use it in the right way if one wants to sharpen one's skills.
For junior engineers the distinction matters most. The reps are not just about getting the right answer, they are about building the intuition for when the answer is wrong. That's the hardest thing to transfer between people, and the thing AI is currently worst at self-verifying.
Why would you as a worker bother doing everything pristinely? There's no reward for you. The management of the company will fire you the day they see fit anyway. Not to mention companies tend to give higher salary raises to those who leave and later return - a true slap in the face of 'loyalty'.
I think the evidence for this is quite clear. Humans are NOT going to expend any energy - even mental energy, to think about something if they don't have to.
Personally, I really enjoy using AI. I have created my own cascade workflow to stop myself from “asking one more question”. Every session is planned. Claude and Codex can be annoying as hell (for different reasons). Neither is sufficiently smart for me to trust them. I treat them as junior devs who never get tired, know a lot of facts but not necessarily how to build.
I also enjoy using AI. It makes it easier to get mundane work done quickly. Junior devs who never get tired is a great analogy. It's a force multiplier and for people with limited time (meetings, people management, planning etc.) they enable doing a lot in limited time. I can relate to more junior people being worried and/or some senior people concerns of quality though. I get a task done, review it, get another task done. I won't let it build something large on auto-pilot.
One thing that should be noted is that life was simpler back then. You could know the syntax of C or Pascal. You knew all the DOS calls or the standard libraries. You knew BIOS and the PC architecture. I still used reference manuals to look up some details I didn't have in my head.
Today software stacks tend to be a lot more complicated.
I am doing it again using an LLM. Legitimately, things that would have taken weeks are now done overnight. I still have to look at the code and at the generated C output, and I still have control over the architecture to make it easy for me and the LLM to work with in the future, etc.
Is this replacing my thinking? I am not sure. I suppose I would have learnt a lot more about compilers/transpilers had I persevered through it for months of manual writes and rewrites, but then I would solely be working on this. Instead, I also had some time to write custom NFS server support for a custom filesystem in Golang.
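To make "looking at the generated C output" concrete, here is a hypothetical sketch of the flavour of emitted code that still gets reviewed by hand; the toy DSL line, the names, and the numbers are invented for illustration and are not from the project above.

```c
/* Hypothetical sketch: the kind of C a small transpiler might emit.
   The source line and identifiers are made up for illustration. */
#include <stdio.h>

/* source (toy DSL):  let total = price * qty + shipping  */
static double dsl_total(double price, double qty, double shipping) {
    double t0 = price * qty;       /* emitted temporaries mirror the parse tree */
    double t1 = t0 + shipping;
    return t1;
}

int main(void) {
    printf("%.2f\n", dsl_total(9.99, 3, 4.50));   /* prints 34.47 */
    return 0;
}
```

Reviewing output like this is quick, but it is still reading someone (or something) else's code rather than writing it.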
I'm extremely confident the answer is yes.
But we have to judge how much value that particular thinking has.
As an instructor, I've implemented linked list functionality a zillion times. I'm on the long tail of skills-gain from each reimplementation. But every time I implement it, I'm gaining a little more.
Now, is it worth it? Probably not. The time spent on that marginal gain would be better spent implementing something more novel by hand. So punting to an LLM, while it costs me, might be a net gain in that case. But implementing another compiler? Hell yeah, that would be replacing my thinking. I've only ever made one PL/0 compiler plus that one yacc thing in compiler theory class, and those were a long time ago.
We should quantify the loss of thinking when we decide how much to punt the code creation to someone or something else.
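For scale, the linked-list exercise mentioned above is tiny. A minimal sketch in C of the kind of thing being re-implemented (illustrative only, not actual course material):

```c
/* Minimal singly-linked list: prepend and traverse. Illustrative sketch only. */
#include <stdio.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

static struct node *push(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (!n) return head;              /* keep the sketch simple on allocation failure */
    n->value = value;
    n->next = head;                   /* prepend: O(1) */
    return n;
}

static void free_list(struct node *head) {
    while (head) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
}

int main(void) {
    struct node *list = NULL;
    for (int i = 0; i < 5; i++)
        list = push(list, i);
    for (struct node *p = list; p; p = p->next)
        printf("%d ", p->value);      /* prints: 4 3 2 1 0 */
    printf("\n");
    free_list(list);
    return 0;
}
```

The marginal learning from typing this out for the zillionth time is small, which is exactly the point about where the long tail of skills-gain sits.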
I have found myself going out and actually reading code less and less over the past year. I would be lying if I said that there are not fairly regular moments where I question the comfort level I have obtained with the system that I have built. I've seen it work with such a high accuracy and success rate so many times that my instinct at this point is to not question it. I keep waiting for this to really bite me in the ass somehow, but it just keeps not happening. Sure, there have been minor issues that have slipped through the cracks that caused me to backtrack, but that is nothing new. The difference is that with the previous way, I had painstakingly written that code and had a much more personal relationship with it. The code was the problem. Now whenever that does happen, I'm going back to the system and figuring out why it didn't get the answer right on its own, or why it didn't surface the whole thing in the plan to me prior to implementation.
It IS a waste of time if your only goal is the creation of the plan. However, one must be very self-aware of their goals because if one of the unacknowledged ones is to retain the ability to create plans, then you must continue creating plans yourself.
1) perfect is the enemy of good
2) fake it till you make it
The analogies imagine difficult scenarios where the habit of taking shortcuts doesn't help. But most people most of the time don't run into those scenarios at all.
It's only your opinion that is provably false.
First, there are still people who don't like high level languages and don't use them, because they find assembly better.
Second, I personally work in a field where I need to consult the source of truth, the actual binary, and not the high level source code - precisely because the high level of abstraction is obscuring the real mechanics of software and someone needs to debug and clean up the mess done by "high level thinkers".
High level programming languages are only an illusion (albeit a good one), but good engineers remember that the illusion is an illusion.
I can tell you this, the person you're replying to comes from the overwhelming majority/generality. You, on the other hand, are that one guy.
Of course even my comment is a bit general. You're not "one" guy literally. But you are an extreme minority that is small enough such that common English vernacular in software does not refer to you.
Also, if you need to control performance, you still need to know how CPU cache and branch prediction work, both of which exist at the abstraction level of assembly.
And putting aside the vanishing skill, there is also an issue of volume.
All that LLMs and other generative models have done is enable an order of magnitude more stuff to be created cheaply. This then puts the onus and cost on the consumer of that output, hence why everyone is exhausted after a day of work that just involves looking over output. This volume of output will cause people to stop looking at all of the output and just trust the randomly generated code, and in time the quality will suffer.
It's worrying how much trust is being put in those systems. And my worry is not about the job anymore, but our future in general.
So, on one hand, I'm also kinda sad about how quickly we've thrown the guardrails away, but on the other -- it's... Well. It's just work.
Turns out, no one ever really cared how elegant or robust our code was and how clever we were to think up some design or other, or that we had an eye on the future; just that it worked well enough to enable X business process / sale / whatever.
And now we're basically commoditised, even if the quality isn't great, more people can solve these problems. So, being honest, I think a lot of my pushback is just a kinda internal rebellion against admitting that actually, we're not all that special after all.
I'm just glad I got to spend 20 years doing my hobby professionally, got paid really well for it, and often times was forced to solve complicated problems no one else could -- that kept me from boredom.
I think the shift we are seeing now, as 'previously' knowledge workers, is that work becomes a lot more like manual labour than what we've really been doing up until now. When there's no 'I don't know' anymore, then you're not really doing knowledge work, right?
I guess I'll just ride the wave, spew out LLM crap at work, and save the craft for some personal projects; I'll certainly have the capacity now that work is a no-op.
In a corporate world, we are typically detached from real world consequences and looking at people around me, people really don't think about such things - but I do. And I really care, because "relaxed" standards might result in errors that amount to stuff like identity thefts, or stolen money, shit like this, even on the smallest scale.
Obviously we can't prevent everything, but it seems like we, as an industry, decided to collectively YOLO and stop giving a shit at all. And personally I don't like that it is me who is losing sleep over this, while people who happily delegate all their thinking over to LLMs sleep better than ever now.
Keep it simple, right? In everything you do, make things a bit better than you found them. It's enough. You're never going to win the fight to get everyone (or maybe even ANYONE, depending how messed up your org is) to care; so why lose sleep over things you can't change?
At least, that's what I started doing some years ago by now having lost lots of those fights, and I'm sleeping fine again.
Our futures are safe in this sense; in fact it's even beneficial, as we may be the last generation to have these skills. Humanity's future, on the other hand, is another open question.
You can learn to understand the patterns that compilers spit out and there are many tools out there to aid in that understanding. You can't learn to understand what an LLM spits out because by design it is non-deterministic and will vary in form and function for each pull of the lever.
You can learn to understand how high level concepts in code map down to assembly language and how compilers transform constructs in one language to another. You can't know that about LLMs because they generate non-deterministic output based on processing of huge low-precision tables.
It's not even a close comparison.
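One way to see the difference: the compiler's mapping is stable enough to be learned once. A hedged sketch of the construct-to-machine-code correspondence being described, with the exact instructions depending on compiler, target, and flags:

```c
/* A learnable, repeatable mapping: this loop compiles to the same shape of
   machine code every time (typically a counter register, an add, a compare,
   and a backward branch). Sketch only; details vary with compiler and flags. */
long sum(const long *a, long n) {
    long s = 0;
    for (long i = 0; i < n; i++)
        s += a[i];                     /* load, add, advance index */
    return s;                          /* result left in the return register */
}
```

Run the same source through the same compiler twice and you get the same output; run the same prompt through an LLM twice and you generally do not.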
I wonder if this sort of trend will continue?
(A competent assembly programmer can go miles around a competent high-level programmer, that's still true in 2026...)
GenAI is like a non-deterministic compiler. Just like your manager's reports except with less logical thinking skill. I'd argue this is still problematic.
I can't imagine telling them now to stop, to use the Ersatz Intelligence instead of Actual Intelligence.
> In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
> The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
There is already research literally showing that on average it is a net loss on focus, learning and critical thinking skills.
It's the feeling of having done a lot of thinking for themselves without having actually done so.
Daily.
I think only twice have I agreed with it.
Just as it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. It won't be a good design, though.
I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.
As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.
Nearly certainly. Just turns out that depth and rigour matters a lot less than I would've hoped. Depressing, really.
But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.
When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous' - which can be true in the case of electric cars. That's the cost of progress; it is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.
Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance? Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.
If not the tool, then who’s to blame? It’s very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn’t mean you’re producing quality work. Who’s reviewing it? Are you just blindly trusting it?
> This is the part that some people may not want to hear --
> There is no generated explanation that transfers mastery into your brain without you doing the work.
> There is no way to outsource reasoning for long enough that you still end up strong at reasoning.
This is in relation to early-career engineers, but I wonder why people think this won't apply to mid- and late-career engineers. Are they not also constantly learning things on the job? Are they not thus shortcutting their own understanding of what they are learning day-to-day?
That’s why they’re relaxed - it’s just switching from one sort of unreliability to a slightly different flavour.
If the brain is like a muscle, it won't work.
Let’s say a person has 10 units of learning per week. Is the author actually claiming that that person must not deliver any results beyond their 10 units?
It makes some sense to have say 20 units of results and prioritize which ones to fully comprehend.
I suspect APIs / libraries / languages / platforms will have more churn due to AI. New platform, new system, more to learn. Once every 5 years might become every year or even more frequent. That would be a sort of inflation of knowledge and skills. It would affect the decision making about how to spend one’s 10 units per week.
This is… not how humans work? If you have the time and energy to learn ten things, and then spend time babysitting a random number generator to produce evidence of 10 more units of work, you’re paying an opportunity cost compared to someone who spends the time learning an eleventh thing. You can argue who has more short term value to a company… but who is the wiser person after a thirty year career?
Beyond that, if that's all you do, you are basically proving you're replaceable. If you're smart, you'll reallocate intellectual capacity that was freed up by A.I. onto something A.I. can't do today.
Managers simply cannot know all of the details of what their reports write. They have to build abstractions.
shows both groups using AI differently. Hard to continue reading an article that excludes your group entirely.
I have been an ardent opponent of AI since it came up a few years back. I refuse to vibe code and I refuse to let AI think for me. I won't be an AI controller.
However, two days ago I found a nice, personal use case for AI: Advanced writing checks (grammar checks, mostly, and some rewordings) in Word using a rather expensive app.
I write a lot of US English, despite it not being my native language, and AI is now helping me to write much better than I did before. Also, I discovered that I am much worse at writing Danish than I believed. In fact, I think I am better at writing US English than Danish, which is a bit surprising as I am a Dane.
No AI was used during the writing of this entry, but I dearly love the writing tool already! I have heard similar stories from friends who say that AI is very good at summarizing long documents and stuff like that.
So, I personally think that AI CAN elevate one's thinking. I am learning more about Danish and US English grammar every day, now, than I did during a decade before. Writing is suddenly so fun because it involves growing my skills.
IMO, teams need to agree on a set of principles for AI usage, with concrete examples of where and how to use it. Perhaps it's much more useful in parts of your system that evolve faster and don't have too much core logic, like testing frameworks etc.
Simply discarding it as 'yet another tool' is part of the problem.
"Coding in the Red-Queen Era" https://corecursive.com/red-queen-coding/
That's exactly what is happening now. I wouldn't even call it an analogy, I'd call it an example of where AI is already having a baleful effect. FWIW I don't disagree with the article's thesis or the examples: yes, absolutely, if used well AI can elevate engineers in exactly this way and it behooves us engineers to use it in that way. We can also say that the deliberate design of the AI systems we are constantly being exhorted to use inclines them towards work-slop and abdicated thinking.
I learn so much arguing with it.
Yet nothing has actually changed.
Becoming dependent on a technology is to be expected. I'm pretty sure 95% of us are dependent on packaged meat and don't know how to hunt.
That's substantively different than going from assembly to C.
I remember some of my earlier issues with various languages. `Dim A, B as Int`, in VisualBasic one of them is an Int the other is a Variant, in REALbasic (now Xojo) they're both Int. `MyClass *foo = nil; [foo bar];` isn't an error in ObjC because sending a message to nil is a no-op.
Or how, back when I was a complete beginner, if I forgot a semicolon in Metrowerks, the compiler would tell me about errors on every line after (but not including!) the one where I forgot the semicolon.
"Docs say", "Compiler says", "StackOverflow says", "Wikipedia says"; either this tool is good enough or it isn't; it not being good enough means we're still paid to do the thing it can't do, that only stops when nobody needs to because it can do the thing. The overlap, when people lean on it before the paint is dry, is just a time for quick-and-dirty. LLMs are in the wet-paint/quick-and-dirty phase. You could get suff done by copy-pasting code you didn't understand from StackOverflow, but you couldn't build a career from that alone. LLMs are better than StackOverflow, but still not a full replacement for SWeng, not yet.
And every single major company becomes bureaucratic and political after 30+ years in the business when the original founders are long retired, and the Wall Street friendly beancounters take over, caring only about the quarterly reports.
'Lean agile' tech companies are by far the exception, not the rule.
Look at OpenAI and Anthropic, both fairly new companies that are excessively political already. This 'garage stage' of lacking politics is a myth, read old stories about Microsoft, when it was 15 people it was political.
No, you are.
You first asked: "When was tech not bureaucratic and political?"
To which I replied "in the 60's, 70's, 80's, 90's when they started in garages".
What did you fail to understand here?
>Look at OpenAI and Anthropic, both fairly new companies that are excessively political already.
Everything becomes political when you tell them they're worth trillions if they only play the right tune. Money brings out the worst in people. SW companies didn't make trillions decades ago.
What you actually wrote in the comment four hours ago:
>60's, 70's, 80's, 90's, basically before the Google and Meta found out ads and money printing run the world
Your lie just now:
>To which I replied "in the 60's, 70's, 80's, 90's when they started in garages".
---
>What did you fail to understand here?
Nothing because you never said it. Wild behavior.
You literally just quoted me two comments above, saying "You are changing your argument by adding this: 'when they started in a garage'", and now you pretend otherwise.
Now you're pretending I never said it and acting like you didn't read it.
Are you unable to understand an argument made by adding the context of two sentences from two consecutive comments following up on each other (which you yourself quoted and said changed the argument), or are you just a troll acting in bad faith, pretending you can't understand just to score a cheap gotcha?
>Wild behavior.
Yes you have, which is why I'll stop replying to you now, to protect my sanity. Jesus Christ.
I have no choice but let claude explore them for me and return me its summarized understanding. As next step, only claude can apply the required cross repo fixes, not me.
I just don't have the time. Meanwhile my skills as a classical programmer atrophy, while my experience with and trust in claude go up...
If all you do is point your LLM at your Jira tickets, then you are failing to be an engineer. I mean, if that's all you are doing, then who needs you? One of the most important things to learn is what the right questions to ask are and what the right decisions to make are when guiding the LLM, as well as the ability to judge the output it produces.
However my #1 productivity tool is still a custom code generator I have been using for years. It routinely generates 90+% of the code needed to write a typical biz web application, leaving just the business logic.
No AI. Just straightforward high-level-spec-to-server-client-DB code that is 100% trusted and proven in battle.
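For readers unfamiliar with the approach, here is a deliberately tiny, hypothetical sketch of the spec-to-code idea (not the commenter's actual generator): a declarative table drives the emission of boilerplate, and the output is identical on every run.

```c
/* Hypothetical sketch of deterministic spec-to-code generation: a small
   "spec" table is walked to emit a C struct. Real generators emit server,
   client, and DB layers the same way; all names here are invented. */
#include <stdio.h>

struct field { const char *name; const char *type; };

static const struct field invoice_spec[] = {
    { "id",     "long"   },
    { "total",  "double" },
    { "status", "int"    },
};

int main(void) {
    size_t n = sizeof invoice_spec / sizeof invoice_spec[0];
    printf("struct invoice {\n");
    for (size_t i = 0; i < n; i++)
        printf("    %s %s;\n", invoice_spec[i].type, invoice_spec[i].name);
    printf("};\n");
    return 0;
}
```

The contrast with LLM generation is the point: this output is reproducible, auditable, and trusted because it never varies.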
I mean, right now we're at the stage where any user can get AI to make you software to solve very specific things - almost no technical knowledge needed.
My prediction is that software engineers will be rendered obsolete first. After that, small businesses will disappear, as users can simply get those products/services directly via AI.
Then it became thousands.
Now models can handle and operate on code bases with hundreds of thousands of LOC, even low MLOC.
So in just 3.5 years we've gone from LLMs being cute toys, to being powerful enough to actually replace junior engineers. Even if we hit a new AI winter tomorrow, the proverbial damage is already done.
...or as I interpret it your brain grows only when it does things that are difficult.
If you remove the difficulty, it will atrophy into a hum of a mindless chit-chat.
Engineering the data structures and control flows from scratch is completely different from asking an LLM to scaffold them for you.
For the new prompt engineers I suggest the following title:
MCSE => Microsoft Certified Slop Engineer
If you never walk, your legs get weak, you gain weight, your aerobic system loses capacity, and you lose the ability to walk. You don't need it, you say, because you have your car and your mobility scooter and you'll always have these things. Your crutches don't make you weaker, you can still do everything the walkers can do, you say.
Good luck with the nature hike!
Most "I didn't realize I needed that" moments arrive after the atrophy is already done.
I don't give a shit about this career. I don't give a shit about engineering. I despise every second of it. There's nothing to aim for other than being a drone that does whatever is asked of it.
If AI can reduce my mental workload, why wouldn't I want to delegate everything over to it so I can save my faculties for what I truly enjoy? For the art of a worthless craft?
For you, it seems that you are not cut out for it, judging from what you say.
So yes, use LLMs.
And I don't have the personality for running a start-up or any company, unfortunately. I'm extremely risk-averse and withdrawn. If I really had no other choice, I'd probably have to budget in a ton of... chemical helpers (stimulants).
Anyway, statistician, accountant, and teacher are indeed jobs, and I assure you they aren't found living on the streets.
It's changing the way we think, and reason.
Speaking as a BE focused Go developer, I'm now working with a typescript FE, using AI to guide me, but it scares the shit out of me because I don't understand what it's suggesting, forcing me to learn what is being presented and the other options.
No different to asking for help on IRC or StackOverflow - for decades people have asked and blindly accepted the answers from those sources, only to later discover that they have bought a footgun.
The speed at which AI is able to gather the answers from StackOverflow coupled with its "I know what I am talking about" tone/attitude does fool people at first, just like the over-confident half assed engineers we have always had to deal with.
Unlike those human sources, we can forcefully pushback on AI and it will (usually) take the feedback onboard, and bring the actual solution forward.
Thus proving the engineer steering it still has to know what they are doing/looking at.
‘AI’ doesn’t exist, and LLMs have vanishingly narrow legitimate, justifiable use cases. Any output from one is intrinsically, explosively imprecise, and can’t be trusted to be built upon without specialist treatment. I’m yet to identify any application of an LLM which can rationally be mistaken for intelligence.
Anyone who persists in referring to LLMs as ‘AI’ is either betraying they don’t understand what they’re talking about, or they’re invested too deeply in an active grift.
What’s the opposite of AI psychosis? Burying your head in the sand? Because anyone who could write this unironically today is certainly afflicted.
It’s no different to religions or economics.
University degrees certainly used to teach computing fundamentals without you having a computer in front of you.