Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And, despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful; errors in speech-to-text are fundamentally an unsolvable problem (you can only get so far with background noise filtering, accounting for accents, etc.).
I see the parallel in how LLM hallucinations are a fundamentally unsolvable component of transformer-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background, used here and there to set a timer or talk to a device, but ultimately not useful for any serious work.
LLMs create a new workflow wherever they are employed. Even when they're capable, that new workflow is not always a more desirable or efficient experience.
This is super scary stuff for an ADHDer like me.
I have an idea for a programming language based on asymmetric multimethods and whitespace-sensitive, Pratt-parsing-powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.
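(For context, a minimal sketch of the Pratt-parsing core that idea leans on; this is an illustrative toy, not the language in question, with binding powers hardcoded for a few binary operators:)

    # Minimal Pratt ("top-down operator precedence") parser sketch.
    # Binding powers are plain data, which is what makes this style of
    # parser a natural base for user-extensible syntax.
    BINDING_POWER = {"+": 10, "-": 10, "*": 20, "/": 20}

    def parse_expression(tokens, min_bp=0):
        """tokens: a list of numbers and operator strings, consumed in place."""
        left = tokens.pop(0)  # assume the first token is an atom (a number)
        while tokens and BINDING_POWER.get(tokens[0], -1) >= min_bp:
            op = tokens.pop(0)
            # Parse the right-hand side at a higher minimum binding power
            # so that equal-precedence operators associate to the left.
            right = parse_expression(tokens, BINDING_POWER[op] + 1)
            left = (op, left, right)
        return left

    print(parse_expression([1, "+", 2, "*", 3]))  # ('+', 1, ('*', 2, 3))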
My daily todos are now being handled by NanoClaw.
These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.
But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.
Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.
The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.
Many a visual programming language has tried to toot its own horn as the next transformative change in everything, and most are just obscure DSLs at this point.
The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s sci-fi thought it was just logical that androids and smart mechanical arms would be developed next year. I mean, you can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a sci-fi horror novel :D Sweeney Todd sci-fi redux)
I think AI is here to stay. At the very least it seems to have practical value in software development. That won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places hoping they can find a new industry like software dev.
Source?
It really is 'different', though, in the same way the Internet was.
It took about 20 years (i.e., since The World ISP) for the Internet to work its way into every facet of life. And the dot-com bubble popped halfway through that period.
AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.
Will that happen in the future? Maybe. But I don't have enough insight into how AI is evolving in the labs to make a judgment on that.
So far, life goes on roughly the same as it did five years ago. This can feel 'underwhelming' in contrast to the onslaught of public discussion about, and huge investments in, AI.
Most of us here on HN are programmers, and we all know how radically LLMs have changed our code projects. Even so, the change to our everyday lives (aside from our work or hobby projects) is not, just yet, glaringly obvious. This year, it's mainly... every website shoving an AI box at us that nobody seems to want!
I don't hear people saying "nothing is going to change", but I do hear questions about the timeline and if the current levels of investment match returns. Branding these people as stuck in some sort of negative identity is bullshit.
"AI will change everything!"
Few seem to understand that both of the above can be true. The parallel you draw to the internet revolution is apt; dot-coms were both a bubble and changed everything.
The stuff LLMs will democratize will be a lot more impactful than nice posters for car wash fundraisers, though. So in that sense it will be different, but I don't think it will crack the market for proficient experts in the field, in the same way Photoshop didn't destroy graphic design and CAD didn't destroy drafting. It may get rid of the market for a lot of the second-tier bootcamp-grad talent though, so I wouldn't be getting into that right now if I could help it.
I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sound stupid. I think AI is much different than that list. So, there goes your argument?
Squares are rectangles. The existence of rectangles that aren't squares doesn't negate that.
AI is different because the magic clearly comes from the tech itself. The fact that we get this emergent behavior out of (what essentially amounts to) polynomial fitting is pretty surprising, even for the most skeptical of critics.
It's not a very legible situation for people outside of the profession, and a lot of them believe it's just another grift that will blow up in a few years.
For what it’s worth, not a single other technology in the list made any sort of impact on my work. For better or worse, LLMs did.
Well, okay, quantum computing actually affected me a lot because I worked at a quantum hardware manufacturer, but that’s different.
I have unlimited derision for morally spineless worms who disingenuously make it out to be more than it is -- looking at Dario, Sam, and the silly CEO of Control AI. Also, I hate to say it, but Andrej Karpathy on Twitter -- he's a worthless follow now. I can't blame them, but I am daily exasperated by media figures who can't help but go with whatever they hear prominent individuals in the field say.
If I were a junior now, and less confident, I would be abandoning my career in this climate.
LLMs are not going away. They will get a little better than they are now, and new model paradigms will come around at some point. But this tale of massive redundancy and skyrocketing unemployment is not going to come from LLMs.
This is the only reason why I cannot wait for a pop, and I pray to God that it comes sooner rather than later. I just want to feel good about technology again. I want to tinker, to feel positivity, to know how sustainable the tools I'm using actually are.
I don't want to be reminded daily of the disgusting reality of unbridled capitalism.
But like all the previous hype, most of the people who were the loudest won't admit they were wrong; they'll move on to the next thing, pretending they were never the ones who portrayed AI as the Holy Grail.
I mean, disillusionment is the least of my worries.
LLMs are not artificial general intelligence (i.e. not sci-fi AI). Why haven't they transitioned to being mere algorithms by now? Why is the public being told AI is finally arriving when it's really just another algorithm?
We have some truly slick and shady corporations involved in the bubble right now, and they're marketing LLMs like tobacco. LLMs have been pushed out, at immense cost, to the public in a way that makes them more directly accessible to average people than any past algorithm. Young children can ask an LLM to do their homework for them. Middle managers can ask an LLM to create a (shitty) ad campaign for them. Corporations have gone to tremendous expense to make that widely available and, for the moment, mostly free. They seem to be following the Joe Camel school of marketing. Get them hooked while they're young so they come to you first when they're older! The only difference is that nobody is stepping in to stop the new Joe Camel from handing out free samples to kids.
Then there's the "go big" aspects of the bubble. The major competitors are trying to out-spend each other to dominance, but the sums are so colossally big that their bubble is affecting global GPU, memory, and storage prices. This bubble is going to stress power grids wherever it operates and do considerable environmental harm. The financial games being played behind the bubble are absolutely stupid. The results, so far, are tantalizing for billionaires. LLMs offer the promise of being able to fire all their pesky and annoying human workers. It won't deliver on that, and none of these companies is ever going to make enough to pay their debts. There might be "too big to fail" government bailouts, but there are going to be some big bankruptcies too.
Useful algorithms will come out of all this, and a lot of tears too, but not "AI".
Umm, what? For the past 3 years, every year I've said something along the lines of "even if models stop improving now, we'll be working on this for years, finding new ways to use it and make cool stuff happen". The hype is already warranted. To have used these tools and not be hyped is simply denial at this point.
Most of the Mag-7 are planning to spend over $500B on capex this year alone building out datacenters for AI pipelines that have yet to prove they can generate a sustainable profit. Yes, AI is useful in some environments, but the current pricing is heavily subsidized. So my point stands: the hype is not warranted.
I still don't understand what the end goal is here. Assuming they don't deliver, billions in investments go bust. Assuming they deliver, millions lose their jobs and there's going to be a bloodbath on the streets.
There is a third outcome that combines both of these.
LLMs can massively displace the workforce (and cause widespread social instability) AND the companies pouring hundreds of billions into them right now could, at the same time, fail to capture significant amounts of the labor-savings value, as late-mover alternatives run the race drafting off their progress without the massive spend.
I'd honestly be surprised if this double-whammy isn't the outcome at this point. AI is going to have a massive impact on everything, but there is still no moat in sight.
But there are a lot of things playing out to our advantage. Vast swathes of useful and publicly available training data. The rigorous precision of said data. Vast swathes of data we can feed as input to our queries from our own codebases. While we never attained the perfect ideal we dreamed of, we have vast quantities of documentation at differing levels of abstraction that the training can compare against the codebases. We've already argued in our community about how design patterns were just a level of abstraction our coding couldn't capture, and AI now has access to all sorts of design patterns we wouldn't even have called design patterns, because they still take lots of code to produce. Now, for example, if I have a process that I need to parallelize, it can pretty much just do it in any of several ways, depending on what I need at that point.
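(To make that last point concrete, a trivial Python sketch of two of those "several ways" to parallelize the same map, with a stand-in work function; which you pick depends on whether the workload is IO- or CPU-bound:)

    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def work(n):
        return n * n  # stand-in for the real task

    if __name__ == "__main__":
        items = list(range(100))

        # IO-bound work (network, disk): threads share memory and avoid
        # the cost of spawning processes.
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(work, items))

        # CPU-bound work: separate processes sidestep the GIL.
        with ProcessPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(work, items))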
It is easy to get overexcited about what it can do, and I suspect we're going to see an absolute flood of "We let AI into our code base and it has absolutely shredded it, and now even the most expensive AI can't do anything with it anymore" in, oh, 3 to 6 months. Not that everyone is going to have that experience, but I think we're going to see it. Right now we're still at the phase where people call you crazy for saying that and insist you must have been using the tool wrong. But it is clearly an amazing tool for all sorts of uses.
Nevertheless, despite my own experiences, I persist in believing there is an AI bubble, because while AI may replace vast swathes of the work force in 5-20 years, for quite a lot of the workforce, it is not ready to do it right this very instant like the pricing on Wall Street is assuming. They don't have gigabytes of high-quality training data to pour in to their system. They don't have rigorous syntax rules to incorporate into the training data. They don't have any equivalent of being guided by tests to keep things on the rails. They don't have large piles of professionally developed documentation that can be cross-checked directly against the implementation. It's going to be a slower, longer process. As with the dot-com bubble, it isn't that it isn't going to change the world, it is simply that it isn't going to change the world quite that fast.
I think you're right but for the wrong reasons wrt sustainable profit.
Specifically, overcounting how much it will cost in 5 years to run AI because you're extrapolating current high prices, and at the same time undercounting how the demand will drive efficiency gains.
It's high time to stop accumulating debt while handing out free pictures of pelicycles; just charge the full cost for them - enough to generate profits and pay back debt.
What we see now is literally burning money and energy to generate hype. The only true measures of success are financial and macroeconomic. If the hype is real, there should be no problem for the mighty AI to generate debt-free profits for its providers while the overall price level in the US goes down.
We observe the exact opposite, which makes the AI hype look like nothing more than market manipulation driving capital misallocation.
I was so expecting to find this wind-up aimed at those peddling the "AI is hype" laziness.
It's laziness because they have little grounding in the CS fundamentals needed to base such claims on; the deductions can be made, just not clearly to people who need to study a lot more.
It's like watching an invisible train (visible to those with strong CS) rolling down the tracks at a leisurely pace. Those sitting in their stalled car on the tracks are busy tweeting about "AI HPY PE TRAIN." Until it wrecks their car, the gimmick is free oxygen. It's a lot easier to write articles than it is to build GPUs and write programs.
So, what CS fundamentals do you need to evaluate whether AI is the real thing or will disappoint in the future? Coding agents were met with skepticism until a few months ago, when Anthropic introduced their new model and, with it, a hype train that cannot be rationally justified. Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame? No one knows what the future will hold, no one knows how coding agents will be integrated into our work life and everyday life in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stake in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom-and-gloom post.
It's amazing to me how those willing to seize on the speculative nature of ANY uncertainty cannot recognize the inherent uncertainty of the inverse.
> what CS fundamentals do you need
1. Tarski's undefinability theorem
2. Gödel's incompleteness theorems
3. The Curry-Howard correspondence
And a lot of exposure to deductive reasoning, vague ideas of automated theorem proving and formalization.
I won't pretend it's easy, but let's be clear: a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things, who just go around beating their chests, and who will continue doing so until the train hits them.
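(For anyone unfamiliar with item 3 in that list, a minimal Lean 4 sketch of the Curry-Howard idea: propositions are types, proofs are programs, so composing two functions is literally a proof that implications chain.)

    -- Curry-Howard: (A → B) → (B → C) → (A → C) is both a type and a
    -- proposition; this function is both a program of that type and a
    -- proof that implication composes.
    theorem imp_trans {A B C : Prop} (f : A → B) (g : B → C) : A → C :=
      fun a => g (f a)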
There are 2-3 minor architectural changes in between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes. Oh god. Get me out of this forum. I wish to return to my code editor.
It's different just like the steam engine was different, except technology moves 100x faster now than it did then. It's different and the same.
Non-coding work is thinking about the system architecture, thinking about how data should flow, thinking about the problem to be solved, talking with people who will use it, discovering what their objectives are.
Producing 40k lines of code per day simply means you're not doing any of that work: the work that ensures you're building something worth building.
Which is why the result is massive, pointless things that don't do the things people actually need, because you've not taken any time to actually identify the problems worth solving or how to solve them.
It's a form of mania that recalls Kafka's The Burrow, where an underground creature builds and builds an endless series of catacombs without much purpose or coherence. When building becomes so easy after being so hard -- and when it becomes more fun to build and watch codex's streams of diffs fly by than to plan -- we forget the purpose of building, and building becomes its own purpose. That is why we usually see so little actual productive impact on the world from the "40k lines of code a day" cohort.
Otherwise his entire team must collectively groan when a Slack message appears: "Got a new PR ready for review everybody!"
It is physically and physiologically impossible for anyone to be reviewing "30-40K lines of nearly perfect code a day" to the extent needed to push it with confidence in a sensible development process.
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.
...conveniently doesn't list a bunch of hyped tech that hasn't failed:
> microchips, PCs, the internet, ecommerce, cloud, EVs, 5G
...and presents this as evidence that the current hyped tech (AI) will fail:
> Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.
When the article needs to construct disingenuous arguments, I'm not interested in its conclusion.
But wait! If you actually read to the end, there's a plot twist!
> The ideology of "winner takes all" is unsustainable and not supported by reality.
Who said anything about winner-takes-all? You just burned a "this time is different" straw man and then concluded that "winner takes all" is not realistic?
At this moment I'm wondering if the article was in fact written by a quantized 8B LLM. Surely people don't make such non sequiturs and then expect to be taken seriously.
But of course not. This is not an argument. This is preaching to the choir.
Preach, brother, preach.
Internet, handheld computers, electric cars... The problem is, it's the same dudes.
Putting Beanie Babies in with quantum computing and nuclear power completely ignores the potentially life-changing elements of some technologies, even if they don't work.
Oh, and he put smart glasses in there, so he'll be eating his words in 2 years.
I've never heard of half of these things, and the other half are mostly consumer electronics or specific product names. The closest example here is quantum computing, which is also a serious technology in development. I think for the OP these are all tech buzzwords that he invests in without understanding what they really are. That's why he thinks all these unrelated things are the same.
The point is to take the hype with a grain of salt and knowledge that not all hyped technologies transformed the world as promised. Maybe AI is like the internet or electricity. But maybe the claims about AGI/ASI and full automation are just hype.
It's just an interface problem. The VT100 didn't change the world overnight either.
I spent most of Covid in VRChat and met my current live-in gf, so the metaverse was real for me too.
I also made decent money selling crypto, so that part was real for me too.
And AI coding, as dumb as even the best models are, still enabled me to create things that I wanted to but wouldn't have had time for, or gotten nearly as far on, without it.
I dunno if the author realizes it, but all the things they mentioned did materialize in one way or another, just not exactly how the hype described them.
Maybe if they could let go of some of the cynicism, they could find something to be optimistic about. Nothing ever goes exactly as planned, but that doesn't mean nothing is good.
From the post, which is not a very long one: "All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists"
I also found the "it's almost always dudes" line a bit strange, because I've seen plenty of women doing marketing for startups running on hype.
75% of restaurant orders are delivery now due to widespread personal electric transportation. It already has fundamentally changed humanity.
LLMs abstract away a lot of the mechanics of working with data/information.
Helpful, when literacy seems to be trending in a downward direction.
"All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technology which are in use.
> No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.
- Terry Pratchett's Faust Eric"
Deep disconnect from reality.
Actually IT IS different. If they manage to create viable small nuclear reactors or quantum computers, the world will change like it changed with Watt's steam engine.
Why is he not talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.
All of them were bubbles at the time and they changed the world forever. AI is changing the world AND it is a bubble.
AI is here to stay. It will improve and it will have consequences. The fact that a robot can do things with its hands is actually significant, whether you like it or not.
https://www.youtube.com/watch?v=SZFhFGpDWGw
"Today, I'm speaking with Stephen C. Meyer, Director of The Discovery Institute's Center for Science and Culture, and and George D. Montañez, Director of the AMISTAD Lab at Harvey Mudd College–both of whom are extremely knowledgable on the topic of artificial intelligence. During the course of our conversation, they discuss the asymmetry between human intelligence & AI, the inability of AI to ascribe meaning to raw data, and the limitations of large language models. The real question though is: are we screwed? Let's find out."
It's just propaganda...
"Iran is 2 weeks from a nuclear weapon" / "We obliterated Iran's nuclear dreams"
"Russia is fighting with shovels" / "Russia is on the verge of swarming Europe"
What would Joost Meerloo say about it, I wonder.
We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.
The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.
If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.
Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.
[p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]
"This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.
"NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.
"Crypto is different", as those who paid attention to history remembered corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.
And thus, here we are again. "This time is different," as those of us who remember the code generators of yore polluting our floppy drives, and the salesgrifters who convinced our bosses that their program could replace those expensive programmers, roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent - with all the same problems as before.
I truly hate how stupidly people with money actually behave.
What is meta-technology?
Effectively, it’s a statement saying nothing can ever be profoundly different, because people have said it before and been wrong.
Lazy.
A failure to appreciate the changes in AI will have left you calling every shot wrong over the past 5 years. While AI models continue to improve at an exponential rate, you'll cling to facile maxims like "dude, it's just predicting the next token, it isn't real intelligence".
For all the things you listed, fewer than 1000 people are using them. With AI, we're clearly not finished with the Gartner hype cycle, but the back end is going to be over a billion users.
Also, every single close friend of mine makes some use of LLMs, while none of them used any of the overhyped technologies listed. So you need an especially strong argument to group them together.
New things are happening and it's exciting. "AI bad" statements without examples feel very head-in-sand.
I like technology. I made a decent living from it. But if I had chased every hyped fad that was promised as the next big thing, I doubt I'd be as happy as I am now.
I mean, you're just stating that sometimes tech doesn't meet its hype. What's insightful about that? It's a given; cherry-picking examples doesn't prove your case.
Well, no, the ratio is most definitely not 1-to-1.
mRNA vaccines. Where are the countless breathless articles about this literal life-saving tech? A few, maybe, but very few dudes pumping out asinine "white papers" and trying to ride the hype train.
Solar and battery. Again, lots of real world impact but remarkably few unhinged blowhards writing endless newsletters about how this changes everything.
I'm struggling to think of a tech from the last 20 years which has lived up to its hype.
Not everything is written to be insightful. Some things are just written to get them out of my head.
Do you feel AI is overall just hype? When did you last try AI tools, and what about their use made you conclude they will likely be forgotten or ignored by the mainstream?
It was an hour of pasting in error messages and getting back "Aha! Here's the final change you need to make!"
Underwhelming doesn't even begin to describe it.
But, even if I'm wrong, we were told that COBOL would make programming redundant. Then UML was going to accelerate development. Visual programming would mean no more mistakes.
All of them are in the coding mix somewhere, and I suspect LLMs will be.
> usage is copy pasting code back and forth with gemini
the jokes write themselves
As I said, maybe I'm wrong. I hope you have fun using them.
> Not everything is written to be insightful. Some things are just written to get them out of my head.
I like that, going to use it as the motivation to get some things out of my own head.
Hype is often early; in 10-20 years, we'll start seeing the value as the rest of the world catches up.
https://www.sfgate.com/food/article/rise-fall-bay-area-start...
This only doesn't feel like substantiation if you reject the notion that these cases are analogous.
"You shouldn't eat that."
"Why not?"
"Everyone else who's eaten it has either died or gotten really sick."
"But I'm different! Why should I listen to your unsubstantiated claims?"
"(lists names of prior victims)"
"That doesn't mean anything. I'm different. You're just making vague and dismissive unsubstantiated claims."
The claim isn't "AI bad" the claim is more along the lines of "there's a lot of money changing hands and this has all the earmarks of a classic hype cycle; while attention/diffusion models may amount to something the claims of their societal impacts are almost certainly being exaggerated by people with a financial stake in keeping the bubble inflated as long as possible, to pull in as many suckers as possible."
If you want another example (which you won't find analogous if you've already drunk the Kool-Aid):
https://theblundervault.substack.com/p/the-segway-delusion-w...
Internet - this time is different
iPhone - this time is different
Love the Sir Terry reference.
Similarly to how titles that start with "how" usually have that word automatically removed.
Or maybe judicious use of an LLM here could be helpful. Replace the auto-edits with a prompt? Ask an LLM to judge whether the auto-edited title still retains its original meaning? Run the old and new titles through an embedding model and make sure they still point in roughly the same direction?
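(That last idea is cheap to sketch; assuming a hypothetical embed() helper backed by whatever sentence-embedding model is handy, and an arbitrary placeholder threshold:)

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine of the angle between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def title_edit_is_safe(original, edited, embed, threshold=0.9):
        """Accept an auto-edited title only if its embedding still points in
        roughly the same direction as the original's. `embed` is a hypothetical
        callable mapping a string to a vector; 0.9 is a placeholder to tune."""
        return cosine_similarity(embed(original), embed(edited)) >= threshold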
X-Clacks-Overhead: GNU Terry Pratchett