201 points by i5heu 2 hours ago | 31 comments
svnt 1 hour ago
It is a quirky article, but the author, instead of engaging with information sources to understand what important thoughts people have had about these topics, feels the best thing to do is to introduce new terms for concepts that existing terms already cover. This is basically just inductive bias plus the AI homogenization idea producing a distribution shift.

This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.

Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.

The thing about these fear pieces is that concepts like the hollowed mind are reductive, and that reductionism is based on a reductive view of (usually other) people.

But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.

drivebyhooting 17 minutes ago
If computers are bicycles for the mind and AI are cars, I wonder what the analogue for the obesity epidemic is.
rcoveson 11 minutes ago
It's even more depressing than that framing would suggest, because we skipped over the decades where cars were just fast, powerful transportation tools and went straight from "mind bicycles" to "mind Teslas" full of cameras, tracking, proprietary software, and subscription fees.
gdubya 13 minutes ago
That is a sharp and slightly chilling analogy. If Steve Jobs saw the computer as a tool that amplified human effort (the bicycle), and AI represents a tool that automates that effort entirely (the car), then the "obesity epidemic" of the mind is likely Cognitive Atrophy.

- Gemini

antonvs 10 minutes ago
> That is a sharp

LLM tell right there.

> - Gemini

Yes, we already know. I suppose you think posting AI slop in this context is funny. It isn't.

Also, no, the observation is not sharp. You're being gaslighted and having your cock fluffed by a machine.

antonvs 12 minutes ago
The obesity epidemic has much less to do with cars and much more to do with cheapness of food and volume consumed.

A typical deli sandwich in the US should be enough to last any normal person three days. Same goes for e.g. ice cream from Shake Shack (random example I know, but one I came across recently). If you buy one of these and eat them in one sitting, the answer to "why am I obese" is simply "you eat way too much."

superxpro12 20 minutes ago
I think we're excluding from this analysis the probability that these "AI" products will remain truly unbiased and free from external (corporate) influences.

When AI gains true marketshare in the "think-space", I have zero trust that the corporate overlords controlling these machines will use them in the fairest interests of humanity.

rpcope1 14 minutes ago
You're absolutely right! But Brawndo has what plants crave!
antonvs 15 minutes ago
When I read pieces like this all I think is, resistance to change is a helluva drug.

I've been working on a project and using LLMs heavily to inform my design decisions. There's already a long list of cases where it has taught me things I wasn't familiar with, alerted me to possibilities I didn't consider, shown me how to do things that I was struggling with. In those cases I ask for references, and it delivers.

This is not "endangering human development". If anything, it's the exact opposite - allowing human knowledge to be transmitted to other humans in an accessible way that otherwise, usually simply would not have happened.

Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.

I'm not saying there isn't a moral dimension to all this, and areas of serious concern. But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn. The former will be better for human development.

One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.

nathan_compton 6 minutes ago
I think it depends on the person. As a teacher, I see this. Some kids (the gifted ones) use AI to multiply their efforts. Most kids use it to just get by and are actually coming out of the class with less knowledge than they would have without it.
Forgeties79 31 minutes ago
>This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.

I think for a lot of us the problem is that this is not a given. It’s often promised and rarely occurs, especially in the modern era. Increased productivity usually just means increased demands in the workplace.

zozbot234 1 hour ago
At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
palmotea 1 hour ago
Oh no, not that tired thing again. I suppose your point is: people once were critical of the technology of writing, so all criticism of the technology-at-hand is illegitimate. You don't actually make a point, so one has to assume.

Some points:

1. Technological inventions are not repetitions of the same phenomenon. Each invention is its own unique event, you cannot generalize the experience with previous inventions to understand the effects of the latest ones.

2. Socrates may have been in large degree right. Imagine that you and your society have been locked in the sewers, condemned to wade in shit for so long that you and your ancestors long ago forgot what fresh air feels like. What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy the smell of shit and we're so much better off because we don't have to worry about sunburn"?

_verandaguy 1 hour ago
While I agree with your rebuke of the GP, Socrates was materially wrong about writing (or at least, about the ability to persist information beyond any single human lifetime).

Cumulatively, knowledge work (including, in particular, curating knowledge) is exceptionally energy intensive from an evolutionary standpoint. It does pay dividends, clearly, but to get compounding effects from it, being able to efficiently pass down big corpora of facts, ideas, processes, etc., is an absolute necessity.

Writing systems are the fundamental way through which we can do this. They worked for us for millennia, and we eventually built upon them to develop encodings used today to store information remarkably densely.

bluGill 1 hour ago
The larger win from writing is passing down things that are not commonly needed. If you hunt antelope every year, I can teach my kids. If we know there are antelope "over there", but they are easy to overhunt so we only hunt them in 100-year droughts - nobody in the village will know how to hunt them when we need to, and so we need writing. (Never mind how we figure out that they are easy to overhunt.)
bonesss 42 minutes ago
> Writing systems are the fundamental way through which we can do this

Writing systems are ‘a’ fundamental way to pass down large collections of facts; calling them ‘the’ way is my personal bias. We are prejudiced and naive, though:

- Those knotting systems in China and South America that preceded writing for millennia are also persistent and intricate

- Cave paintings are quite dense, drawings and art are direct visual representations with compound meanings (seasonal behaviour, hunting strategy, creation myths)

- Iconography of all forms persists a rich visual language, hieroglyphics and equivalent which carry deep social instruction with verbal reinforcement

- Stories with self-correction have many-tens-of-millennia consistency, categorically outstripping any other medium we have tested; the aboriginal dream-stories capture humanity's shared storage during its global expansion

- Music is math. Song and dance captured all of the above in self-verifying and correcting fashion for hundreds, hundreds of millennia before that.

And before we hit any complexity arguments, like a hard specification:

a) those formats leveraged human pattern recognition and meat-based compression (ie “every chunk in the 4,000 page OOXML specification is as simple as do-as-Word-did…”)

b) find video of African dance/drumming ceremonies — density is not the issue — a special hoot, a known drumbeat… there were continental signalling networks that terrified Colonial explorers.

There is an argument that writing allows for corrosive decontextualization. Jesus cursed a fig tree. No one learning that tale the old ways would snicker. And, thus, history becomes not a tale, but a grab bag of a child’s letter blocks, you can spell anything you want.

_verandaguy 0 minutes ago
While I agree that those are all ways of preserving knowledge in a somewhat inter-generational way, a few thoughts.

- None of these are as flexible as writing. They're more expressive, more engaging (arguably, at least to some), and might even be good at succinctly saving certain specific types of knowledge.

  Knot systems typically parallel the abacus, having been used for accounting and to keep a record of tax levies. Certainly this isn't the *only* thing they were used for, but this was the case in a number of indigenous civilizations in the Americas, as well as in some Asian civilizations. Certain dances might be good at representing the motions you have to go to while working fields or performing other societal tasks, sure. But a good writing system, in its relative blandness, is incredibly versatile, and can encode not just a wide breadth of information, but also include information about *why* the information is what it is, to the extent that the authors knew.
- Many of these systems tend to either disappear or change over time while relying on largely-unwritten rules, implied social context, and other informational artifacts that themselves don't have a very long shelf life in the event of significant social change. Where destroying the written word (especially in the wake of the invention of the printing press) is a long-term, conscious, coordinated action; dances, songs, and stories can fall victim to everything from fashion, to counterculture, to human migrations, to hostile invasions.

- I don't understand what you mean by things like "stories with self-correction." In many cultures with an oral tradition, the stories do get distorted because of people misremembering, or through conscious changes in response to social conditions at the time of a retelling; if a 1,000-year-old story with no written record backing it is told today, it's almost certainly not the original story, but the culmination of a thousand years and dozens of generations of sometimes-subtle, sometimes not reinterpretation.

programjames 28 minutes ago
[dead]
gallerdude 1 hour ago
1. You can't understand the nuances, but there is a general pattern: new inventions may make us slightly less proficient at specifics, yet more powerful overall

2. Imagine a hunter-gatherer is time-travelled to 2026. You go to a cafe for lunch with him, and he learns that food is cheap, delicious, and abundant. He sees your house and thinks it's amazing compared to his cave. He thinks that 2026 must be absolute paradise. You explain to him: well, kinda, but also not really. Is the hunter-gatherer right?

AlecSchueler 1 hour ago
Alternatively he sees that you live in your house alone and feel lonely all the time. Maybe you have a small family and a few friends but it's nothing compared to the tribal life he knows.

He sees you spend your day working but rarely get to go outside or do anything active. Even when you're not working you sit behind a desk staring at a screen.

He wonders why you bother with all the technology when it made your life worse. Is he right?

gallerdude 5 minutes ago
I agree partially, but that also misses the wonder he would have for: relaxing bathtubs, funny livestreams, wireless earbuds, huge libraries, and even globes.

And yeah, you could make a list of struggles we have today he never did. But that’s kind of my point - it’s complicated.

tadfisher 53 minutes ago
The hunter-gatherer will wonder why you spend so much time working. He only spends 2-3 hours a day gathering and preparing food, maybe an hour maintaining tools and shelter; with the rest dedicated to leisure and social activities.
DiscourseFan 34 minutes ago
As to 2., the whole of this narrative in the Phaedrus is ironic, considering it depends on the written word for its transmission, this dialogue being fully reported by Plato, filled with literary allusion and dramatic setting. Cf. "Plato's Pharmacy," by Derrida, and the work of his student, Bernard Stiegler.
quirkot 1 hour ago
regarding #2: how many serfs came home after re-digging the toilet hole to eat a meal of hand-milled grain bread and old vegetables with the members of the family that survived infancy and thought "life just doesn't get any better than this"? Probably almost all of them
partyficial 1 hour ago
He (zozbot234) could also be agreeing with OP, not disagreeing.

I don't remember phone numbers anymore. If I were to lose my phone, or the cloud, I'm SOL re-adding everyone.

pixl97 1 hour ago
I mean, it's most likely because you have an absolute shit load of numbers/contacts in your phone. In the old days people just had rolodexes filled with numbers and if that disappeared they were just as screwed.

I remember a few numbers of my most direct contacts and depend on backups for everything else.

rrr_oh_man 1 hour ago
> he(zozbot234) could also be agreeing with OP, not disagreeing.

This is how I for one understood this.

jareklupinski 1 hour ago
> What would you think about your life? Would you think "this is horrible" or "this is fine"? Or maybe "I enjoy smell of shit and we're so much better off because we don't have to worry about sunburn"?

id probably start with "who locked us in this sewer?"

hibikir 1 hour ago
That's quite the uncharitable view. Let's try a better one.

Changes in what humans need to remember how to do have, for as far back as we have written records, changed the skills humans hone over time. They change our fitness function. Some of those changes are bad for a while, and then get better. Others are just far better at all times. Others might get rejected. Either way, it takes a long time before we know what a technology does to us: see how cheap printing is directly linked to the wars of religion.

So it's not that AI could not be bad in the short run, or even in the long run: it appears to be the kind of technology that one cannot evaluate without significant adoption, and at that point, we are on this rollercoaster for a while whether we want it or not. See social media, or just political innovation, like liberal democracy or communism. We can make guesses, but many guesses made early on look ridiculous in hindsight, like someone complaining about humans relying on writing.

tbrownaw 1 hour ago
Writings are fixed once written, and don't update themselves as the world changes.

Writings are subject to known biases such as publication bias, and so relying on them reduces the range of what you can consider.

Therefore, writing is bad for the same reasons that this post thinks that AI is bad.

quirkot 1 hour ago
[dead]
xg15 51 minutes ago
> Phaedrus: Yes, Socrates, you can easily invent tales of Egypt, or of any other country.

Looks like even back then, they went "cool story bro" on that text...

hdndjsbbs 1 hour ago
The irony of quoting this particular story without providing any of the necessary context for readers. Truly an aid to reminiscence and not memory.
charonn0 50 minutes ago
> they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

This could be describing an internet argument where both parties google for expert articles that seem to support their point of view without really understanding anything about the subject.

butlike 1 hour ago
It's just a story. Doesn't mean it's wise.
eaglelamp 1 hour ago
You're misinterpreting the quote. Socrates is saying that being able to find a written quotation will replace fully understanding a concept. It's the difference between being able to quote the Pythagorean theorem and understanding it well enough to prove it. That's why Socrates says that those who rely on reading will be "hard to get along with" - they will be pedantic without being able to discuss concepts freely.

Likewise with AI the appearance of reasoning without the substance could lead to boring exchanges of plausible slop rather than meaningful discourse.

pixl97 1 hour ago
I mean Socrates said enough stuff that was wrong or didn't have any scientific understanding either.

Simply put, at humanity-wide scales written information is by far the most important thing you can have. There is a kind of Sorites paradox occurring, where individual knowledge that can be held by one person conflicts with systems knowledge that has to be redundant and easily transferable.

user3939382 1 hour ago
This is actually a great criticism. Post-Enlightenment, we’ve come to worship the written word as a source of truth. It’s not. Thoughts, wisdom, understanding exist primarily (and by necessity primarily) as a continuous structure in our minds. By writing, we distill and collapse this rich continuous structure into a discrete 2D slice. It’s portable, which has many benefits, but we tend to forget that this written word we worship in academia is a low-fidelity copy created out of necessity, not because it’s optimal. In fact, much is lost this way. The hazard is that we often end up testing for mastery of this low-fidelity discretization rather than the knowledge structure it shadows.
DangitBobby 28 minutes ago
We would literally not have access to this criticism without the written word. It would have long been lost to time. And so it is with innumerable other thoughts that happily have been recorded.

Before written word, the uneducated had to just take the words of the (apparently) wise as an authority on all matters, and the only access to their knowledge was through conversation with them. That's gatekeeping and siloing in one go.

And authorities' thoughts themselves often form 2D slices of knowledge once they stop keeping themselves up to date on the SotA. Even if they do keep themselves updated, each conversation you've had with them (or what a layperson can recollect of it) is a thin 2D slice of that knowledge.

I can think of practically no ways that written expertise is not better.

layer8 1 hour ago
On the other hand, books allow us to access a much broader selection of ideas than would otherwise be feasible.

I’m not sure where LLMs lie on that spectrum. They allow faster access, but it also feels more limited.

CamperBob2 1 hour ago
[flagged]
moralestapia 1 hour ago
This is why I come to HN, knowledgeable people enrich the discussion so much with their unique points of view.

Also thanks to Mia (she/her), this was a very interesting read.

reg_dunlop 1 hour ago
Impressive. Thanks for the share.

I was thinking about this recently: The difference between systemic (systematic) learning and opportunistic learning.

AI enables opportunistic learning, or just-in-time (JIT) learning. It gives the impression of infinite knowledge.

Most general concepts are well within the grasp of human understanding.

My curiosity re: the difference between systemic v. opportunistic learning was about the effect of longer-term exposure to, and use of, a tool that enables opportunistic learning.

jbethune 1 hour ago
This was a bit word-salad-y, but I share the same basic concern. I think I worry more about the tendency toward greater and greater cognitive off-loading to LLMs. My sister told me a story the other day about how she caught her plumber using ChatGPT on his phone to fix an issue with her bathroom. I just think it's good for humans to know how to do stuff.
hn_acc1 1 hour ago
Sure, but.. I've been coding for 40 years and I don't know everything. To me, a LOT depends on what the plumber asked chatgpt about. For example: building codes in that city, to figure out what his options are - like, is he allowed to just put in any old toilet, or is there a gpf restriction? What's the replacement part number for faucet XYZ's gasket? Those seem reasonable.

"how do I fix a clogged toilet?" would be bad..

SirMaster 1 hour ago
>like, is he allowed to just put in any old toilet, or is there a gpf restriction?

And if the LLM gets that wrong? It's his job to know the codes or how to go to a reliable resource to find out the correct codes.

Calazon 49 minutes ago
Hopefully he would be using the LLM as an enhanced search engine that can point him to relevant authoritative sources that he can use to fact-check its output. I have done that in the past to some effect.
bethekidyouwant 52 minutes ago
Maybe he just needs a reminder and he’ll have an "oh yeah" moment when he reads the output; maybe he’ll ask it for primary sources. There’s a lot of bad faith going around.
sidrag22 1 hour ago
I cling a bit to a prompt I sent a while ago about just tossing a chopped pepper into a recipe for baked ziti. I had a recipe that I followed fairly tightly, with slight changes each time to see how they would work out. Instead of prompting "when should I add chopped bell pepper?", the small change to "what are my options for when to add chopped bell pepper?" opened up a variety of different methods I could try when returning to that recipe, letting me decide what I like best based on the outcome.

The first prompt style is, I think, a way society drifts incidentally towards a less interesting one, with less variety in solutions. The second, I think, lets people still exercise their potential to try a variety of things and keep that variety.

alpinisme 1 hour ago
Presumably in his jurisdiction he should know what official resources to consult. But the point about it depending on his question is definitely fair.
theappsecguy 1 hour ago
[dead]
dfee 1 hour ago
your sister offloaded to her plumber.

her plumber offloaded to chatgpt.

"i just think it's good for humans to know how to do stuff."

are we talking about your sister or her plumber?

jessetemp 1 hour ago
The plumber obviously. Not everyone needs to know how to be a plumber, but a plumber should know how to be a plumber
danielbln 1 hour ago
Im a software engineer and know how to be a software engineer, yet I find LLMs quite helpful. Why should a plumber be any different.
daveguy 1 hour ago
Because if a plumber moves fast and breaks things, I've got shit all over the place.
enraged_camel 1 hour ago
That, and also the plumber loses their license. So perhaps the solution is professional licensing for software engineers.
bigfishrunning 23 minutes ago
I feel like a licensing process for software engineers would

A) test lots of skills that are common but not universal. I'm thinking javascript trivia here, where I don't write any javascript in my professional capacity as a software engineer; but there are many people who think Software Engineer == Javascript Programmer

B) shine too much of a light on the fact that this industry is full of people who demand high salaries but can't program their way out of a paper bag

davidkhess 24 minutes ago
I think that's coming regardless. AI just might be the perfect storm to bring the timeline in considerably.
c-hendricks 43 minutes ago
Engineer is a protected title in Canada after all
pixl97 1 hour ago
Which part of being a plumber? Was the house installed with something non-typical? Would you rather have them take an additional 30 minutes looking up their technical manual?

Without further knowledge of what was going on it's hard to say why they used ChatGPT.

b2ccb2 1 hour ago
> Would you rather have them take an additional 30 minutes looking up their technical manual?

Yes

NiloCK 54 minutes ago
You know that plumbers charge by the hour, right?
neetle 9 minutes ago
How do you know ChatGPT is referencing the right information if you need to look it up in a manual?
dfee 1 hour ago
the question was rhetorical. but, since you responded – do you think that there are limits to who can or should use ai? if the plumber's use of ChatGPT improved outcomes, isn't that preferable?

some knowledge is likely "cached" in the plumber. maybe he doesn't ask the same question twice. i'm sympathetic to the plumber, but i think your concerns of erosion of knowledge or skill are worth pushing on further.

thwarted 1 hour ago
The issue here is that the sister could have used ChatGPT herself, so why bother hiring the plumber. The plumber has provided less value than was expected. But make no mistake: the value the sister was looking for was to have someone else deal with it, and there's a price that the sister was willing to pay for the service of having someone else deal with it.

In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.

The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?

That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs for them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.

cortesoft 1 hour ago
This is assuming that ChatGPT had everything needed to do the work. If the plumber was asking specific questions, based on their previous experience and knowledge about what needed to be done, the sister might not have been able to get the same result from her use of ChatGPT that the plumber received.

Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means being able to know what to look up and how to use the information retrieved from looking something up.

In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.

thwarted 52 minutes ago
Yes, this is why I mentioned comparative advantage.
askonomm 1 hour ago
So you are saying that a plumber does not in fact need to know how to be a plumber?
ThalesX 1 hour ago
I've always wanted to be more of a handyman but never knew where to start. I used LLMs to create a toolkit and then used it to fix various stuff around the house. I'm at the point where I'm comfortable with beginner projects and moving on to intermediate ones, and I feel like the quality of my work beats that of hired help at my level of competence. So... I'm glad I could off-load some cognition to LLMs and get to the actually useful parts.
Lerc 46 minutes ago
Would you prefer to have:

The plumber who turned up leave without fixing the problem,

The plumber fix something he didn't know how to do by looking up the answer, or

The plumber attempt to fix something he didn't know how to do?

While it's great to have the plumber who knows how to do everything, they are rare and in high demand, so they cost way more than you can afford.

bigfishrunning 25 minutes ago
I would prefer to have a plumber with some kind of reference that doesn't just make shit up 10% of the time -- plumbing mistakes are insanely costly (I once owned a house that was destroyed by a plumbing mistake made by a previous owner)
comboy 1 hour ago
I mean, yes, but LLMs have been making me more cognitively active. I've learned how to do more stuff than I would have without them, and it's a decent multiplier, not some rounding error.

Obviously you can have a plumber who knows his stuff and one who doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.

NiloCK 53 minutes ago
Did she also catch him with a wrench?
anigbrowl 5 minutes ago
So does talking to uninformed people. The size of the group is inversely correlated with deviation from the mean (of IQ, productivity, or whatever proxy for cognitive capability you care to specify).

I'm not sure why this is at the top of the page; it's not that it's wrong, it's just a sequence of truisms.
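The statistical claim above (the bigger the group, the smaller the deviation of its mean from the population mean) is just the standard error shrinking as 1/sqrt(n). A toy simulation, assuming an illustrative IQ-like normal population (mean 100, sd 15; the numbers are hypothetical, not from the comment):

```python
import random

random.seed(0)  # deterministic for reproducibility

def mean_abs_deviation_of_group_mean(group_size, trials=2000):
    """Average |group mean - population mean| over many random groups
    drawn from a normal(100, 15) population."""
    total = 0.0
    for _ in range(trials):
        group = [random.gauss(100, 15) for _ in range(group_size)]
        total += abs(sum(group) / group_size - 100)
    return total / trials

# Larger groups deviate less from the population mean, roughly as 1/sqrt(n).
for n in (1, 10, 100):
    print(n, round(mean_abs_deviation_of_group_mean(n), 2))
```

Running this shows the deviation dropping by roughly a factor of sqrt(10) for each tenfold increase in group size, which is the sense in which the comment's observation is a truism.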

giancarlostoro 15 minutes ago
I think the best way I can put it is probably this: it's the same as cheating off someone else in school; you aren't learning much, are you? AI is the same thing. Don't just cheat, use it to learn instead.
dcre 55 minutes ago
I've never seen an argument like this that, if true, wouldn't also apply to the cognitive offloading we do by relying on culture, by working with others, or working with the artifacts built by others.
0xBA5ED 38 minutes ago
Cognitive offloading via culture has many forms and many of them are not sustainable at all.
bomewish 2 hours ago
Doh. I went in expecting a really cool thesis — because the idea seems somehow intuitive, or at least really intriguing. But I have no clue what I read. Just totally odd and unconvincing. Greenland? Dialectal substrate? The idea is still super intriguing to me though!
chromacity 1 hour ago
Well, at least you know it's not AI-written because it's delightfully weird and evidently about some pet theory of the author. This day and age, that's something to unironically celebrate.
ulf-77723 1 hour ago
I love this! Especially the part about Greenland. For quite some time the dashes were a good indicator that a text was written by AI, but now the best option is to write in a more human way: a little less polished, even a bit weird. At least the message gets across.
asdfman123 1 hour ago
While I understand what the paper is saying, I'm not sure if what I read was written by someone who is smarter than me and naturally climbs higher up the abstraction tree, or by someone who just wants to write really smart-sounding things.

Either way, I think there's a much simpler way to express what she's trying to say: offloading thinking to AI is bad because it's less flexible and doesn't easily update its reasoning with new information.

layer8 1 hour ago
It’s a blog post, not a paper.
gobdovan 1 hour ago
By the logic that knowing today's news is fundamental, there really is no point in reading any book more than six months old. If Einstein woke up from a coma, he'd be useless, since he doesn't even know who won the World Cup. For real now: if an AI can help you solve a problem using 2,000 years of human logic, does it really matter if it's "skewed" away from a political shift that happened three weeks ago?

I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So probably I'll misunderstand most of what they say anyway.

Manuel_D 57 minutes ago
> In early 2026, the USA prepared to invade Greenland and, therefore, the EU4. Only a few months prior to that it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as “hypothetical”, “fake news”, or “impossible”. This also affects new models like Gemini 3 Pro, GLM-5 or GPT-5.3-codex5.

Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.

A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search would it not? Lots of people type the same question into google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.

darepublic 38 minutes ago
I agree that Google served this role even before LLMs. But these days people delegate their reasoning and brainstorming to the computer, not just lookup. And beyond our generation are those who will have grown up doing this. So I think the concern is justified.
djrorkrmrk 49 minutes ago
A few years ago, someone blew up a pipeline in the EU. Before that, some people lied about medical stuff.

AI is just the current scapegoat.

thepasch 1 hour ago
AI-assisted, I can see. I believe it doesn't have to be that way, though. If you use AI as a grounding tool - essentially something that can take your stream of consciousness and parse it into a series of concrete and pointed search terms for real-time research, instead of falling back on what's in the weights - then it's honestly hard to think of a technology with more potential to be useful in the history of the species. It gives you much more direct access to both your unknown unknowns and your unknown knowns.

That is, of course, provided that you pay attention to whether it actually does the research. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, or how to keep nonsense apart from facts when both are presented with equal conviction. That's not a user problem, it's an education problem.

drusepth 42 minutes ago
This is absolutely something to potentially be worried about, but one thing I never see highlighted in critiques of AI-assisted cognition is that some elements of physiology may not actually be biologically necessary if they can be fully supplanted by some replacement (in this case, new tools). I can't traverse as much land on foot as my ancestors did (my muscles are weaker, my endurance is less, etc), but I can travel even further than they could by car/plane/etc.

Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.

lexandstuff 31 minutes ago
> I can't traverse as much land on foot as my ancestors did, but I can travel further by car/plane/etc

Which is partially how we found ourselves in the midst of an obesity epidemic.

mayankd 31 minutes ago
The cognitive effects are going to be so divergent. While the avid learners will learn knowledge and skills on the fly exponentially faster, the populace offloading thinking to the AI models will see unprecedented cognitive decline. This is similar to the effect that the internet had on knowledge retention but this time on critical thinking
MillionOClock 1 hour ago
Say someone uses AI, treating it as if it were a developer (probably not recommended today, given the risk of errors), working and speaking with it as a kind of product manager or senior engineer who only makes architectural decisions. I wonder what difference it would really make. Sure, the person might not be as good a developer anymore, but how is that different from being an ordinary product manager once AI truly is good enough for a developer role? I'm not saying I know the answer to this question, but it's something I genuinely wonder, and I think the same kind of questioning can apply to broader domains.
kmaitreys 1 hour ago
Why and how do you think it applies to broader domains?

Children learning in schools should not become product managers. If they do, what exactly is the "product" that they are "managing"? Reducing everything to, and looking at everything from, a corporate viewpoint is bizarre.

MillionOClock 41 minutes ago
I'm not saying this should apply to every single domain. This isn't about products or management; I would frame it like this: I notice that many cases where we worry about the impact of AI are really about the replacement of activities that some humans already aren't doing in today's society. If we're worried we'll get worse at job X once we stop doing job X, why aren't we worried about people who never did job X in the first place? If we're worried about people not doing jobs anymore, why aren't we worried about the human development of people wealthy enough never to work again? I wouldn't assume that someone who won the lottery will see their life become uninteresting, or suffer cognitive decline. It could happen, but you can also see a path where the person simply chooses the activities they always wanted to do, and keeps learning and exploring without the burden of everyday constraints. People still play chess even though machines have beaten us for decades, just because they enjoy it.

Regarding education I think AI is a huge revolution waiting to happen. Usual courses have become boring? Have future super powerful AI generate per student highly personalized programs, create bespoke video games where succeeding can only happen once the student has validated all the notions you wanted them to validate etc.

darepublic 42 minutes ago
The original "person who most of humanity talked to" was, I reckon, google dot com
YackerLose 1 hour ago
A real artificial intelligence would be capable of independent and original thought. What we have today are mere plagiarism factories. They need to be called out for what they are.
adamtaylor_13 1 hour ago
One thing that's always been true with human communication that is becoming increasingly obvious to me through my interactions with LLMs is the art of asking a good question.

The framing of questions massively affects the results you get from discussion with humans, and I'd argue it's even more pronounced with LLMs.

alfalfasprout 1 hour ago
Yep. And this is why as hard as AI companies are pushing that these tools can be a replacement for expertise, it's ironic that the experts are the ones that often get the highest ROI because they know how to converse about the relevant subjects with a high degree of precision (and know what to look for, what to challenge, etc.).
steve_adams_86 2 hours ago
"Cognitive inbreeding" is an interesting (though maybe not entirely accurate) term for something I dislike a lot about LLMs. It really is a thing. You're recycling the same biases over and over, and it can be very difficult to tell if you don't review and distill the contents of your discourse with LLMs. Especially true if you're only using one.

I do think there's a solution to this - kind of - which dramatically reduces the probability of recycling those biases, even while allowing for broad inductive ones. And that's to ask questions with narrower scopes, and to ensure you're the one driving the conversation.

It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases will be more likely to take over.

When people ask LLMs to build the world, they will do it in extremely biased ways. This makes sense. When you ask specifics about narrow topics, this is still a problem, but a greatly mitigated one.

I suppose what's happening is an inversion of cognitive load, so the human is taking on more and selecting bias such that the LLM is less free to do so. This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I think I generally agree that these are cognitive muscles that need exercising, and allowing an LLM to do it all for you is potentially harmful. But I don't think we're trapped with the outcome, we do have agency, and with care it's a technology that can be quite beneficial.

Retr0id 2 hours ago
One of my "let's try out this vibecoding thing" toy projects was a custom programming language. At the time, I felt like it was my design, which I iterated on through collaborative conversations with Claude.

Then I saw someone's Show HN post for their own vibecoded programming language project, and many of the feature bullet points were the same. Maybe it was partly coincidence (all modern PLs have a fair bit of overlap), but it really gave me pause, and I mostly lost interest in the project after that.

Ucalegon 1 hour ago
That's the thing about a normalization system: it is going to normalize outputs, because it's not built to output uniqueness; it's built to winnow uniqueness down to a baseline. That is good in some instances, assuming the baseline is correct, but it also closes the aperture of human expression.
Retr0id 49 minutes ago
I agree in a "the purpose of a system is what it does" sense but I'm not sure they're inherently normalization systems.
Ucalegon 17 minutes ago
Token selection is based on normalization. Even if you train a model to produce outlier answers, in that very process you are biasing toward a subset of outliers, which is inherently normalizing.
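As a toy illustration of this point (not any particular model's decoder), softmax normalization over logits concentrates probability mass on the mode, and raising the sampling temperature only flattens the distribution without changing the ranking:

```python
import math

def softmax(logits, temperature=1.0):
    """Normalize raw logits into a probability distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary: one "consensus" token and a few "outlier" tokens.
logits = [5.0, 2.0, 1.5, 1.0]

probs = softmax(logits)
# The mode dominates: the consensus token gets ~91% of the mass.

hot = softmax(logits, temperature=2.0)
# Higher temperature flattens the distribution, but the consensus
# token still ranks first -- outliers stay outliers, just less
# suppressed. Sampling remains biased toward the normalized center.
```

Even a model tuned for "creative" (high-temperature) sampling still draws from this normalized distribution, which is one way to read the comment's point.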
demorro 1 hour ago
This Dynamic Dialectical Substrate sounds a lot like Pirsig's Metaphysics of Quality to me, which I think is neat.
chunky1994 2 hours ago
Does anyone use LLMs in such a manner that they believe it always has the most up to date information (without web search tools?).

Isn't this whole thesis negated by the fact that tool-calling web search exists? This just feels like a whole lot of words to say: don't treat an LLM as an always-up-to-date, infallible statistical predictor.

karmakurtisaani 1 hour ago
> Does anyone use LLMs in such a manner that they believe it always has the most up to date information (without web search tools?).

Probably just 95% of the users. You know, the non-techies.

Peritract 1 hour ago
The AI hype and overstatement of capabilities is at least as strong amongst the 'techies' as the people they treat as more credulous than themselves.
bookofjoe 1 hour ago
sidrag22 1 hour ago
Without a doubt, yes. I'd encourage you to try a session on a free ChatGPT account, asking questions you think a parent or someone unfamiliar with the space would probably ask.

It will not only answer confidently and incorrectly, but it will not web-search in obvious scenarios where it should.

The words here aren't meant as a warning for people in this type of community; they're more for the general public that doesn't grasp the tools they are using, the people who won't ever wander across this article.

This, I think, is a huge reason we really need to jump into LLM-basics classes or something similar as soon as possible. People that others consider "smart" will talk about how great ChatGPT is; then someone will try it out because the person they respect must be right, hop on the free model, get an absurdly inferior product, and not grasp why. They'll ask something that requires a web search to augment the info, not get that search, and assume the confidently incorrect agent is correct.

The thesis also isn't entirely about lacking modern info at query time; it's more scattered than that. Someone asks what product they should use to mash potatoes, and a tool is suggested. Everyone who asks then receives that same recommendation, and instead of a range of different styles of mashing potatoes, we all drift toward one style, and the variance in how food is prepared is slowly lost.

layer8 1 hour ago
Most users probably don’t ask themselves the question and simply are unwittingly affected by how the model happens to be wired.
xlii 1 hour ago
Gemini can be asked about current events. I was quite surprised it was able to give structured information about a live boxing event in real time.
vorticalbox 1 hour ago
Most agent/chats have access to web search. I’m not overly surprised that it can do it but it is very nice when it actually works.
amluto 1 hour ago
Why do you expect web search tool calls to continue to be useful in the presence of modern AI slop farms, AI-assisted SEO, and search engines largely turning themselves into AI-based question-answering engines?

(At present, Gemini's question-answering capability (which Google kind of makes its users use) seems extremely error-prone -- much worse than competing LLMs when asked the same question.)

fl4regun 1 hour ago
I agree with you, this is a huge concern, and we are still in an age where most content on the internet isn't ai generated yet. What about 10 years from now? We have many instances of people writing posts on reddit or uploading videos and blogs using AI generated text. What happens when that is a significant percentage of content?

I recently saw a video discussing a researcher who published a fake scientific article about a fictitious disease, with bogus author names, even a warning IN the article itself that stated "This is not a real disease, this article is not real" (paraphrasing) but still AI ended up picking up this article and serving information from it as if it was a real disease.

It even got cited in papers (which were later retracted, of course), but the fact that those papers got published in the first place is a serious issue.

amluto 38 minutes ago
> I recently saw a video discussing a researcher who published a fake scientific article about a fictitious disease, with bogus author names, even a warning IN the article itself that stated "This is not a real disease, this article is not real" (paraphrasing) but still AI ended up picking up this article and serving information from it as if it was a real disease.

Isn’t a lot of pretraining done by chopping sources up into short-context-window-sized pieces and then shoving them into the SGD process? The AI-in-training could be entirely incapable of correlating the beginning with the end of the article in its development of its supposed knowledge base.
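A minimal sketch of the failure mode I'm describing, assuming naive fixed-window chunking (real pretraining pipelines are more elaborate, with document packing and shuffling, so this is only an illustration):

```python
def chunk(tokens, window):
    """Naively split a token sequence into fixed-size chunks,
    a toy stand-in for context-window packing during pretraining."""
    return [tokens[i:i + window] for i in range(0, len(tokens), window)]

# A hypothetical article whose opening disclaimer qualifies a
# fabricated claim much later in the text.
doc = (["DISCLAIMER:", "this", "disease", "is", "fictitious."]
       + ["filler"] * 20
       + ["The", "disease", "causes", "fever."])

chunks = chunk(doc, 8)
# The disclaimer lands in the first chunk and the fabricated claim
# in the last; a model trained on independently shuffled chunks
# never observes them together.
```

Under that assumption, the disclaimer and the claim it negates are simply never in the same training example, so the model has no mechanism to connect them.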

blackqueeriroh 50 minutes ago
This is bad science. Horrifically bad science.
cyanydeez 18 minutes ago
Do we think AI is similar to being rich, but without all that cash? I mean, they can basically offload most things to other people to think about.
contingencies 57 minutes ago
Strong disagree. The "AI-Assisted Cognition" phrase is loaded.

Would you attempt, for example, to simultaneously modify for available ingredients and number of diners, and time-optimize the prep method, for a recipe you've never cooked before, if you were following an old-school cookbook? No. You'd have to be a pretty solid chef to try all of that at once.

Using AI, you might branch out confidently into new areas, executing all of these modifications simultaneously, and even adapting the output for a specific audience or language.

This toy example shows an important property of AI as a decision-support system, a class that is well studied in the military domain: using these systems, we build the confidence to act in unfamiliar domains, thereby extending our reach. From that experience we can learn more. The fact that the learning may occur through the experience, i.e. during or after it rather than beforehand, is secondary. It's still there. The fact that we didn't know the language the AI translated into for our chef is totally irrelevant.

Sitting comfortably at the effective apex of millions of years of human cognitive and technology development with the entire world's knowledge at our fingertips, every day we can extend confidence in novel domains through AI, and enjoy it. We should be feeling pretty damn "developed".

Rote formalism and fixed paths in pedagogy are gone: good riddance. This is the hacker age.

measurablefunc 1 hour ago
Calculators endanger the development of mental arithmetic skills as well.
jojomodding 44 minutes ago
And we collectively decided that it's fine, you don't actually need to be able to solve 1234×5678 in your head.

But I am not sure you can compartmentalize the specific skill we can out-source to AI. I would not agree with "you don't need to be able to think in your head."

layer8 1 hour ago
Indeed, which is why it’s preferable to only start using them after some arithmetic maturity is achieved.
add-sub-mul-div 50 minutes ago
Right, which is why people make bad money decisions in everyday scenarios. People don't pull out their calculator at the grocery store, but they also never had to get good at doing simple math in their head due to the calculator.
SegfaultSeagull 2 hours ago
It’s a bit ironic that the author includes an AI generated audio version of the article, you know, so we don’t have to read it.
yakattak 2 hours ago
Sounds great for people who are seeing impaired.
layer8 57 minutes ago
Or driving. Or working around the house.
BigTTYGothGF 1 hour ago
Don't those people tend to have their own setups to do that sort of thing?
Argonaut998 1 hour ago
That's what screenreaders are for
kazinator 1 hour ago
> Speaking and discussing with other humans [who aren't incessantly blathering about AI] is obviously the most effective way to mitigate these problems.

Slightly FTFY.

cowlby 2 hours ago
Sometimes it feels like as developers we live in a bubble. Don't most jobs endanger human development? I can't help but think about all the billions of factory, food-service, and assembly-line jobs. Do these not threaten "human development"? My cynical take would be that all AI endangers is "white collar" work.
godot 2 hours ago
I think you're not wrong and I also think the author is not wrong -- and this just may be how technology/civilization/humans are going to change inevitably?

For example, a possible trajectory: many years in the future, because human thinking has degraded due to AI-assisted cognition, most people will get a chip implant and AI assistance becomes integrated with the brain. Basically the same pattern as most everything else: technological augments solve for the new reality. I'm not saying this will happen, just that it's a possible outcome.

geitir 49 minutes ago
Doesn’t that sound lovely
LetsGetTechnicl 1 hour ago
Well no shit