So I get the frustration that "ai;dr" captures. On the other hand, I've also seen human writing incorrectly labeled AI. I wrote (using AI!) https://seeitwritten.com as a bit of an experiment on that front. It's basically a little keylogger that records your composition of the comment, so someone can replay it and see that it was written by a human (or a very sophisticated agent!). I've found it a little unsettling, though, having your rewrites and false starts available for all to see, so I'm not sure I like it.
Wrote about this before [0] but my 2c: you shouldn't pause and you should keep using them because fuck these companies and their AI tools. We should not give them the power to dictate how we write.
LLMs have a bias towards expertise and confidence due to the proportion of books in their training set. They also lean towards an academic writing style for the same reason.
All this to say, if LLMs write like you were already writing, it means you have very good foundations. It's fine to avoid them out of fear, but you have this Internet stranger's permission to use your em dash and pause to think, "Oh yeah, I'm the reference for writing style."
Especially if it's unsupervised training
You'll get over it.
"The colors we see—like blue, green, and hazel—are the result of Tyndall scattering."
"Several interlocking cognitive biases create a "safety net" around the familiar, making the unknown—even if objectively better—feel like a threat."
"A retrograde satellite will pass over its launch region twice every 24 hours—once on a "northbound" track and once on a "southbound" track—but because of the way Earth rotates, it won't pass over the exact same spot on every orbit."
"Central, leverages streaming telemetry to provide granular, real-time performance data—including metrics (e.g., CPU utilization, throughput, latency), logs, and traces—from its virtualized core and network edge devices."
"When these conditions are met—indicating a potential degradation in service quality (e.g., increased modem registration failures, high latency on a specific Remote PHY)—Grafana automatically triggers notifications through configured contact points (e.g., Slack, PagerDuty)."
After collecting these samples, I've noticed they are especially likely in prompts that ask the model to explain something or write descriptive text. In short queries there isn't enough text overall to trigger the effect.
I wish that were true, but I feel a little bit vindicated nevertheless
Now you can ask for outlandish things at work knowing your boss won’t read it and his summariser will ignore it as slop — win.
It's a matter of style preference. I support spaces around em-dashes — particularly for online writing, since em-dashes without spaces make selecting and copying text with precision an unnecessary frustration.
By the way,what other punctuation mark receives no space on at least one side?Wouldn't it look odd,make sentences harder to read,and make ideas more difficult to grok?I certainly think so.Don't you? /s
AI might suck, but if the author doesn't change, they get categorized as a lazy AI user, unless the rest of their writing is so spectacular that it's obvious an AI didn't write it.
My personal situation is fine though. AI writing usually has better sentence structure, so it's pretty easy (to me at least) to distinguish my own writing from AI because I have run-on sentences and too many commas. Nobody will ever confuse me with a lazy AI user, I'm just plain bad at writing.
I also tend to way overuse parentheses (because I tend to wander in the middle of sentences), but they haven't shown up much in llms so /shrug.
There's your trouble. The real problem is that most internet users set their baseline for "standard issue human writing" at exactly the level they themselves write. More and more people don't draw a line between casual and professional writing, and so balk at perfectly normal professional writing as potentially AI-driven.
Blame OS developers for making it easy—SO easy!—to add all manner of special characters while typing if you wish, but the use of those characters, once they were within easy reach, grew well before AI writing became a widespread thing. If it hadn't, would AI be using it so much now?
No, you are writing for people who see LLM-signals and read on anyway.
Not sure that that's a win for you.
\s
It’s literal content expansion, the opposite of gzip’ing a file.
It’s like a kid who has a 500 word essay due tomorrow who needs to pad their actual message up to spec.
I agree that reading an LLM-produced essay is a waste of time and (human) attention. But in the case of overly-verbose human writing, it's the human that's wasting my time[1], and the LLM is gzip'ing the spew.
[1] Looking at you, New Yorker magazine.
Anyway, it's at https://www.jimkleiber.com/p35/ if you wanna check it out. All sessions are posted as blog posts, I think there's a link to the ebook (pay-what-you-want), and there may be audio (I recorded myself reading the writing right after each session).
If you check it out, please let me know :-)
Fun! I'd make the playback speed something like 5x or whatever feels appropriate; I think nobody truly wants to watch those at 1x.
https://news.ycombinator.com/item?id=557191
I can't believe etherpad lost this item...
edit: oh, I found the one I was looking for: https://byronm.com/13sentences.html
There are a lot of people like me in software. I’m tempted to say we are “shouted down”, but honestly it’s hard to be shouted down when you can talk circles around some people. But we are definitely in a minority. There are actually a lot of parallels between creative writing and software and a few things that are more than parallel. Like refactoring.
If you’re actually present when writing docs instead of monologuing in your head about how you hate doing “this shit”, then there’s a lot of rubber ducking that can be done while writing documentation. And while I can’t say that “let the AI do it” will wipe out 100% of this value, because the AI will document what you wrote instead of what you meant to write, I do think you will lose at least 80% of that value by skipping out on these steps.
≈
The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. (Brandolini)
They want all this artisanal hand-written prose under the candlelight with the moon in the background. And you are a horrible person for using AI, blablabla.
But ask for feedback? And you get Inky, Blinky, Pinky, and Clyde. Aka ghosted. But boy, do they tell a good story. Just ain't fucking true.
Counter: companies deserve the same amount of time invested in their application as they spend on your response.
I've noticed that attitude a lot. Everyone thinks their use of AI is perfectly justified while the others are generating slop. In gamedev it's especially prominent: artists think generating code is perfectly ok but get an acute stress response when someone suggests generating art assets.
[1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...
Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs) it generates so much noise you need another AI to summarize it for you.
- Proliferation of utils/helpers when there are already ones defined in the codebase. Particularly a problem for larger codebases
- Tests with bad mocks and bail-outs due to missing things in the agent's runtime environment ("I see that X isn't available, let me just stub around that...")
- Overly defensive off-happy-path handling, returning null or the semantic "empty" response when the correct behavior is to throw an exception that will be properly handled somewhere up the call chain (see the sketch after this list)
- Locally optimal design choices with very little "thought" given to ownership or separation of concerns
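To make the third bullet concrete, here's a minimal sketch of that pattern in Python; the function names and the header-lookup scenario are hypothetical, purely for illustration:

    # Hypothetical illustration of the "defensive null" anti-pattern vs.
    # letting an exception propagate to a handler up the call chain.

    def get_user_agent_defensive(headers: dict) -> str:
        # LLM-style: swallow the problem and return a semantic "empty" value,
        # silently hiding a malformed request from every caller above.
        return headers.get("User-Agent", "")

    def get_user_agent(headers: dict) -> str:
        # Often the better behavior: let the KeyError escape so a single
        # request-level error handler can reject the malformed request.
        return headers["User-Agent"]

The first version compiles and passes happy-path tests, which is exactly why it slips through review.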
All of these can pretty quickly turn into a maintainability problem if you aren't keeping a close eye on things. But broadly I agree that line-per-line frontier LLM code is generally better than what humans write and miles better than what a stressed-out human developer with a short deadline usually produces.
But of course it doesn't do that because we can't trust it the way we do a traditional compiler. Someone has to validate its output, meaning it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.
Communication is for humans. It's our super power. Delegating it loses all the context, all the trust-building potential from effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.
from the preface of SICP.
I don’t think either is inherently bad because it’s AI, but it can definitely be bad if the AI is less good at encoding those ideas into their respective formats.
Yesterday I left a code review comment that someone asked if AI wrote it. The investigation and reasoning were 100% me. I spent over an hour chasing a nuanced timezone/DST edge case, iterating until I was sure the explanation was correct. I did use Codex CLI along the way, but as a power tool, not a ghostwriter.
The comment was good, but it was also “too polished” in a way that felt inorganic. If you know a domain well (code, art, etc.), you start to notice the tells even when the output is high quality.
Now I’m trying to keep my writing conspicuously human, even when a tool can phrase it perfectly. If it doesn’t feel human, it triggers the whole ai;dr reaction.
Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.
I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.
i think the real line is about whether the AI output is the product or a tool to build the product. AI-generated code that ships isn't really the product, the behavior it creates is. but AI-generated art that ships is the product in a way the user directly perceives. the uncanny valley isn't in the quality, it's in the relationship between the creator and the output.
But to your users, the visual identity is the identity of the game. Do you really want to outsource that to AI?
I would have been more okay with AI-generated code; it would likely have been more objective and less verbose. I refused to review something he obviously hadn't put enough effort into himself to even do a POC. When I asked for his own opinion on the different solutions evaluated, he didn't have one.
It's not about the document per se, but the actual value of this verbose AI-generated slop. Code is different: even if poorly reviewed, it's still executable and likely to produce output that satisfies the functional requirements.
Our PM is now evaluating tools to generate documentation for our platform by interpreting source code. It includes descriptions of things like what the title is and what the back button is for, but wouldn't tell you the valid inputs for creating a new artefact. This AI-generated doc is in addition to our human-made Confluence docs, so it's likely to add spam and reduce the quality of search results for useful information.
No doubt, but I think there's a bit of a difference between AI generating something utilitarian vs something expected to have at least some taste/flavor.
AI generated code may not be the best compared to what you could hand craft, along almost any axis you could suggest, but sometimes you just want to get the job done. If it works, it works, and maybe (at least sometimes) that's all the measure of success/progress you need.
Writing articles and posts is a bit different - it's not just about the content, it's about how it's expressed and did someone bother to make it interesting to read, and put some of their own personality into it. Writing is part communication, part art, and even the utilitarian communication part of it works better if it keeps the reader engaged and displays good theory of mind as to where the average reader may be coming from.
So, yeah, getting AI to do your grunt work programming is progress, and a post that reads like a washing machine manual can fairly be judged as slop in a context where you might have hoped for/expected better.
People are happy to shovel shit if they can get away with it.
It's worth pointing out that AI is not a monolith. It might be better at writing code than making art assets. I don't work with gaming, but I've worked with Veo 3, and I can tell you, AI is not replacing Vince Gilligan and Rhea Seehorn. That statement has nothing to do with Claude though...
Mind you this person is an excellent writer, they had great success with ghost writing and running a small news website where they wrote and curated articles. But for some reason the opportunity for Claude to write stuff they can never have the time for is too great for them to ignore.
I don't care if you used AI for 99.99% of your research, but when I read your content it should be written by you. It's why I never take any article on LinkedIn seriously; even before AI, they all lacked any personalization.
So when someone wants to know something about the topic that my website is focused on, chances are it will not be the material from the website they see directly, but a summary of what the LLM learned from my website.
Ergo, if I want to get my message across, I have to write for the LLM. It's the only reader that really matters, and it is going to have its stylistic preferences (I suspect bland, corporate, factual, authoritative, controversy-avoiding; this will be the new SEO).
We meatbags are not the audience.
A simple query like "Ford Focus wheel nut torque" gives pages with blah blah like:
> Overview Of Lug Nut Torque For Ford Focus
> The Ford Focus uses specific lug nut torque to keep wheels secure while allowing safe driving dynamics. Correct torque helps prevent rotor distortion, brake heat transfer issues, and wheel detachment. While exact values can vary by model year, wheel size, and nut type, applying the proper torque is essential for all Ford Focus owners.
And the site probably has this text for each car model.
Somehow the ways the ad industry destroyed the Internet got very varied...
And I know it's different, but I'm surprised the overall sentiment is so pessimistic on HN. So maybe we will communicate through yet another black box on top of the hundreds of existing ones, but probably mostly when seeking specific information and wanting to get it efficiently. Yes, this one is different; it makes human contact over text much more difficult. But a big part of all of this was happening already for years, and now it's just widely available.
When posting on HN you don't see the other person typing, like you would with the talk command on Unix, but it is still meaningful.
Ideally we would like to preserve what we have untouched and only have new stuff as an option, but it's never been like this. Did we all enjoy Windows 3.11? I mean, it was interesting.. but clicking.. so inefficient (and of course there are tons of people who will likely scream from their GUIs that it still is and that Windows sucks; I'd gladly join them, but we have our keyboard bindings and other operating systems, and get by somehow).
Perception of new things stays relatively constant over the years though.
And the thought that we’d all be prancing playing guitars by the river on UBI when that happens. No, we just won’t be born anymore.
I no longer feel joy in reading things, as most writing now seems the same and pale to me, as if everyone is putting down thoughts in the same way.
Having your own way of writing always felt personal; it was how you expressed your feelings most of the time.
The saddest part for me is that I can no longer understand someone's true feelings (which were hard to express in writing anyway, since articulation is hard).
We see it being used by our favourite sports person in their retirement post, or by someone who has lost a loved one, or by someone who just got their first job, and it's just sad that we can no longer have those old pre-AI days back again.
However, I agree that ordinary people filtering and flattening their communication into a single style is a great loss.
Personally I find it super helpful to discuss stuff back and forth: It takes a view, explores the code and brings some insight. I take a view and steer the analysis. And together we arrive at a conclusion.
By that point the AI’s got so much context it typically does a great job summarising the thought process for wider discussion so I can tweak and polish and share.
See it as code review, reflection, getting a bird's-eye view.
When I document my code, I often stop in between and think: that implementation detail doesn't make sense / is over-convoluted / can be simplified / seems to be lacking a sanity check, etc.
There is also the art of subtly injecting humor into it with, e.g., code examples.
Doesn't ai;dr kind of contradict ai generated documentation? If I want to know what claude thinks about your code I can just ask it. Imo documentation is the least amenable thing to ai. As the article itself says, I want to read some intention and see how you shape whatever you're documenting.
(AI adding tests seems like a good use, not sure what's meant by scaffolding)
> Why should I bother to read something someone else couldn't be bothered to write?
and
> I can't imaging writing code by myself again, specially documentation, tests and most scaffolding.
So they expect nobody to read their documentation.
Yes, exactly. Because AI will read it and learn from it, it's not for humans.
I'll want to communicate something to my team. I'll write 4 bullet points, plug it into an LLM, which will produce a flowing, multi paragraph e-mail. I'll distribute it to my co-workers. They will each open the e-mail, see the size, and immediately plug it into an LLM asking it to make a 4 bullet summary of what I've sent. Somewhere off in the distance a lake will dry up.
a large part of the business models of these systems is going to consist of dealing with these systems... it's a wonderful scheme
I can take the other person's prompt and run it through an LLM myself and proceed from there.
> Why should I bother to read something someone else couldn't be bothered to write?
Interesting mix of sentiments. Is this code you're generating primarily as part of a solo operation? If not, how do coworkers/code reviewers feel about it?
Shouldn’t we bother to write these things?
A blog post is for communicating (primarily, these days) to humans.
They’re not the same audience (yet).
I don't have any solutions though. Sometimes I don't call out an article - like the Hashline post today - because it genuinely contains some interesting content. There is no doubt in my mind that I would have greatly preferred the post if it were just whatever the author prompted the LLM with rather than the LLM output, and it would have better communicated their thoughts to me. But it also would have died on /new and I never would have seen it.
This is the root cause of the problem: labeling all things as just "content". "Content" entering the lexicon marks a mind shift in people. People are not looking for information, or art, just content. If all you want is content, then AI is acceptable. If you want art, it falls short.
For me too, and for writing it has the upside that it's sooo relaxing to just type away and not worry much about the small errors anymore.
It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.
I cry every time somebody tries to frame it one dimensionally.
If someone wants me to read a giant text generated from a small, poor prompt, I don't wanna read it.
If someone wants to fix that by increasing the effort, writing a better prompt, and expressing the ideas better, I'd rather read that prompt than the LLM output.
I think using AI for writing feedback is fine, but if you're going to have it write for you, don't call it your writing.
Example (minus the final review): https://chatgpt.com/share/698e417a-4448-8011-9c29-12c9b91318...
I still think that the final review written by ChatGPT is a bit off. But at least, it asked mostly the right questions.
These blanket binary takes are tiresome. There is nuance and rough edges.
Because writing is a dirty, scratched window with liquid between the frames and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.
Outsourcing thinking is bad. Using an LLM to assist in communicating thought is or at least can be good.
The real problem I think the author has here is that it can be difficult to tell the difference, and therefore difficult to judge if it is worth your time. However, I think author/publisher reputation is a far better signal than looking for AI tells.
If you use an LLM to generate the ideas and justification and formatting and etc etc, you're just delegating your part in the convo to a bot.
Homogenization is good for milk, but not for writing.
Hardly seems mutually exclusive. Surely you should generally consider the reputation of someone who posts LLM-responses (without disclosing it) to be pretty low.
A lot of people don’t particularly want to waste time reading the LLM-responses to someone else’s unknown/unspecified prompts. Someone who would trick you into that doesn’t have a lot of respect for their readers and is unlikely to post anything of value.
Don’t get me wrong. I don’t want to read (for example) AI fiction because I know there’s no actual mind behind it (to the extent that I can ever know this).
But AI is going to get better and the only thing that’s going to even work going forward is to trust publishers and authors who give high value regardless of how integral LLMs are to the process.
I keep seeing this and I don't think I agree. We outsource thinking every day. Companies do this every day. I don't study the weather myself; I check an app and bring an umbrella if it says it's gonna rain. My team trusts each other to do some thinking in their own areas and present bits sideways/upwards. We delegate lots of things. We collaborate on lots of things.
What needs to be clear is who owns what. I never send something I wouldn't stand by. Not in a correctness sense (I have been, am, and likely will be wrong on any number of things) but more in a "yeah, that is my output, and I stand by it now" kind of way. Tomorrow it might change.
Also remember that google quip "it's hard to edit an empty file". We have always used tools to help us. From scripts saved here and there, to shortcuts, to macros, IDE setups, extensions and so on. We "think once" and then try not to "think" on every little detail. We'd go nowhere with that approach.
There's a strong overlap between the things which are bad (unwise, reckless, unethical, fraudulent, etc.) in both cases.
> We outsource thinking everyday. [...] What needs to be clear is who owns what.
Also once you have clarity, there's another layer where some owning/approval/delegation is not permissible.
For example, a student ordering "make me a 3 page report on the Renaissance." Whether the order went to another human or an LLM, it is still cheating, and that wouldn't change even if they carefully reviewed it and gave it a stamp of careful approval.
However, if I had an idea and just fobbed the idea off to an LLM who fleshed it out and posted it to my blog, would you want to read the result? Do you want to argue against that idea if I never even put any thought into it and maybe don’t even care?
I’m like you in this regard. If I used an LLM to write something I still “own” the publishing of that thing. However, not everyone is like this.
ai;dr is what I'm going to start saying; it's just frustrating to see.
I don't understand how they can think it's a good idea. I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their head than this slop.
I haven't even really tried to use LLMs to write anything from a work context because of the ideas you talk about here.
IMO it’s lazy and bad for expressive writing, but for certain things it’s totally fine.
> I need to know there was intention behind it. [...] That someone needed to articulate the chaos in their head, and wrestle it into shape.
If forced to choose, I'd sooner use coherence as evidence of care than as a refutation of humanity.
Conclusion:
Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called 'Argumentum ad machina'.
Premises:
1. The validity of a logical argument is determined by the truth of its premises and the soundness of its inferences, not by the identity of the entity presenting it.
2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.
3. The phrase 'that's AI-generated' functions as a dismissal based on source rather than content.
Assumptions:
1. AI-generated arguments can have true premises and sound inferences
2. The genetic fallacy is a legitimate logical error to avoid
3. Source-based dismissals are categorically inappropriate in logical evaluation
4. AI should be treated as equivalent to any other source when evaluating arguments
How we can tell that this wasn't written by an LLM.
At this point, I'm not sure whether you're a clawdbot running amok..
Like always we have to lean on evaluating based on quality. You can produce quality using an LLM, but it's much easier to produce slop, which is why there's so much of it now.
https://www.thenewatlantis.com/publications/one-to-zero
Semantic information, you see, obeys a contrary calculus to that of physical bits. As it increases in determinacy, so its syntactical form increases in indeterminacy; the more exact and intentionally informed semantic information is, the more aperiodic and syntactically random its physical transmission becomes, and the more it eludes compression. I mean, the text of Anna Karenina is, from a purely quantitative vantage of its alphabetic sequences, utterly random; no algorithm could possibly be generated — at least, none that’s conceivable — that could reproduce it. And yet, at the semantic level, the richness and determinacy of the content of the book increases with each aperiodic arrangement of letters and words into coherent meaning.
Edit (add-on): In other words, it is impossible for an LLM (or monkeys at keyboards [0]) to recreate Tolstoy because of the unique role our minds play in writing. The verb "writing" hardly appears to apply to an LLM when we consider the function it is actually performing.
But of course, like producing code with AI, it's very easy to produce cheap slop with it if you don't put in the time. And, unlike code, the recipient of your work will be reading it word by word and line by line, so you can't just write tests and make sure "it works" - it has to pass the meaningfulness test.
I know it’s just modern writing style to preempt all responses. But can’t you just plainly state your business without professing your appreciation?
People who waste others’ time with bullshit are aholes. I don’t care if it’s My Great Friend And Partner in Crime, Anthropic’s LLM, or a tedious template written in PHP with just enough substitutions and variations to waste five sentences on it before closing it.
Actually, saying that it’s the same thing is a bit like saying “guns don’t shoot people”. At least you had to copy-paste that PHP template from somewhere and adapt it to your spam. Back in the day.
> ..and call me an AI luddite
Oh please do call me an AI luddite. It's an honor for me.
I think it's the size of the audience that the AI-generated content is for, is what makes the difference. AI code is generally for a small team (often one person), and AI prose for one person (email) or a team (internal doc) is often fine as it's hopefully intentional and tailored. But what's even the point for AI content (prose or code) for a wide audience? If you can just give me the prompt and I can generate it myself, there's no value there.
> I can't imaging writing code by myself again
After that, you say that you need to know the intention for "content".
I think it's pretty inconsistent. You have a strict rule in one direction for code and a strict rule in the opposite direction for "content".
I don't think that writing code unassisted should be taken for granted. Addy Osmani covered that in this talk: https://www.youtube.com/watch?v=FoXHScf1mjA I also don't think all "content" is the sort of content where you need to know the intention. I'll grant that some of it is, for sure.
Edit: I do like intentional writing. However, when AI is generating something high quality, it often seems like it has developed an intention for what it's building, whether one that was conceived and communicated clearly by the person working with the AI or one that emerged unexpectedly through the interaction. And this applies not just to prose but to code.
This is an easy but not very insightful framing.
I want to read intelligent, thoughtful text that is useful in some way: to me, to society, to humanity. Ceteris paribus, the source of the information does not necessarily matter; it only matters by association. To put it another way, “human” vs “machine” is not the core driving factor for me.
All other things equal, I would rather read A over B:
A. high quality AI content, even if it is “only” the result of 6 minutes of human question framing and light editing [1]
B. low quality purely human content, even if it was the result of 60 minutes of effort.
There is increasingly less ability to distinguish “human” writing from “AI” writing. Some people fool themselves on their AI-detection prowess.
To be direct: I want meaningful and satisfying lives for humans. If we want to reward humans for writing more, we better reflect on why, and if we still really want that, we better find ways that work. I don’t think “buy local” as a PR campaign will be easily transferred to a “read human” movement.
[1]: Of course AI training data is drawn from humans, so I do not discount the human factor. My point is that quantifying the effort put into it is not simple.
Chicken.
Seriously, the degree to which supposed engineering professionals have jumped on a tool that lets them outsource their work and their thinking to a bot astounds me. Have they no shame?
https://noonker.github.io/posts/2024-07-25-i-respect-our-sha...
Also, you have long been able to use "logit_bias" in the API of models which support it to ban the em dash, ban the word "not", ban semicolons, and ban the "fancy quotes" that were clearly added by "those who need to watch" to make sure that they can clearly figure out whether or not you used an LLM.
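For the curious, here's a minimal sketch of that trick against the OpenAI chat completions API, where logit_bias maps token IDs to biases from -100 to 100 and -100 effectively bans a token. The model name, the banned strings, and the prompt are placeholder assumptions, and characters the tokenizer merges into larger tokens may need more IDs banned than this simple loop finds:

    # Sketch: suppress unwanted characters via the logit_bias parameter.
    import tiktoken
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    enc = tiktoken.encoding_for_model("gpt-4o")
    # Em dash, semicolon, and curly quotes; banning a word like "not"
    # would also need its surface forms (" not", "Not", ...).
    banned_strings = ["\u2014", ";", "\u201c", "\u201d"]

    bias: dict[str, int] = {}
    for s in banned_strings:
        for token_id in enc.encode(s):
            bias[str(token_id)] = -100  # -100 effectively bans the token

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Draft a short status update."}],
        logit_bias=bias,
    )
    print(resp.choices[0].message.content)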
If you care about your voice, don't let LLMs write your words. But that doesn't mean you can't use AI to think, critique and draft lots of words for you. It depends on what purpose you're writing for. If you're writing an impersonal document, like a design document, briefing, etc., then who cares. In some cases you already have to write them in a voice that is not your own. Go ahead and write these with AI. But if you're trying to say something more personal, then the words should be your own; AI will always try to 'smooth' out your voice, and if you care about it, you gotta write it yourself.
Now, how do you use AI effectively and still retain your voice? Here's one technique that works well: start with a voice memo. Just record yourself, maybe during a walk, and talk about the subject you want, free form; skip around, jump between sentences, just get it all out of your brain. Then open up a chat, add the recording or transcript, clearly state your intent in one sentence, and ask the AI to consider your thoughts and your intent and to ask clarifying questions: what does the AI not understand about how your thoughts support the clearly stated intent of what you want to say? That'll produce a first draft, which will be bad. Then tell the AI all the things that don't make sense to you, that you don't like; comment on the whole doc and get a second draft. Ask the AI if it has more questions for you; you can use live chat to make this conversation go smoother as well, since when the AI is asking you questions you can talk freely by voice. Repeat this one or two more times, and a much finer draft will take shape that is closer to what you want to say. During this drafting stage, the AI will always try to smooth or average out your ideas, so it is important to keep pointing out all the ways in which it is wrong.
This process helps by making all the thinking involved more up-front. Once you've read and critiqued several drafts, all your ideas will be much clearer and sort of 'cached' and ready to use in your head. Then sit down and write your own words from scratch; they will come much easier after all your thoughts have been exercised during the drafting process.
And you're wrong for suggesting that's the first use of ai;dr and further assuming that the author "stole" it from that post. https://rollenspiel.social/@holothuroid/113078030925958957 - September 4, 2024
But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.
I do agree with your core point - the thinking is what matters. Where I've found LLMs most useful in my own writing is as a thinking tool, not a writing tool.
Using them to challenge my assumptions, point out gaps in my argument, or steelman the opposing view. The final prose is mine, but the thinking got sharper through the process.
But AI-generated content is here to stay, and it's only going to get harder to distinguish the two over time. At some point we probably just have to judge text on its own merits regardless of how it was produced.
I do notice that recently I more often find myself wondering what point the author wanted to make, only to then notice a lot of what seem to be the agreed-upon telltale signs of excessive AI usage.
Effectively, there was already a lot of spam before, hence in general I don't mind so much. It is interesting to see, though, that the “new spam” often gets some traction and interesting comments on HN, which used not to be the case.
It also means that behind the spam layer there is possibly some interesting info the writer wanted to share, and for that purpose I imagine I'd prefer to read the unpolished prompt-input variant over the outcome. So far, though, I haven't seen any posts where both versions were shared to test whether this would indeed be better.
I do think there's a great deal wrong with that, and I won't read it at all.
Human can speak unto human unless there's a language barrier. I am not interested in anyone's mechanically-recovered verbiage, no matter how much they massaged it.
Edit: ok, I've checked your profile and now I see that this is your website that you're astroturfing every thread you reply to. Stop doing that.
This take is baffling to me when I see it repeated. It's like saying why should people use Windows if Bill Gates did not write every line of it himself; we won't be able to see into Bill's mind. Why should you read a book if the author couldn't be bothered to write it properly and had an editor come in and fix things?
The main purpose of a creative work is not seeing intimately into the creator's mind. And the idea that it is only people who don't care who use LLMs is wrong.
What? It’s nothing like that, at all. I don’t know that Gates has claimed to have written even a single line of Windows code. I’m not asking for the perfect analogy, but the analogy has to have some tie to reality or it’s not an analogy at all. I’m only half-joking when I wonder if an AI wrote this comment.