I want the code to have subsequently been deployed to production and to be demonstrably robust, without additional work outside of the livestream.
The livestream should include code review, test creation, testing, and PR creation.
It should not be on a greenfield project, because nearly all real-world coding isn't greenfield.
I want to use Claude and I want to be more productive, but my experience to date is that, for writing code beyond autocomplete, AI is not good enough: it leads to low-quality code that can't be maintained, or else requires so much hand-holding that it is actually less efficient than a good programmer.
There are lots of incentives for marketing at the grassroots level. I am totally open to changing my mind but I need evidence.
Mind you, I've never written a non-trivial game before in my life. It would take me weeks to do this on my own without any AI assistance.
Right now I'm working on a 3D world map editor for Final Fantasy VII that was also almost exclusively vibe-coded. It's almost finished, and I plan a write-up and a video about it when I'm done.
Now of course you've made so many qualifiers in your post that you'll probably dismiss this as "not production", "not robust enough", "not clean" etc. But this doesn't matter to me. What matters is I manage to finish projects that I would not otherwise if not for the AI coding tools, so having them is a huge win for me.
I think the problem is in your definition of finishing a project.
Can you support said code, can you extend it, are you able to figure out where bugs are when they show up? In a professional setting, the answer to all of those should likely be yes. That's what production code is.
And your starry-eyed CEO is asking the same old question: How come everything takes so long when a 2-person team over two days was able to produce a shiny new thing?! Sigh.
Could be used for early prototyping, though, before you hire your first engineers just to fire them 6 months later.
I suspect videos meeting your criteria are rare because most AI coding demos either cherry-pick simple problems or skip the messy reality of maintaining real codebases.
First off, Rust represents quite a small part of the training data (last I checked it was under 1% of the code dataset in most public sets), so it's got waaay less training than other languages like TS or Java. You added 2 solid features, backed with tests, documentation, and nice commit messages. 80% of devs would not deliver this in 2.5 hours.
Second, there was a lot of time/token waste messing around with git and commit messages. A few tips I noticed that could help your workflow:
#1: Add a subagent for git that knows your style, so you don't poison the main Claude context and spend fewer tokens (and less time) fighting it.
#2: Claude has hooks; if your favorite language has a formatter like rustfmt, just use a hook to run it (and similar tools) automatically.
#3: Limit what they test, as most models tend to write overeager tests, including testing that "the field you set as null is null", wasting tokens (see the sketch after this list).
#4: Saying "max 50 characters title" doesn't really mean anything to the LLM. They have no inherent ability to count, so you are relying on probability, which is quite low since your context is quite full at this point. If they want to count the line length, they also have to use external tools. This is an inherent LLM design issue, and discussing it with an LLM doesn't get you anywhere.
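To make #3 concrete, this is the kind of low-value test I ask the model not to write (a hypothetical TypeScript/Jest snippet; createUser and its fields are invented for illustration):

    import { createUser } from "./user"; // hypothetical module, for illustration only

    // Low value: merely restates the assignment it just made -- tell the model to skip these.
    test("middleName is null when set to null", () => {
      const user = createUser({ email: "a@example.com", middleName: null });
      expect(user.middleName).toBeNull();
    });

    // Worth keeping: exercises real behaviour and an edge case.
    test("normalizes email casing so duplicates can be detected", () => {
      const user = createUser({ email: "A@Example.COM" });
      expect(user.email).toBe("a@example.com");
    });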
I think it misses a feedback loop: something that evaluates what went wrong, what works, what won't, remembers that, and then uses it to make better plans. From making sure it runs the tests correctly (instead of trying 5 different methods each time) to how to do TDD and what comments to add.
A common thread in articles about developers using AI is that they're not impressed at first, but then they write more precise instructions and provide context in a more intuitive manner for the AI to read, and that's the point at which they start to see results.
Would these principles not apply to regular developers as well? I suspect that most of my disappointment with these tools is that I haven't spent enough time learning how to use them correctly.
With Claude Code you can tell it what it did wrong. It's a bit hit-or-miss as to whether it will take your comments on board (or take them too literally) but I do think it's too powerful a tool to just ignore.
I don't want someone to just come and eat my cake because they've figured out how to make themselves productive with it.
Why don't you do the livestream to show us? Why are you demanding that others work through your laundry list of demands?
You want someone to spend their time to live-stream their codebase and them working on it using Claude code, which will then make it into production, going through all the processes on a non-greenfield project just so you can be convinced that it is worth it?
Sorry but that sounds like a heavily entitled way of dismissing a whole new technology without even wanting to try it, not like "totally open to changing my mind".
We have about 3B active devices running our software, both client and server side. We use Claude Code daily to fix bugs, do small feature PRs, or do refactors while we do our work. I use it on my private projects, some of which are in production. I have over 20 years of experience writing software now, across many stacks, frameworks, and tools, including my own, and I can tell you that it is the biggest change to programming since syntax highlighting.
Yeah, it makes mistakes, and yeah, it sometimes takes 10 minutes to find those mistakes, but if the code would have taken me an hour to write, that is a 50-minute profit.
The problem is if your codebase is written like absolute dogshit without any indication of architecture; in that case, yeah, it's going to write more dogshit. If you have things cleanly defined, and if you tell it how you work, it will follow that style, mostly. Like with everything, you have to know how to use it.
Try playing with technology and using it on your own before dismissing it this hard, or expecting someone to spend their day proving to you "it works".
- Working on hobby projects/sideprojects
- Working on open-source projects
- Selling stuff
For someone to create this example, they would either have to do it in a codebase they have no problem open-sourcing or one that is already open source, so they do not break NDAs or divulge company info/source code.
How many people are ready to do that?
The conditions of the OP are:
- No demo, independent programmer
- Non-greenfield project
- Non-trivial problem
- Code deployed in production and robust
- Code review, test, testing, PR creation
- A person willing to live-stream their work and code while building
Which is a pretty unreasonable set of conditions to prove "it works", when the person could read a tutorial and try it themselves.
Secondly, extraordinary claims require extraordinary evidence.
Why not? Plenty of people stream work that later makes it into production. The gaming community, for example, has no end of people building their games publicly.
And yet, for all the "amazing one-shot capabilities that obviate the need for programmers" no one streams working with any of the AI tools.
All we have is unverifiable claims like yours.
"Amazing one-shot capabilities" is not the same as "an extremely useful tool that saves a ton of time"
If it's as useful or saves as much time as claimed, it would be a no-brainer for people who build in public to use this tool, right?
1) Don't ask for a large/complex change. Ask for a plan, but ask it to implement the plan in small steps and to test each step before starting the next.
2) For really complex steps, ask the model to write code to visualize the problem and solution.
3) If the model fails on a given step, ask it to add logging to the code, save the logs, run the tests, and then review the logs to determine what went wrong (a sketch follows this list). Do this repeatedly until the step works well.
4) Ask the model to look at your existing code and determine how it was designed before implementing a task. Sometimes the model will put all of the changes in one file even though your code has a cleaner design the model doesn't take into account.
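For #3, the throwaway logging can be as simple as something like this (just a sketch in TypeScript; the helper and file names are made up):

    // debug-log.ts -- hypothetical throwaway helper you ask the model to sprinkle into a failing step.
    // Each call appends a timestamped breadcrumb to debug.log, which the model can read back
    // after running the tests. Delete it once the step works.
    import { appendFileSync } from "node:fs";

    export function debugLog(step: string, data: unknown): void {
      const line = `${new Date().toISOString()} [${step}] ${JSON.stringify(data)}\n`;
      appendFileSync("debug.log", line);
    }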
I've seen other people blog about their tricks and tips. I do still see garbage results but not as high as 95%.
That's been my experience.
I've been working on a 100% vibe-coded app for a few weeks. API, React Native frontend, marketing website, CMS, CI/CD - all of it without changing a single line of code myself. Overall, the resulting codebase has been better than I expected before I started. But I would have accomplished everything it has (except for the detailed specs, detailed commit log, and thousands of tests) in about 1/3 of the time.
I'm at the point now where I have to yell at the AI once in a while, but I touch essentially zero code manually, and it's acceptable quality. Once I stopped and tried to fully refactor a commit that CC had created, but I was only able to make marginal improvements in return for an enormous time commitment. If I had spent that time improving my prompts and running refactoring/cleanup passes in CC, I suspect I would have come out ahead. So I'm deliberately trying not to do that.
I expect at some point on a Friday (last Friday was close) I will get frustrated and go build things manually. But for now it's a cognitive and effort reduction for similar quality. It helps to use the most standard libraries and languages possible, and great tests are a must.
Edit: Also, use the "thinking" commands. think / think hard / think harder / ultrathink are your best friend when attempting complicated changes (of course, if you're attempting complicated changes, don't.)
In theory. In practice, it's not a very secure sandbox and Claude will happily go around updating files if you insist / the prompt is bad / it goes off on a tangent.
I really should just set up a completely sandboxed VM for it so that I don't care if it goes rm -rf happy.
A sandboxed devcontainer is worth setting up though. Lets me run it with --dangerously-skip-permissions
But here's the fine print: it has an "exit plan mode" tool, documented here: https://minusx.ai/blog/decoding-claude-code/#appendix
So it can exit plan mode on its own and you wouldn't know!
I.e. not its own tools, but command-line executables.
Its assumptions about these commands, and specifically the way it ran them, were correct.
But I have seen it run commands in plan mode.
Permission limitations on the root agent have, in many cases, not been propagated to child agents, and they've been able to execute different commands. The documentation is incomplete and unclear, and even where it is clear, it uses a different syntax with different limitations than the one used to configure permissions for the root agent. When you ask Claude itself to generate agent configurations, as is recommended, it will generate permissions that do not exist anywhere in the documentation and may or may not be valid, but no error is emitted if an invalid permission is set. If you ask it to explain, it gets confused by its own documentation and tells you it doesn't know why it did that. I'm not sure if it's hallucinating or if the agent-generating agent has access to internal details that are not documented anywhere and which the normal agent can't see.
Anthropic is pretty consistently the best in this space in terms of security and product quality. They seem to actually care about doing software engineering properly. (I’ve personally discovered security bugs in several competing products that are more severe and exploitable than what I’m talking about here.) I have a ton of respect for Anthropic. Unfortunately, when it comes to sub agents in Claude code, they are not living up to standard they have set.
In order for it not to do useless stuff, I need to expend more energy on prompting than on writing the stuff myself. I find myself getting paranoid about minutiae in the prompt, turns of phrase, unintended associations, in case it gives shit-tier code because my prompt looked too much like something off Experts Exchange or whatever.
What I really want is something like a front-end framework but for LLM prompting, something that takes away a lot of the fucking about with generalised stuff like prompt structure, and defaults to best practices for finding something in code, designing a new feature, or writing tests...
It's not simple to even imagine an ideal solution. The more you think about it, the more complicated your solution becomes. A simple solution will be restricted to your use cases. A generic one is either visual or a programming language. I'd like to have a visual constructor, a graph of actions, but it's complicated. The language is more powerful.
Writing the code is the fast and easy part once you know what you want to do. I use AI as a rubber duck to shorten that cycle, then write it myself.
Doesn't that sound ridiculous to you?
Admittedly, part of it is my own desire for code that looks a certain way, not just that which solves the problem.
Choosing the battles to pick is part of the skill at the moment.
I use AI for a lot of boilerplate, tedious tasks I can't quite do a vim recording for, and small targeted scripts.
The boilerplate argument is becoming quite old.
It’s basically just a translation, but with dozens of tables, each with dozens of columns it gets tedious pretty fast.
If given other files from the project as context it’s also pretty good at generating the table and column descriptions for documentation, which I would probably just not write at all if doing it by hand.
I think you need to imagine all the things you could be doing with LLMs.
For me the biggest thing is so many tedious things are now unlocked. Refactors that are just slightly beyond the IDE, checking your config (the number of typos it’s picked up that could take me hours because eyes can be stupid), data processing that’s similar to what you have done before but different enough to be annoying.
It's not AI; there is no intelligence. A language model, as the name says, deals with language. Current ones are surprisingly good at it, but it's still not more than that.
But I can’t tell you any useful tips or tricks to be honest. It’s like trying to teach a new driver the intuition of knowing when to brake or go when a traffic light turns yellow. There’s like nothing you can really say that will be that helpful.
The funny thing is - we need less. Less of everything. But an up-tick in quality.
This seems to happen with humans with everything - the gates get opened, enabling a flood of producers to come in. But this causes a mountain of slop to form, and over time the tastes of folks get eroded away.
Engineers don't need to write more lines of code / faster - they need to get better at interfacing with other folks in the business organisation and get better at project selection and making better choices over how to allocate their time. Writing lines of code is a tiny part of what it takes to get great products to market and to grow/sustain market share etc.
But hey, good luck with that - one's thinking power is diminished over time by interfacing with LLMs etc.
Sometimes I reflect on how much more efficiently I can learn (and thus create) new things because of these technologies, then get anxiety when I project that to everyone else being similarly more capable.
Then I read comments like this and remember that most people don't even want to try.
Come back and post here when you have built something that has commercial success.
Show us all how it's done.
Until then go away - more noise doesn't help.
I'm still the one doing the doing after the learning is complete.
I just don't enjoy the work as much as I did when I was younger. Now I want to get things done and then spend the day on other, more enjoyable (to me) stuff.
I’ve noticed colleagues who enjoy Claude code are more interested in “just ship it!” (and anecdotally are more extroverted than myself).
I find Claude Code to be oddly unsatisfying. Still trying to put my finger on it, but I think it's that I quickly lose context. Even if I understand the changes CC makes, it's not the same as wrestling with a problem, hitting roadblocks, and overcoming them. With CC I have no bearing on whether I'm in an area of code with lots of room for error, or if I'm standing on the edge of a cliff and can't cross some line in the design.
I’m way more concerned with understanding the design and avoiding future pain than my “ship it” colleagues (and anecdotally am way more introverted). I see what they build and, yes, it’s working, for now, but the table relationships aren’t right and this is going to have to be rebuilt later, except now it’s feeding a downstream report that’s being consumed by the business, so the beta version is now production. But the 20 other things this app touches indirectly weren’t part of the vibe coding context, so the design obviously doesn’t account for that. It could, but of course the “ship it” folks aren’t the ones that are going to build out lengthy requirements and scopes of work and document how a dozen systems relate to and interact with each other.
I guess I’m seeing that the speed limit of quality is still the speed of my understanding, and (maybe more importantly) that my weaponizing of my own obsession only works when I’m wrestling and overcoming, not just generating code as fast as possible.
I do wonder about the weaponized obsession. People will draw or play music obsessively, something about the intrinsic motivation of mastery, and having AI create the same drawing, or music, isn’t the same in terms of interest or engagement.
They are the single closest thing we've ever had to an objective evaluation of whether an engineering practice is better or worse, simply because just about every engineering practice I see that makes coding agents work well also makes humans work well.
And so many of these circular debates and other best practices (TDD, static typing, keeping todo lists, working in smaller pieces, testing independently before testing together, clearly defined codebase practices, ...) have all been settled in my mind.
The most controversial take, and the one I dislike but may reluctantly have to agree with, is "Is it better for a business to use a popular language less suited for the task than a less popular language more suited for it?" While obviously it's a sliding scale, coding agents clearly weigh in on one side of this debate... as little as I like seeing it.
The best way is to create tests yourself, and block any attempts to modify them
Right now it's not easy prompting Claude Code (for example) to keep fixing until a test suite passes. It always does some fixed amount of work until it feels it's most of the way there and stops. So I have to babysit it and keep telling it that yes, I really do mean for it to make the tests pass.
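One way around the babysitting, since Claude Code has a non-interactive mode, is an outer loop along these lines (a rough TypeScript sketch, assuming `claude -p` and an npm test suite; the script name and prompt are mine, not anything official):

    // fix-until-green.ts -- keep re-invoking the agent until the test suite passes,
    // with a hard cap so it can't loop forever.
    import { execSync } from "node:child_process";

    function testsPass(): boolean {
      try {
        execSync("npm test", { stdio: "inherit" });
        return true;
      } catch {
        return false;
      }
    }

    for (let attempt = 1; attempt <= 5 && !testsPass(); attempt++) {
      console.log(`Tests still failing, asking the agent for another pass (attempt ${attempt})`);
      execSync(`claude -p "Run 'npm test' and fix the failures. Do not modify the tests."`, {
        stdio: "inherit",
      });
    }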
I've interviewed with three tier one AI labs and _no-one_ I talked to had any idea where the business value of their models came in.
Meanwhile Chinese labs are releasing open-source models that do what you need. At this point I've built local agentic tools that are better than anything Claude and OAI have as paid offerings, including the $2,000 tier.
Of course they cost between a few dollars to a few hundred dollars per query so until hardware gets better they will stay happily behind corporate moats and be used by the people blessed to burn money like paper.
This doesn't match the sentiment on hackernews and elsewhere that claude code is the superior agentic coding tool, as it's developed by one of the AI labs, instead of a developer tool company.
You don't see better ones from code tooling companies because the economics don't work out. No one is going to pay $1,000 for a two line change on a 500,000k line code base after waiting four hours.
LLMs today are the equivalent of a 4-bit ALU without memory being sold as a fully functional personal computer. And like ALUs back then, you will need _thousands_ of LLMs to get anything useful done; also, like ALUs in 1950, we're a long way off from a personal computer being possible.
Doesn't specifically seem to jibe with the claim Anthropic made that they were worried about Claude Code being their secret sauce, leaving them unsure whether to publicly release it. (I know some are skeptical about that claim.)
One option is to write "Please implement this change in small steps?" more-or-less exactly
Another option is to figure out the steps and then ask it "Please figure this out in small steps. The first step is to add code to the parser so that it handles the first new XML element I'm interested in, please do this by making the change X, we'll get to Y and Z later"
I'm sure there's other options, too.
I give an outline of what I want to do, and give some breadcrumbs for any relevant existing files that are related in some way, ask it to figure out context for my change and to write up a summary of the full scope of the change we're making, including an index of file paths to all relevant files with a very concise blurb about what each file does/contains, and then also to produce a step-by-step plan at the end.

I generally always have to tell it to NOT think about this like a traditional engineering team plan, this is a senior engineer and LLM code agent working together, think only about technical architecture, otherwise you get "phase 1 (1-2 weeks), phase 2 (2-4 weeks), step a (4-8 hours)" sort of nonsense timelines in your plan.

Then I review the steps myself to make sure they are coherent and make sense, and I poke and prod the LLM to fix anything that seems weird, either fixing context or directions or whatever. Then I feed the entire document to another clean context window (or two or three) and ask it to "evaluate this plan for cohesiveness and coherency, tell me if it's ready for engineering or if there's anything underspecified or unclear" and iterate on that like 1-3 times until I run a fresh context window and it says "This plan looks great, it's well crafted, organized, etc...." and doesn't give feedback.

Then I go to a fresh context window and tell it "Review the document @MY_PLAN.md thoroughly and begin implementation of step 1, stop after step 1 before doing step 2" and I start working through the steps with it.
As an engineer, especially as you get more experience, you can kind of visualize the plan for a change very quickly and flesh out the next step while implementing the current step
All you have really accomplished with the kind of process described is to make the world's least precise, most verbose programming language.
I can say the right precise wording in my prompt to guide it to a good plan very quickly. As the other commenter mentioned, the entire above process only takes something like 30-120 minutes depending on scope, and then I can generate code in a few minutes that would take 2-6 weeks to write myself, working 8-hour days. Then it takes something like 0.5-1.5 days to work out all the bugs, clean up the weird AI quirks, and maybe have the LLM write some Playwright tests (or whatever testing framework you use) for integration tests to verify its own work.
So yes, it takes significant time to plan things well for good results, and yes, the results are often sloppy in some parts and have weird quirks that no human engineer would make on purpose. But if you stick to working on prompt/context engineering and getting better and faster at the above process, the key unlock is not that it just does the same coding for you, with it generating the code instead. It's that you can work as a solo developer at the abstraction level of a small startup company.

I can design and implement an enterprise-grade SSO auth system over a weekend that integrates with Okta and passes security testing. I can take a library written in one language and fully re-implement it in another language in a matter of hours. I recently took the native Android and iOS libraries for a fairly large, non-trivial SDK and had Claude build me a React Native wrapper library with native modules that integrates both native libraries and presents a clean, unified interface and TypeScript types to the React Native layer. This took me about two days, plus one more for validation testing. I have never done this before. I have no idea how "Nitro Modules" works, or how to configure a React Native library from scratch. But given the immense scaffolding abilities of LLMs, plus my debugging/hacking skills, I can get to a really confident place really quickly, and I ship production code at work with this process regularly.
It takes 30-40 minutes to generate a plan and it generates code that would have taken 20-30 minutes to write.
When it’s generating “weeks” worth of code, it inevitably goes off the rails and the crap you get goes in the garbage.
This isn’t to say agents don’t have their uses, but i have not seen this specific problem actually work. They’re great for refactoring (usually) and crapping out proof of concepts and debugging specific problems. It’s also great for exploring a new code base where you have little prior knowledge.
It makes sense that it sucks at generating large amounts of code that fit cohesively into the project. The context is too small. My code base is millions of lines of code. My brain has a shitload more of that in context than any of the models. So they have to guess and check and end up incorrect and poor, and I don't. I know which abstractions exist that I can use. It doesn't. Sometimes it guesses right. Oftentimes it doesn't. And once it's wrong, it's fucked for the entire rest of the session, so you just have to start over.
Take this for example: https://www.reddit.com/r/ClaudeAI/comments/1m7zlot/how_planm...
This trick is just the basic stuff, but it works really well. You can add on and customize from there. I have a “/task” slash command that will run a full development cycle with agents generating code, many more (12-20) agent critics analyzing the unstaged work, all orchestrated by a planning agent that breaks the complex task into small atomic steps.
The first stage of this project (generating the plan) is interactive. It can then go off and produce 10 kLOC of code spread over a dozen commits, and the quality is good enough to ship most of the time. If it goes off the rails, keep the plan document but nuke the commits and restart. On the Claude MAX plan this costs nothing.
This is how I do all my development now. I spend my time diagnosing agent failures and fixing my workflows, not guiding the agent anymore (other than the initial plan document).
I still review every line of code before pushing changes.
So I'll say something like "evaluate the URL fetcher library for best practices, security, performance, and test coverage. Write this up in a markdown file. Add a design for single-flighting and a retry policy. Break this down into steps so simple even the dumbest LLM won't get confused."
Then I clear the context window and spawn workers to do the implementation.
I asked Claude Code to read a variable from a .env file.
It proceeded to write a .env parser from scratch.
I then asked it to just use Node's built in .env file parsing....
This was the 2nd time in the same session that it wrote a .env file parser from scratch. :/
Claude Code is amazing, but it'll go off and do stupid things even for simple requests.
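For reference, Node's built-in .env handling looks roughly like this on recent versions (API_KEY is just an example variable name):

    // No hand-rolled parser needed on recent Node versions.
    import { readFileSync } from "node:fs";
    import { parseEnv } from "node:util";

    // Option 1 (Node 21.7+): load .env straight into process.env.
    process.loadEnvFile(".env");
    console.log(process.env.API_KEY);

    // Option 2 (Node 21.7+): parse the file without touching process.env.
    const parsed = parseEnv(readFileSync(".env", "utf8")) as Record<string, string>;
    console.log(parsed.API_KEY);

    // Option 3 (Node 20.6+): no code at all -- run `node --env-file=.env app.js`.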
Most users will just give a vague task like "write a clone of Steam" or "create a rocket" and then blame Claude Code.
If you want AI to code for you, you have to decompose your problem like a product owner would do. You can get helped by AI as well, but you should have a plan and specifications.
Once your plan is ready, you have to decompose the problem into different modules, then make sure each module is tested.
The issue is often with the user, not the tool, as they have to learn how to use the tool first.
This seems like half of HN with how much HN hates AI. Those who hate it or say it’s not useful to them seem to be fighting against it and not wanting to learn how to use it. I still haven’t seen good examples of it not working even with obscure languages or proprietary stuff.
The main difference is that with the current batch of genai tools, the AI's context resets after use, whereas a (good) intern truly learns from prior behavior.
Additionally, as you point out, the language and frameworks need to be part of the training set, since the AI isn't really "learning"; it's just prepopulating a context window for its pre-existing knowledge (token prediction), so YMMV depending on hidden variables from the secret (to you, the consumer) training data and weights. I use Ruby primarily these days, which is solidly in the "boring tech" camp, and most AIs fail to produce useful output that isn't Rails boilerplate.
If I did all my IC contributions via directed intern commits I'd leave the industry out of frustration. Using only AI outputs for producing code changes would be akin to torture (personally.)
Edit: To clarify, I'm not against AI use; I'm just stating that with the current generation of tools it is a pretty lackluster experience when it comes to net-new code generation. It excels at one-off throwaway scripts and at making large tedious refactors less of a drudge. I wouldn't pivot to it being my primary method of code generation until some of the more blatant productivity losses are addressed.
Now, it's not always useless. It's GREAT at adding debugging output and knowing which variables I just added and thus want to add to the debugging output. And that does save me time.
And it does surprise me sometimes with how well it picks up on my thinking and makes a good suggestion.
But I can honestly only accept maybe 15-20% of the suggestions it makes - the rest are often totally different from what I'm working on / trying to do.
And it's C++. But we have a very custom library to do user-space context switching, and everything is built on that.
I kind of feel this. I’ll code for days and forget to eat or shower. I love it. Using Claude code is oddly unsatisfying to me. Probably a different skillset, one that doesn’t hit my obsessive tendencies for whatever reason.
I could see being obsessed with some future flavor of it, and I think it would be some change with the interface, something more visual (gamified?). Not low-code per se, but some kind of mashup of current functionality with graph database visualization (not just node force graphs, something more functional but more ergonomic). I haven’t seen anything that does this well, yet.
I’ve seen incredible improvements just by doing this and using precise prompting to get Claude to implement full services by itself, tests included. Of course it requires manual correction later but just telling Claude to check the development documentation before starting work on a feature prevents most hallucinations (that and telling it to use the Context7 MCP for external documentation), at least in my experience.
The downside to this is that 30% of your context window will be filled with documentation but hey, at least it won’t hallucinate API methods or completely forget that it shouldn’t reimplement something.
Just my 2 cents.
Tried this on a developer I worked with once and he just scoffed at me and pushed to prod on a Friday.
that's the --yolo flag in cc :D
I've been building commercial codebases with Claude for the last few months and almost all of my input is on taste and what defines success. The code itself is basically disposable.
I'm finding this is the case for my work as well. The spec is the secret sauce, the code (and its many drafts) are disposable. Eventually I land on something serviceable, but until I do, I will easily drop a draft and start on a new one with a spec that is a little more refined.
This is key. We're in the mass-production-of-software era. It's easier and cheaper to replace a broken thing/part than to fix it, the things being units of code.
Things that make you go "Hmmmmmm."
It’s a very different discussion when you’re building a product to sell.
We'll just keep getting submission after submission talking about how amazing Claude Code is with zero real world examples.
It's funny because as I have gotten better as a dev I've gone backwards through his progression. When I was less experienced I relied on Google; now I just read the docs.
https://www-cdn.anthropic.com/58284b19e702b49db9302d5b6f135a...
Abstracting the boilerplate is how you make things easier for future you.
Giving it to an AI to generate just makes the boilerplate more of a problem when there's a change that needs to be made to _all_ the instances of it. Even worse if the boilerplate isn't consistent between copies in the codebase.
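The argument in code, with a made-up requireEnv helper: change the rule once in the abstraction, versus hunting down every generated copy.

    // One abstraction: if the validation rule changes, you edit a single function.
    export function requireEnv(name: string): string {
      const value = process.env[name];
      if (value === undefined || value === "") {
        throw new Error(`Missing required environment variable: ${name}`);
      }
      return value;
    }

    const dbUrl = requireEnv("DATABASE_URL"); // example usage; variable name is illustrative

    // Versus the generated-boilerplate version, repeated (and subtly varied) across files:
    // const dbUrl = process.env.DATABASE_URL;
    // if (!dbUrl) throw new Error("DATABASE_URL is not set");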
I'm lazy af. I have not been manually typing up boilerplate for the past 15 years. I use computers to do repetitive tasks. LLMs are good at some of them, but it's just another tool in the box for me. For some it seems like their first and only one.
What I can't understand is how people are ok with all that typing that you still have to do just going into /dev/null while only some translation of what you wrote ends up in the codebase. That one makes me even less likely to want to type. At least if I'm writing source code I know it's going into the repository directly.
First I know my problem space better than the LLM.
Second, the best way to express coding intention is with code. The models often have excellent suggestions on improvements I wouldn’t have thought of. I suspect the probability of providing a good answer has been increased significantly by narrowing the scope.
Another technique is to say “do this like <some good project> does it” but I suspect that might be close to copyright theft.
If the average US salaried developer is 10-15% more productive for just 1k more a month it is literally a no-brainer for companies to invest in that.
Of course on the other side of the coin there are many companies that are very stingy with paying for literally anything for their employees that could measurably improve productivity, and hamper their ability to be productive by intentionally paying for cheap shitty tools. They will just lose out.
Not to mention - while I know many don't like it, they may be able to achieve enough of a productivity boost to not require hiring as many of those crazy salaried devs.
It's literally a no-brainer. Thinking about it from just the individual cost factor is too simplified a view.
Having said the above some level of AI spending is the new reality. Your workplace pays for internet right? Probably a really expensive fast corporate grade connection? Well they now also need to pay for an AI subscription. That's just the current reality.
Aider felt similar when I tried it in architect mode; my prompt would be very short and then I'd chew through thousands of tokens while it planned and thought and found relevant code snippets and etc.
What happens if you don't pay $1k/mo for Claude? Do you get an appreciable drop in productivity and output?
Genuinely asking.
Here's what works for me:
- Detailed claude.md containing overall information about the project.
- Anytime Claude chooses a different route that's not my preferred route - ask my preference to be saved in global memory.
- Detailed planning documentation for each feature - Describe high-level functionality.
- As I develop the feature, add documentation with database schema, sample records, sample JSON responses, API endpoints used, test scripts.
- MCP, MCP, MCP! Playwright is a game changer
The more context you give upfront, the less back-and-forth you need. It's been absolutely transformative for my productivity.
Thank you Claude Code team!
Claude Code is amazing at producing code for this stack. It does an excellent job of outputting ffmpeg and curl commands, Linux shell scripts, etc.
I have written detailed project and feature plans in Markdown, and Claude has no trouble understanding the instructions.
I am curious - what is your usecase?
EDIT: I see, you're asking Claude to modify claude.md to track your preference there, right?
Ask Claude to update the preference and document the moment you realize that claude has deviated away from the path.
Really simple workflow!
“The future of agentic coding with Claude Code”
Is this another case of someone using API keys and not knowing about the Claude MAX plans? It's $100 or $200 a month; if you're not doing pure yolo brute-force vibe coding, the $100 plan works.
Claude code can access pretty much all those third party services in the shell, using curl or gh and so on. And in at least one case using MCP can cause trouble: the linear MCP server truncates long issues, in my experience, whereas curling the API does not.
What am I missing?
I agree it's wasteful, but from a long-form view of what spending looks like (or at least should/used to look like). Those who see 1.5k/month as "saving" money typically only care about next quarter.
As the old adage goes: a thousand dollars saved this month is 100 thousand spent next year.
Detachment from the code has been excellent for me. I just started a v2 rewrite of something I'd never have done in the past, mostly because it would have taken me too much time to try it out if I wrote it all by hand.
Also, there may be selfish reasons to do this as well: (1) "Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance" https://arxiv.org/abs/2402.14531 (2) "Three Things to Know About Prompting LLMs" https://sloanreview.mit.edu/article/three-things-to-know-abo...
1. I don't see the output of the compiler; all I get is an executable blob. It could be inspected, but I don't think I ever have in my 20+ year career. Maybe I lie, and I've rocked up with a hex editor once or twice out of pure curiosity, but I've never got past looking for strings that I recognise.
2. When I use Claude, I am using it to do things that I can do, by hand, myself. I am reviewing the code as I go along, and I know what I want it to do because it's what I would be writing myself if I didn't have Claude (or Gemini for that matter).
So, no, I have never congratulated the compiler (or interpreter, linker, assembler, or even the CPU).
Finally, I view the AI as a pairing partner, sometimes it's better than me, sometimes it's not, and I have to be "in the game" in order to make sure I don't end up with a vibe coded mess.
edit: This is from yesterday (Claude had just fixed a bug for me - all I did was paste the block of code that the bug was in and say "x behaviour but getting y behaviour instead")
perfect, thanks
You're welcome! That was a tricky bug - using rowCount instead of colCount in the index calculation is the kind of subtle error that can be really hard to spot. It's especially sneaky because row 0 worked correctly by accident, making it seem like the logic was mostly right. Glad we got it sorted out! Your Gaps redeal should now work properly with all the 2s (and other correctly placed cards) staying in their proper positions across all rows.
I fed Claude a copy of everything I've ever written on Hacker News. Then I asked it to generate an essay that sounds like me.
Out of five paragraphs I had to change one sentence. Everything else sounded exactly as I would have written it.
It was scary good.
https://www.linkedin.com/posts/reidhoffman_can-talking-with-...
I've watched a handful of videos with this "digital twin", and I don't know how much post-processing has gone into them, but it is scary accurate. And this was a year+ ago.
I'm not comfortable using it to generate code for this project, but I can absolutely see using it to generate code for a project I'm familiar with in a language I know well.
Personally I'm a Neovim addict, so you can pry TUIs out of my cold dead hands (although I recognize that's not a preference everyone shares). I'm also not purely vibecoding; I just use it to speed up annoying tasks, especially UI work.
Claude Code is more user-friendly than Cursor with its CLI-like interface. The file modifications are easy to view, and it automatically runs psql, cd, ls, and grep commands. Output of the commands is shown in a more user-friendly fashion. Agents and MCPs are easy to organize and use.
1) Summarize what I think my project currently does
2) Summarize what I think it should do
3) Give a couple of hints about how to do it
4) Watch it iterate a write-compile-test loop until it thinks it's ready
I haven't added any files or instructions anywhere, I just do that loop above. I know of people who put their Claude in YOLO mode on multiple sessions, but for the moment I'm just sitting there watching it.
Example:
"So at the moment, we're connecting to a websocket and subscribing to data, and it works fine, all the parsing tests are working, all good. But I want to connect over multiple sockets and just take whichever one receives the message first, and discard subsequent copies. Maybe you need a module that remembers what sequence number it has seen?"
Claude will then praise my insightful guidance and start making edits.
At some point, it will do something silly, and I will say:
"Why are you doing this with a bunch of Arc<RwLock> things? Let's share state by sharing messages!"
Claude will then apologize profusely and give reasons why I'm so wise, and then build the module in an async way.
I just keep an eye on what it tries, and it's completely changed how I code. For instance, I don't need to be fully concentrated anymore. I can be sitting in a meeting while I tell Claude what to do. Or I can be close to falling asleep, but still be productive.
I don't know if this is a question of the language or what but I just have no good luck with its consistency. And I did invest time into defining various CLAUDE.md files. To no avail.
Does it end in a forever loop for you? I used to have this problem with other models.
But yeah, strongly typed languages, test driven development, and good high quality compiler errors are real game changers for LLM performance. I use Rust for everything now.
Typescript on the other hand, seems to do much better on first pass. Still not always beautiful code, but much more application ready.
My hypothesis is that this is due to the billions of LOC of Jupyter Notebook code it was probably trained on :/
It will fix those if you catch them, but I haven't been able to figure out a prompt that prevents this in the first place.
for the record, I've been bullish on the tooling from the beginning
My dev-tooling AI journey has been chatGPT -> vscode + copilot -> early cursor adopter -> early claude + cursor adopter -> cursor agent with claude -> and now claude code
I've also spent a lot of time trying out self-hosted LLMs, such as a couple of versions of Qwen Coder 2.5/3 32B as well as DeepSeek 30B, talking to them through the VS Code continue.dev extension.
My personal feeling is that the AI coding/tooling industry hit a major plateau in usefulness as soon as agents became a part of the tooling. The reality is coding is a highly precise task, and LLMs, down to the very core of the model architecture, are not precise in the way coding needs them to be. It's not that I think we won't one day see coding agents, but I think it will take a deep and complete bottom-up kind of change, and possibly an entirely new model architecture, to get us to what people imagine a coding agent is.
I've accepted that I'll just use Claude w/ Cursor and be done with experimenting. The agent tooling just slows my engineering team down.
I think the worst part about this dev tooling space is that the comment sections on these kinds of articles are completely useless. It's either AI hype bots saying nonsense, or the most mid and obvious takes that you hear everywhere else. I've genuinely become frustrated with all this vague advice and how the AI dev community talks about this domain space. There is no science, data, or reasoning as to why these things fail or how to improve them.
I think anyone who tries to take this domain space seriously knows that there's a limit to all this tooling, that we're probably not going to see anything ground-breaking for a while, and that there doesn't exist a person, outside the AI researchers at the big AI companies, who could tell you how to actually improve the performance of a coding agent.
I think that famous vibe-code reddit post said it best
"what's the point of using these tools if I still need a software engineer to actually build it when I'm done prototyping"
I am sorry, but this is so out of touch with reality. Maybe in the US most companies are willing to allocate you 1000 or 1500 USD/month/engineer, but I am sure that in many countries outside of the US not even a single line (or other type of) manager will allocate you such a budget.
I know for a fact that in countries like Japan you even need to present your arguments for a pizza party :D So that's all you need to know about AI adoption and what's driving it
Edit: Why is this downvoted? Different corp cultures have different ideas about what is worthwhile. Some places value innovation and experimentation and some places don't.
I notice what worked and what didn't, what was good and what was garbage -- and also how my own opinion of what should be done changed. I have Claude Code help me update the initial prompt, help me update what should have been in the initial context, maybe add some of the bits that looked good to the initial context as well, and then write it all to a file.
Then I revert everything else and start with a totally blank context, except that file. In this session I care about the code, I review it, I am vigilant to not let any slop through. I've been trying for the second session to be the one that's gonna work -- but I'm open to another round or two of this iteration.
OK I made up the statistic, but the core idea is true, and it's something that is rarely considered in this debate. At least with code you wrote, you can probably recognize it later when you need to maintain it or just figure out what it does.
I think I can also end up with a better result, and having learned more myself. It's just better in a whole host of directions all at once.
I don't end up intimately familiar with the solution however. Which I think is still a major cost.
I haven't put a huge effort into learning to write prompts, but in short, it seems easier to write the code myself than to work out the prompts. If you don't know every detail ahead of time and ask a slightly off question, the entire result will be garbage.
It’s way easier to let the agent code the whole thing if your prompt is good enough than to give instructions bit by bit only because your colleagues cannot review a PR with 50 file changes.
"Ask the LLM" is a good enough solution to an absurd number of situations. Being open to questioning your approach - or even asking the LLM (with the right context) to question your approach has been valuable in my experience.
But from a more general POV, its something we'll have to spend the next decade figuring out. 'Agile'/scrum & friends is a sort of industry-wide standard approach, and all of that should be rethought - once a bit of the dust settles.
We're so early in the change that I haven't even seen anybody get it wrong, let alone right.
The 50 file changes is most likely unsafe to deploy and unmaintainable.