548 points by heldrida 23 hours ago | 78 comments
legerdemain 14 hours ago
From 4 days ago: https://news.ycombinator.com/item?id=48019226

  > I work on Bun and this is my branch
  >
  > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
  >
  > I’m curious to see what a working version of this looks like, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
Jarred 13 hours ago
cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.
gobdovan 11 hours ago
If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.

From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?

[0] https://ngspice.sourceforge.io/

bsder 11 hours ago
Erm, what's the problem with ngspice? There appear to be people working on it, and it even recently got integrated into KiCad.

That sounds like a perfectly functional project, to me.

gobdovan 9 hours ago
As an amateur in the space: I download it on Mac, run `ngspice`, and get "Error: Can't open display: :0". I look in the code: hardcoded X11-era assumptions. Not exactly modern affordances...

Then I try to understand and extract the actual formulas, and there isn't a clean formula layer anywhere. It's all procedural: in `b4v6temp.c`, for example, the formulas are tangled with branching, caching, and model-state mutation. Extracting the computation, embedding it cleanly, and exposing it through a sane API feels like hair-pulling.

So yeah, maintained, but not in the sense of the 'modern, embeddable, understandable software component' I'd be looking forward to in a rewrite. You might not even touch the simulation core; just rewriting the embedding/API layer and the UX would already be a big deal.

bsder 9 hours ago
> As an amateur in the space

Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

And you are complaining about tangled code, but that code is almost certainly hyper-optimized, since performance actually mattered a LOT to people running spice simulations. ngspice (and Spice3 and Spice2 before it) were not written for programming ease; they were written to get a real job worth real money done.

In addition, any change you make to that code needs to be run back through numerical regression tests to make sure you didn't break things since this is software that people expect to get correct answers.

However, if the legacy seems to bother you so much, perhaps you should look at Xyce from Sandia?

dwattttt 7 hours ago
> Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

They sound like an amateur at circuit design, not software engineering (which is how I'd describe myself too).

ted_dunning 6 hours ago
KiCad is still the preferred interface.

The original point stands. Ngspice shows its heritage from the days of Fortran far more than a modern code base would or should. Its sole great virtue (from my point of view) is that it integrates with KiCad and only falls over for no reason about 5% of the time.

I would suspect that some of the simulation systems coming out of the Julia community or Xyce would be a better base.

skeledrew 9 hours ago
I see "sourceforge" and immediately I think "this project is way behind time and is going to pose a lot of issues to new users, if it's still active".
gobdovan 9 hours ago
I could have linked the GitHub repo, which has been abandoned for 11 years and ranks higher on Google than the sourceforge page, but that would maybe have been disingenuous. (https://github.com/ngspice/ngspice)
eqvinox 11 hours ago
+1, a project presenting at FOSDEM certainly does not need a "revive".
etimberg 9 hours ago
The spice core that ngspice is built on is terrible code. It has a long history going back to 1970s-era Fortran. Starting fresh is probably preferable.
eqvinox 9 hours ago
That's not a revive though, revive (at least to me) implies it's dead.
bsder 9 hours ago
> The spice core that ngspice is built off is terrible code. It has a long history going back to 1970s era fortran. Starting fresh is probably preferable

That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, which is a unique set of linear differential equations).

If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.

nextaccountic 7 hours ago
> and generally resistant to parallelization (each device can have its own model which are a unique set of linear differential equations).

Solving sets of differential equations is something that's parallelizable though

See for example how there are physics engines running on GPUs. That's mechanics and not electric circuits, but it's differential equations all the same.

aragilar 6 hours ago
Which differential equations are you talking about? Linear ones have standard solutions and are definitely parallelisable (though you can basically just write the solution down by hand). Non-linear ones vary from those that can basically be approximated by a linear solution with corrections, to those that need relaxation methods (which are obviously not parallelisable).

Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.

ted_dunning 6 hours ago
To be specific, a linear solver can be written in a week (as in, I have done it).

A serious non-linear solver that handles legacy Spice models is another beast entirely. And if you want to integrate modern advances in differential-algebraic systems, you take that to a higher level still.

These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.

Another example of a related problem that parallelizes poorly even though it is linear is the FDTD formulation of Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always memory bandwidth, which is what makes them so hard to parallelize.

ycui7 6 hours ago
The type of people who need spice are dead serious about accuracy; sometimes even 1 ppm of error is not tolerable. So an optimization from a game engine is definitely not suitable for engineering simulation.
vintermann 1 hour ago
> Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

But that's exactly the sort of exotic domain knowledge that AI models have that I don't.

pkolaczk 3 hours ago
That code was optimized for performance for 1980s hardware. It’s very far from optimized for modern CPUs.
folderquestion 30 minutes ago
Just an aside: is there any way to know how many of those 16,000 compiler errors are independent? I mean, could it be that just by changing, say, 500 lines of code all those errors disappear?

Perhaps 16,000 mostly measures cascade breakage; for example, one lifetime mismatch can cause errors in every function that tries to use that reference.

Rust reference lifetime bookkeeping is a difficult task for LLMs. The LLM has to maintain, across multiple functions and structs, which references outlive which. Furthermore, compiler messages are highly contextual and lifetime patterns are sparse in the training set.
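For illustration, a minimal hypothetical sketch (not from Bun's code) of how one such cascade starts: change one field from owned to borrowed, and every signature that touches the type has to grow a lifetime parameter too, so a single mismatch surfaces as an error at each use site.

  // Hypothetical: one borrow-vs-own change fanning out.
  struct Request<'a> {
      path: &'a str, // was `String`; now borrows from the raw input
  }

  fn parse(raw: &str) -> Request<'_> {
      Request { path: raw }
  }

  // Every function that takes, stores, or returns `Request` must now
  // thread the lifetime through as well; until each one is updated,
  // `cargo check` reports an error at every such site.
  fn handle(req: Request<'_>) -> String {
      format!("handling {}", req.path)
  }

  fn main() {
      let raw = String::from("GET /index.html");
      println!("{}", handle(parse(&raw)));
  }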

inglor 13 hours ago
Rust is really fun to work with and the compiler is great; just make sure the rewrite takes compile times into account, since larger projects often have to be organized in a way that keeps compilation reasonably fast.
laurencerowe 10 hours ago
In my experience Bun in Zig compiles more slowly than Deno in Rust.
hiccuphippo 10 hours ago
For single compiles, sure. Where Zig is optimizing compilation is in the incremental compiler, which I've seen recompile the compiler itself in an instant after a single-line change. Of course, that kind of speed is probably not interesting to some people if the AI is writing tons of lines of code before they get to the compilation step.
laurencerowe 8 hours ago
I found making single line changes in Bun’s zig code led to very long compiles compared to doing the same in Rust code. It was a while ago though and maybe I was doing something wrong.
cdud3 4 hours ago
Probably a very long time ago then. Try again with Zig 0.16. It's amazing how fast recompiles can be.
lukaslalinsky 1 hour ago
They can't, because Bun is tied to a fork of Zig 0.14 which is not compatible with regular Zig compiler.
ignoramous 13 hours ago

  how long does it take to compile?

  @jarredsumner: It's basically the same as in zig using our faster zig compiler. If we were using the upstream zig compiler, rust port would compile faster.
https://x.com/jarredsumner/status/2053050239423312035
cpeterso 8 hours ago
What coding model are you using for the rewrite? Opus for everything? A prerelease model like Mythos?
Aeolun 8 hours ago
This does not surprise me in the least. Several Claudes are very good at splitting up tasks and working through them all.
nhatcher 12 hours ago
That's a post I am eagerly waiting to read.

Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected even when you take this law into account.

I am a Rust developer myself, but I really love Zig and Bun. I am just overly curious about all this.

nextaccountic 7 hours ago
> Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected even when you take this law into account.

Even LLMs themselves can't accurately estimate this (though this may be out of distribution stuff)

sysguest 13 hours ago
> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

haven't used zig...(only used rust)

but zig doesn't solve those problems?

nyrikki 12 hours ago
Zig is a middle ground. It solves some of the common foot-guns in C, without the costs of the affine substructural typing that gives Rust its superpowers.

I am of the opinion that it is horses for courses, not a universally better proposition.

Because my needs don’t fit in with Rust’s decisions very well, I will use zig for personal projects when needed. I just need linked lists, graphs, etc…

While hopefully someone can provide a more comprehensive explanation, here are the two huge wins for my use case.

1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.

2) defer[0] allows you to colocate the freeing of resources with the code that acquires them.

That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.

I was working on some eBPF code in C and did really miss zig.

For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.

[0] https://zig.guide/language-basics/defer/

IshKebab 12 hours ago
Fwiw you don't need unsafe for graphs or linked lists in Rust. At least not directly - these things can be abstracted. The petgraph crate is the most popular for graphs. I'm not sure about linked lists because linked lists are the wrong choice 99.9% of the time.

I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.
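As a concrete example, here's roughly what safe-Rust graph code with petgraph looks like (a sketch; the toy data is made up, but `Graph` and `dijkstra` are real petgraph APIs):

  use petgraph::algo::dijkstra;
  use petgraph::graph::Graph;

  fn main() {
      // No unsafe anywhere in user code; node indices stand in for the
      // raw pointers a hand-rolled graph would have to juggle.
      let mut g: Graph<&str, u32> = Graph::new();
      let a = g.add_node("a");
      let b = g.add_node("b");
      let c = g.add_node("c");
      g.add_edge(a, b, 1);
      g.add_edge(b, c, 2);
      g.add_edge(a, c, 5);

      let costs = dijkstra(&g, a, None, |e| *e.weight());
      assert_eq!(costs[&c], 3); // a -> b -> c beats the direct edge
  }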

fao_ 12 hours ago
[flagged]
IshKebab 11 hours ago
It's not as simple as that. All software is abstraction and with any software if you go deep enough you'll find unsafe code.

E.g. look at a Python list. Is it safe? In Python sure, but that's abstracting a C implementation which definitely isn't safe.

If you look at Rust's std::Vec you'll find a very similar story - safe interface over an unsafe implementation.

It isn't as binary as you think.

dralley 10 hours ago
Not really though. That's like saying that no language is "safe" because the compiler could have a bug.

It's true that safe wrappers around unsafe code sometimes have bugs in them, but it's orders of magnitude easier to get the abstraction right once than to use unsafe correctly in many places sprawled across a large codebase.

paulddraper 11 hours ago
If you don’t see any difference between those two, I’m really not sure what to say.
awesome_dude 12 hours ago
Show code
IshKebab 11 hours ago
Err https://github.com/petgraph/petgraph

What are you asking for exactly?

awesome_dude 9 hours ago
I don't think it's unreasonable (even though I am getting marked down for daring to ask) to expect people who are making assertions to show examples of what they are talking about, even when those assertions are well understood *within their own community* (that is, not necessarily universally known).

You're correcting someone, so it's clear that your understanding isn't universal, and example code is the absolute minimum.

rascul 8 hours ago
It doesn't seem clear what code you're asking for.
SuperV1234 10 hours ago
Zig doesn't even have RAII...
reactordev 8 hours ago
which is a good thing. C++'s RAII is magic sauce that does a lot for you behind the scenes, whereas in zig you can simply use `defer`. A constructor is just a function call. A destructor is just a function call.
shakow 8 hours ago
And a function call is just a fancy JMP; still, it's generally acknowledged to be better to have all the bookkeeping automated.
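For comparison, the automated bookkeeping in Rust terms (a minimal sketch; `TempFile` is a made-up type):

  use std::fs;
  use std::path::PathBuf;

  struct TempFile {
      path: PathBuf,
  }

  impl Drop for TempFile {
      // Runs automatically on every exit path: normal return, early
      // return, `?`, or panic. With `defer` you have to remember to
      // write the cleanup line in each scope that acquires the resource.
      fn drop(&mut self) {
          let _ = fs::remove_file(&self.path);
      }
  }

  fn main() {
      let _tmp = TempFile { path: PathBuf::from("/tmp/scratch.txt") };
      // ... use the file; it is removed when `_tmp` goes out of scope.
  }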
fooker 8 hours ago
How is defer not magic sauce?
zephen 6 hours ago
Whether you consider it magic is up to you, but, unlike a destructor in RAII, there is nothing automatic going on. If you don't explicitly invoke a destructor, you won't get a destructor.

The fact that you can explicitly invoke the destructor to happen later is simply syntactic sugar, just like if/else/while, or any other control construct more powerful than a conditional jump instruction.

smj-edison 1 hour ago
And more importantly, you can choose which destructor to call. This is perhaps the most underrated thing about defer: it can select among many possible destructors, at multiple levels (group free with arenas, individual free, etc.).
drysine 3 hours ago
> If you don't explicitly invoke a destructor, you won't get a destructor.

When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)

>The fact that you can explicitly invoke the destructor to happen later

You don't specify where the `defer`-red "destructor" will be invoked.

efficax 13 hours ago
zig has unmanaged memory. But rust also allows memory leaks, and they're not uncommon in large, complex programs. So this rewrite will not necessarily control for that.
X0Refraction 19 minutes ago
What language doesn't allow memory leaks?
dmytrish 6 minutes ago
There are two kinds of memory leaks: forgotten manual freeing (all references are gone, but the allocation is not) and forgetting to get rid of references that keep an allocation alive. Both are kinds of logical error, but the first is mostly possible only in languages with manual memory management. The second is a universal logical error (only the programmer knows which live references are really needed).
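The second kind is easy to produce even in 100% safe Rust; a minimal sketch with a reference cycle:

  use std::cell::RefCell;
  use std::rc::Rc;

  struct Node {
      next: RefCell<Option<Rc<Node>>>,
  }

  fn main() {
      let a = Rc::new(Node { next: RefCell::new(None) });
      let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
      // Close the cycle: a -> b -> a. Neither refcount can reach zero,
      // so both allocations leak. (std::mem::forget and Box::leak are
      // other safe, even intentional, ways to leak.)
      *a.next.borrow_mut() = Some(b.clone());
  }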
josephg 13 hours ago
Nope! Zig is like C in this regard. There’s no borrow checker. Managing memory is your responsibility.

It gives you a few more tools than C - like a debug allocator, bounds checked array slices and so on. But it’s not a memory safe language like rust.

dnautics 12 hours ago
It's not... but I'm pretty sure it could be. You could probably even take this (WIP) idea and bolt on a formal verifier pretty easily.

https://github.com/ityonemo/clr

josephg 9 hours ago
It'd take more than that to match rust's borrow checker. Rust's borrow checker tracks lifetimes, and sometimes needs annotations in code to help it understand what you're actually trying to do. I suppose you could work around that by adding lifetime annotations in zig comments. Then you'd have a language that's a lot like rust, but without an ecosystem of borrowck-safe libraries. And with worse ergonomics (rust knows when it can Drop). And rust can put noalias everywhere in emitted code. And you'd probably have worse error messages than the rust compiler emits.

It's an interesting idea. But if you want static memory safety in a low-level systems language, it's probably much easier to just use rust.
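The annotations in question are things like the textbook example below, where the compiler can't infer which input the output borrows from and refuses to guess:

  // Without the explicit 'a, the compiler cannot tell whether the
  // returned &str borrows from `x` or from `y`.
  fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
      if x.len() > y.len() { x } else { y }
  }

  fn main() {
      let a = String::from("longer string");
      let b = String::from("short");
      println!("{}", longest(&a, &b));
  }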

dnautics 7 hours ago
> I suppose you could work around that by adding lifetime annotations in zig comments.

you can make a no-op function that gets compiled out but survives AIR

> rust knows when it can Drop.

and it's possible to cause problems if you aren't aware of where rust picks to drop.

> And rust can put noalias everywhere in emitted code.

zig has noalias and it should be possible to do alias tracking as a refinement.

> But if you want static memory safety in a low level systems language, its probably much easier to just use rust.

don't use that attitude to suck oxygen out of the air. rust comes with its own baggage, so "just using rust because it's the only choice" keeps you in a local minimum.

josephg 6 hours ago
> and it's possible to cause problems if you aren't aware of where rust picks to drop.

Can you give some examples? I've never run into problems due to this.

> don't use that attitude to suck oxygen out of the air. rust comes with its own baggage

Yeah, that's a totally fair argument. One nice aspect of the approach you're proposing is it'd give you the opportunity to explore more of the borrow checker design space. I'm convinced there's a giant forest of different ways we could do compile-time memory safety. Rust has gone down one particular road in that forest. But there are probably loads of other options that nobody has tried yet. Some of them will probably be better than rust - but nobody has thought them through yet.

I wish you luck in your project! If you land somewhere interesting, I hope you write it up.

dnautics 6 hours ago
> Can you give some examples? I've never run into problems due to this.

If it's doing a drop in a hot loop, that may be an unexpected performance regression that could have been carefully lifted out.

thank you. Unfortunately in the last few weeks I've been too busy with my startup to put as much work into it. We'll see =D

josephg 4 hours ago
> If it's doing a drop in the hot loop that may be an unexpected performance regression that could be carefully lifted.

Yeah, I've heard of people who make massive collections of Box'ed entries and then get surprised that it takes a long time to Drop the whole thing. But this would be the same in C or Zig too. Malloc and free are really complex functions. Reducing heap allocations is an essential tool for optimisation.

The solution to this "unexpected performance regression" in rust is the same as it is in C, C++ and Zig: Stop heap allocating so much. Use primitive types, SSO types (SmartString and friends in rust) or memory arenas. Drop isn't the problem.

brabel 1 hour ago
In zig the solution is to use an arena allocator. That’s about as easy as it gets. Maybe Rust also allows doing that, I don’t know.
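It does; a minimal sketch using the third-party bumpalo crate (one of several arena allocators on crates.io):

  use bumpalo::Bump;

  fn main() {
      let arena = Bump::new();
      // Allocations borrow from the arena and are all freed together
      // when `arena` is dropped; there are no per-value free calls.
      let n: &mut i32 = arena.alloc(41);
      *n += 1;
      let s: &str = arena.alloc_str("hello arena");
      println!("{n} {s}");
  }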
baranul 3 hours ago
It is quite obvious that Zig is pre-1.0, with thousands of stranded, unsolved issues (per their GitHub repo). A review of the Zig hype gives the strong impression that it was created by the language being relentlessly, even suspiciously, pushed on HN, beyond what its language rankings (per TIOBE or GitHub stats) would justify, so that many were under the illusion that the language was something more, or other, than what it really is.

Zig is still under development and in beta. Stability issues, crashes, and leaks should not be surprising; they should even be expected. To stick with a beta language, companies and developers usually need to be philosophically and/or financially aligned with it. An example is JangaFX and Odin, where they have not only committed to using the language (despite it being beta) in their products, but have directly hired GingerBill.

Team Bun appears to have "alignment and relationship issues" with Zig, to the point they have decided to extensively explore their options. Now Bun is rewritten in Rust. They are seeing if Rust solves their requirements. As with any relationship, if one ignores or takes a partner for granted, don't be surprised if they want a divorce or jump to someone else.

smj-edison 1 hour ago
You might want to check their Codeberg then, because they've moved all their development over there...
baranul 1 hour ago
Zig very much could have moved all of their GitHub issues over to Codeberg, to be resolved, but chose not to do so. Thus they left thousands of issues unsolved and stranded.

This maneuver was arguably obfuscated by the anti-LLM stance and the finger-pointing at Microsoft, but many have noticed nevertheless. Zig had, for a long time, been falling behind and doing poorly on its open-to-close ratio for resolving issues. It should be embarrassing to leave so many issues open.

Even setting aside that they are not accepting new GitHub issues, they have demonstrated an inability to resolve existing ones, except at an extremely slow pace. Considering there are just about no new issues coming into their GitHub repo, it is understandable if some find the pace of closing issues, and the number left open, unacceptable or questionable, on top of the clearly bad open-to-close ratio.

smj-edison 1 hour ago
Did you read their migration post? They are thinking about it as COW, so they're using both issue trackers right now, but as soon as they update an issue it jumps straight to the Codeberg issue tracker. It's an unconventional way of doing it, but it's no conspiracy.
lelanthran 13 hours ago
Peter Naur: Programming as Theory Building

Bun: Hold my beer

Eufrat 11 hours ago
I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.
minimaxir 11 hours ago
Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."
dakj12iH 9 hours ago
"I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive."

These are two assertions. There could have been a prior secret rewrite that took much longer than six days and this is a marketing stunt for Anthropic. In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.

preommr 9 hours ago
Those are not assertions of anything meaningful. We have no idea what his expectations were. Maybe he expected it to be absolute crap, and it was only kind of crap. None of it means that it's actually viable. My fat uncle trying to beat Bolt's time could exceed my expectations by improving from 30s to 20s; that doesn't mean it's ever going to be a reality.

> In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.

In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic. This means that people who have an ax to grind against Anthropic (admittedly a reasonable position) will take the most antagonistic position they possibly can because of personal bias.

thrwaway55 8 hours ago
I disagree. This is the same sort of marketing strategy as Mythos: wow, it outperformed so much that we have to tell you about it in the future. If he weren't financially aligned with the outcome I'd agree, but he is.
perching_aix 8 hours ago
So do you picture them locking up the Rust port behind closed doors as well, or what's the game gonna be? Cause it reads like it's kinda all public already.
thrwaway55 7 hours ago
Absolutely not, I think they prioritize it because it's internal. I do expect to see a stronger marketing push on its ability to do language translations, because there is honestly value in that. The question is when they have the compute, but it's less crisis marketing than their security stuff, so I'd see it at a lower priority. I just don't think it's as honest as the parent post posits.
refulgentis 7 hours ago
The Mythos-truther community is absolutely batshit, sorry. You wrote fanfic and now you're writing more fanfic. The company is faking for marketing so therefore they're faking for marketing. The only things in common between the two situations are you and the word Anthropic, the rest of us are just confused and worried. I'm worried, that's why I'm speaking to you plainly.
logicprog 14 hours ago
Looks like he did the maintainability, performance, and test suite checks and made his decision :)
jazzypants 14 hours ago
Honestly, I fully support the rewrite to Rust, but he should have just owned this from the start. I'm sure he knew in the back of his mind how dedicated he was to that branch as he had already spent the equivalent of thousands of dollars in tokens by that point.
nvme0n1p1 14 hours ago
Bun was VC funded and acquired by Anthropic. He's spending company money, not his own money.
jazzypants 14 hours ago
That's why I said "the equivalent of". Additionally, time and cognitive effort are not free. The work spent on this branch was work that was not spent on other branches. Does that make sense?
nvme0n1p1 13 hours ago
6 days is also nothing when you're doing R&D on your company's dime. He could have spent a month trying a dozen different things and thrown away all the code at the end. As long as he ends that month with a clear picture of where to steer the company over the next 5 years, it's time well spent.
nozzlegear 13 hours ago
Had my former employers been so lenient with how I spend company time, I might still be an office worker instead of self-employed!
throw1234567891 13 hours ago
Not even the company is spending money. It’s their employee working on a rework of the code owned by the company that owns the infrastructure on which the rework is done. And that company is still yet to turn profit. This work is subsidised by everyone who pays for Claude.
skybrian 14 hours ago
Announcing the decision a week earlier wouldn't help anyone. Maybe he expected it to work (though he didn't say that), but there's no reason to make a final call before seeing that it did work.
jazzypants 13 hours ago
Fair enough. I didn't say anything about a "final call". It just feels like there is a middle ground between that and telling people they are overreacting.
fragmede 14 hours ago
Yeah but with no guarantee that it was going to work, why should he have?
jazzypants 14 hours ago
Yeah, but he obviously had enough confidence in this project to keep the agents working at it, didn't he? Given infinite time and money, if you prompt an LLM about something enough times, it will eventually work.

Insert something about monkeys, typewriters, and Shakespeare here.

furyofantares 12 hours ago
He was 2 days into a project that ended up taking 6. You're being extremely unreasonable.
throw1234567891 13 hours ago
But you didn’t have to sit and type. Assuming that you look at what it did, why not?
zephen 6 hours ago
But he was just working along and someone else outed his branch, right? Dude doesn't owe you any sort of explanation.
raincole 12 hours ago
Yeah, that means it's an extremely successful experiment so far.
4aksh19 14 hours ago
"No one has the intention of building a wall" - Walter Ulbricht, chairman of the central committee, a couple of months before the Berlin Wall was built.

The AI companies and their associates are beginning to surpass that level of denials and lies.

christopherwxyz 13 hours ago
It’s disrespectful, and poor netiquette, to immediately jump to adversarial conclusions from a simple desire to refactor.
yrjrjjrjjtjjr 12 hours ago
The right to be suspicious of the motives of powerful people is infinitely more important than protecting their feelings from being hurt by suspicion.
johncolanduoni 10 hours ago
Powerful people figured out how to make suspicion work for them long ago. You have every right to be unconditionally suspicious, but it’s not a good way of accomplishing any change. Also their feelings are not hurt by what you or I think, they don’t care.
irishcoffee 5 hours ago
> Powerful people figured out how to make suspicion work for them long ago. You have every right to be unconditionally suspicious, but it’s not a good way of accomplishing any change.

How does one accomplish change? Even being a martyr doesn't get traction. As far as I can tell, you need to already be powerful. Nobody lets you into that group if you're not aligned with said group.

Protests (at least in their current form) don't work. Trying to assassinate someone doesn't move the needle (also not the play, I don't support murder), vocal grassroots leaders are no longer relevant at all, if they ever were.

How does one accomplish any change?

johncolanduoni 4 hours ago
Not by trading the same suspicions on the internet with fellow true believers over and over again, I think the past 10 years have proven that pretty conclusively. Maybe people should try some of the things previous social movements did, seemed to work pretty well even against a much more uniform media environment and a stronger hostile social consensus.

Protests don’t immediately solve everything, but I think looking at 2026 and concluding they don’t move the needle at all is a weird take. There are recent examples of protest movements (especially long-term ones) working all over the world.

otterley 10 hours ago
This isn’t about rights. It’s about not being a jerk. Assume positive intent unless you have direct evidence to the contrary.
christopherwxyz 11 hours ago
Protecting software creators, engineers, builders, and their work, regardless of their tools, is infinitely more important. Full stop.
erkat 12 hours ago
If experienced developers (experienced in open source and corporate politics) were betting on Polymarket on whether the rewrite will ultimately be merged, which side would you bet on?

What would the emerging odds be? My guess is 19/20 in favor of ditching Zig.

I have followed many initial denials on a wide range of topics, not only rewrites, over the years. Like clockwork, most of them were lies.

geysersam 5 hours ago
I don't think there's much chance it gets merged.

Even if it passed the full test suite there are a ton of software qualities that are not captured by tests and I think it's unlikely the AI made the right trade-off in every such case.

* We haven't seen the benchmarks yet.

* It hasn't seen wide usage. Zig Bun has had tons of bugs ironed out, Rust Bun has a different set of bugs to iron out.

* The developers know the zig codebase well, they don't know the rust code base.

christopherwxyz 11 hours ago
I don't think most serious developers have time to watch prediction markets.
dandellion 12 hours ago
Four days ago there was no intention to rewrite; now it's a simple desire to refactor. It's not an adversarial conclusion, it's pointing out the clear hypocrisy.
johncolanduoni 12 hours ago
Running an experiment, the experiment being more successful than you thought, and then deciding to put more effort into a bigger experiment is not hypocrisy. It’s engineering. If you think some of the objective facts they’re putting out (like test coverage and performance) are lies, go and prove it instead of appealing to emotion.
baranul 9 hours ago
Especially if given near unlimited tokens to burn through, because any level of success fuels the LLM hype machine, which brings ROI.

> It’s engineering.

Significantly, but not totally. The marketing value can't be ignored.

johncolanduoni 7 hours ago
What do you think one would have to pay to have flesh-and-blood engineers get a cross-language port of a codebase of over half a million lines with a broad test suite to over 99% conformance? I think it would be astronomically high, especially given that for this specific project your hiring pool is going to be limited to people who can get up to speed with Zig and JavaScriptCore right away (or you’re going to have to pay them for low output for a while as you train them). Also it would be literally impossible to do in 6 days no matter how much money you paid, so unless they’re lying about that it’s still something that couldn’t have been done prior for any price.

More handwaving about the LLM hype machine is incredibly boring and enough of it is spewed everywhere that whatever social good it was going to accomplish must have already happened by now. If you want to inject reality into the situation, talk about reality (like Anthropic is at least pretending to).

rkajdh 7 hours ago
The hype machine is real and we will talk about it as long as it pleases us. It took decades to get rid of smoking in public places and restaurants, and the clankers will eventually fall, too.

So cash out before that.

johncolanduoni 6 hours ago
Did I say it wasn’t real? Or tell you that you couldn’t talk about it? No, I just pointed out that it’s all anybody talks about and it’s boring and doesn’t engage with anything specific about this stunt/project. And I can make melodramatic analogies too — like to the panic about global overpopulation that led to mass sterilizations in The Emergency. Panic is not an unalloyed good, and if you want to fight “the clankers” you should understand what they are and are not capable of.

Also I already cashed out, jokes on you.

christopherwxyz 11 hours ago
Being able to change your mind is an excellent exercise in free will.
y3ahd0g 6 hours ago
"People cannot change their mind!

One must stick to old assertions forever!

Giant foot is gonna squish us!"

...this forum is as bad as a single backwater sub Reddit.

I am so sick of emotionally frail software engineers. I don't know why I keep bothering floating back here every once in a while to see what is up.

Same old rustled jimmies over technology evolution, like back during emacs vs vi! Tabs vs spaces! SysV init vs systemd!

Super hero power scaling message boards are more engaging than this site.

AI save us from these needlessly economically empowered labor exploiting non-contributor script kiddies. Such an unserious community.

Cthulhu_ 13 hours ago
Not to mention invoking a major historical event, a classic appeal-to-emotion move.
dzonga 12 hours ago
you know this whole exercise is both a marketing exercise and a way to make noise.

would the world come to a standstill tomorrow if every Bun instance out there ran on Node.js?

they know their A.I. can't sell without the noise that it's now on the edge of the frontier. This is hype.

zig adopting a strict 'no LLM' policy affects the LLM vendors.

baranul 10 hours ago
A good point. The business and marketing aspect of this situation can not be overlooked. The rewrite in Rust was a clear marketing opportunity, to maintain the LLM hype, that team Bun warmly embraced.
geodel 9 hours ago
At this point one should just say Anthropic team. I can't think of a Bun team since Anthropic bought Bun.

Jarred the hacker has now been replaced by Jarred the millionaire, soon to be billionaire as Anthropic's valuation keeps going up.

aleksiy123 11 hours ago
It’s also just a useful exercise in general, especially for getting feedback for models and harnesses.

I’ve been thinking about setting up a non trivial project to use as a benchmark for any plugins and/or harness changes I make.

Having a prebuilt verification suite is great. You can use it to assess things like token usage and time across different harnesses, models, and plugins.

johncolanduoni 12 hours ago
I don’t think the Zig project adopting a strict ‘no LLM’ policy affects the LLM vendors at all. How many developers are working on the Zig project itself that will (maybe) now not buy a Claude subscription? I can buy that this is a marketing stunt, but nobody at the top cares if a relatively small open source project doesn’t allow AI contributions.
baranul 10 hours ago
I don't know about that. Zig's bdfl got significant mainstream press attention for his anti-LLM stance. Definitely enough attention for various LLM vendors to notice.
johncolanduoni 7 hours ago
Based on their actions, I don’t think the LLM vendors take anti-AI sentiment very seriously. If anything they court it, though I think it’s more likely they’re just high on their own supply. I doubt the Zig statement had any effect on the thoughts of the people who actually sign contracts with Anthropic, who are mostly not engineers.

The marketing opportunity here is in promoting Claude Code, not giving a smackdown to Andrew Kelley (who vanishingly few people who throw around millions of dollars on AI contracts have heard of).

sdevonoes 11 hours ago
Exactly. Always ask "who benefits from this?". The answer in this case is: AI vendors, not us.
tracerbulletx 11 hours ago
If you think Claude needs manufactured hype at this point to sell it you're delusional.
claude_delusion 11 hours ago
Anthropic literally has an astroturfing program:

https://news.ycombinator.com/item?id=47945021

geodel 9 hours ago
Manufactured hype is just marketing. And companies losing money and looking to get listed very soon absolutely need it.
wiseowise 11 hours ago
That’s how marketing works.
sdevonoes 11 hours ago
If you think they can survive without hype, you are the naive one
faangguyindia 2 hours ago
He works at Claude, he has unlimited tokens. He can do anything; he is using Mythos.
lioeters 12 hours ago
Also a few days before that:

> I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026.

We should have seen this coming after they got acquired by Anthropic, but it's still disappointing. I'm not against large language models as a technology, just thoroughly disgusted how these "AI" companies rose to power, eating the software industry and the rest of society. It's creating a very unhealthy dependency.

Think a few steps ahead and start preparing a slop-free software stack and community. That includes Zig and its ecosystem. Even if we (and future generations) don't manage to live entirely without slop, it's more important than ever to ensure a sustainable computing culture, free as in freedom.

tempaccount5050 11 hours ago
Software companies have been about automating human labor since the invention of computers. It's the whole damn point. Why do you think finance used to be (sometimes still is) the head of the IT dept? Because we automated accounting away. Then typists. Then secretaries. Then drafting. Etc etc.
wiseowise 11 hours ago
> It's the whole damn point.

Believe it or not, for some of us it’s not “the whole damn point”.

tempaccount5050 11 hours ago
Whether or not you want to admit that is up to you. If you're selling automation or efficiency gains, you're removing human labor.
gravypod 8 hours ago
My first "job" in computing, where someone else paid me for code, was in a research context where we were modeling radio propagation. Nothing about that was removing human labor. It in face eventually called for a bunch of humans to interact with each other. See: https://www.hamsci.org/basic-project/2017-total-solar-eclips...

I don't think it is fair to claim computers are about putting people out of jobs.

tempaccount5050 6 hours ago
I think it is. Before computers you would have had to write all that down on paper logs. By using code, you saved yourself time. If it wasn't less labor, you wouldn't have done it that way.
dash2 6 hours ago
Before it was less labor, they might not have done it at all. Computers let you do things quicker. So you do more things.
casey2 2 hours ago
Ok, then go work on homelessness or political corruption. It's not like we have a dearth of problems. Coding is solved.
gravypod 4 hours ago
People *did* write down these logs, manually, and submit them.
tempaccount5050 4 hours ago
And without software, what then, make a bunch of books and mail it to all these people? On this site of all sites, it's blowing my mind that this kind of thing isn't obvious to everyone. I guess maybe it isn't if you were born before the internet, but man, I'm really surprised by some of these comments.
Georgelemental 7 hours ago
Human labor could do the math by hand
aaomidi 6 hours ago
And in fact, was how it was done.
skeledrew 7 hours ago
Why else would one create software, if not to do something that a human does/did?
matt_kantor 6 hours ago
A few off the top of my head:

- Video games

- Medical device firmware

- Synthesizers

- Detailed universe-scale physics simulations

- Mars rover control software

- The Linux kernel

skeledrew 3 hours ago
- Video games - only feasible because of computers.

- Medical device firmware - hardware control layer for medical devices, which are used to aid in medical procedures.

- Synthesizers - help to make music.

- Detailed universe-scale physics simulations - help to make certain physics problems more tractable.

- Mars rover control software - helps to remote control rovers.

- The Linux kernel - control layer that sits between firmware and actual applications, pretty much just a common shared library so apps don't have to each ship with a full stack.

I don't really see your point here. None of these examples counter the argument that software is created to automate human labour as much as is practical.

Video games are an interesting category since they're entirely enabled by software: I can't imagine anyone driving a video game manually (note I don't consider things like Chess, etc software to be video games in this context; more things like FPS, racing, etc). I do remember as a kid I thought that there were actually little people doing the stuff in video games though.

tempaccount5050 4 hours ago
This list is funny.

All of these things existed in pre computer form.

A scheduler used to be a person putting punch cards into a machine.

TheRoque 3 hours ago
What's the human form of a video game ?
brabel 1 hour ago
Board games? All sorts of toys?
TheRoque 1 minute ago
Well not really, since the board game itself doesn't need a paid human to work. It's been crafted by a human, but video games are also crafted by (arguably many more) humans. The closest would be escape games, or larger scale games maybe
grim_io 11 hours ago
No one is taking away programming as a hobby from you :)
sdevonoes 11 hours ago
There are software components out there that are the backbone of our industry, and they are not governed by multibillion dollar companies. Linux, postgres, HTTP, TCP/IP, qemu,…

It’s not that anthropic/google/openai/etc are unavoidable

tomnipotent 11 hours ago
> they are not governed by multibillion dollar companies

Every tech you mentioned is absolutely governed by multibillion dollar companies. Something like 75-85% of OSS code is contributed by employees doing their day job. Most Linux and Postgres contributions come from those same employees. HTTP and TCP/IP are managed by standard bodies and industry working groups that, you guessed it, are governed by multibillion dollar companies. Red Hat and IBM are responsible for 40-60% of contributions to Qemu.

raj1298 11 hours ago
The usual model for OSS projects is that initially they are written for free. Then an inner circle forms and exploits the second generation of idealists who write entire large features without ever getting the same rights.

Some of the inner circle move to corporations to increase their power and are joined by corporate developers (sometimes their bosses) to take over the project.

A lot of corporate OSS development is entirely unnecessary rewrites or simple things like release management. So I'd put the amount of useful code by employees much lower.

But governed, hell yeah, I agree. The corporations crack the whip and oppress real contributors.

claude_delusion 11 hours ago
[flagged]
satvikpendem 9 hours ago
Don't make accounts just to add comments for a specific thread, you will get flagged.
tempaccount5050 11 hours ago
"ok guys, that's enough progress since now it's my job at stake, we can stop."
foxes 10 hours ago
How could it possibly be open source if it requires proprietary models developed by a few companies to write the code?

Seems like that would make open source entirely controlled by open ai, anthropic et al.

brabel 1 hour ago
Open-source and open-weight models are already really good. I don't think anyone really depends on the big AI companies anymore; if they go away, the open-source models seem to be already good enough to take up the torch, and they will continue to improve thanks to research. They may require money to train, but that cost is already covered quite well, and if these models became the mainstream way to use AI, more money from governments and research institutions would be poured into them.

That is actually a very plausible scenario!

andy_ppp 7 hours ago
It isn’t really slop anymore and it will keep improving.
des429 13 hours ago
What's your point?
righthand 13 hours ago
To demonstrate engineers may not be as skilled and knowledgeable as they appear. To make such a comment then turn around and make an announcement days later indicates that the engineers are not skilled in the tools they’re using or even possibly the domain they’re working in.
nerdsniper 12 hours ago
The quote doesn’t provide warrant for this claim. The developer did a great job investigating the applicability of a new tool and it appears the investigation yielded fruit.

Your kind of negativity is pathological.

righthand 12 hours ago
[flagged]
fastball 12 hours ago
What are you even talking about?
esquivalience 13 hours ago
I totally disagree with this! I think it's very important for experts to be able to adapt their opinions based on evidence.
righthand 13 hours ago
Sure but if you’re an expert you’re probably finishing your project and collecting results, not sprinting to an online thread to evangelize for Llms with partial results. That sounds amateur to me.
staticassertion 12 hours ago
He's tweeting his experiences. Calling this "sprinting" and "evangelizing" is just rhetoric. Posting about a project you're working on is hardly amateurish.
righthand 12 hours ago
[flagged]
staticassertion 12 hours ago
[flagged]
supern0va 12 hours ago
Ugh, I really find this sort of thing frustrating. I like people developing, and testing, and ideating, and exploring in public!

This is one of my problems with academia: people only sharing results when they're positive and complete. I want to hear about what people tried that didn't work, and see the string of failures. People are already inclined to avoid sharing their work out of concern that they'll be judged--let's not encourage that behavior, please.

ianbutler 12 hours ago
[flagged]
righthand 12 hours ago
[flagged]
ianbutler 12 hours ago
[flagged]
antonvs 6 hours ago
Being an expert software developer - which Jarred Sumner indisputably is, having created Bun - doesn't automatically make you an expert on predicting the improvements in software development performance that LLMs enable. All of us - experts and amateurs alike - are in the process of figuring that out, in real time, around the world, right now.

Underestimating how quickly a non-trivial project will come together is an almost unheard of phenomenon. It used to invariably be the other way around, to the point that there are laws about it, like Hofstadter's Law, which says that projects always take longer than anticipated, even when accounting for the law itself. Or Fred Brooks' work, which puts limits on how much the development of software projects can be sped up.

The sane takeaway here is that if what's being reported is true (keeping in mind it's coming from a newly minted Anthropic employee), it implies an astonishing, unheard of improvement in software development speed, at least for certain kinds of tasks, enabled by LLMs.

To somehow twist that into "experts may not be as skilled and knowledgeable as they appear" or "not skilled in the tools they’re using" makes me think of the Charles Babbage quote, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such [an opinion]."

mohsen1 14 hours ago
Very impressive that they could do this so quickly because I have been on a similar project (porting TypeScript to Rust) for 5 months. But I guess I don't have access to Mythos and unlimited tokens. I'm also close to 100% pass rate. 99.6% at the time of writing.

https://tsz.dev

Rust is perfect for writing all of the code with an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.

Also want to note that writing the code with an LLM doesn't remove the need to have a vision for the design and the tradeoffs you make as you build a project. So Jarred and his team are the right kind of people to be able to leverage LLMs to write huge amounts of code.
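One concrete example of the kind of dumb mistake the type system rules out (a generic illustration, not from tsz): forgetting a case in a match is a compile error, not a latent runtime bug.

  enum Token {
      Ident(String),
      Number(f64),
      Eof,
  }

  fn describe(t: &Token) -> &'static str {
      // Matches must be exhaustive: delete the `Eof` arm, or add a new
      // variant without handling it, and `cargo check` rejects the code.
      match t {
          Token::Ident(_) => "identifier",
          Token::Number(_) => "number",
          Token::Eof => "end of input",
      }
  }

  fn main() {
      println!("{}", describe(&Token::Number(4.2)));
  }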

cornholio 12 hours ago
> Rust is perfect for writing all of the code with an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.

I question this. Yes, strong enforcement of invariants at compile time helps the LLM generate functional code since it gets rapid feedback and retraces as opposed to generating buggy code that fails at runtime in edge cases.

On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code. If the initial architecture is bad or lacking, growing the code base incrementally as LLMs typically do will tend towards spaghettification. So I fear a program that compiles and even runs ok, but no longer human readable or maintainable.

theptip 12 hours ago
> Rust is a complex language prone to refactoring avalanches

This may be so, but LLMs are great at slogging through such tedious repercussions.

I would say if the language prevents sloppy intermediate states, that actually makes it more amenable to AI; if you just half-ass a refactor into a conceptually inconsistent state, it’s possible for bad tests to fail to catch it in Python, say. But if many such incomplete states are just forbidden, then the compiler errors provide a clean objective function that the LLM can keep iterating on.

geysersam 5 hours ago
This is true in my experience as well. I'd even say it's the most common failure mode of current AI! It "fixes" some problem locally and declares victory, but it doesn't fully address the consequences of the change everywhere, and then the codebase is inconsistent.
brabel 1 hour ago
I’ve seen Claude address the consequences of a change in a way that honestly was more comprehensive than I would be capable of. But I still agree that sometimes it misses the mark. I think that may be due to “adaptive effort”, which Claude uses now by default.
carllerche 12 hours ago
> On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code.

Are you saying this out of personal experience or just hypothesizing? I am working on a large, complex rust project with Claude Code and do not experience this at all.

gobdovan 12 hours ago
It can happen like this:

- write sleek operator-overloading-based code for simple mathematical operations on your custom pet algebra

- decide that you want to turn it into an autograd library [0]

- realise that you now need either `RefCell` for interior mutability, or arenas to save the computation graph and local gradients

- realise that `RefCell` puts borrow checks on the runtime path and can panic if you get aliasing wrong

- realise that plain arenas cannot use your sleek operator-overloaded expressions, since `a + b` has no access to the arena, so you need to rewrite them as `tape.sum(node_a, node_b)`

- cry

This was my introduction to why you kinda need to know what you will end up building with Rust, or suffer the cascade refactors. In Python, for example, this issue mostly wouldn't happen, since objects are already reference-like, so the tape/graph can stay implicit and you just chug along.

I still prefer Rust; it's just that these refactor cascades will happen. But they are mechanically doable, because you just need to 'break' one type and let an LLM correct the fallout errors surfaced by the compiler until you reach a consistent new ownership model, and I suppose this is common enough that LLMs have seen it done hundreds of times, haha.

[0] https://github.com/karpathy/micrograd
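A minimal sketch of that arena/tape shape (hypothetical; values only, no gradients):

  #[derive(Clone, Copy)]
  struct NodeId(usize);

  struct Tape {
      values: Vec<f64>,
  }

  impl Tape {
      fn new() -> Self {
          Tape { values: Vec::new() }
      }

      fn leaf(&mut self, v: f64) -> NodeId {
          self.values.push(v);
          NodeId(self.values.len() - 1)
      }

      // `a + b` via operator overloading has no access to the arena,
      // so addition becomes a method on the tape instead:
      fn sum(&mut self, a: NodeId, b: NodeId) -> NodeId {
          let v = self.values[a.0] + self.values[b.0];
          self.leaf(v)
      }
  }

  fn main() {
      let mut tape = Tape::new();
      let a = tape.leaf(1.5);
      let b = tape.leaf(2.5);
      let c = tape.sum(a, b);
      assert_eq!(tape.values[c.0], 4.0);
  }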

zozbot234 2 hours ago
You can still use the fancy operators for readability, just use a macro to translate them into the actual code. Very common pattern in non-trivial Rust libraries.
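Something like this, to stay concrete (a toy macro handling only the single `+` case, usable with the Tape sketch above):

  // Translate `a + b` surface syntax into the tape call.
  macro_rules! tape_expr {
      ($tape:expr, $a:ident + $b:ident) => {
          $tape.sum($a, $b)
      };
  }

  // let c = tape_expr!(tape, a + b);  // expands to tape.sum(a, b)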
apitman 11 hours ago
This post has some good examples of this sort of problem: https://loglog.games/blog/leaving-rust-gamedev/
rstuart4133 9 hours ago
That link reads like an autobiography of his love affair with Rust and the subsequent breakup after pushing the relationship a step too far: into gaming. He has been using Rust much, much longer than me, but I reckon I've already hit most of the pain points he mentions. (And I notice he left some things out, like async.)

I've come away feeling that most of it looks fixable - but it won't be fixed in Rust. Some of the language choices (like favouring monomorphization to the point of making dll's near impossible) are near impossible to undo now, and in other cases where it might conceivably be fixed (like async) it won't be, because the community is too invested in their current solution.

So we are stuck with the Rust we have; warts and all. That blog post convinced me those warts mean the language should be avoided for game development. Similarly sqlite developers convinced me the current state of Rust tooling meant it wasn't a good fit for their style of high reliability coding, so they are sticking with C. Which is a downright perverse outcome.

But for most of us C programmers who aren't willing to put in the huge effort Sqlite does to get the reliability up, Rust is the only game in town right now. It's the first and currently only language to implement a usable formal proof checker that eliminates most of the serious footguns in C and C++. But I am now hoping it becomes a victim of the old engineering adage: plan to throw the first one away, because you will anyway.

staticassertion 12 hours ago
It's very easy to just instruct the LLM to build using isolated crates, to maintain boundaries, focus on "ports and adapters", etc, and not run into this - in my experience.

I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.

mohsen1 12 hours ago
Of the languages that I know, Rust is the only one where I can look at multi-threaded code and understand it. Having this stuff checked by the compiler is a huge advantage.
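The classic illustration (a generic sketch): the compiler statically rejects unsynchronized sharing via the Send/Sync traits.

  use std::sync::{Arc, Mutex};
  use std::thread;

  fn main() {
      // Drop the Arc<Mutex<...>> and try to share a plain &mut instead,
      // and this no longer compiles.
      let counter = Arc::new(Mutex::new(0u32));
      let mut handles = Vec::new();
      for _ in 0..4 {
          let counter = Arc::clone(&counter);
          handles.push(thread::spawn(move || {
              *counter.lock().unwrap() += 1;
          }));
      }
      for h in handles {
          h.join().unwrap();
      }
      assert_eq!(*counter.lock().unwrap(), 4);
  }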
TheMrZZ 12 hours ago
I've only used Rust for fun maths projects crunching billions of numbers (otherwise Python is easier for me), but I have to say rayon is the most amazing parallel-processing experience I've ever had!
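For reference, the rayon experience being described is roughly this (a sketch):

  use rayon::prelude::*;

  fn main() {
      // Swap a sequential iterator for a parallel one and the work
      // spreads across a thread pool; the borrow checker guarantees
      // the closure is safe to run in parallel.
      let sum: u64 = (0..1_000_000u64).into_par_iter().map(|x| x * x).sum();
      println!("{sum}");
  }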
nm980 12 hours ago
> I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.

This rewrite is >750k lines of Rust

staticassertion 12 hours ago
I don't see any reason why the approach wouldn't hold just fine, if not better, as the codebase scaled. Indeed this appears to be exactly what the author has done, they mention that they made heavy use of crates.
mohamedkoubaa 12 hours ago
[flagged]
kayson 13 hours ago
When Microsoft rewrote it in go, there was a comment from one of the leads that they chose it over rust because of the similarity in paradigms (garbage collection, etc), and that using rust would've been more difficult, requiring a lot of "hoop jumping". Now that you've done it... Thoughts?
mohsen1 13 hours ago
Yes indeed. More than 1 million lines of code (including tests) means jumping through lots of hoops, but with LLMs it's not as painful: you can just ask them to do the hard things.

Here's an example of a Claude Code session that came out without results after 2 hours of "crunching": https://github.com/mohsen1/tsz/pull/4868 (Edit: I force-pushed to the PR to solve the problem; you can see the initial refusal message in the first version of the PR description.)

Funny thing is, the last percent of the tests has been so hard to work on that Opus 4.7 routinely bails and says "it's too involved or complicated", so I had to add prompts specifically asking it not to bail.

baq 13 hours ago
You should try GPT, I’d be really interested to hear if it works better. (Exclusively using GPT for systems work at $DAYJOB, but compare with opus every couple weeks and GPT consistently gives me better results)
X-Istence 12 hours ago
I've been comparing Claude vs Codex (using GPT), and Claude is consistently better than GPT at reasoning, at writing code, and at using the tools appropriately.

GPT for instance had a lot of issues using git worktrees, and didn't understand how to correctly use it to then merge stuff back into a main branch, vs Claude which seems to do this much more naturally.

GPT also left me with broken tests/code that I had to iterate on manually, Claude is much better about reasoning through code. Primarily Python.

_flux 2 hours ago
> GPT for instance had a lot of issues using git worktrees, and didn't understand how to correctly use it to then merge stuff back into a main branch, vs Claude which seems to do this much more naturally.

I wonder how much of that is due to the model being somehow better, or the harness having built-in instructions on how to use them.

I've used worktrees with Codex just fine, but I instructed it to use my scripts for setting it up and tearing it down. The scripts also reflinked existing compilation artifacts to speed up compiling and allocated a fresh db instance for it, but then also applied a simple protocol for locking the master repository during merges, so multiple agents wouldn't try to merge at the same time. It has been following those instructions quite well.

mohsen1 13 hours ago
OpenAI gave me that 10x boost and I've already used it all for this week. I'm guessing the last 50 tests are only doable by GPT 5.5 xhigh.
odie5533 13 hours ago
Do you have any write ups on your workflow with Claude and github dev?
mebcitto 13 hours ago
That might be Opus 4.7 behaviour, because I've also been getting that all the time in the past few weeks. Also a complex codebase, but likely an order of magnitude simpler than yours.
adambrod 13 hours ago
They mentioned that they wanted to port their compiler over to retain existing behavior (vs a re-write) and Rust has a hard time with their cyclic data structures.
calmoo 13 hours ago
Is GC useful for a static type checker? Or did they make a new runtime?
aardvark179 10 hours ago
The point is that having a GC will affect your data structure and algorithm design, so it’s easier to automatically transform JS or TS to Go than to rust because you’re mostly reducing things down to one problem (translation) rather than multiple intertwined problems.
malisper 12 hours ago
Same but for multi-threaded Postgres[0]. 96% of pg regression tests pass after 1 month and 823K LOC. Eight Codex accounts at $200/mo is what I could use up, with no Mythos.

I've also seen the benefits of Rust for this. And I'm making the bet that my pg experience will help me make good design choices around many of the things people have had trouble with in pg for a long time[1]. I'm excited to see AI make it more practical to improve complex pieces of software than it has historically been.

[0] https://github.com/malisper/pgrust [1] https://malisper.me/the-four-horsemen-behind-thousands-of-po...

mohsen1 11 hours ago
Very cool! If you have extra tokens lying around, ask the agent to try to break things and open GitHub issues. This is what I do for tsz, and beyond the conformance tests I can see it finding very good bugs.
brcmthrowaway 10 hours ago
$1,600/mo; there is now a token-rich class.
IshKebab 1 hour ago
96% of tests passing sounds impressive, but I remember that C compiler that had similar (or better) stats yet was still hilariously broken, because the test suite didn't cover many "obvious" things that a human wouldn't get wrong even without the tests.
mixtureoftakes 11 hours ago
wow!

curious about your workflow for running all these accounts. different harnesses in parallel? manually switching in codex? 5.5pro only?

what works for you?

malisper 10 hours ago
I wrote up a bit about my workflow here[0][1]. I'm using conductor.build to manage multiple codex sessions at once. When I hit the rate limit, I'm using codex-auth[2] to switch codex accounts.

[0] https://malisper.me/pgrust-rebuilding-postgres-in-rust-with-... [1] https://malisper.me/pgrust-update-at-67-postgres-compatibili... [2] https://github.com/loongphy/codex-auth

bicepjai 12 hours ago
Rust is amazing, but the way I want to build Rust software breaks down on large projects with LLMs. Maintaining clean boundaries or even just establishing them stops being a flow state and turns into painful reviews that push me into procrastination mode.
girvo 11 hours ago
I’ve struggled to get Opus to not write the weirdest possible Rust, ignoring all idioms and so on. Any tips?
antonvs 6 hours ago
Give it coding guidelines. It'll largely try to do what you ask.

Left to itself, it often follows human developers who conceive of their goal as "get the program working, the end justifies the means." Which makes sense because there are a lot of systems like that in the training corpus.

Ciantic 13 hours ago
Wow, amazing work.

Pretty impressive that it is faster than the Go version already.

mohsen1 13 hours ago
Thank you!

It's much faster in single file benchmarks (3 to 5x)

https://tsz.dev/benchmarks/micro

I have optimizations planned for large projects that I'm still fleshing out.

aabhay 13 hours ago
Zig is much more type-aligned to Bun than TypeScript. And there's a common interface of C FFI, so you could imagine porting it modularly and keeping the test suite in Zig.
logicchains 2 hours ago
>Rust is perfect for writing all of code using LLM.

Rust is a terrible language for using LLMs to write code if Rust's low latency isn't needed, because of its extreme compile times. LLMs code faster than humans so a far bigger fraction of the time is spent waiting for the compiler, and a reasonably sized project will take literally 10x longer to compile in Rust than in e.g. Zig or Go.

lanthissa 12 hours ago
Shouldn't typed code that uses a functional style be kind of the perfect endgame for LLMs? You can parallelize generation at any granularity, easily ring-fence changes, and reproduce everything, and types give clues to the LLM.
45h2avf 13 hours ago
[flagged]
Aurornis 13 hours ago
> How do we know it is true?

The branch is open.

You can check it out and run the tests if you don’t believe it.

christopherwxyz 13 hours ago
Zig isn’t so much on the blacklist because of the culture it carries from its maintainers, but because the ecosystem is no longer easily composed with other GitHub projects/GitHub Actions.
madspindel 13 hours ago
> We are dealing with a company of habitual liars and promoters.

Any sources to back this up?

Tiberium 14 hours ago
I just want to comment that I think it's a good change if we look past the AI involvement.

Bun has had an extremely high amount of crashes/memory bugs due to them using Zig, unlike Deno which is Rust.

Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still get better

lionkor 1 hour ago
> Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still get better

You get very few of the Rust guarantees when you litter your code with unsafe to get around the safety checks (which is what they're doing here). I would not recommend running this in production.
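For a concrete flavor of what that means, here's a deliberately bad sketch: it compiles because `unsafe` tells the compiler to trust the programmer, but it's still a use-after-free, so the usual guarantees simply no longer apply to code like this.

  fn dangling() -> &'static str {
      let s = String::from("hello");
      let ptr = s.as_ptr();
      // `s` is dropped when the function returns; reading through `ptr`
      // afterwards is a use-after-free, but `unsafe` lets this compile.
      unsafe { std::str::from_utf8_unchecked(std::slice::from_raw_parts(ptr, 5)) }
  }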

mi_lk 14 hours ago
> Bun has had an extremely high amount of crashes/memory bugs

Any stats/source? Not that I think it's false

> and the ugly parts look uglier (unsafe) which encourages refactoring.

Looks like Bun owes that to itself to some extent, not solely because of the language

dmd 14 hours ago
You want a better source than the actual author of Bun?
nesarkvechnep 13 hours ago
Authors can't exaggerate? Maybe some actual numbers can convince people.
nicce 11 hours ago
Here: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20%22Segm...

Around 2500 issues with segmentation fault.

shpx 8 hours ago
As compared with 41 for deno

https://github.com/denoland/deno/issues?q=is%3Aissue%20%22Se...

With the total number of issues being 16,458 for bun and 14,259 for deno.

frde_me 13 hours ago
The cool thing is the author doesn't actually have to convince anyone
enricozb 14 hours ago
I believe the author is the creator of Bun.
brazukadev 9 hours ago
Is he working for Anthropic now?
danaw 1 hour ago
anthropic bought bun recently
dminik 13 hours ago
Not that it's a particularly accurate stat, but:

https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...

119 open, 885 closed

https://github.com/denoland/deno/issues?q=is%3Aissue%20state...

10 open, 46 closed

afavour 14 hours ago
FTA:

> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

Not a hard number obviously but a clear indication those issues exist.

qudat 13 hours ago
I don’t understand: just use an agent to find all memory leaks and segfaults. I don’t get the argument if you are gonna vibe code anyway.

With unlimited tokens make it a lint rule or auto formatter.

duggan 40 minutes ago
LLMs are a force multiplier, not magic. They benefit from good tooling.
Ygg2 14 hours ago
If you look at the percentage of segfault issues in each repo, Bun has a much larger share. Although don't quote me on that.
ozgrakkurt 7 hours ago
> Bun has had an extremely high amount of crashes/memory bugs due to them using Zig

This just sounds like they are not good at using Zig. I have been daily driving ghostty on linux for a fairly long time now and I have never seen these kinds of issues. I have also used ghostty on macos for a bit and didn't have any problems there either.

Zig is really good for writing stable and reliable code. There is also a database written in Zig that seems to be fairly successful [0].

I also wrote Zig for some time and the compiler/toolchain was really pleasant to use. I caused more segfaults in Rust FFI code than in all the Zig I ever wrote.

[0] https://tigerbeetle.com/

baranul 8 minutes ago
> This just sounds like they are not good at using Zig.

That's odd. Given the visibility of team Bun using the language, one would think they could get whatever help and guidance they asked for. It seems weird for team Bun to complain about crashes, leaks, and bugs if they could have had what they were doing wrong explained to them, or their issues fixed in a timely manner.

f311a 10 minutes ago
I think the main problem with Bun is that they are trying to move very quickly.

TigerBeetle devs spend 90% of their time working on stability, safety, tests, and so on. They don't need new features; they need reliable software. Their database is pretty simple in terms of features, and their goal has always been stability and speed. Bun devs spend the majority of their time adding new features.

wallstop 6 hours ago
Ah, yes, the "you're holding it wrong" defense. If one tool is significantly safer than another, preventing entire classes of mistakes the other does not, then even the most skilled craftsman will inevitably make mistakes that the safer tool would have prevented.
bergheim 2 hours ago
Not sure if ghostty is the best example https://mitchellh.com/writing/ghostty-memory-leak-fix
esjeon 4 hours ago
Last time I checked their issue tracker (in 2025), the main source of problems was the engine, not their Zig code. A lot of the core dumps were happening inside and around JSC.
mawadev 6 minutes ago
I remember back in the day we used to blame the user and not the tool, but I guess we changed that notion when it comes to tool vs tool comparisons LOL
chris_st 14 hours ago
And they're clearly marked as `unsafe`, so easy to find, which gives them a nice list of issues to address.
leecommamichael 14 hours ago
Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool? There is a lot of quality stuff made with C/C++, so what is Zig doing wrong?
aystatic 14 hours ago
> Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?

What caused you to hallucinate such a broad blanket statement? The point is the memory unsafety issues they ran into would be categorically impossible in safe Rust, which is why they're doing this in the first place.

mort96 14 hours ago
It's not hallucination, it's a basic extrapolation. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" is the same statement as "using Zig resulted in Bun having an extremely high amount of crashes/memory bugs". It is then natural to ask whether their position is "using Zig results in an extremely high amount of crashes/bugs" in general.
aystatic 13 hours ago
That's a hell of a lot more than "basic extrapolation." You're misrepresenting the original claim to fight against one that's trivially easy to dispute. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" (which unlike Rust, doesn't prevent you from writing them) is a completely different statement than your "using Zig results in an extremely high amount of crashes/bugs." Implying that such a generalization was even on the table is insulting.

Yes, obviously you can write high-quality software in Zig. But does Zig categorically reject the kind of bugs Bun was suffering from? Rust does.

mort96 12 hours ago
The point is that the "extremely high amount of crashes/bugs" is maybe not the fault of Zig after all, as was implied.
fastball 12 hours ago
How software behaves is very obviously downstream of the tools (in this case programming language) used to build it.
mort96 12 hours ago
"Downstream of" is doing a lot of work in that sentence. Language has an effect on, but in no way determines, the reliability of software written in it.
fastball 12 hours ago
Downstream doesn't imply determinism.
mort96 11 hours ago
The original claim is one of determinism. Your use of the term "downstream" is hiding the distinction; it can be read in either way, so it bridges the gap between the position you want to defend ("using Zig causes a higher probability of memory bugs") and the position you're forced to defend ("using Zig results in extremely many memory bugs").

In short, I'm accusing you of doing a motte-and-bailey.

skybrian 13 hours ago
It's generalizing from Bun (which might be especially tricky code) to other software that might not have the similar issues. There are lots of different kinds of software.
afdbcreid 12 hours ago
Even assuming that's a correct interpretation, is "using C/C++ results in having an extremely high amount of crashes/memory bugs" not true?
mort96 12 hours ago
No, that's provably false by a fairly simple existence proof. If it was true that using C results in an "extremely high amount of crashes/memory bugs", we would expect to not find any substantial pieces of software written in C without an "extremely high amount of crashes/memory bugs". Now where exactly you draw that line is necessarily going to be somewhat arbitrary, but by any definition, I think we can all agree that SQLite does not fit that description. Yet SQLite is written in C. Therefore, we conclude that the statement must be false. QED.

Now C does have some aspects which make it more prone to crashes and memory bugs. The less strong statement of "using C results in a higher propensity for crashes/memory bugs than Rust" is absolutely true, I would argue. And both C++ and Rust inherit some (but not all, and not the same) of the aspects which make C prone to memory bugs. (So does Go, I would argue, but less than C++ and Zig.)

mort96 42 minutes ago
Bah waking up today to notice a typo, after the edit window. "And both C++ and Rust inherit some ... aspects" was of course meant to be "And both C++ and Zig inherit some ... aspects".
leecommamichael 11 hours ago
You know, I try to ask questions rather than making assertions in order to better my chances at provoking useful thought and conversation.
pjmlp 14 hours ago
It is basically Modula-2 / Object Pascal with C like syntax.

While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.

C++ already prevents many of those scenarios, at least for folks who don't use it as a plain "better C" and actually make use of the standard library in hardened mode. When they don't, it is naturally as bad as C.

Also note that the tools Zig offers to prevent that are also available in C and C++, but people have to actually use them; e.g. I was using Purify back in the 2000s.

Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.

baranul 10 hours ago
> Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.

Seems like their luck finally ran out. For the longest time, they were getting all kinds of passes, as if they were a post-1.0 language, that others don't get. Ten years is quite a long time not to hit 1.0, or to still be making breaking changes in beta. Though I think that luck was significantly aided by their perpetual and odd HN boosting.

> While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.

While Zig is a somewhat safer and more modern C alternative, safety was arguably never its main selling point. Plenty of other C-alternative languages are equally safe or safer. Dlang and Vlang, both of which now have optional GCs and ownership, are examples.

leecommamichael 10 hours ago
Thank you for actually making the effort to respond to the curiosity in my question.
anthk 11 hours ago
You would like the T3X language as an exercise to port stuff from Free Pascal to it. In the near future I plan to port two libre text adventures to it, Beyond the Titanic and Supernova. If it fits under T3X, it might run on 'high end' CP/M systems out there.

https://t3x.org/t3x/0/index.html

https://t3x.org/t3x/0/t3xref.html

Beyond these simple curses games, there's a 6502 assembler and disassembler, along with a KIM-1 simulator, micro Common Lisps, and whatnot.

thayne 13 hours ago
It is much harder to write quality stuff in C/C++ that doesn't have memory bugs (use after free, out of bounds access, use of uninitialized memory, double free, data races, etc.). I wouldn't say it isn't feasible to build high quality software in those languages, but even the highest quality software written in them has these types of bugs. Zig is better than C, and maybe a little better than C++, especially with respect to spatial memory bugs, but it doesn't provide the same guarantees as Rust.
ozgrakkurt 7 hours ago
I use clang, LLVM, the Zig compiler, Brave, Firefox, KDE, Linux, Steam, PC games, Neovim, ghostty and more software written in C/C++/Zig, and I can't remember the last time I had a crash caused by a memory issue.

KDE also includes many other programs, like a music player, document reader, etc., that I've never had any issues with.

dinkumthinkum 10 hours ago
Based on what? I am not familiar with this language called "c/c++", but if you are writing modern C++, you shouldn't be creating problems like "double free." It's really not that hard to avoid at all. This reminds me of how all the people carried on as if they were making the kernel so much safer, not realizing they needed to use unsafe Rust. I think so many people call themselves programmers now, but so few know very much about computing beyond whatever the latest fad web framework is up to.
thayne 6 hours ago
Sure, if you restrict yourself to a subset of C++ that avoids the more unsafe features, you can avoid some of those problems, but not all of them. And IME, a lot of C++ in the wild still uses those unsafe features, especially when interfacing with C libraries. And even if you always use smart pointers and make sure you always initialize your variables, there are still plenty of ways you can get undefined behavior in C++.

> This reminds me of how all the people carried on as if they were making the kernel so much safer not realizing they needed to use unsafe rust.

Those are not contradictory. Confining unsafe code to a few unsafe blocks makes it easier to identify areas that need closer scrutiny. Just because there are unsafe blocks doesn't mean that using rust in the kernel isn't making it safer.

dminik 14 hours ago
The answer is that C (and by extension Zig, C++) code goes through a hardening process. New code in these languages tends to be unsafe. But bugs and vulnerabilities get squashed over time. Bun gets updated fast and so has a lot of new unsafe code.
jph00 13 hours ago
The statement “there exists a project where zig led to an extremely high amount of crashes/memory bugs” does not imply “all zig projects have an extremely high amount of crashes/memory bugs”.

This is a classic logic problem - eg “there is an orange cat” doesn’t imply “all cats are orange”.

afavour 14 hours ago
> There is a lot of quality stuff made with C/C++

There’s a lot of leaky crap written in those languages too. One of the core promises of Rust is that the compiler will catch memory issues other languages won’t experience until runtime. If Zig doesn’t offer something similar it’ll make Rust very compelling.
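As a trivial, deliberately broken sketch, this is the kind of bug rustc rejects at compile time that other languages typically only surface at runtime:

  fn main() {
      let r;
      {
          let s = String::from("hello");
          r = &s;          // error[E0597]: `s` does not live long enough
      }                    // `s` is freed here while still borrowed
      println!("{r}");
  }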

kllrnohj 10 hours ago
Zig is a love letter to C. It does not do much of anything to address memory management. Doesn't even have any concept of ownership like C++ does (ergo, no equivalent of unique_ptr / shared_ptr). All you get over C is the addition of defer, and even that isn't really that different if you're using GCC or Clang and thus have __attribute__((cleanup)).
ChrisTrenkamp 6 hours ago
This is a hot take, but programming languages haven't progressed since the 90's. We've been conditioned to believe that if you want to be a serious programmer, you have to either use C++-style RAII (which includes Rust), or garbage collection, and there's no in-between, and C programmers are dinosaurs who can be ignored.

Arena allocators are a great way to automatically manage memory allocations. You malloc a whole bunch of memory and release it all with a single free, which makes it much easier to reason about your program's memory safety.
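A minimal sketch of the idea in Rust terms (just to show the shape, not any particular allocator API): everything allocated for one phase of work lives in a single region and is released together when that region goes away.

  struct Particle { x: f32, y: f32 }

  fn simulate_frame() {
      // Per-frame "arena": one backing allocation, items referenced by index.
      let mut frame: Vec<Particle> = Vec::with_capacity(10_000);
      for i in 0..10_000 {
          frame.push(Particle { x: i as f32, y: 0.0 });
      }
      // ... run this frame's physics over `frame` ...
  } // all of the frame's memory is freed here in one go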

Casey Muratori has a good video talking about this. https://www.youtube.com/watch?v=xt1KNDmOYqA

And with Zig, you have an arena allocator out of the box: https://zig.guide/standard-library/allocators/ . And it's not limited to that: you have debug allocators that detect memory leaks and give you stack traces for where they occurred.

This isn't to say that Zig is great at everything. I think Rust is great for things like kernels, high-frequency trading systems, and authentication servers where memory safety and performance are paramount. But for things like video games, memory leaks and buffer overflows aren't that big of a deal, and Zig's "good enough" approach is great for those types of applications.

dnautics 13 hours ago
rust does not promise leak safety.
josephg 13 hours ago
True. But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope. Ownership semantics mean the compiler knows when to free almost everything.
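Concretely, a trivial sketch: heap allocations are freed deterministically when their owner goes out of scope, and a Drop impl runs for anything needing custom cleanup, so there's no free() call to forget.

  struct Connection;

  impl Drop for Connection {
      fn drop(&mut self) {
          println!("connection closed");   // runs automatically
      }
  }

  fn main() {
      let _conn = Connection;
      {
          let buf = vec![0u8; 1024];       // heap allocation
          let _ = buf.len();
      }                                    // `buf` is freed right here
  }                                        // `_conn`'s Drop runs here
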
ethin 6 hours ago
> But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope.

RAII has entered the chat.

dnautics 13 hours ago
> Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?

Plenty of other companies/entities are making high quality software in Zig: TigerBeetle, and Zig itself, for example.

Bun's entire history has been a kind of haphazard, move-as-fast-as-you-can story, so...

Barrin92 14 hours ago
It's feasible to write good software, but anything on the scale of millions of lines of code will have memory and pointer issues. I've worked in large C++ codebases with people much more experienced and skilled than I was, and every single one of them would tell you that at that scale, no matter how economical and simple your program is, you will produce memory bugs; the smartest person in the world makes errors holding that much stuff in their head.

They're difficult to find, difficult to reason about in big software and you'll always create some. Languages that rule that out are a huge improvement in terms of correctness.

margorczynski 13 hours ago
This is correct, but people with too big of an ego (or affected too much by Dunning-Kruger) will try to say otherwise even when presented with ample evidence. Instead of a valid response you'll get "skill issue" from people who produce segfaulting code on a regular basis.
energy123 3 hours ago
Can you or someone shed some light on how much compute it took to do this?
aurareturn 22 hours ago
6 days of work to do this. Even if it doesn't end up becoming meaningful, it shows just how tokens and work done will be linked now and in the future.

It's going to be hard to compete with someone or a company that has more compute. They will just be able to do things you can't.

Aurornis 13 hours ago
Translating a project that includes a good test suite from one language to another is known to be a great case where LLMs work well.

When you’re starting with a complete codebase to use as an example and a test suite to check everything it’s much easier to iterate toward the desired goal. The LLM can already see what the goals are and how they’ve been implemented once already, which is a much easier problem than starting from a spec.

mezyt 11 hours ago
A great case where Rust works well too. I won't cite every famous lib that got rewritten in Rust, but it wasn't all done with LLMs.
lionkor 1 hour ago
I fail to think of a successful Rust rewrite. So far, what I've seen is programmers who aren't sufficiently experienced deciding to pick Rust, rewriting something in it, and then (this is the bad part) claiming it's better for that reason alone. It never is. It's always worse, because rewrites fundamentally end up with a worse product first.
apitman 11 hours ago
It's not hard to imagine a future where the only things committed to git repos are tests and specs.
taftster 2 hours ago
And maybe not even the tests. Just a specification for the tests.
aurareturn 2 hours ago
I can see open source projects as just prompts as well.
Gigachad 6 hours ago
The goal posts are always moving. This would have been an unthinkable task a couple years ago.
osti 5 hours ago
Even last year at this time people wouldn't believe it.
twoodfin 18 hours ago
You could have said the same thing about steam power or electricity. And it’s not just an analogy: The magic of these things is in being universal information engines. You spend capital to build them, using well-understood, scalable techniques, plug them into electricity, and out comes value.

My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.

carefulfungi 13 hours ago
Electricity might be a good analogy - but for the other side of this argument.

In the US, (nearly) full electrification wasn't achieved until the late 1940s/early 1950s - a process of nearly a century. (A moment of personal trivia: my great-grandfather worked on crews electrifying rural areas of the Midwest.)

twoodfin 8 hours ago
We already have SOTA local inference devices in everyone’s pocket, which also provide high bandwidth access to SOTA data center inference at what is rapidly becoming commodity pricing.

What comparable gap is there to bridge?

suddenlybananas 14 hours ago
>My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.

Energy costs vary widely across the world, and that has enormous consequences for the economies of different countries and their industrial capacity.

Dylan16807 13 hours ago
https://worldpopulationreview.com/country-rankings/cost-of-e...

Electricity looks pretty even. Higher in Europe but they can afford that.

alphabeta3r56 12 hours ago
Due to purchasing power parity, it is actually much higher in poorer countries, in that they are absolutely still the have-nots.
Dylan16807 10 hours ago
When something is expensive specifically because a country is poor and everything is harder to buy, that expense isn't making inequality worse.
alphabeta3r56 2 hours ago
I am talking about have-nots at a nation scale here. At the level of the British Empire.
throwaway82012 13 hours ago
[dead]
kzrdude 3 hours ago
It's a new era of capital, literally, in software development. Ownership of the means of production is now concentrated.
sdevonoes 11 hours ago
Unclear. Very good products tend to be about doing one or a few things very well, not about doing tons of stuff. So far, all I see is "Man, I'm a 10x engineer now!", shipping more code but without clear direction and taste. At this point, most LLM-based work is just noise.
qudat 12 hours ago
Nah. These agents are getting easier and easier to run locally. Have you tried Qwen 3.6 27b? It's insane what it can do for its size. You can 100% vibe small projects if you manage context properly.

These models are a race to the bottom just like compute.

aurareturn 2 hours ago
I don't think it matters. Local models becoming better has not stopped demand for SOTA models.
nbf_1995 15 hours ago
I can't help but wonder what this cost in USD assuming you paid standard rates from Anthropic. Can someone even ballpark the price?
baq 13 hours ago
Much less than what it'd cost for a team of Rust engineers.

This is both amazing and scary; has been for a while now.

BearOso 12 hours ago
It costs several times what it would cost a small team of engineers, even assuming you gave the engineers more time to do it. I'm guessing (wildly) this was around 0.5M USD in compute time. You do get the result quicker, though.
dwohnitmok 7 hours ago
> I'm guessing (wildly) this was around 0.5M USD in compute time.

That seems like an especially wild guess. If you take e.g. Opus 4.7 prices, and make the assumption that you are consuming roughly $30 for every million tokens of output (this comes from just summing the $25 per million tokens of output and $5 per million tokens of input and assuming that caching basically makes all that work out), and assume an output rate of 80 tokens per second (which seems like a high estimate based on online searching), it would take you about 2411 days of non-stop Opus 4.7 usage to hit 500k in API spend.

The only way you could possibly run that amount of usage in 6 days is if you were running ~400 instances in parallel. From personal experience, that seems crazy high for this project.

I think you are off by at least an order of magnitude (potentially even 2 depending on how the person is managing agents, but I could see something like dozens of agents 24/7, so I'm way less confident in 2, but I think it's still more likely to be closer to 10-20k in API spend).

alice-i-cecile 11 hours ago
Half a million is pretty damn cheap for a full rewrite of a million-line codebase into Rust.
fg137 8 hours ago
But usually companies are much more careful before even spending that half a million. (And most companies don't have that money sitting around.) They would do small PoCs, do comprehensive benchmarks and evaluations of those PoCs, and decide whether to actually go ahead, and, more importantly, stick to it.

Being able to afford half a million doesn't mean you do it on a whim, or just throw all of that away if things don't go well.

But what do I know. I am nothing compared to our AI overlords like Anthropic.

Supermancho 12 hours ago
10k lines ~$250 in OpenAI API calls (no plan)

45 million lines would get to ~$1.125 mil for the linux kernel.

950k lines for Bun would get to $23,750

use whatever math you like ofc.

Does Anthropic (or an employee) pay that? No. Even if it's at a loss in terms of company revenue, it's worth burning the private capital for all kinds of other reasons.

pjmlp 19 hours ago
With fewer employees...
aurareturn 19 hours ago
Isn't it just one guy?
Defletter 19 hours ago
Exactly
rvz 16 hours ago
This is exactly how Anthropic will market this rewrite towards companies thinking about doing more layoffs.

1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.

Aurornis 13 hours ago
> 1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.

The entire bun team was only about a dozen people and they wrote it from scratch.

It would not take hundreds of engineers to port the existing codebase to another language.

I think this is a cool experiment, but some of these claims are getting absurd.

baq 13 hours ago
The saving grace here is that a rewrite of a project with a good test suite is the sweet spot: LLMs are great at translation and do well with verifiable goals.

I agree it’s still mind blowing compared to before times, though.

Dylan16807 13 hours ago
> would have taken hundreds of engineers more than a year

This is estimating what, 10 lines per day each? No way translating code is anywhere near that slow.

59nadir 12 hours ago
It probably wouldn't take a single person who knew what they were doing more than a year to re-implement Bun in basically anything, by hand and from scratch, i.e. not even looking at source. Writing the code for something you already understand and have built before is incredibly fast.

I'm sure they'll market what you said, but it's so ridiculous that I would hope people would see through this stuff.

seanclayton 8 hours ago
And he has zero idea how it works. His capacity for understanding it is tied to his wallet now.
xienze 13 hours ago
> 1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.

Even cheaper would just be to not do it in the first place. Was there a pressing need to rewrite it?

slopinthebag 8 hours ago
The majority of Bun was written by one guy in less than a year. In what world would a rewrite take hundreds of engineers more than a year to do? The hyperbole is getting ridiculous.
jwpapi 10 hours ago
Completely unbased, but I don't want to have anything to do with Bun anymore. It's just a gut feeling, but I don't trust them or support them.

They fork Zig to utilize LLM rewrites and build something the Zig team clearly disregarded (non-deterministic compiling)

And now, like a whiny baby, they LLM-rewrite to Rust. There is a very real chance that Zig's design philosophy got them to where they are now by forcing them to make the tough but precise decisions, and that the Rust rewrite is the start of the downfall.

It's purely politics-based not technical, but it seems like Bun is being fully pampered by Claude. So much so that I wouldn't be surprised if Anthropic's next marketing piece is: Claude Mythos rewrote the leading 950k-LOC JS runtime in Rust.

woah 9 hours ago
Who's the whiny baby? The developer writing some code in their own repo, or the guy complaining about it on Hacker News?
stingraycharles 8 hours ago
Yeah, I also noticed this irony: accusing the rewrite of being political and not technical, while their whole comment is political, not technical.
jwpapi 8 hours ago
I meant my comment not the rewrite
stingraycharles 8 hours ago
Ah, fair enough then. You may want to clarify that a bit, as it can be interpreted both ways. And the whiny baby part seems a bit uncalled for and distracts from the point you're trying to make.
NewsaHackO 8 hours ago
Don't give them too much credit; they responded to other comments clearly referring to the developer's comments on Twitter about his technical motivations. He's just backtracking now due to your comment.
jwpapi 3 hours ago
I meant the developer's motivation with "whiny baby", and I take the point that this was over the top and I could've found better words.

But I meant that my comment is "politics-based and not technical", because the gut feeling is more based on my reading of soft factors than it is from in-depth technical analysis of everything involved.

Validark 7 hours ago
I'm team Zig in most cases but I genuinely think they are better off with Rust. They have had a lot of buffer overruns and segfaults as a result of undisciplined Zig code. I think Rust actually is a better technical choice for them.
BigJono 4 hours ago
Yeah I agree. Rust is a great language for shit programmers using shit AI.
casey2 1 hour ago
I don't think that's going to save them. There are big problems and little problems. RAII plus ownership/borrowing solves some memory and file-handle issues. But the big problem, and this happened before the rewrite, is that they have ceded the system level, which locks the project into a local minimum.

It's not a "you're holding it wrong" problem; it's that you fundamentally have no idea how your own program works past one or two levels of indentation in most places. If the LLM says that something isn't possible, you just have to take it at its word.

tln 10 hours ago
> And now, like a whiny baby, they LLM-rewrite to Rust.

I didn't see any whining from Jarred, this seems like misplaced sentiment

> It’s purely politics-based

The linked twitter thread gives clear technical justifications

jwpapi 10 hours ago
Jarred's Twitter is a Claude Code billboard.
baranul 10 hours ago
Whether incidentally or intentionally, that rings true.
jwpapi 10 hours ago
> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues

There are legit reasons to rewrite a program in a better-fitting language, but for a runtime, being "tired of worrying about & spending lots of time fixing memory leaks and crashes and stability" is really borderline to me.

Also, there is way more to it than just compile times and tests: you reset the mental model and will lose contributors. There is philosophy, developer skill, and more attached to a language.

In this case both compile via LLVM the same way, and there is no performance benefit if the code is written exactly the same, so it's developer preference, where the current head seems to have prioritized his own DX over everyone else's.

But again, this is mainly my gut feeling. I'm not the first dev who doesn't like the way Bun is changing: https://news.ycombinator.com/item?id=48011184

42iak18 10 hours ago
"They" likely refers to Anthropic in this case rather than being an indeterminate singular pronoun:

https://bun.com/blog/bun-joins-anthropic

I'm not sure if the 50% of people defending the whole rewrite live under a rock with regard to the acquisition, have never worked at a US company, or are deliberately naive. Companies give instructions. None of this is accidental or prompted by curiosity.

cute_boi 10 hours ago
It looks more political than technical. Also, criticizing the Zig team for not making any AI contributions before this gives a hint.
titularcomment 10 hours ago
I agree. From the get-go, Bun's design philosophy was apparent: we do everything you'd ever want; runtime, bundler, test runner, package manager, all in a new breaking patch each week, with each and every one blowing the established competition out: better, faster, and stronger. But it was glaringly obvious that they'd do anything but Keep It Simple, Stupid. It was obvious that the only production environment it would see the light of day in for the near future would be YC startups burning one after another at the speed of an accelerant. Now, they're past the point of no return.
pier25 7 hours ago
> It’s purely politics-based not technical

Jarred mentioned having to work on fixing memory leaks as the main motivation to try this.

https://xcancel.com/jarredsumner/status/2053058171338682875#...

I was never fully comfortable with Zig given it's much less mature than Rust. Maybe this will be for the better.

elktown 2 hours ago
People are seriously naive about corporate incentives. You think he'll go "Yeah, it being in Zig has put a wrench in our AI usage and that's not a good look now that we're with Anthropic"? No, he'll confirm everyone's biases instead - and it's working as well as expected on this crowd.
giancarlostoro 5 hours ago
I expressed a similar sentiment 4 days ago in the original discussion about this project, and HN for some reason did not like that I noted Rust has been used in production longer and far more widely than Zig, including in Firefox, Cloudflare's own reverse proxy, Discord, and many other massive projects that affect millions if not billions of people.
Greed 9 hours ago
I don't have the personal investment that you appear to have with Bun, but why does this matter? Do you scrutinize the rest of your dependencies this way?

Much of working in the JS / NPM ecosystem is already pure faith on un-vetted dependencies, and this appears no different pre or post LLM rewrite. If it satisfies the intended goal and API contract it originally did, is there any difference? Were you carefully reading the original source code before?

fg137 8 hours ago
> Do you scrutinize the rest of your dependencies this way?

You don't?

Greed 4 hours ago
Enough to make judgement calls on them based on the individual Twitter posts of each of their developers? Absolutely not!

If I go beyond the initial vetting, that's a minimum of 30+ projects multiplied by however many contributors each. Without even mentioning all of their sub dependencies. It's a pipe dream to think you can ever have a complete picture of the motivations and political machinations of your entire dependency tree.

csande17 3 hours ago
I have definitely dropped dependencies from production codebases in the past because "lead developer is widely known to be a clown". You don't need to catch everything but it's generally a good idea to have a picture of, like, the twenty most important dependencies in your codebase and the 90th percentile most notorious clowns in the community.
himata4113 7 hours ago
I consider zig the "whiny baby" approach to be honest.
HumanOstrich 9 hours ago
Yep, the Anthropic acquisition, this petulant Rust rewrite, and bun's increasingly buggy releases (slop) have caused me to migrate my projects (personal and work) to nodejs+pnpm.

The risks of using bun are no longer just those concerns around a newer tech and "drop-in" replacement for nodejs. Now you have to marry Anthropic, Rust, and a founder with conflicting priorities.

stouset 8 hours ago
Having read the comments from the actual engineer doing this rewrite, the only petulance I have seen is from those reacting so strongly to it.
fg137 8 hours ago
just wait a year or two.
stouset 4 hours ago
How exactly will waiting a year or two make this effort appear “characterized by impatience and grumpy annoyance”, as opposed to the people right now who are loudly bemoaning an engineer trying something out as an experiment?
kakwa_ 9 hours ago
Bun is effectively dead.

Anthropic bought it in a somewhat dumb attempt to solve their "performance" issues (not realizing their horrible code was the issue in the first place).

It probably helped them, simply because they brought in some actually competent developers.

But in doing so, Bun went from being a public project to more of an internal tool for Anthropic, spoiled for now with AI money and losing quite a bit of focus.

Let's hope that when the bubble pops, some of the Bun effort can at least be salvaged. I don't see Anthropic maintaining it long term; they are simply not in the business of selling support for a runtime, nor do they have the (Google) scale to justify maintaining one on the side.

ksec 19 hours ago
I think a lot of people are taking this at face value. A lot of this was possible because of the extensive, comprehensive, beyond-standard test suite previously built.
Jcampuzano2 13 hours ago
It's still an impressive achievement that would have taken even the most competent engineers an exponentially longer time to accomplish.

I just hope it's noted when this is eventually marketed how much human effort went into designing and curating the test suite that even enabled this speed in the first place.

A test suite functions as pretty much the ideal scenario for current-gen LLMs. A comprehensive enough test suite essentially forms the spec for agents to implement however they see fit - in this case, in Rust.

You could probably throw away the entire source code in certain cases and reimplement the whole thing from scratch by just giving an agent access to the tests, when they're as well crafted as in a project like Bun.

leecommamichael 10 hours ago
If this is a "beyond standard" test suite (so much so that it _uniquely_ makes this work possible compared to other projects), then how is Bun also uniquely unstable compared to other Zig programs (and so deserving of a rewrite)? If the blame lies partially with the test suite, what does this imply (if anything) about the Rust port?
stingraycharles 7 hours ago
Because tests validate behavior, not undefined behavior.

The thesis is that Rust makes undefined behavior less likely.

scuff3d 13 hours ago
Look what it can do in 6 days!

Ignore the hundreds of thousands of hours put into the original architecture and test suite that made it possible in the first place.

zaptheimpaler 12 hours ago
This is such a bad faith argument. How long would it take a dev or a team of devs to do this with the same architecture and test suite? A hell of a lot longer than 6 days.
oytis 11 hours ago
But what is the purpose? When you rewrite a project in another language, it's so engineers can maintain and further develop the project better on some metrics, thanks to advantages of the language. That doesn't hold when an LLM does the rewrite, since there is no one who understands the code afterwards.

It's a good demonstration of capabilities, sure, but the result itself makes no sense. We'll have to figure out where these capabilities can bring real advantage

gamegod 10 hours ago
This is such an insightful comment. It also underscores why these AI companies' marketing efforts are promoting rewrites.
stingraycharles 2 hours ago
I agree that the comment is insightful, but I don’t think AI companies are particularly promoting rewrites, other than that it’s a task LLMs are good at as “the code is the spec”.

The industry as a whole is still realizing that any LLM usage that writes all the code for you causes cognitive debt, and we're even slowly losing the skills of the art.

I’m trying my best to navigate this myself, but no matter what we do, using LLMs is both a blessing and a curse.

cdelsolar 9 hours ago
why do you think no one understands the code after the LLM rewrites it?
oytis 2 hours ago
Because no one has written it. You can't ask the guy who wrote it, not because he has left, but because he does not exist. Also, it often reads weirdly.
stephen_cagle 10 hours ago
I disagree with calling this bad faith. For instance:

* I can give you one quarter of amazing profits, if you let me dismantle and sell all the assets of a company.

* I can give you a few years of incredible food production, if you let me strip a rainforest and plant commercial crops.

* I can give you incredibly cheap energy, if you let me mine non renewing fossil fuels from the earth.

The context of why something is possible matters. In this case, it's because a very large and comprehensive test suite was seen as a necessity to specify a successful project (managed by humans). I do not believe an LLM-coded project could ever have made such a test suite. In this case, the LLM is consuming the result of expensive human labor (the test suite) to make what is ultimately a minor variation on it (the implementation language).

andriy_koval 9 hours ago
> This is such a bad faith argument. How long would it take a dev or a team of devs to do this with the same architecture and test suite? A hell of a lot longer than 6 days..

A pocket calculator can also multiply numbers much faster than an engineer; that doesn't make it an engineer itself.

scuff3d 11 hours ago
You missed the point.

People want to use stuff like this as evidence that AI can write entire software systems in a few days. We saw the same shit with the "compiler" they made with a bunch of agents. Literally the only reason it's possible is the hundreds of thousands of man-hours and God knows how much money that were poured into the reference projects before the AI got anywhere near them.

To replicate this kind of thing with a greenfield project would take an absolute ton of spec work and requirements derivation, which would substantially eat into any savings from having AI generate it.

The accomplishment itself is interesting, and unlocks opportunities to do work no one would have bothered with before, but it doesn't represent what a lot of people desperately want it to.

cmrdporcupine 13 hours ago
Exactly this.

I am not sure why people sound so astounded, to be honest. This has frankly been my experience of the agentic tools, both Codex and Claude, since about December.

When given the right constraints this kind of thing is entirely conceivable.

However the important question not being answered here is: does anybody working on it have a full understanding of what has been built?

My experience, having constructed similar types of projects using these tools, is: yes, you could do this in a week or two, but then you'll have a month or two of digging through what it made, understanding what was built, and undoing critical YOLO leaps of faith it made that you didn't want.

scuff3d 12 hours ago
Not to mention that even attempting something like this from scratch would take hundreds of hours of spec work. I see it all day, every day, in the aerospace sector. Software engineers have absolutely no idea what deriving a design document and all its associated artifacts actually looks like, and they're in for a rude surprise if the industry really does shift hard in that direction.
perlgeek 1 hour ago
Has anybody thought through the legal aspects of this, regarding code ownership?

As far as I understand the situation in the US (sorry, no idea where he is located), output from LLMs, once published, is essentially in the Public Domain, since there isn't any human who owns it.

However, in some sense, this is also a machine-assisted translation from one computer language into another, so one could argue that the ownership of the original code base still applies to the new one.

Which one is it? Is there any way to find out before a similar case goes to court?

muglug 57 minutes ago
> output from LLMs, once published, is essentially in the Public Domain, since there isn't any human who owns it

That’s not what the court case in question was about: https://www.morganlewis.com/pubs/2026/03/us-supreme-court-de...

If I ask an LLM to come up with an entirely new story on its own, the output is not copyrightable.

But if I feed an LLM a Tom Clancy novel and ask it to regurgitate that same novel, I cannot legally then put the output on a website for anyone to download.

afavour 14 hours ago
Presumably the biggest loser in all this is Zig; I only know of the language because of Bun.

But the timescale still gives me pause… just because AI lets us convert a codebase in 6 days doesn't mean it's wise. There are surely a lot of downstream implications! It's always felt a little like Bun is making up a plan as it goes along (and maybe that's unfair); this seems to underline the point.

nine_k 14 hours ago
Zig is a great low-level language. It's much better than C, while not being nearly as large as e.g. Rust or C++. AFAICT Zig does well in embedded development, and should continue to do so. Note that Zig is not even at 1.0 yet.
internet2000 14 hours ago
Yeah, but now they have the reputation of the language that fumbled the ball because of an overly onerous anti-AI stance.
toshinoriyagi 10 hours ago
They haven't fumbled anything. One person has used AI to vibe-code a rewrite of a Zig program in another language. Zig didn't gain popularity due to Bun; last I checked, Bun doesn't even mention on its homepage that it's written in Zig. Zig is appreciated for major improvements over C, while being simple and concise.

In addition, a core Zig developer has explained why the PR was rejected: it would introduce non-deterministic bugs into the compiler, just to achieve a speedup Zig is already gaining thanks to recent work on the self-hosted backend and incremental compilation, which is far more general as well.

Chris2048 13 hours ago
It's been repeated many times that the rejection of the Bun PR was unrelated to their AI-policy. It's also not clear they've "fumbled the ball" given how many projects are complaining about slop PRs.
stingraycharles 2 hours ago
I think it would help if Zig put out a statement on their actual AI policy, regardless of whether they’d be repeating something that should already be known.

As often happens, the online discourse has, for some reason, decided that this was an anti-AI stance, while - as far as I understand - the problem was simply that the PR had problems, which led to Bun forking Zig.

scuff3d 13 hours ago
Lol. What a goofy take.
wolttam 13 hours ago
These tools let you get a massive codebase functional in 6 days. But, presumably, there's no better language to target than Rust (in terms of safety/performance), and therefore the rest of the time can be spent making the birthed-in-6-days codebase better.
iwontberude 12 hours ago
But the author said "the code truly works, passing the test suite on Linux and soon other platforms" which just sounds really wise.
anilgulecha 21 hours ago
I think the industry is moving to English as the programming language, and specifications-context-tdd as the framework for building software.

Many find it distasteful, and many find it liberating. I think it broadly correlates with how they feel about expressing themselves in English vs, say, C++.

As a side question, is there anyone who's using LLMs primarily in a non-English mode to program? I suspect there are quite a few people using Mandarin; can someone share a first-hand account?

danipark 5 hours ago
I’m Korean, and I’ve used GitHub Copilot, Claude Code, and Codex. At first, I prompted them in English, but over time I came to the conclusion that using Korean works better for me. It may consume more tokens, but reducing the time spent understanding and correcting the plan is more valuable. That said, when the context gets close to its limit, the responses sometimes include Korean words that do not actually exist.

As an aside, I don’t think the benefits LLMs bring to non-English users are widely understood. I studied linguistics and Russian, and I’m capable of professional interpretation in English and Russian. Even so, I can read technical documents, understand them, and communicate about them much faster and with far less effort in my native language, Korean. These days, I read most English documentation and HN posts through Chrome’s automatic translation. Sometimes the translation is ambiguous, but in those cases I can immediately refer back to the original English. This has been a major help to me and to other Korean developers I work with.

pyonpyon 19 hours ago
I'm using it 50% English (personal projects)/50% Polish (workplace; reasons being agents.md / team is not that english proficient) and honestly I haven't seen much difference in the output/ambiguity.

Polish prompts tend to be shorter due to the language having a lot of verb forms/conjugations; the only "bad" thing for me is that when it's saying "it broke", it tends to use uncanny/blunt words that sometimes make me laugh.

thedevilslawyer 19 hours ago
Interesting. Some questions: Would you say Polish is more dense or less dense than English? It's interesting to hear that code quality is not suffering but the response text is sillier or blunter. Any other discrepancies compared to English?
pyonpyon 19 hours ago
I would say it certainly can be more dense, but even if it is, the tokenizers count it as more. Last time I checked my agents.md in the OpenAI tokenizer, it ate ~30-40% more tokens than the English version at roughly 1:1 meaning.
eikenberry 13 hours ago
I think it will eventually be its own dialect of English. Telling LLMs what to do works better with not-quite-normal English, and I think this will continue until it isn't recognizable as natural English anymore, but a new fuzzy programming language (probably >1).
tayo42 13 hours ago
>Telling LLMs what to do is better using not quite normal English

What are your prompts like?

SwiftyBug 20 hours ago
I wonder how well Mandarin works for LLM-based programming. On one hand, it's very token efficient as Mandarin script is very dense in meaning. On the other, I suppose this can increase ambiguity.
jamesdutc 13 hours ago
I can speak, read, and write Taiwanese Mandarin (which is likely relatively underrepresented in the training sets and, which is, in my practical experience, materially different in its usage.)

The authoritative answer for this question would best come from the millions (or tens of millions) of Chinese-speakers who are currently using LLMs to write software.

However, it is my suspicion that you would see no advantages using any language other than English. While there is a certain token-level density to written texts, it seems the benefits of this (and the more recent discussion around “caveman talk”) are quite limited.

Furthermore, consider that the vast majority of textbooks, technical documentation, blog posts, StackOverflow answers, &c. are originally in English. Historically, where these have been translated to Chinese, the translations have often been of very poor quality (and the terminology and phraseology is often incomprehensible unless you also understand some English.) I would suspect that this makes up the overwhelming majority of the training sets for these models.

That said, my experience using the most recent models, is that they are surprisingly language-agnostic in a way that surpasses readily-available human capability. For example, I can prompt the LLM to translate English into something that uses German grammar, Chinese vocabulary, and Japanese characters, and I'll get an output that is worse than what a human expert could do… but where am I going to find a multilingual expert?

(Of course, I have so far only ever been impressed that a model could generate an output but never impressed with the output it did generate. Everything—translations, prose, code—seems universally sloppy and bland and muddy.)

So what I would anticipate as the biggest benefit for a Chinese-speaker today… is that if they are uninterested in working internationally, they have significantly less dependency on learning English.

arjie 12 hours ago
Character-density and token-efficiency are different things. The latter is data- and, therefore, tokenizer-specific: e.g., take GPT-5's tokenizer o200k_base and run Mandarin text and its translation through it. Some of the time en will beat zh. I just tested with news articles and Wikipedia.

After all `def func():` is only 3 tokens on o200k_base.
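
If you want to check this yourself from Rust, here's a minimal sketch (it assumes the tiktoken-rs crate and its o200k_base() helper; any tokenizer you actually target works the same way):

  // Minimal sketch: compare token counts for an English sentence and a
  // Mandarin rendering under o200k_base. Assumes the tiktoken-rs crate.
  use tiktoken_rs::o200k_base;

  fn main() {
      let bpe = o200k_base().expect("load o200k_base");
      let samples = [
          ("en", "The quick brown fox jumps over the lazy dog."),
          ("zh", "敏捷的棕色狐狸跳过了懒狗。"),
      ];
      for (lang, text) in samples {
          let tokens = bpe.encode_with_special_tokens(text);
          // Fewer characters does not automatically mean fewer tokens.
          println!("{lang}: {} chars, {} tokens", text.chars().count(), tokens.len());
      }
  }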

13 hours ago
nesk_ 12 hours ago
I use French nearly all the time, it works well. Not that I can't write English prompts, but I find it easier to use my native language.
nothinkjustai 13 hours ago
Natural language doesn’t have the precision required for building systems. We already have languages for specifying systems precisely. It’s called “code”…
anilgulecha 3 hours ago
Well, what we've been seeing over the past few months is that natural language does - at least enough to build code and tests.
_woland 18 hours ago
I'm using it in english / albanian. Not much difference really. Impressive.
pjmlp 19 hours ago
I agree, and those that are still too focused on code generation for specific languages are fighting the last war.

It is the revenge of UML modeling.

Eventually it will get good enough that what comes out of agent work is a matter of formal specification.

Assuming that code is actually needed and cannot be achieved as pure agent orchestration workflows.

mohamedkoubaa 12 hours ago
I'm teaching my kids to be fluent in tokenese
dinkumthinkum 10 hours ago
You really think that's what the positions on either side boil down to, how they feel about expressing themselves in English vs C++? No, that's ridiculous. That's such a wild reductionistic simplification.
tmaly 14 hours ago
Just a cautionary case of porting to Rust using AI

https://blog.katanaquant.com/p/your-llm-doesnt-write-correct...

yrds96 11 hours ago
Also passing tests doesn't mean something works.

The Claude Code C compiler passed 100% of gcc tests and couldn't even run a hello world...

rst 9 hours ago
It couldn't run "hello, world" on systems where the include files were not located in the directory that it expected -- producing instead diagnostics saying, quite clearly, that the header files were not found. On systems where they were, it built versions of postgresql, redis, and several other things which passed their test suites completely.

If you've heard this problem described as a fundamental limitation of the compiler, and not the kind of packaging glitch that's routine to find in pre-alpha software of all descriptions, whoever described it to you that way is not serving their readers well.

I'm not saying CCC was production-ready, or close -- the total lack of an optimizer would be a killer in any real use, and I assume that there were problems with the diagnostics at least as bad as problems with performance and the include files, for similar reasons -- the LLMs hadn't been asked to optimize for that stuff yet, just test suite correctness. But it did achieve that, and the amount of cope I've seen on social media claiming otherwise is more than a bit disturbing.

fg137 8 hours ago
I have a colleague who multiple times committed code that doesn't work, like at all. Why? His code is only used in tests but not in the actual application. And apparently he never even bothered to click through things even once, let alone reviewing the code.

If it doesn't work, it doesn't. You can find all these excuses. But at the end of the day, there is a difference between an end user being able to get something out of your code or not.

GaggiX 10 hours ago
The C compiler written by Claude a few months ago was able to compile a hello world.

The main problem, I think, was that it was extremely slow.

8note 14 hours ago
I think there's a different lesson to be taken from those cases - the LLM will build to what you give a feedback loop for.

If you give just the logical tests, it won't consider the speed at all. If you include tests that measure the speed and ask the LLM to match the performance, it'll do that too.

It's the same class of error as everything else with LLMs. It has no common-sense context for things people consider important. If you don't enforce the boundaries, it will ignore them.
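
A tiny example of what enforcing that boundary can look like (a sketch with a made-up `parse` function and an arbitrary budget; the point is just that the constraint lives somewhere the agent has to run):

  use std::time::Instant;

  // Hypothetical function under test; stands in for whatever the agent wrote.
  fn parse(input: &str) -> usize {
      input.split_whitespace().count()
  }

  #[test]
  fn parse_is_correct() {
      assert_eq!(parse("a b c"), 3);
  }

  #[test]
  fn parse_meets_speed_budget() {
      let input = "word ".repeat(1_000_000);
      let start = Instant::now();
      assert_eq!(parse(&input), 1_000_000);
      // Without a check like this, "all tests green" says nothing about speed.
      assert!(start.elapsed().as_millis() < 500, "too slow: {:?}", start.elapsed());
  }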

alphabeta3r56 12 hours ago
Question is, are our optimization functions well specified enough? (No)

How important is a well-specified optimization function? No one knows. We will find out.

dang 13 hours ago
Discussed here if anyone's interested:

LLMs work best when the user defines their acceptance criteria first - https://news.ycombinator.com/item?id=47283337 - March 2026 (422 comments)

spicyusername 20 hours ago
What a time to be alive.

So much of the fundamental dynamics of the industry and the job has changed in so little time. Basically overnight.

Some days I am so excited at how much I can do now. You can build anything you want, in basically no time! 100% of my software dreams can be a reality.

Some days I am terrified at what's going to happen to the job market.

Suddenly you can get so much with so little. The world only needs so much software.

Is every company that sells software as their core business model going to go out of business?

What will happen if only certain companies or governments get access to the best models?

perlgeek 48 minutes ago
> Is every company that sells software as their core business model going to go out of business?

Probably not, for a number of reasons:

* Some software suites are (probably still for a few years) too big to regenerate through a coding LLM

* There's quite a lot of proprietary knowledge not just in the code itself, but in the requirements, industry knowledge etc. For example, if you want to write a hospital management system, you need to know a lot about how hospitals work, how they bill their services in different jurisdictions, data protection rules etc.

* For some pieces of software (like computer-aided engineering), validation of the software is just as important as the software itself.

* Liability: suppose you build bridges, and you're on the hook if it fails too early. Do you really want to vibe-code your own software that validates the bridge's design? Will any insurance company cover that? Probably not in the near future...

* Currently, security and safety of LLM-generated code is still a pretty big concern. I guess this will get better as the LLM-Coding industry matures.

keeda 11 hours ago
> The world only needs so much software.

Around the time of the dot com crash, there was a decent amount of rhetoric advising students and job seekers against getting into the software industry, because it was getting "too saturated." The thinking was there's just not that much work to go around, especially for the number of people flocking to the field. And the crash just reinforced that narrative.

But even as a student back then, I could tell that there was unlimited scope for software. Pretty much any cognitive thing we do manually could be done in software. I once idly tried to enumerate those and quickly realized there was soooo much to do. Plus, I also understood that the more you do things a new way, the more things pop up that we haven't even imagined yet. The possibilities were countless. It was clear that the "saturation" narrative stemmed from a lack of imagination and understanding of what software really was.

I just knew that this field would never get saturated because it was impossible to run out of things to write software for.

But these days...

I mean, I know we will always have new software to build as things evolve, which they will do faster than ever with AI. But these days, I wonder if it's now possible to write software faster than we can imagine new things to do.

EMIRELADERO 11 hours ago
> Pretty much any cognitive thing we do manually could be done in software.

Yes, although I suggest being careful with that kind of thinking.

https://www.orwell.ru/library/novels/The_Road_to_Wigan_Pier/...

keeda 10 hours ago
Ooh, I hadn't read that one, have put it on my list. I couldn't read the page properly because ads keep popping up and making the page jump around... but it seems the linked section was about displacement of workers? If so, that's always been true of all technology, but that's less a problem with technology and more with the social system it is applied in. I just posted this comment elsewhere that may be relevant: https://news.ycombinator.com/item?id=48078930
EMIRELADERO 10 hours ago
It's not about the displacement of workers. It talks about a fundamental principles-level objection to unbounded "progress". It's not an absolute argument and Orwell himself says so, but it is worth keeping in mind.

Try reading it here: https://www.george-orwell.org/The_Road_to_Wigan_Pier/11.html

vb-8448 9 hours ago
Let's take a SW business like a ticketing system.

Do you think 100 enterprises with 1 bln of tokens are going to make a better product than a specialized vendor with 100 bln of tokens?

For sure, SW vendors and SaaS like "logo creator" are already dead, but unless the next generation of LLMs comes with an embedded ticketing system, the ticketing-system vendor will be fine (maybe less headcount, but not sure).

perlgeek 37 minutes ago
> Do you think 100 enterprises with 1 bln of tokens are going to make a better product than specialized vendor with 100bln of tokens?

I'm not sure if this is sound reasoning, because "better product" is very context-dependent.

My current employer has migrated away from RT to OTRS as a ticket system, and is now moving to ServiceNow.

The RT instance was heavily patched/customized.

The OTRS instance was heavily patched/customized.

We try not to customize servicenow quite as much, but the less we customize it, the more we have to change the workflows in our company. And humans are slow to adapt.

With this experience in mind, the question is more: do we want to spend lots of money on a vendor-supplied ticket system, and then spend lots more LLM tokens to customize it, or do we LLM-build it from the ground-up?

If we started a new ticket system migration project today, maybe the best answer would be to start with an easily-customizable Open Source ticket system, and then throw LLM-power at customizing it.

wolttam 13 hours ago
Certainly companies and governments will have access to better models than the public (in fact, that's already the case with Mythos). The public will still be able to help themselves with models that are behind the frontier.
nine_k 14 hours ago
> 99.8% of bun’s pre-existing test suite passes on Linux x64 glibc in the rust rewrite

OK, they've got a working prototype, congrats! Now it needs to be put into shape so that all the unsafe blocks are eliminated (maybe with a few tiny exceptions), and the code is turned into maintainable, readable, reasonably idiomatic Rust.

I wonder how long it is going to take.
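
To be clear about what that cleanup looks like, a made-up before/after (nothing from the actual Bun code): a mechanical port tends to keep pointer-plus-length interfaces behind unsafe, where idiomatic Rust just takes a slice:

  // Mechanical-port style: raw pointer walk, needs `unsafe`.
  fn sum_ported(data: *const u32, len: usize) -> u64 {
      let mut total = 0u64;
      for i in 0..len {
          // SAFETY: caller must guarantee `data` points to `len` valid u32s.
          total += unsafe { *data.add(i) } as u64;
      }
      total
  }

  // Idiomatic rewrite: the slice carries its length, no unsafe needed.
  fn sum_idiomatic(data: &[u32]) -> u64 {
      data.iter().map(|&x| u64::from(x)).sum()
  }

  fn main() {
      let v = vec![1u32, 2, 3];
      assert_eq!(sum_ported(v.as_ptr(), v.len()), sum_idiomatic(&v));
  }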

amarant 13 hours ago
About 2 months, or 60 days, if we go by the old 90/10 rule.

Not sure that rule is even applicable anymore, but I don't have a better heuristic to make guesses by either.

txdv 3 hours ago
Maybe it's tokens instead of time now? Bun has access to an unlimited amount of them.
mustache_kimono 12 hours ago
> Now it needs to be put into shape so that all the unsafe blocks are eliminated

All the unsafe seems to be FFI?

https://github.com/search?q=repo%3Aoven-sh%2Fbun+unsafe+lang...

> and the code is turned into maintainable, readable, reasonably idiomatic Rust. I wonder how long is it going to take.

This isn't a c2rust rewrite?

ameliaquining 11 hours ago
That GitHub search only covers the main branch, not the not-yet-merged Rust rewrite; the only Rust code in there is tests for Rust FFI (so that people can write native extension modules for Bun in Rust if they want to).

The rewrite's in https://github.com/oven-sh/bun/tree/claude/phase-a-port. By running the following command on it, I count about 14,000 unsafe blocks:

  rg --stats -g '*.rs' 'unsafe \{|unsafe impl|#!?\[unsafe\('
steveklabnik 11 hours ago
I have not had time to look at the code myself, but from when this was initially posted to Reddit, IIRC it had around a thousand global mutable variables, which are unsafe to access.

I am very curious what the numbers are once the test suite passes and after a few passes of reducing the amount of unsafe.
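
For anyone wondering why globals inflate that number: every touch of a static mut needs an unsafe block, and the usual reduction pass swaps them for safe wrappers. A generic sketch, not Bun's actual globals:

  use std::sync::atomic::{AtomicU64, Ordering};

  // Port-style global: every read and write is an unsafe operation.
  static mut REQUEST_COUNT: u64 = 0;

  fn bump_ported() {
      // SAFETY: only sound if nothing else touches REQUEST_COUNT concurrently.
      unsafe { REQUEST_COUNT += 1 };
  }

  // Safe replacement: an atomic (or a Mutex/OnceLock for richer data).
  static REQUESTS: AtomicU64 = AtomicU64::new(0);

  fn bump_safe() {
      REQUESTS.fetch_add(1, Ordering::Relaxed);
  }

  fn main() {
      bump_ported();
      bump_safe();
      println!("safe counter: {}", REQUESTS.load(Ordering::Relaxed));
  }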

ameliaquining 11 hours ago
This is the kind of program that would need to have a lot of unsafe even if it had been written in Rust from the very beginning. For comparison, there are about 2600 unsafe blocks in Deno, not counting dependencies.
14 hours ago
pulsartwin 19 hours ago
At the very least, it's interesting to be a bystander observing as efforts like this progress. The first thing it makes me wonder is how comprehensive/high quality the test suite is to begin with. Not to cast aspersions, but even at 100% on all platforms I wonder how confident the Bun team would be in migrating.
matt3210 7 hours ago
Guys, calm down, this is just marketing from Anthropic, the same as the browser and the C compiler.
lujeni_ 13 hours ago
No doubt on my side that the porting was "easy". What I'd find interesting is the ability to maintain and properly care for the code over time for the next iterations. Do we eventually end up with a codebase that nobody truly understands in depth anymore, where everything is generated and modified through GenAI?

Thanks for sharing.

oytis 11 hours ago
Yeah, that's my issue with LLM code. If we imagine a future without human programmers - sure, go ahead; we are not there yet, but maybe it's possible.

But if you want it to coexist with humans, then it doesn't seem to work well. It gets in the way of human learning and human communication, making professionals and teams weaker, essentially.

boring-human 12 hours ago
I harbor some hope that the (sad) fall of human SWEs will at least be accompanied by language defragmentation. We don't need 38 systems languages once human taste is mostly out of the picture.
arjie 13 hours ago
This is remarkable. Man, there are all those ancient things that "we've lost the source code for". One time, in a past job 10 years ago we were reimplementing something that was lost to the sands of time, using an out of date spec it had used. It was such a tedious job with verification but we got there. Amazing how easy that would be today.
thfuran 12 hours ago
I don't think this kind of thing works nearly so well without a comprehensive test suite or the ability to easily use the reference version as a test harness. The typical enterprise relic for which no specification or source remains almost surely lacks the former and probably isn't very amenable to the latter.
jedberg 12 hours ago
Obviously there is a huge trend of "rewrite X in Rust". I understand why, Rust is a huge improvement in safety and speed.

My question is, to people even older than me (and I'm certainly not young), does anyone remember this much enthusiasm about people rewriting C code into (C++/Java/Whatever was new and hot)? Because I don't, but maybe I missed it.

libria 11 hours ago
I recall C++ OOP being the new hotness when I started out and C was always contrasted as the old & busted example. Kind of the "Everything-as-an-object will simplify everything" phase. Windows MFC was the new way, then STL.

Java WORA write once, run anywhere was definitely a thing when it came out. Java Applets came out of the woodwork and were the WASM of their day. Even Cisco ran Java for their router UI for a while, which was painful.

More recently, HN went through a period about 10 years ago where every other article ended in " ... written in Go".

The mantra may not have rhymed with "rewrite X in Y" but the spirit was there.

russum 1 hour ago
> every other article ended in " ... written in Go"

What happened to that: is Go no longer considered great / popular?

claytonjy 10 hours ago
Kind of the opposite, I was deep in the R world a decade ago and there was a huge trend of replacing Java dependencies with C/++ ones because the JVM was such a pain to manage. The community eagerly adopted the replacements about as soon as they existed.
Onavo 12 hours ago
There were no good options previously. It was either C or C++. Most of the other languages were either fringe or had a GC, or had a pseudo runtime GC (Swift). The culture of Java and C# and Go didn't really support the type of low level optimizations needed, even though you could technically do system programming if you restrict yourself to a specific subset of language and cut yourself off from most of the standard library and ecosystem. Nim was unstable. OCaml had the same issues as Go and Java and C#. You simply did not have any options until Rust came along. Oberon was an academic trinket. The less said about the various lisps and forths the better.

OS and embedded programming require bare metal support and data structures that can run standalone in the absence of an OS and standard library, and the ecosystem must exist to support such a style of programming.

Currently Rust has over 10000 crates that would theoretically work just fine in a kernel environment.

https://crates.io/categories/no-std
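
For readers unfamiliar with the term, "no_std" just means the crate only leans on `core`, so it can be linked into a kernel or bare-metal image. A minimal sketch of such a library:

  // lib.rs of a hypothetical freestanding crate: no OS, no allocator, no std.
  #![no_std]

  /// Integer average of a slice, using only `core` (always available,
  /// even in a kernel or on bare metal).
  pub fn average(samples: &[u32]) -> u32 {
      if samples.is_empty() {
          return 0;
      }
      let sum: u64 = samples.iter().map(|&s| u64::from(s)).sum();
      (sum / samples.len() as u64) as u32
  }

  #[cfg(test)]
  mod tests {
      #[test]
      fn averages_without_std() {
          // The test harness runs on a hosted target; the library itself never needs std.
          assert_eq!(super::average(&[2, 4, 6]), 4);
      }
  }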

timcobb 10 hours ago
The Ubuntu coreutils thing last week really soured me on 99.8% test compatibility Rust rewrites :|. I clicked through to the tweet linked here and it was kind of like shudder. I feel quite the opposite now when I see this kind of thing. I'm like *looking for exit*
Robdel12 13 hours ago
Bun is going this route because their proposed fix wasn’t great. https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

Cannot imagine this agent rewrite had anyone review any of the code (you can't at that speed).

I’m positive this will go extremely well :p

davidatbu 7 hours ago
Fwiw, that's not the stated motivation for the rewrite experiment. In fact, the Rust rewrite is slower to compile than the zig code when compiled with their internal fork of zig (tho it is faster when OG zig is used).

I don't want to infringe upon your right to speculate. I just want to point out that your statement is at best a speculation.

jFriedensreich 4 hours ago
I still don't understand how people consider Bun a viable runtime when it's owned by an evil corp trying to use it to capture the tooling layer, and it's the most insecure runtime on top of that. Meanwhile, Deno is performant and has dramatically improved Node compatibility while exposing a proper permission broker API.
ec109685 12 hours ago
There is no way a port this massive will have human code reviews.

If this succeeds, there is no stopping AI given it will have crossed the rubicon of human bottlenecks.

onlyrealcuzzo 13 hours ago
And here I am trying to get an LLM to add types to a 100k line Ruby repository for 2 days, and it's not going so hot...
phamilton 10 hours ago
I have some experience in this. Reach out (email in my bio) I would love to chat.
adsharma 12 hours ago
An SMT solver may work better.
onlyrealcuzzo 11 hours ago
Will that work if my codebase is filled with nils it shouldn't be filled with, and HashMaps instead of structs with a loosely defined schema, and tuples masquerading as arrays?
AppAttestationz 2 hours ago
I suspect that the test suite isn't that great tho. Bun has so many different behaviors compared to other JS engines, sometimes just plain wrong or contradicting the spec. The test suite didn't catch those... Not sure how much I trust the rewrite :)
akagusu 18 hours ago
What does this mean for Zig?

Few big popular projects use Zig; if they start to move away from it, what will Zig's future look like?

NewsaHackO 13 hours ago
I think the issue is that Zig lost their biggest project, which was a posterboy project for real uses of Zig. Worse, the project felt like Zig wasn't meeting their needs, to the point they abandoned Zig and rewrote their entire project in a different language. Really bad signal for anyone thinking of using Zig for a big project. It is still in beta, but has there been any situation like this, where an upcoming programming language was abandoned by its biggest external project and still was able to be considered a successful language after that?
toshinoriyagi 10 hours ago
Well they haven't lost anything yet. Somebody is vibe coding a rewrite in another language and we don't know much else. The author said he will write a blog post about it soon. So far all we know is it is passing most of the test suite.

But Bun has open issues and bugs. The test suite doesn't tell us whether it has introduced many new bugs, solved existing ones the test suite doesn't catch, or anything else. Not to mention, the rewrite is 960K lines that nobody understands. How long will it take for the Rust version to be better, and be understood as well as its current maintainers understand the Zig version?

Having a project consider a rewrite isn't so big a deal. Zig has been designed from the ground up with a vision, and isn't worried about taking a while to create a stable API to achieve that vision. The self-hosted backend shows how incredibly fast incremental compilation is when the language is built for it ground-up. Compared to other languages that implement weaker forms of incremental compilation it isn't even close.

I don't think the Zig team is concerned at all.

NewsaHackO 8 hours ago
>Having a project consider a rewrite isn't so big a deal.

I don't agree that them actually doing an entire draft rewrite can just be characterized as them considering a rewrite.

>I don't think the Zig team is concerned at all.

I wonder if that's the mentality that got them in this situation in the first place.

toshinoriyagi 5 hours ago
>I don't agree that them actually doing an entire draft rewrite can just be characterized as them considering a rewrite.

You're right, a rewrite is in existence, and whether it is good enough to be used or expanded upon is what is being considered. I don't think that changes the fact that languages don't live or die by whether or not 1 large project using them continues using them. Especially a language like Zig, which has taken plenty of time making breaking changes. They know this is par for the course.

>I wonder if that's the mentality that got them in this situation in the first place.

I highly doubt it. To my knowledge, the only "why" Jarred has given is frustration with memory issues. Speculated reasons I see are: 1. Anthropic wants a rewrite to a language with a more favorable AI contribution policy, to avoid bad press by acquiring a framework written in a language that is skeptical of AI code quality. 2. Rust is more stable and a better target for AI-assisted programming or entire vibe coding. 3. Bun is upset Zig does not want to merge their fork into main.

Focusing on the issue Jarred gave as why he started the rewrite, I don't see how Zig got themselves into the situation at all. Zig was always upfront that it aimed to be a modern C: simple language, powerful modern features, and excellent compatibility with all things C. While it certainly has much better behavior concerning memory safety and undefined behavior, it has never aimed for Rust or GC level memory safety.

It's not like Jarred has been begging the Zig devs to implement language changes to make Bun development easier. Zig was always upfront that you will have to manage memory manually, and that allows for operator error. I think Jarred is in this situation because he wants to be, simply. He works for Anthropic, probably has no limit on how many tokens he spends, and may have access to their most powerful internal models like Mythos. I would guess he pointed agents at this problem and let them go, because why not? He likely has no opportunity cost.

stock_toaster 7 hours ago
> I think the issue is that Zig lost their biggest project, which was a posterboy project for real uses of Zig.

Bun, Ghostty, and TigerBeetle are 3 popular projects that I have heard about using zig.

andriy_koval 9 hours ago
Is it lost already? Did Anthropic already say the new LLM-generated thing is the way to go for the future?
kennykartman 11 hours ago
Nobody knows. Here's my two cents.

Zig is a very interesting LOW level language, but honestly I think it should be considered for what it is: a better C. I don't think it fits for anything that someone would have written in C++, Java, Haskell or C#. Instead, Rust is competitive with all of these languages when it comes to safety, abstractions and speed. And also C and Zig itself.

Zig has a couple very interesting ideas that make it stand out: comptime and the zig build system.

Alas, Zig is still far from being stable. Rust came out to the public in 2012 and became stable (1.0) in 2015. Zig came out to the public in 2016, and it's 10 years now and someone says it's still years away from 1.0.

So, while Rust took 3 years of public development to become stable, Zig is taking 10-15 years. I love the language, but TBH I don't see a great future ahead, especially with LLM advancements that let you use safer languages to do the same work. There's no point in risking more memory bugs when the effort of writing code is the same.

smj-edison 7 hours ago
Honestly I think, at least to the Zig community, Bun isn't the biggest name we'd think of. There's been some philosophical friction between the Zig project and Bun (Zig is pretty anti-AI and favors methodically thinking through problems, while Bun is more move fast and break things). I think TigerBeetle is a better representation of what Zig can do. TigerBeetle is fuzzed within an inch of its life, and is absolutely rock solid. The people who work on it are brilliant programmers who care a lot about correctness. They find that Zig lets them express their ideas succinctly, while still giving them the needed power.

When I read about Bun, I get the sense that Jarred has different priorities, mainly moving quickly. Bun also implements a lot of userspace APIs, since the core engine is JavaScriptCore, which is written in C++. I think Rust really shines in applications programming, so I guess it makes sense that Rust has lined up with Jarred's needs. I'd be interested to see what JavaScriptCore would look like in Zig versus Rust; I think Zig might have an edge in the core interpreter and JIT.

JCharante 4 hours ago
This is like when Aaron ported Reddit over from Lisp to Python

meaning it doesn't matter except for online discourse about X being bad for 2 days

SwellJoe 13 hours ago
It means nothing for Zig. Zig isn't even out of beta yet.
jadbox 13 hours ago
Jarred has already said on Twitter that this was only an experiment for comparisons and very, very unlikely that they'd switch to Rust.
Validark 11 hours ago
I'm a full time Zig developer, and I see this as an absolute win. I know Jarred has said in the past he feels Zig makes him more productive, but I also think it's fair to say Bun was programmed in a way that's quite cavalier towards buffer overruns. I think Jarred and the Oven team will have significantly better luck with Rust.

Some commenters have remarked they only heard of Zig because of Bun, therefore this is bad for Zig. Not so. In my opinion, there has always been a mismatch. I say with no ill will that a divorce is likely better for both parties. I genuinely believe Bun will be better software once fully converted to Rust.

hitekker 7 hours ago
Not sure why you're getting downvoted; I think you're close to right. They were successful with one technology and had a great exit. They may also be successful with another technology post-acquisition.

Let's see the fruit of their decision.

declan_roberts 13 hours ago
The Pareto principle is in play here. It might take years to get that last percentage point.
rererereferred 18 hours ago
lousken 20 hours ago
Good enough for a side project, not good enough for transferring a banking system from COBOL.
pjmlp 19 hours ago
That is actually what companies like IBM and Unisys are already doing today, LLM assisted porting.

https://research.ibm.com/publications/enterprise-scale-cobol...

jaytaph 20 hours ago
Why not? I think we are perfectly capable of generating a test and validation environment that we can use for correctness. Most likely LLMs could do this better than engineers with zero to no domain and language knowledge can do these days. From that point on, rewrites would become feasible (not easy, feasible).
dangoodmanUT 12 hours ago
If this goes through, it feels like it will stoke Rust-on-Zig violence.
leecommamichael 10 hours ago
I just wish the camps would stop being as tribalistic. I see a broad spectrum of fights between any "better C" language and Rust enthusiasts. There is room for both of these things. Just use what works for you. Rust is a bit more like Ada in spirit, it introduces a lot of friction compared to "C like" things which gladly accept you blowing your leg off. Each tool has unique benefits, and is uniquely suited to different problems.

If I'm building a simple GUI app, I'm not sure the friction from Rust is all that worthwhile. If I'm sending someone to space, I think I'd rather have the safeties of a Rust or an Ada, or MISRA C.

kennykartman 11 hours ago
Sadly, yes. I feel too much "violence" on both parts.

Honestly, the Zig community seems the most bitter for whatever reason, while on the Rust side it seems to me that they are simply overstating how great the language is and are pushy in trying to convince others of their ideas.

If this goes through, we can all take SWE lessons from it, but I think the communities will suffer.

grigio 11 hours ago
STOP Analyzing.. Now rewrite the Linux kernel in rust. DO NOT MAKE MISTAKES, then post it on Hacker News.

---

FjordWarden 8 hours ago
I love Bun & Zig and this feels a bit like my parents are getting a divorce. I thought it was a bit strange that Bun did not sponsor the Zig foundation while other much smaller companies have.
Validark 7 hours ago
Are you kidding? IIRC Oven gave $5k/month to Zig for years. And btw that was before they got acquired for billions, when they had no income at all.
hitekker 7 hours ago
Yeah, that tracks according to the numbers.

https://ziglang.org/news/300k-from-mitchellh/

https://ziglang.org/news/2024-financials/#income

https://ziglang.org/news/2025-financials/#income

I had a bit of trouble finding it myself but Claude proved a better Googler than I

FjordWarden 7 hours ago
Alright, my bad, I did not find any info about this, but still, they are no longer mentioned as a sponsor.
kombine 1 hour ago
They could also do a rewrite of CC itself to Rust.
torben-friis 14 hours ago
>this is a 960,000 LOC rewrite, the code truly works, passing the test suite on Linux and soon other platforms

I wonder how much of this is original size vs rust requiring verbosity vs the LLM being verbose in general.

Not a criticism; I do believe language translation is the one field where AI is mature enough to nearly one-shot projects.

mikebelanger 11 hours ago
Interesting that ports can be written so quickly with AI. But that aside, I have to ask... why? If you want a super performant bundler/runtime/package manager written in Rust with TS support, Deno already has this.
CrzyLngPwd 12 hours ago
I'm looking forward to the race to the bottom in tokens-for-work-done.
13 hours ago
taosx 11 hours ago
That's amazing. Over time I got a few memory-related crashes w/ Bun, but I have deep respect for the performance work put in. Hopefully Rust's compiler will help even more.

Off-topic: I'm wondering whether, now that more JS finds a place on our machines and bundle size is a secondary concern for most, a revival of Prepack or projects in the same vein would be worth it, especially with agents.

fastball 12 hours ago
Obviously bun having been acquired by Anthropic changes the arithmetic a bit, but I'd love to see the token cost/consumption of this initiative.
voidhorse 10 hours ago
So let me get this straight:

Developers use LLMs to migrate a million-line codebase to a language that they have much less experience with, in such a short amount of time that they likely do not have a good mental model of the migrated code.

At least the tests pass.

Only one person drove the migration, so the number of people that understand the new code is ~0.5, under the assumption there's no way the sole dev could build a mental model of 1M lines of fresh code in 6 days.

This is code for a language runtime.

It's great that the tests pass, but it's really hard for me to interpret this as anything other than horrible mismanagement of a promising project. When you sit this low in the stack this is grossly irresponsible, and I have no idea why anyone would use Bun after this. You'd be literally adopting a runtime the devs presumably don't understand; keep in mind they now somehow need to evolve and maintain this in the future.

Hopefully this remains an experiment, or Bun has some plan for re-upping dev knowledge of the codebase. Sorry but a component with massive blast radius like a runtime isn't really a good candidate for vibe coding, no matter how good the AI is. I'd like the maintainers to actually understand their runtime, thanks.

jwpapi 9 hours ago
Thank you for putting the gut feeling I had in my top comment here into words. I didn't have the full explanation ready for why this threw me off.
slopinthebag 8 hours ago
They won't, they will continue to vibe code it until it collapses under them and the project fades into obscurity. Which it will regardless since it was acquired by Anthropic.

Node beat Deno and Bun. Pretty impressive.

Twey 10 hours ago
Were there perhaps [licensing issues](https://www.phoronix.com/news/Chardet-LLM-Rewrite-Relicense) with the original?
languid-photic 13 hours ago
would be fun to do zig -> rust -> zig and to measure the delta

(in a VAE-ish way, kl div on the embeddings?)

languid-photic 13 hours ago
also feels like a good posttraining task
pbohun 13 hours ago
How many tokens did this port consume?
allthetime 10 hours ago
Bun is owned by Anthropic and so has access to Mythos & unlimited tokens.

The answer is... more than any of us could likely afford.

11 hours ago
suck-my-spez 12 hours ago
Serious question… Who’s going to want to run a vibe coded runtime in production?

I don’t see how this is a good look for Bun?

kennykartman 11 hours ago
One should care about tests more than how code was coded.

If I had a codebase with lots of tests and asked someone else to rewrite it to another language passing the same test suite, I honestly wouldn't expect a great quality job.

I say this because it happened 3 times in the company I work for: we conducted experiments by tasking different companies to rewrite the same code in another language. All of them passed most of the tests, but code quality was low. If the job is a black box, rely on the I/O to determine quality, not the inner workings.

hellcow 8 hours ago
I care that runtime developers know and understand their codebase deeply. 1M LOC written by 1 dev in a short time does not inspire confidence in such an important dependency.

There's no way this code is understood fully by the original author, let alone anyone else. I wouldn't accept this from an intern, let alone in code that's fundamental to my business.

fg137 7 hours ago
I have seen, many times, code that has lots of tests but doesn't work.

Why?

Some of the patterns that I saw:

* The code is only called from tests but never called in production

* Tests are not testing the actual application logic, or the logic that matters. In some cases, the tests have nothing to do with the application code at all, because it does not even run any application code.

* Tests repeat the same logic as the application (a tautology; see the sketch at the end of this comment). All the time.

* Application code is actually incorrect. But tests just end up using the wrong expected value to make tests pass, disregarding what should happen.

That's using the latest models.

To make things worse, apparently people never bothered to go through the manual workflow at least once to verify the behavior.

Good luck just relying on tests.
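
The tautology pattern in particular is easy to miss in review. A sketch with a hypothetical shipping_cost_cents function:

  fn shipping_cost_cents(weight_kg: u32) -> u32 {
      // Bug: the rate should be 499 cents per kg, not 4990.
      weight_kg * 4990
  }

  #[test]
  fn tautological_test() {
      // Re-derives the expected value with the same (wrong) logic,
      // so it passes no matter what the function does.
      let weight = 2;
      assert_eq!(shipping_cost_cents(weight), weight * 4990);
  }

  #[test]
  fn meaningful_test() {
      // Pins the behavior to an externally known answer; this one
      // keeps failing until the bug is actually fixed.
      assert_eq!(shipping_cost_cents(2), 998);
  }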

zaptheimpaler 12 hours ago
I just see a ton of reflexive AI hate here. I don't care if it was vibe coded, if it passes the entire test suite and was vibe coded by the original authors, I trust it as much as the original Bun. These are Jarred's words about it:

> it’s basically the same codebase except now we can have the compiler enforce the lifetimes of types and we get destructors when we want them. and the ugly parts look uglier (unsafe) which encourages refactoring.

> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

This makes me trust it more, not less.
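
For readers who don't write Rust, a couple of the things he's pointing at look roughly like this (a generic sketch, nothing from the Bun codebase): Drop gives deterministic destructors, and the borrow checker turns use-after-free into a compile error:

  struct Connection {
      name: String,
  }

  impl Drop for Connection {
      // Runs deterministically when the value goes out of scope
      // (the "destructors when we want them" part).
      fn drop(&mut self) {
          println!("closing {}", self.name);
      }
  }

  fn main() {
      let conn = Connection { name: "db".to_string() };
      let alias = &conn.name;
      println!("using {}", alias);
      drop(conn); // destructor runs right here, not at some GC pause
      // Uncommenting the next line is a compile error, not a runtime crash:
      // the compiler knows `alias` borrowed from `conn`, which is gone.
      // println!("{}", alias);
  }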

perching_aix 13 hours ago
> and crashes and stability issues

inb4 .unwrap() / slice / etc hell + livelocks & deadlocks + resource leaks & toctou bugs + larger exposure to supply chain attacks

Still, ~1M LOC ported in a work week (400 LOC/min, wtf?) and almost all of it working is pretty wild. I hope the guy managed to maintain normal function, 'cause I found that getting into the flow with AI is even more self-consuming and intoxicating than without it, which was already potentially rather rough.

aabhay 9 hours ago
At 100 agents in parallel that's 4 LOC/min per agent, and 100 agents is a lower bound on what they had access to.
perching_aix 9 hours ago
It's not so much the agents' throughput I'd be worried about; I more meant to imply that at such speed, large parts of this are pretty much guaranteed to go completely unsupervised / unchecked. Like literal "LGTM + god bless + fuck it we ball" tier.
0-bad-sectors 19 hours ago
Interesting! I wonder how the performance is compared to the Zig version
mjtk 9 hours ago
The flagship product is both the cash cow (subsidizes rewrite) AND the labor (amortizes? rewrite).
m4rtink 19 hours ago
What license is this? Let me guess, it is not GPL...
scared_together 18 hours ago
Unlike the GNU coreutils rewrite in Rust, the Bun rewrite in Rust is being undertaken by the owners of the project.

That said, yes, you’re correct that Bun isn’t GPL: https://github.com/oven-sh/bun?tab=License-1-ov-file

m4rtink 13 hours ago
Hmm, that's unfortunate - why does so much Rust stuff seem to default to MIT/BSD? Just because Mozilla used that for most of the Rust stuff?

Do developers using Rust even know the difference? Like how anyone can basically take all your work & base a proprietary fork on it with maybe saying "thanks" (attribution) if they feel like it? :P

bob001 11 hours ago
> Like how anyone can basically take all you work & base a proprietary fork on it with maybe saying "thanks" (attribution) if they feel like it ? :P

I'd assume the Bun people got a bit more than a thanks when Anthropic acquired them. :)

You also can't take your GPL code (unless you do CLAs with all contributors), convert it to closed source yourself and make a massive VC funded startup around it. Which is about the only other way anyone makes better money from open source than by just working for a big tech company.

conradludgate 13 hours ago
I'm very aware when I pick Apache-2. I want attribution for my work, but I don't care about open source purity. I respect closed source software and I put my open source code up for free because I don't care to profit off of my hobbies.
johnny22 12 hours ago
for the same reason most ruby and javascript/typescript stuff is. Heck, even most python.

Most of them never got into the GPL in the first place.

raincole 12 hours ago
Your guess is correct! Congrats. Bun itself is not GPL either, by the way. Oh, the Rust compiler itself isn't GPL either.
pdhborges 14 hours ago
Curious how the test suite was applied. Was it ported from Zig to Rust beforehand?
190n 14 hours ago
Almost all of Bun's tests are written in JavaScript and run in Bun itself.
dlenski 13 hours ago
Deleted
logicprog 13 hours ago
@simonw explains how hilariously misguided that paper is in one of the top comments, and how it doesn't apply remotely to a real agent harness. Plus it's not even clearly relevant here, because the model isn't trying to regurgitate the original document, but generate a new one, and there are guardrails to put it back on track in the form of a compiler and tests. Also, the test suite is very thorough, and pre-existing, and the vast majority passes already. This is skepticism for the sake of it.
raincole 12 hours ago
Perhaps you can elaborate on how your comment is relevant to the Bun's experiment here.
timetraveller26 14 hours ago
3 years from now: Linux ported to Rust in 6 days.

And on the seventh day Claude ended His work which He had done, and He rested on the seventh day from all His work which He had done

kennykartman 11 hours ago
That's a fun point. I honestly don't think it will happen in 3 years, but I think it will surely be doable in 10.

More interestingly: will we need to care about the code at all, at that point?

Amber-chen 8 hours ago
This is a good reminder that tooling choices compound over time. The short-term speedup matters less than whether the next maintainer can still reason about the system.
arto 13 hours ago
The fastest large-scale rewrite in the history of software engineering, likely
hacker_88 11 hours ago
Merge with Deno
matrix12 12 hours ago
will this mean opencode is finally portable?
jauntywundrkind 10 hours ago
There is some really cool work to port opencode's underlying opentui to Node.js, including some new FFI work in Node itself that got merged (called... drum roll please... node:Ffi!). Really cool stuff. https://github.com/anomalyco/opentui/pull/939 https://github.com/nodejs/node/pull/62762

Also worth noting that opentui is... Zig!

Very unclear what it's going to take to get this reviewed and shipped, but some very high potential. I've seen some other changes going by in opencode for node.js compatibility; I'm not sure what besides the tui has Ffi needs that might be gating; maybe nothing!

born-jre 14 hours ago
Being an Anthropic-acquired project, does he have access to Mythos, or is it the normal Claude we plebs have access to?
tempest_ 14 hours ago
This is entirely possible with Claude as it existed even last year.

The LLMs are quite good at re-writes and even better when provided an 'oracle' like a well rounded test suite or existing implementation to work against.

It's part of the reason we keep seeing "I rewrote <library> in <language>" posts on Hacker News, and when you look at the repo it's more like "I prompted Claude to rewrite this repo in Rust" or whatever.

bel8 14 hours ago
As an Anthropic acquihire, not only does he have access to every model and service but he probably has infinite tokens available.

Bun powers Claude.

rishabhaiover 13 hours ago
Also, isn't it a great ad for Anthropic itself? One wonders
nine_k 14 hours ago
Indeed, knowing the amount of tokens spent would be very interesting.
sourcegrift 10 hours ago
Do scala.js next
AtNightWeCode 12 hours ago
Kinda crazy to use AI to switch from Zig to Rust in a tool that runs JS. Bin Bun and use a real lang to begin with. No reason to have that extra layer anymore.
dare944 11 hours ago
Lol, I had a similar thought as well, but more along the lines of "We're coming for you next, JavaScript!"

But the effort is certainly an exquisite rearrangement of the deck chairs, no?

bel8 10 hours ago
Bun runs TypeScript directly without external tooling.

bun script.ts just works.

Otherwise I bet it wouldn't even be a blip in our radar.

amai 12 hours ago
Bunner
the__alchemist 11 hours ago
Bun alert!
ekjhgkejhgk 13 hours ago
Explain it for dummies. Isn't Zig a programming language? Why are they rewriting a programming language in another programming language?
conradludgate 13 hours ago
They're not rewriting zig. They're rewriting bun, which is currently written in zig
up2isomorphism 9 hours ago
best way to kill an open source project in 2025 - use AI to port it to Rust.
sergiotapia 12 hours ago
Jarred's post is singlehandedly shitting on Zig's reputation. Not good juju for him to post like that.

"I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues"

Bun was Zig's poster child. If it moves away, Zig becomes yet another random language like Nim or Crystal.

iExploder 1 hour ago
I'd feel better to have that kind of person out of my community.

First of all, did he not pick the language for Bun himself? Then he introduced a bunch of memory bugs; sounds like a skill-issue cascade.

I remember him, some years ago in a podcast, touting how amazing Zig was for letting them be so performant, which was Bun's claim to fame; now he turns around and shits on the thing. Interesting persona.

BLACKCRAB 1 hour ago
[dead]
chenzhekl 1 hour ago
[dead]
Ati985 3 hours ago
[dead]
marsven_422 3 hours ago
[dead]
Jimmy0252 9 hours ago
[dead]
lerp-io 13 hours ago
[flagged]
redsocksfan45 13 hours ago
[dead]
jdw64 12 hours ago
[dead]
black_13 22 hours ago
[dead]
rvz 22 hours ago
[flagged]
vintagedave 22 hours ago
> absolute position of hating something such as AI and progress

Most takes I've seen are far more nuanced.

Key is that 'progress' has a positive connotation. It is different from change. Mere change - such as new inventions - may not necessarily be aligned with progress in a field, society, etc.

Change may be inevitable, but it's up to us humans to sculpt it into progress.

rvz 22 hours ago
But I am talking about Zig and others who have the same stance. Zig has a very strict No LLM / AI contribution policy and it likely got in the way of the Bun maintainers at Anthropic. From [0]

>> No LLMs for issues.

>> No LLMs for patches / pull requests.

>> No LLMs for comments on the bug tracker, including translation.

[0] https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy

vintagedave 20 hours ago
They don't hate it. There's no antagonism that I know of there. I believe they want it to be fully human-authored and want low-hanging fruit items to be good onboarding for developers, not targeted by AI contributions. Simon Willison wrote a good blog post on it: https://simonwillison.net/2026/Apr/30/zig-anti-ai/

The Bun pull request was refused for additional reasons: 'AI is entirely beside the point here...': https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

None of this is, in the original comment's text, "hating... AI".

heldrida 21 hours ago
That's true, but the author might have decided on his own. Not everything is a marketing plan.
roschdal 12 hours ago
Meh. I prefer Java, all hours of the day, every day of the week.
parliament32 14 hours ago
Ew
pjmlp 14 hours ago
> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

As expected, Modula-2 / Objective Pascal-like safety was great during the last century, before automatic resource management and improved type systems became common in this century.

Naturally, I also have to note: wasn't this supposed to be only an experiment, nothing serious?

heldrida 23 hours ago
An update on Bun’s experimental migration from Zig to Rust:

The Rust rewrite now passes 99.8% of Bun’s pre-existing Linux x64 glibc test suite.