Will I ever own a zettaflop? (geohot.github.io)
102 points by surprisetalk 3 days ago | 16 comments
throw0101d 10 hours ago
Somewhat related, why the creators of Zettabyte File System (ZFS) decided to make it 128 bits (writing in 2004):

> Some customers already have datasets on the order of a petabyte, or 2^50 bytes. Thus the 64-bit capacity limit of 2^64 bytes is only 14 doublings away. Moore's Law for storage predicts that capacity will continue to double every 9-12 months, which means we'll start to hit the 64-bit limit in about a decade. Storage systems tend to live for several decades, so it would be foolish to create a new one without anticipating the needs that will surely arise within its projected lifetime.

* https://web.archive.org/web/20061112032835/http://blogs.sun....

And some math on what that means 'physically':

> Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device. In particular, it has been shown that 1 kilogram of matter confined to 1 liter of space can perform at most 10^51 operations per second on at most 10^31 bits of information [see Seth Lloyd, "Ultimate physical limits to computation." Nature 406, 1047-1054 (2000)]. A fully-populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

> To operate at the 10^31 bits/kg limit, however, the entire mass of the computer must be in the form of pure energy. By E=mc^2, the rest energy of 136 billion kg is 1.2x10^28 J. The mass of the oceans is about 1.4x10^21 kg. It takes about 4,000 J to raise the temperature of 1 kg of water by 1 degree Celsius, and thus about 400,000 J to heat 1 kg of water from freezing to boiling. The latent heat of vaporization adds another 2 million J/kg. Thus the energy required to boil the oceans is about 2.4x10^6 J/kg x 1.4x10^21 kg = 3.4x10^27 J. Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.

* Ibid.
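
For anyone who wants to poke at the arithmetic, here's a quick sketch reproducing the quoted numbers under the same assumptions (Lloyd's 10^31 bits/kg limit, 512-byte blocks, round constants for water); back-of-the-envelope, not gospel:

  # Sketch of the quoted calculation; constants are deliberately rough.
  c = 3e8                                  # speed of light, m/s
  bits = 2 ** 140                          # 2^128 blocks * 512 bytes * 8 bits
  mass = bits / 1e31                       # kg, at Lloyd's 10^31 bits per kg
  rest_energy = mass * c ** 2              # J, via E = mc^2

  ocean_mass = 1.4e21                      # kg
  heat_per_kg = 4_000 * 100 + 2_000_000    # warm 0->100 C, then vaporize; J/kg
  boil_energy = heat_per_kg * ocean_mass   # J

  print(f"mass of bits: {mass:.3g} kg")        # ~1.4e11 kg (the quote's ~136 billion)
  print(f"rest energy:  {rest_energy:.3g} J")  # ~1.2e28 J
  print(f"boil oceans:  {boil_energy:.3g} J")  # ~3.4e27 J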

Nevermark 2 hours ago
> In particular, it has been shown that 1 kilogram of matter confined to 1 liter of space can perform at most 10^51 operations per second on at most 10^31 bits of information

I believe the Bekenstein bound for holographic information on a 1-liter sphere, using space at the Planck scale for encoding, instead of matter, is about 6.7×10^67 bits.

I confess I got that number by taking round trips through multiple models to check for a clear consensus, as my form of "homework"; this is not my area of expertise.
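
For anyone who'd rather check that figure than poll models, here is a minimal sketch of the usual area form of the bound, assuming one bit per 4*ln(2) Planck areas on the surface of a 1-liter sphere (constants approximate, and I may well be abusing the physics):

  import math

  l_p = 1.616e-35                         # Planck length, m (approx.)
  V = 1e-3                                # 1 liter, in m^3
  r = (3 * V / (4 * math.pi)) ** (1 / 3)  # radius of a 1 L sphere, ~6.2 cm
  A = 4 * math.pi * r ** 2                # surface area, ~0.048 m^2

  bits = A / (4 * l_p ** 2 * math.log(2))
  print(f"{bits:.1e} bits")               # ~6.7e67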

As far as figuring out energy or speed limits for operations over post-Einsteinian twisted space, that will require new physics, so I am just going to wait until I have a 1-liter Planckspace Neo and measure the draw while it counts to a very big number for a second. (Parallel incrementing with aggregation obviously allowed.)

Point being, there is still a lot of room at the bottom.

Interesting thought. Space can expand faster than the speed of light over significant distances, without breaking the speed limit locally.

But what happens if complex living space begins absorbing all the essentially flat local space around it? Is there a speed limit to space absorption? If space itself is shrinking, due to post-Einsteinian structures/packing, then effective speed limits go away, as traversal distances, and perhaps even the meaning of distance, disappear. So, perhaps not. I call this the "AI Crunch" end-of-the-universe scenario.

That is the computer I want. And I believe that sets a new upper bound for AI maximalism.

limbicsystem 1 hour ago
I think you would very much enjoy this book: https://share.google/boWcVLRiYz0c7EmKh

They talk quite a bit about this sort of thing at the end...

jandrewrogers 9 hours ago
Single data sets surpassed 2^64 bytes over a decade ago. This creates fun challenges since the metadata structures alone can't fit in the RAM of the largest machines we build today.
jasonwatkinspdx 9 hours ago
Virtualization has pushed back the need for a while, but we are going to have to look at pointers larger than 64 bits at some point. It's also not just about the raw size of datasets: we get a lot of utility out of various memory-mapping tricks, so we consume more address space than the strict minimum required by the dataset. Also, if we move up to 128 bits, a lot more security mitigations become possible.
eru 7 hours ago
Please keep in mind that doubling isn't the only option. There's lots of numbers between 64 and 128.
jandrewrogers 8 hours ago
By virtualization are you referring to virtual memory? We haven't even been able to mmap() the direct-attached storage on some AWS instances for years due to limitations on virtual memory.

With larger virtual memory addresses there is still the issue that the ratio between storage and physical memory in large systems would be so high that cache replacement algorithms don't work for most applications. You can switch to cache admission for locality at scale (strictly better at the limit albeit much more difficult to implement) but that is effectively segmenting the data model into chunks that won't get close to overflowing 64-bit addressing. 128-bit addresses would be convenient but a lot of space is saved by keeping it 64-bit.

Space considerations aside, 128-bit addresses would open up a lot of pointer tagging possibilities, e.g. the security features you allude to.
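
As a toy illustration only (Python for brevity; pack/unpack are hypothetical helpers, and the 48-bit usable-address figure is just what's typical today): with 64-bit pointers a tag has to squeeze into the handful of bits above the virtual address, whereas a 128-bit pointer with the same usable address space would leave a full 64 bits spare for tags, generation counters, or capability-style metadata.

  TAG_BITS = 16                        # bits above a typical 48-bit virtual address
  ADDR_BITS = 64 - TAG_BITS
  ADDR_MASK = (1 << ADDR_BITS) - 1

  def pack(addr: int, tag: int) -> int:
      # Hypothetical helper: stash a small tag in the otherwise-unused high bits.
      assert addr == addr & ADDR_MASK and 0 <= tag < (1 << TAG_BITS)
      return (tag << ADDR_BITS) | addr

  def unpack(word: int) -> tuple[int, int]:
      return word & ADDR_MASK, word >> ADDR_BITS

  addr, tag = unpack(pack(0x7000_dead_beef, 0xA5))
  print(hex(addr), hex(tag))           # 0x7000deadbeef 0xa5
  print(128 - 64, "spare bits in a hypothetical 128-bit pointer")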

jasonwatkinspdx 8 hours ago
> By virtualization are you referring to virtual memory?

No, I mean k8s-style architecture, where you take physical boxes and slice them into smaller partitions, so the dataset on each partition is smaller than the raw hardware capability. That reduces the pressure towards the limit.

jandrewrogers 8 hours ago
Ah yeah, that makes sense. With a good enough scheduler that starts to look a lot like a cache admission architecture.
jasonwatkinspdx 7 hours ago
I'd never thought of it that way, and it's an interesting perspective.
popol12 9 hours ago
Very interesting, could someone please do the same computation for filling 64-bit storage?
tbrownaw 9 hours ago
16 million terablocks, or 8 billion terabytes.

Or a third of a billion 24 TB drives, which is one of the larger sizes currently available.

Some random search results say the global hard drive market is around an eighth of a billion units, but of course much of that will be smaller sizes.

So that should be physically realizable today (well, with today's commercial technology), with only a few years of global production.
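
Rough arithmetic behind those figures, assuming the quote's original 512-byte blocks and decimal terabytes for drive capacities (a sketch, so expect some round-off between TB and TiB):

  blocks = 2 ** 64
  bytes_total = blocks * 512                          # 512-byte blocks, per the quote
  print(f"{blocks:.3g} blocks")                       # ~1.8e19, i.e. ~16 million tera-blocks
  print(f"{bytes_total / 1e12:.3g} TB, "
        f"{bytes_total / 2 ** 40:.3g} TiB")           # ~9.4e9 TB, ~8.6e9 TiB
  print(f"{bytes_total / 24e12:.3g} x 24 TB drives")  # ~3.9e8, roughly a third of a billion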

throw0101d 4 minutes ago
> Or a third of a billion 24 TB drives, which is one of the larger sizes currently available.

For the record, 44TB drives have been announced for March 2026:

* https://www.seagate.com/ca/en/stories/articles/seagate-deliv...

Dylan16807 9 hours ago
> 16 million terablocks, or 8 billion terabytes.

To be clear, the first quote was talking about 2^64 bytes, and you're talking about 2^64 blocks.

Edit: Though confusingly the second part talked about 2^128 blocks.

Also these days I'd assume 4KB blocks instead of 512 bytes.

tbrownaw 8 hours ago
> To be clear, the first quote was talking about 2^64 bytes

That's 16 exabytes. Wikipedia cites a re:invent video to say that Amazon S3 has "100s of exabytes" in it.

So it not only could theoretically be done, but has been done.

https://en.wikipedia.org/wiki/Amazon_S3

jandrewrogers 8 hours ago
Storage densities can be extremely high. Filling 2^64 bytes of storage is very doable and people have been doing it for a while. It all moves downstream; I remember when 2^32 bytes was an unimaginable amount of storage.

Many petabytes fit in a single rack and many data sources generate several petabytes per day. I'm aware of sources that in aggregate store exabytes per day, most of which gets promptly deleted because platforms that can efficiently analyze data at that scale are severely lacking.

I've never heard of anyone actually storing zettabytes but it isn't beyond the realm of possibility in the not too distant future.

Dylan16807 9 hours ago
You want someone to put "3.4*10^27 / 2^64" into a calculator? 200 million joules, using all the same assumptions. 50kWh. Though that leaves the question of how the energy requirements change when we're not going for extreme density (half a nanogram??).

If we instead consider a million 18TB hard drives, and estimate they each need 8 watts for 20 hours to fill up, 2^64 bytes take 160MWh to write on modern hardware. And they'll weigh 700 tons.

Edit: The quote is inconsistent about whether it wants to talk about bytes or blocks, so add or subtract a factor of about a thousand depending on what you want.
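
A sketch for flipping between the two readings yourself: dividing the quote's 2^137-byte figures by 2^64 gives 2^64 blocks' worth, dividing by 2^73 gives 2^64 bytes' worth (same assumptions as the quote; the drive estimate reuses the parent's round numbers):

  quote_energy = 3.4e27        # J, the "boil the oceans" figure for 2^137 bytes
  quote_mass = 1.36e11         # kg, the minimum mass at Lloyd's 10^31 bits/kg

  for label, scale in [("2^64 blocks", 2 ** 64), ("2^64 bytes ", 2 ** 73)]:
      e, m = quote_energy / scale, quote_mass / scale
      print(f"{label}: {e:.2g} J (~{e / 3.6e6:.2g} kWh), {m:.2g} kg")

  # The hard-drive version: a million 18 TB drives at ~8 W for ~20 hours each.
  print(f"{1e6 * 8 * 20 / 1e6:.0f} MWh, ~{1e6 * 0.7 / 1000:.0f} tonnes of drives")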

kubb 3 hours ago
Human appetite for more knows no bounds. Imagine what we’d have to do for everyone to have a zettaflop. We won’t have the resources for it. So guys like this one are in competition with normal people who just need a little bit of compute, all so they can feel powerful with a million Claudes. Sad.
noduerme 3 hours ago
For one thing, most news websites would have to load at least 10,000x as much useless javascript to achieve the same performance.
kombine 3 hours ago
Poor mother Earth, this race is unsustainable. In order to satiate guys like geohot we are pillaging the natural resources, destroying ecosystems, fucking up the climate.
ramon156 2 hours ago
I think I know two or three other people who are much, much more to blame... putting it on one dude is weird.
kombine 1 hour ago
I apologise for not being very clear, it's on all of us - including myself.
chuzz 1 hour ago
Who's talking about owning a zettaflop on Earth? The second part of sustainable growth is growth, after all.
throwaway198846 2 hours ago
I kind of agree with you: there is only enough area for solar for 150 million people (even if we assume all land is solarable), but there is no reason we couldn't eventually have everyone with a million Claudes with fusion.

Edit: There is a problem with that: 10MW * 8 billion is something like half the solar power hitting the Earth, which implies the waste heat would warm the planet considerably.
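
Quick check on that edit, assuming roughly 1.7x10^17 W of sunlight intercepted by the Earth (a sketch; all numbers are round):

  solar_at_earth = 1.7e17        # W intercepted by Earth's cross-section
  total = 10e6 * 8e9             # 10 MW for each of 8 billion people
  print(f"{total:.1e} W, ~{total / solar_at_earth:.0%} of incoming sunlight")  # ~47%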

exe34 2 hours ago
Electron apps will expand to fill the available space!
nl 10 hours ago
Firstly, True Names is an awesome read, and the real origin of cyberpunk. I much prefer it to Neuromancer or Diamond Age.

Secondly, I recently tried to work out which year's Top500 list[1] a machine costing around US$5000 could reasonably place on. It's surprisingly difficult to work out, mostly because they use 64-bit flops and few other systems quote that number.

[1] https://top500.org/lists/top500/2025/11/

bee_rider 7 hours ago
I looked at something kinda similar a little bit ago.

https://news.ycombinator.com/item?id=45303483

Jeff Geerling made a $3000 raspberry pi cluster and shared the linpack scores, so I looked at when it’d hit different spots in the top500 list. He’d have won from ‘93 to June ‘96, and then been knocked out of the top 10 in November ‘97.

That’s with a pretty substantial constraint, making it out of Raspberry Pis, and a lower budget. With $5000, and your pick of chips… I bet you could hit the turn of the century…

eru 7 hours ago
Isn't the Diamond Age something like post-cyberpunk already?

It came out three years after Snow Crash, which already ironically referenced "The sky above the port was the color of television, tuned to a dead channel".

I agree that Neuromancer wasn't a great novel, though it obviously had vibes that resonated with many people. That the novel was otherwise a bit of a dud actually speaks to how strong the vibes had to be to overcome that.

ehnto 2 hours ago
I feel that's a bit uncharitable; it wasn't just vibes, it was imaginative world building, with some truly interesting and novel concepts tied into a decent enough story to enjoy the world within.

As with much from this thread of cyberpunk writing, the cities and world are the most important characters, and the storyline is just an excuse to wander through their streets.

eru 1 hour ago
'Vibes' was probably the wrong word. I agree with you.

Though about the world building: he threw out a lot of neologisms on the page, and later other writers gave them meaning.

arthurjj 9 hours ago
I just want to thank the submitter. This is the type of internet that I really miss. A very smart person who's a good writer, proud of their interests and obsessions.
danpalmer 2 hours ago
Geohot is famous for not being as smart as he makes out. He famously said he'd go to Twitter when Musk bought it and help Musk fix search, because "how hard can it be". Then he left in shame 3 months later, having achieved nothing except figuring out that It's A Bit More Complicated Than That(tm).

Comma does some cool stuff, if relatively entry-level, and this post is good napkin maths and a fun read, but there is so much more depth here, and a hundred ways in which the post is wrong or over-simplified to the point of near irrelevance.

petesergeant 1 hour ago
> Geohot is famous for not being as smart as he makes out

That someone isn't as intelligent as they think they are doesn't place an upper bound on their intelligence.

ks2048 8 hours ago
I disagree. He comes off as an arrogant guy rather than a curious scientist.

What will it take to get this before you die? What are the physical limits on shrinking things further and speeding them up further? He talks about solar, but what are the physical limits and how can we get there?

I think there's interesting physics here, but this sounds like just a rich guy craving more power.

ipnon 9 hours ago
Yes, to paraphrase Jobs, I'm only interested in the intersection of Technology Avenue and Liberal Arts Street.
svantana 3 hours ago
Hedonic treadmill. Once he's approaching that zettaflops, he'll want a yottaflops.
cushychicken 44 minutes ago
geohot definitely ticks the box for “so ambitious he occasionally sounds unhinged”.
randomtoast 1 hour ago
There are two ways to be unhappy. Not getting what you want and getting what you want.
attentive 2 hours ago
Better question: will a zettaflop ever own me?
Havoc 3 hours ago
Fun hypothetical.

I’d say there's a bit of a flaw in the "read 50,000 books" part though. The LLM reading that much doesn’t really get you 50k books of value as a person. You’re the bottleneck, not the flops.

Sprotch 9 hours ago
And when it comes, people will use it for porn, memes, and arguing with each other in bad faith.
mememememememo 9 hours ago
People?
etothepii 8 hours ago
Some people.
supermdguy 10 hours ago
If all LLM advancements stopped today, but compute + energy got to the price where the $30 million zettaflop was possible, I wonder what outcomes would be possible. Would 1000 Claudes be able to coordinate in meaningful ways? How much human intervention would be needed?
mysecretaccount 7 hours ago
Fun post, but I find the industry's obsession with compute to be rather vapid, and this is a good example:

> One million Claudes. To be able to search every book in history, solve math problems, write novels, read every comment, watch every reel, iterate over and over on a piece of code until it’s perfect – spend a human year in 10 minutes. 50,000 people working for you, all aligned with you, all answering as one.

We are already near the limits of what we can do if we throw compute at Claude without improving the underlying models, and it is not clear how we can get big improvements on the underlying models at this point. Surely geohot knows this, so I am surprised he thinks that "one million Claudes" would be able to e.g. write a better novel than one hundred Claudes, or even one Claude.

emaadm 7 hours ago
> We are already near the limits of what we can do

Hard disagree. If I had a million Claudes' worth of compute I'd be livestreaming my entire reality feed to a local server 24/7 and having it organize my observations and thoughts, synthesize new ideas, implement prototypes and discard infeasible ones while I sleep. If you're in the business of knowledge creation, a million Claudes isn't enough. Text is an easy modality; I want foundation models that operate on text, images, audio, video, streaming point clouds, ...

mysecretaccount 3 hours ago
Using a single Claude agent, ask it to generate "new" ideas and it will generate an immense list. Ask it to rank those ideas by novelty and it will comply.

The results will be lackluster. Additional agents will not improve the result.

TheOtherHobbes 49 minutes ago
I call this the Laurie Anderson fallacy, from a line in one of her songs:

> "Heaven is exactly like where you are right now, but much, much better."

If a million Claudes of compute were accessible, people would not be doing the same things they are now, but more so. They'd be doing very different things we likely can't imagine - in the same way that Alan Turing imagined machines learning from experience, but didn't imagine downstream products like Sora, ad tech, or social media, or their cultural and economic effects.

anentropic 2 hours ago
I doubt you'd create that much knowledge.
bitwize 3 hours ago
Agentic development allows one Claude, multiplexed a few times, to vastly improve its output and tackle much bigger problems than just prompting the one instance. If you had a million Claudes in layered networks like we do with matmuls to form Claude, you'd be really cooking with gas.

(Maybe that's why they call it Gas Town?)

mysecretaccount 2 hours ago
The output undoubtedly improves when looping LLM output back into the model at inference-time, but there is a limit to this and it is still bounded by the acumen of the underlying model. You cannot just recurse these models with tooling and compute to e.g. solve new physics.
JackYoustra 10 hours ago
nit: it's a zettaflops, not a zettaflop
latentframe 7 hours ago
Maybe the bottleneck is shifting from compute to energy and capital; at some point it stops being a software problem and starts looking like infrastructure: power, land, cooling... Just feels like the constraint is moving down the stack.
androiddrew 10 hours ago
Not with the price of silicon being what it is
ge96 10 hours ago
Where are we at with the rat-brain CPUs?
Avicebron 10 hours ago
We keep losing people to the sewers... some in the organization are speculating they might be building a human-brain CPU to retaliate. Progress is slow.
gurjeet 8 hours ago
s/people/cpus/
jmyeet 10 hours ago
I'm a big believer that humanity's future is in space in a Dyson Swarm. There are simply too many advantages. It's estimated that humanity currently uses ~2x10^13 Watts of power. About 1.7x10^17 Watts of solar energy hits the Earth, but the Earth's cross-section intercepts less than a billionth of the Sun's total energy output. A Dyson Swarm would give us access to ~10^25 Watts of power. With our current population that would give every person on Earth living space about equivalent to Africa and access to more energy than our entire civilization currently uses by orders of magnitude.

I bring this up to present an alternate view of the future that a lot of thought has gone into: the Matrioshka Brain. This is basically a Dyson Swarm but the entire thing operates as one giant computer. Some of the heat from inner layers is captured by outer layers for greater efficiency. That's the Matrioshka part.

How much computing power would this be?

It's hard to say but estimates range from 10^40 to 10^50 FLOPS (eg [1]). At 10^45 FLOPS that would give each person on Earth access to roughly 100 trillion zettaflops.

[1]: https://www.reddit.com/r/IsaacArthur/comments/1nzbhxj/matrio...
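
Some order-of-magnitude checks on the above, using round constants (1361 W/m^2 insolation, a collecting shell at 1 AU, today's ~8 billion people); a sketch, not a design study:

  import math

  sun_output = 3.8e26                        # W, total solar luminosity
  earth_xsec = math.pi * 6.371e6 ** 2        # m^2, Earth's cross-section
  sunlight = earth_xsec * 1361               # W hitting Earth
  print(f"{sunlight:.1e} W on Earth, {sunlight / sun_output:.1e} of the Sun's output")

  au = 1.496e11                              # m
  area_each_km2 = 4 * math.pi * au ** 2 / 8e9 / 1e6
  print(f"~{area_each_km2:.1e} km^2 per person (Africa is ~3.0e7 km^2)")

  print(f"~{1e45 / 8e9 / 1e21:.1e} zettaflops per person at 10^45 FLOPS")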

PxldLtd 35 minutes ago
I very much disagree; there are just too many engineering hurdles to surmount for this to be a reasonable solution. When you actually break down the physics, the scale works against you.

You can't have "one giant computer" when the speed of light is a 16-minute ping time from one side of the swarm to the other. Also cooling: space is a vacuum, so you can't just use convection. The inner layers would melt before they could radiate the heat away.

Even for maintenance and power distribution, you're talking about trillions of nodes that need active course correction to avoid a chain reaction of collisions.

There's so many reasons this is not feasible and more of a whimsical thought experiment. I've barely even touched on most of the issues.
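
The light-lag point checks out with a one-liner, assuming a swarm roughly 1 AU in radius:

  au, c = 1.496e11, 3.0e8        # m, m/s
  print(f"{2 * au / c / 60:.1f} minutes of one-way light time, edge to edge")  # ~16.6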

ninkendo 9 hours ago
It makes me wonder about what it would take to actually create one.

You’d need self-replicating machines to build it, naturally. You’d need some ability for them to mine from asteroids and process the materials right there on the spot. And they’d need to be able to build both the processor “swarmlets” (probably some stamped-out solar/engine/CPU package) and more builders, so that the growth can be exponential. Oh, and the ability to turn solar energy into thrust somehow using only fuel you can get from the mined asteroids. Maybe a prerequisite is finding a solar system that has a huge and extremely uranium-rich asteroid belt.

You would need a CPU design that can be built using the kind of fidelity that a self-replicating machine in space under constant solar radiation can achieve. But if you can get the scale high enough, maybe you can just brute force it and make machines on the computational scale of a Pentium 3, but there’s 10^40 of them so who cares. Maybe there’s a novel way of designing a durable computing machine out of hydrocarbons we have yet to discover.

The machines would have to self-replicate, and you’d need to store the instructions somewhere hardened. And that storage can be built out of materials commonly found in asteroids. Maybe hydrocarbons. Hell, may as well use RNA. These things need to be as good as humans at building stuff, so really this is just creating artificial “life” that has DNA and is made of cells that build the proteins needed to create the machine. Maybe they reproduce by spreading little DNA seeds that can attach to an asteroid with the right chemistry, and we just spew them into the cosmos at a candidate star and hope the process gets kickstarted. Hell, we could make it spew its own DNA at the next stars over as soon as it’s done. We’d have a whole galaxy computing for us; all we’d need is the right DNA instructions, the right capsule for them, and a way to launch them.

Maybe another civilization has already done this…

voidUpdate 3 hours ago
If we absorb all of the sun's energy using a Dyson swarm, the Earth is going to get very cold and dark.
userbinator 9 hours ago
Dyson Swarm sounds like the name of an aggressive cleaning machine.
danw1979 2 hours ago
It's 2071. James Dyson, now 124 and in better health than ever, thanks to the AI-fuelled nanorobotic revolution, has just lost control of the last of the company's Dyson Swarms. What started as a fleet of cleaning-nanites, a dirt and dust-eating squadron-for-hire, has gone rogue; all of Earth's organic matter is now on the menu. People still haven't forgiven him about Brexit.
mememememememo 9 hours ago
Saw zettaflop in the title. Knew it would be that guy!
echelon 10 hours ago
There's no way we're not living in a historical simulation.

This is all just such crazy coincidence.

Everything is coming together so quickly.

Dylan16807 6 hours ago
For the "coincidence" part: While technology has been advancing very fast, the human population also ballooned alongside that advancement, so the odds of any particular intelligent Earthling being born in such an era of growth are pretty high.
ks2048 8 hours ago
What's a "a historical simulation" and why is it all such a coincidence?

The "simulation argument" to me is ridiculous.

echelon 8 hours ago
We don't know if life is rare, but intelligence seems rare given our sky surveys to date.

Life started on Earth shortly after the solar system formed. It took a quarter of the age of the universe for intelligence and civilization to arise. A long, long, looooong time.

From a numbers perspective, our minds are all shockingly rare. The universe probably doesn't produce many of us through stellar and then biological evolution.

Taking that into consideration, contrast that with the flip side.

If technological simulation indistinguishable from reality ever exists, it could simulate quintillions of minds. It seems like we're on the path to that technology, with a great degree of probability.

Given that, and the fact that historical simulation is likely easy for the future, it seems more probable to me that we're one of the quintillions of simulations rather than the origin timeline.

I'm half joking, but I'm half not. It's a fun thought experiment with absolutely no basis in science.

Do you even really know if you were alive a second ago? Perhaps you were just instanced into existence with a set of memories - memories you are not randomly accessing. A very deliberate version of a Boltzmann Brain.

Again, pseudoscientific tomfoolery, but fun to ponder.

defrost 8 hours ago
> but intelligence seems rare given our sky surveys to date.

Most of the universe, by a large percentage (or so I'm told), is further away than 200 light years.

What signs of intelligence would we see or detect in our own solar system from a distance of 200 light years should we scan it with all our latest and best tech?

trhway 10 hours ago
Look at the history of technology, and before that at biological history: how long it took to go from single cells to multicellular life vs., for example, how long it took to go from the lizard brain to the human brain. Things naturally go exponential (my thinking on why: https://news.ycombinator.com/item?id=9418811), at least until they hit some wall, yet so far hitting walls has mostly only stimulated even more advanced development.

There is an issue of the "non-uniformity of the spread of the future" with fast development, though: the faster the development, the stronger the non-uniformity and the tensions it creates. Strong non-uniformity and the resulting tensions have a tendency to resolve catastrophically on their own at some point if not solved/smoothed in other ways first.