184 points by spenvo 4 hours ago | 16 comments
reissbaker 1 hour ago
I run a small open source LLM inference company, Synthetic.new. As far as I can tell, CNBC isn't reporting this accurately: the problem isn't that Oracle is building "yesterday's data centers": they're building Blackwell DCs! Those are today's DCs.

The problem appears to be that Oracle is building today's DCs... Tomorrow. And by the time they come online, Vera Rubins will be out, with 5x efficiency gains. And Oracle is unlikely to want to drop the price of Blackwells 5x, despite them being 5x less efficient.

It's a little unclear to me how bad this is. Nvidia's "rack scale" machines like GB200-NVL72s and GB300-NVL72s are basically a fully built rack you roll into a DC and plug into power and network. In that case, Oracle should probably just buy the rack-scale Vera Rubins when they come out instead of Blackwells and roll them into their new DCs. Tada! Tomorrow's DCs, tomorrow.

OTOH it's possible someone at Oracle screwed up and committed to buying Blackwells at today's prices, delivered tomorrow. Or maybe construction of the physical DCs is behind schedule, so today's Blackwells are sitting around unused, waiting for power and networking tomorrow. Then they're in a bit of trouble.

Regardless, CNBC's reporting seems pretty unclear on what actually happened and whether this is actually bad or not.

dchftcs 20 minutes ago
A 5x improvement of energy efficiency in just the GPUs translates to more like a 50% reduction in power usage, which is significant but doesn't warrant an 80% reduction in pricing. Especially since Nvidia will charge more for the same card - they have been pricing things pretty aggressively.
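A quick back-of-envelope sketch of that point. The 60% GPU share of facility power is my assumption, not a quoted figure; the rest (cooling, networking, CPUs, power conversion) is assumed unchanged:

```python
def facility_power_reduction(gpu_fraction: float, gpu_speedup: float) -> float:
    """Fractional reduction in total facility power for the same workload,
    if only the GPU portion gets gpu_speedup-times more efficient."""
    new_power = (1 - gpu_fraction) + gpu_fraction / gpu_speedup
    return 1 - new_power

# Hypothetical: GPUs draw ~60% of facility power, 5x efficiency gain.
print(f"{facility_power_reduction(0.6, 5.0):.0%}")  # 48%
```

So a 5x GPU gain lands near the ~50% facility-level figure only if GPUs dominate the power budget.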
zedlasso 55 minutes ago
They are saying what you are saying. At least Deirdre Bosa did. I think there are a lot of folks internally who don't understand the gravity of it and keep questioning it.

You are right about the building of today's DCs. There is a small part of me that feels Oracle might be a bit toxic long term with all this debt he and his kid have taken on. And this could be the first reaction to it.

okasaki 38 minutes ago
Next servers might need more power or different cooling. Then your DCs are just big concrete rooms.
dboreham 31 minutes ago
All DCs are big concrete rooms that can supply so much power per unit area and remove so much heat per unit area (the two are related, of course, since the heat comes from dissipating the power). The variation is just in the density of whatever sort of fancy resistor you plan to put in the concrete room.
mikelitoris 1 hour ago
I hope the lawnmower goes bankrupt with this and the hostile WB takeover.
bayarearefugee 18 minutes ago
> I hope the lawnmower goes bankrupt with this and the hostile WB takeover.

Unfortunately there is no chance of that happening.

At his level of personal wealth there is no realistic scenario that leads to personal bankruptcy. In our current capitalist society once you're into the billions you're "too big to fail" and you have unlocked the infinite money glitch.

The only consolation is the lawnmower is 81 and thus is going to be dead soon (even the mega-wealthy can't plastic surgery themselves out of this outcome, at least not yet) and he can't take any of it with him. But all indications point to his progeny having aspirations to be even more damaging to society than he has been.

gruez 15 minutes ago
> In our current capitalist society once you're into the billions you're "too big to fail" and you have unlocked the infinite money glitch.

That's not how any of this works. "Too big to fail" can be applied to companies, but I don't know of any examples of it being applied to people.

bayarearefugee 9 minutes ago
Please provide a list of all multi-billionaires who have somehow managed to lose any significant portion of their wealth outside of a divorce combined with bad marriage planning. And even in those rare cases, they don't approach bankruptcy.

It isn't that they get bailed out by the government (like the banks in 2008), it is that at the scale of their wealth there is no realistic way to lose it fast enough to make any significant negative difference when the neutral state of wealth at that scale is to snowball ever larger (mostly because we refuse to tax it appropriately).

wilkystyle 46 minutes ago
Don't anthropomorphize the lawnmower.
thefounder 21 minutes ago
From the consumer perspective the last thing I want is a Netflix-ation of WB…
wmf 2 hours ago
I don't believe that Stargate is "yesterday's data center". It's being built in multiple phases and Oracle has access to Nvidia's roadmap. They know 200 kW/rack is coming. The newer phases could easily be built out to support Rubin and Feynman.
HolyLampshade 1 hour ago
200 kW/rack is absolutely insane to me. The power consumption of these facilities is just...ridiculous.
ineedasername 46 minutes ago
With respect to consumption, it’s pretty efficient vs older traditional servers, though I know workloads like that aren’t completely fungible. Nonetheless it bears keeping in mind that a single GB200 NVL72 rack provides 1.4 ExaFLOPS of AI compute (at FP4 precision, ideal circumstances, but this is envelope math all around). So it’s power efficient, for what it is.
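To put a number on "efficient for what it is" (envelope math all around; the ~120 kW rack draw is an assumed ballpark, not a spec):

```python
rack_flops = 1.4e18   # ~1.4 exaFLOPS at FP4, ideal circumstances (per above)
rack_watts = 120e3    # ~120 kW per GB200 NVL72 rack (assumed ballpark)

tflops_per_watt = rack_flops / rack_watts / 1e12
print(f"~{tflops_per_watt:.1f} TFLOPS/W at FP4")  # ~11.7 TFLOPS/W
```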
HolyLampshade 31 minutes ago
Oh, I have no doubt it is functionally efficient. I'm just amazed, given the system deployments I've been party to, at how comparatively tiny their per-rack energy usage was for the functionality those systems provided.

Like, what in the good god damn are we using all this energy for?

Bombthecat 1 hour ago
Since you and I and everyone else will foot the electricity bill, energy consumption and efficiency are not a concern.
harry8 2 hours ago
So what's the theory that goes with this about why cnbc are reporting that openai are walking because they want newer nvidia hardware? CNBC are clueless? People at openai are lying to cnbc? cnbc are fabricating stories while drunk?

There has to be some theory to explain the story to be consistent with this comment.

wmf 2 hours ago
Something is probably happening but I don't know what it is. Maybe this is really a negotiation over price.
cyanydeez 1 hour ago
OpenAI is an unreliable narrator as long as Sam is in charge. Full stop. EM_DASH.
reilly3000 1 hour ago
Yes and CNBC is comically rife with payola content. I just want to know who’s buying.
tiahura 32 minutes ago
Deirdre is a solid reporter with pretty good access and understanding.
collabs 2 hours ago
I agree with you more than I agree with the parent comment.

To use an analogy from the hit HBO show Silicon Valley, it is far more likely that "the bear is sticky with honey" will happen at Oracle than at OpenAI. Some kind of game of telephone gone wrong at some point, and now the people responsible at Oracle must double down in order to kick the can to the next quarter and not appear clueless.

Statutory disclaimer: I am not affiliated with either OpenAI or Oracle and have no insider information. All of this is mere conjecture and has no basis in reality.

leptons 1 hour ago
>cnbc are fabricating stories while drunk?

Don't forget the possibility that it's AI slop.

tiahura 31 minutes ago
Deirdre Bosa is a good journalist.
TacticalCoder 2 hours ago
> CNBC are clueless?

That sounds about right.

> People at openai are lying to cnbc?

Remove "to cnbc" and that's a yes.

> cnbc are fabricating stories while drunk?

Maybe not drunk but likely high.

mgilroy 26 minutes ago
I think the more interesting question is how much longer Oracle has, and at what point a hostile takeover makes sense.

Their databases are heavily used in government, banking and other large industries which have been slower to adapt to change and struggle to migrate away. At what point does purchasing Oracle to gain customer share, existing data centres and the opportunity to migrate customers to your cloud platform make more sense than competing?

They still have a high market value. However, the debt they will need to service will result in ongoing price increases which will encourage people to migrate away. Over time they will struggle to service the debt and a buyout will be the best of the bad options.

sowbug 3 hours ago
What happens to older datacenter GPUs? Do they have a second life somewhere outside of datacenters?

I could see Nvidia adding terms of sale requiring disposal rather than resale.

paxys 2 hours ago
Plenty of enterprise server hardware (racks, servers, RAM, disks) does have an active secondhand market after 3-5 years of use, but I think GPUs are too specialized for it to be viable. I doubt anyone has the setup to run an H200 in their home rig.

I also don't think companies are going to have mandatory replacement cycles for GPU hardware the same way they do for everything else, because:

1. It is an order of magnitude (or more) more expensive.

2. It isn't clear whether Moore's law will apply to the AI GPU space the same way it has for everything else.

Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.

epolanski 1 hour ago
> Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.

That's exactly the point.

Performance/watt is increasing so much gen-to-gen that it no longer makes sense to run older hardware.

Not my words, Jensen's.

danpalmer 1 hour ago
Are you saying that the person selling shovels thinks you should buy a new shovel? I guess they must be the expert.
kwanbix 1 hour ago
So you are saying that the ceo of the company that builds the chips is saying that it makes sense to change them each generation?
cluckindan 1 hour ago
When the warbirds are on the wing, sell anti-aircraft systems to both sides.
tryauuum 2 hours ago
you can absolutely run e.g. datacenter-level A100 at home, there are adapters from the SXM to the PCIe socket. Haven't seen people running SXM versions of H100s this way but this could be due to the price factor only
thefounder 17 minutes ago
Well, by the time they become obsolete you can run that computing on a Mac with no special cooling, so I really doubt they will be of any use. Maybe in some parts of the world where electricity is cheap. If someone wants to really find out, perhaps watching the crypto ASIC stories could help.
baby_souffle 1 hour ago
While technically true, I would wager that the home lab is going to require increasingly distinct and unusual adaptations to retrofit the hardware to home use.

New stuff is all liquid cooled by default and that's a paradigm shift for your average home lab.

I'm less aware of exactly what's happening on the power side of things, but I think some of the architectures are now moving to relatively high-voltage DC throughout and then down-converting it to low voltage right before it's used. So not exactly plug-and-play with your average NEMA 5-15 outlet.

TacticalCoder 1 hour ago
> I doubt anyone has the setup to run a H200 in their home rig.

There are PCIe versions of these, right? And another comment is saying there are PCIe adapters too. It "only" requires 600 to 700W. That's not out of reach for everybody.

If the used regular server market is any indication, you can find, after a few years, a lot of enterprise gear at totally discounted prices. CPU costing $4K brand new for $100 after a few years: stuff like that.

A friend has got a 42U rack and so do some homelab'ers. People have been running GPU farms mining cryptocurrencies or doing "transcoding" (for money).

It's not just CPUs at 1/40th of their brand new price: network gear too. And ECC RAM (before the recent RAM craze).

I'm pretty sure that if H200 begin to flood the used market, people shall quickly adapt.

> Unless Nvidia can launch a new chip every 2-3 years with massively improved performance-per-watt at a lower price no one is going to rush to recycle the old one.

I agree with that. But if they resell old H200s, people are resourceful and shall find a way to run these.

fc417fc802 38 minutes ago
Would it even require a particularly high level of resourcefulness? Purchase the GPU along with the mobo that slots it. It's not as though companies typically swap out CPU and GPU while keeping the rest of the box.
drivebyhooting 1 hour ago
Where do you find such deals?
bombcar 1 hour ago
Start on eBay and learn the off-lease companies, and start watching them directly.
throwup238 2 hours ago
Last I checked AWS is still offering g4dn instances that run on NVIDIA T4 GPUs, which were first released in 2018. I think most people underestimate how long hyperscalers can keep these things running profitably after they depreciate, and you probably don’t want anything they throw away.

My last employer is still running a bunch of otherwise discontinued g3 instances with 2015 era GPUs.

MisterTea 2 hours ago
It's likely the GPU boards are designed for water-cooled data center racks and might not fit in a regular PC case. It's also possible the PCBs the GPUs are mounted to are not standard PCIe cards that fit into an ATX case.

I bought a used NEC SX Aurora TSUBASA (PCIe x16 board that looks like a GPU board) and realized it has no fans. The server case it is designed to fit into is pressurized by fans forcing air through eight cards on a special 4 + 4 slot motherboard. I have to stack and mount three 40mm fans on the back.

mbesto 10 minutes ago
We literally do not know because it hasn't been 5 years yet...
u1hcw9nx 2 hours ago
They are built to physically last 5-7 years in 24/7 datacenter use, but they have an effective lifetime of just 3-4 years; after that their value has depreciated and electricity and infrastructure costs dominate. Meta did a benchmark where 9% of the chips failed every year; 'infant mortality' is much higher in the first 3 months of use.
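Taking that 9%/year figure at face value and compounding it over a service life (a naive constant-rate model; real failures follow a bathtub curve, as the infant-mortality note above suggests):

```python
annual_failure_rate = 0.09  # Meta's reported figure (per above)

for years in (3, 4, 5):
    # Naive model: independent, constant annual failure probability.
    surviving = (1 - annual_failure_rate) ** years
    print(f"after {years} years: {surviving:.0%} of chips still running")
# after 3 years: 75%, after 4 years: 69%, after 5 years: 62%
```

Roughly a third of a fleet gone by year 5, which lines up with the 3-4 year effective lifetime claim.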
fc417fc802 33 minutes ago
9% is an absurd failure rate for solid state electronics. Particularly considering the profit margins. I assume it's related to the power densities involved. Would you happen to recall the source?
observationist 2 hours ago
Depending on the elemental composition, it could definitely be worthwhile to recycle wherever scale is practical. For giant datacenters and companies using hundreds of thousands or millions of gpus, that adds up to a lot of gold and other valuable elements.

In order to take advantage of that, someone needs to be positioned to process all that material economically, and to make the logistics achievable for the big players. If it costs Facebook $10 million to store and transport phased-out GPUs vs just sending them to a landfill, they're not going to do it. If they get $100k for recycling - probably not going to do it. If they pocket $5 million, they will definitely contract that out, especially if it costs $50 million to build out the infrastructure to handle it.

Probably a good company idea - transport, disposal, and refurbishment of out-of-cycle GPUs and datacenter assets. Creating a massive recycling pipeline for recapturing all the valuable elements is a pretty good niche.

fc417fc802 35 minutes ago
We already have that. It's called ebay.
jdiez17 2 hours ago
I've written about this elsewhere but I predict there will be a significant secondary market for repurposing parts of datacenter GPUs (for example, RAM chips) by desoldering them and soldering them onto new PCBs that fit PC/consumer use cases.
wtallis 1 hour ago
That might work to some extent with the DDR5 DIMMs connected to the CPUs, but is thoroughly impossible for the HBM DRAM stacks packaged with the GPUs.
Avicebron 2 hours ago
I wish there were an active market in the States like the one Gamers Nexus covered in China:

https://www.youtube.com/watch?v=1H3xQaf7BFI&t=1577s

Gigachad 2 hours ago
It's all about the cost of labor. In the US you could not find someone capable of soldering BGA chips at a price that makes it worth doing.
exikyut 1 hour ago
No affiliation (I wish), but: https://gptshop.ai

This site apparently sources ex-enterprise(-only) systems and puts them into desktop style enclosures.

zasz 2 hours ago
It seems like GPUs with a high utilization rate (60%+) degrade after 1-3 years: https://www.tomshardware.com/pc-components/gpus/datacenter-g...

Would be interested to know if others have takes on this.

latchkey 1 hour ago
I previously ran 150,000 AMD gpus in all conditions at 100% utilization for years. I currently have a multi-million $ cluster of enterprise AMD GPUs.

A couple real world points:

1. They generally don't just fail. More likely a repairable component on a board fails and you can send it out to be repaired.

2. For my current stuff, I have a 3 year pro support contract that can be extended. Anything happens, Dell goes and fixes it. We also haven't had someone in our cage at the DC in over 6 months now.

AlotOfReading 2 hours ago
You send them back to Nvidia or a third party e-waste recycler at end of life. Sometimes they're resold and reused, but my understanding is that most are eventually processed for materials.
h4kunamata 2 hours ago
Bin!!

Why would they sell them cheaper on the secondhand market??

It would hurt the sales of new ones. This is the way even with food, let alone technology. Don't expect to buy a cheap secondhand GPU any century soon.

Gigachad 2 hours ago
The data center owners aren't the ones selling new GPUs.
alphager 2 hours ago
They are selling access to GPU computation. Selling their used GPUs would flood the market with cheap competitors using their old GPUs.
Gigachad 1 hour ago
If the GPUs were competitive with their own, they wouldn't be selling them off.
latchkey 1 hour ago
This article misses the point that people are still actively running older compute.
chb 3 hours ago
Is it possible that the supply of used GPUs available to home builders will somehow increase as the result of this?
llm_nerd 2 hours ago
Data centres are actually prohibited from using consumer level GPUs via license restrictions. The GPUs they use are largely SXM (server connector) and if you did somehow get one of the PCIe variants (with enormous power and cooling needs) most don't even support gaming APIs.
ykl 59 minutes ago
Yeah, it used to be true that server GPUs at least somewhat resembled their gaming counterparts (i.e. Nvidia Tesla server components from 12+ years ago); they were still PCIe cards, just with server-optimized coolers, and fundamentally shared the same dies that the gaming and professional cards used.

That stopped being true many years ago though, and the divergence has only accelerated with the advent of AI datacenter usage. The form factor is now fundamentally different (SXM instead of PCIe); you can adapt an SXM card to PCIe with some effort [1], but that may not even be worthwhile because 1. the power and cooling requirements for the SXM cards are radically different from a desktop part and, more importantly, 2. the dies are no longer even close to being the same. IIRC, Blackwell AI chips straight up don't have rasterization hardware onboard at all; internally they look like a moderate number of general SMs attached to a huge number of tensor cores. Modern AI GPUs are fundamentally optimized for, well, matmuls, which is not at all what you want for gaming or really any non-AI application.

[1] https://l4rz.net/running-nvidia-sxm-gpus-in-consumer-pcs/

yalogin 1 hour ago
This is a pretty damning headline and we are still talking about Blackwell. I guess that is how fast the whole segment is moving, but OpenAI only wanting the most advanced chips feels more like an excuse to walk away from this deal rather than a problem with the stack and Oracle. Feels to me that OpenAI is cutting down on commitments and cost as it doesn’t see the revenue pipeline building. Maybe someone with more knowledge of the reality can comment and correct me.
maxdo 1 hour ago
The missing part is that current GPUs are already money-making machines in 2026, and you just need to serve that. I’m sure this is a procurement negotiation between Nvidia and such a big vendor as Oracle.
paxys 1 hour ago
> The missing part is that current GPUs are already money-making machines in 2026

Are they? Unless you are Nvidia that is very far from the case.

OpenAI's current revenue is $25 billion a year. They are expected to spend $600 billion on infrastructure in the next 4 years to sustain and grow that revenue.

Amazon, Google, Microsoft and Meta are spending a combined $650 billion on infrastructure in 2026 alone.

The story is the same across the rest of the industry.

None of these investments are immediately profitable. And it remains to be seen whether they eventually will be or not.
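The arithmetic behind that, using the figures above:

```python
annual_revenue = 25e9      # OpenAI revenue, ~$25B/yr (per above)
infra_commitment = 600e9   # ~$600B of infrastructure over 4 years (per above)
years = 4

# Ratio of planned annual infrastructure spend to current annual revenue.
ratio = infra_commitment / years / annual_revenue
print(f"planned annual infra spend is ~{ratio:.0f}x current annual revenue")  # ~6x
```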

mcs5280 3 hours ago
The only thing that matters is stonk++
john_strinlai 3 hours ago
too bad stonk is down 23% this year. i think they are doing it wrong
0cf8612b2e1e 1 hour ago
I never thought I would see the day, but my stodgy, lumbering company just banned new Oracle databases. Everyone hates Oracle, and only does business out of necessity. I think more and more companies are trying to extricate themselves from Oracle legal, so Oracle needs a new way to leech onto corporations for the coming decades. AI is the best play in sight.
hinkley 40 minutes ago
Did you guys go out to celebrate? It's not too late for Ding Dong, the Bitch is Dead.

If you're Oracle it's not necessarily a bad thing if you build an antiquated data center. Isn't much of their customer base legacy customers they are rent-seeking from in perpetuity? Those people are never going to be doing cutting-edge AI. They will do what they have always done: adopt new technologies right at the nadir of the Trough of Disillusionment.

driftnet 11 minutes ago
"The North Korea of the computer industry" indeed
conductr 2 hours ago
It’s a huge gamble but they have no choice but to take it. Most of their software will be rendered obsolete by AI (I’ve vibecoded replacements saving millions already; companies everywhere are doing this right now).

So they have to hope they’re a part of the future in the AI capacity because their SaaS business is going to take a big hit.

YTD performance didn’t fully bake this reality in. It was seen as them having 2 huge revenue streams; the market is realizing that AI is a threat to SaaS and baking that into stonks.

munk-a 3 hours ago
The actions of oracle lately seem extremely misaligned to maximize stonks - it's extremely political, more than is necessary to merely keep in the good graces of the current administration.
dzonga 1 hour ago
>> Oracle is the only one using debt to build the data center

Stargate is backed by the US government, hence why they're comfortable putting that under debt financing.

hyperbovine 1 hour ago
Learn from the best!
christkv 2 hours ago
This is general compute hardware as I understand it. It will not go unused no matter what happens. If new algorithms appear that reduce the number of calculations needed per token for an llm they are probably still good. It's not like silicon advances are accelerating.

If it's built in stages, each stage will have newer variants of hardware, I imagine.

advisedwang 3 hours ago
Perhaps oracle going bust can be the silver lining to an AI bubble bursting
hristov 2 hours ago
What the article did not mention is that Oracle founder, executive chairman and biggest stockholder Larry Ellison is currently bankrolling his kid David's bid to monopolize the entire US news industry so that it is more friendly to Trump, Netanyahu and various other right-wing ideologues.

David Ellison is fueling his buying spree with debt guaranteed by his dad's oracle shares. The various assets David has bought are already suffering losses of viewership because viewers are turned off by their new ideological slant.

Usually debt investors are not worried if the stock price is high. Debt has precedence over equity, so if the stock price is riding high, the CEO can always be convinced to print more shares to service the debt. The Oracle stock price has not been doing so hot lately, however. As the article said, it is 50% down. Still, ORCL has a $430 billion market cap against $130 billion of debt. That seems manageable. But stock prices can move very fast. Ironically, the war in Iran, which David's new news sources keep supporting, is causing ORCL stock to go down, which could bring down David's new media empire.

David just purchased Warner Bros for about $110 billion. A lot of that ($40 billion) is also guaranteed by daddy's ORCL shares. Warner Bros owns Comedy Central, which sadly has been one of America's most dependable news sources.

The house of cards is still standing but it's getting awfully wobbly.

jmclnx 3 hours ago
To me, it seems the page is gone. This could be a related item:

https://www.msn.com/en-us/money/general/as-oracle-plans-thou...

motbus3 3 hours ago
Omg. Oracle taking greedy bad decisions with tax payer money? No way!
happyopossum 3 hours ago
TFA says nothing about taxpayer money - this is about Oracle taking on debt...
keeganpoppen 3 hours ago
what taxpayer money?
slopinthebag 2 hours ago
The inevitable bailout.
coliveira 3 hours ago
While Trump is in power, the bail out is a sure thing.
sega_sai 1 hour ago
Wow, I just saw in the article that NVIDIA called a new chip Vera Rubin (https://en.wikipedia.org/wiki/Vera_Rubin, also https://en.wikipedia.org/wiki/Vera_C._Rubin_Observatory). How is it allowed for a commercial company to assign the name of a known person to a product?
Polizeiposaune 53 minutes ago
People have sued over this sort of thing. Apple's Power Macintosh 7100 was originally codenamed "Carl Sagan":

https://en.wikipedia.org/wiki/Power_Macintosh_7100

Sagan sued. Engineers at Apple changed the name to BHA: "Butt-Head Astronomer".

He sued again. The final codename was "LAW: Lawyers are Wimps".

CamouflagedKiwi 1 hour ago
It's a codename. The product will be called "R100" or "R200" etc. (And "RTX6090" etc for the consumer versions).
consz 1 hour ago
different vera rubin, common mistake
semiquaver 1 hour ago
...why wouldn’t that be “allowed”?
sega_sai 48 minutes ago
Would you want a commercial company to use your name, or the name of your relative?