> We had a budget alert (€80) and a cost anomaly alert, both of which triggered with a delay of a few hours
> By the time we reacted, costs were already around €28,000
> The final amount settled at €54,000+ due to delayed cost reporting
So much for the folks defending these three companies that refused to provide a hard spending cap ("but you can set a budget", "you're doing it wrong if you worry about billing", "a hard cap is technically impossible", etc.)
Yeah, that's the main reason I never use services like Google Cloud if I don't have to: it's impossible to set a hard cap, and anyone claiming to be an expert who says otherwise is just off.
Google says that they can't provide a hard cap because that would mean shutting down all your services, blah blah, but at least give users the option.
It shouldn't mean shutting down all your services; it should mean not letting you provision new ones and limiting the scope of what you can continue doing.
If I budget enough to store 1TB of data for 1 month, then on the first day of the month I store 2TB of data - what should the behaviour be after 15 days?
Read/write access should be frozen, data should be saved for 1 month so you have time to react to warning emails. If you didn't upgrade in that time, it should be deleted.
Nuke the data. It’s gone forever if you didn’t back it up elsewhere. This should be a meaningful risk mitigation that I can employ to avoid having a catastrophic financial disaster.
This isn’t a limit I’m setting at some percentage above expected costs, it’s: “I don’t want to take out a HELOC if something goes wrong”
If you have a lambda set up that normally runs a hundred times a day, and suddenly it tries to spin up 10 million instances, it should block that unless you specifically enable it.
You know that's not how the cloud works. If you're billed by the hour for compute and that compute is powering a server, the only way to stop the spend is to shut off the compute, breaking the server.
I would love to have an "if the bill for this hobby project becomes a threat to my ability to pay my mortgage, nuke it" option. If I cared about the data enough, I'd have backed it up.
We have spend caps at the billing account level and the project level (developer set) in the Gemini API now. There is up to a 10 minute delay in processing everything but this should significantly mitigate the risk here: https://ai.google.dev/gemini-api/docs/billing#tier-spend-cap...
By default, new Tier 1 paid accounts can only spend $250 in a given month.
I just find it extraordinary that the biggest tech company in the world can do cutting-edge real-time AI for millions of people, run YouTube and of course all the other Google services, with literally the smartest people in the world and unlimited resources on board, but still can't keep real-time track of a user's current billing and their spending limits; it's all best-effort still. Somehow it doesn't add up. (Pun not intended, but I'm happy to have it.)
I'm sure it's me being an idiot, but once again I spent 20 minutes trying to figure out how to do a specific thing in Google-land and still haven't figured it out. Even if I did set it somewhere, I see things like "Setting a budget does not cap resource or API consumption" with a link to a bunch of documentation I have to analyze.
That's actually crazy. So I can build a project I love, that does good, but somehow end up in a situation where I'm accidentally paying €30,000 (or €50,000) to a big tech company? How is that fair? Yes, as a software engineer you ought to reflect on all possible weaknesses, but there was a time when overlooking something meant something completely different from being down 30/50k. That is actually life-altering.
Your kid can do this in a smartphone game designated suitable for children, heavily optimized to exacerbate the possibility, and depending on where you live they can just choose not to refund you.
When the FTC went investigating a decade-ish ago they found Facebook saying the quiet parts out loud: it was all extremely deliberate.
You cannot earn billions a year and not be cheating your users out of their money. It's that simple. They don't care about people; otherwise they wouldn't be putting so much effort into making them poor.
If that happens, you create a support ticket and AWS/GCP/Azure waive it, especially the first time. They're aware that billing per usage can have surprise effects, but at the same time they don't want to kill their customers' workloads and delete their data, so it is what it is.
It's quite easy to check the responses other customers got in other threads there, and somehow I see quite a lot of "oh, go to that other support" and ghosting.
If you create your support ticket on Hacker News, then yes, you will probably get it waived. It's somewhat sad that HN is their support forum now.
Exactly! I know, some of those companies sometimes refund you, but if your livelihood depends on it..? That's a crazy situation to be in as a mere developer.
Google has specifically said that certain API keys like Firebase are not secrets (since people will find them)... though Gemini then ended up changing stuff. https://news.ycombinator.com/item?id=47156925
This should be illegal. If a contractor you hired to swap out a tile on your bathroom floor billed you for remodelling your back garden, you would obviously have the legal right to refuse to pay.
Not if your contractor had you first sign a 15 page contract that commits you to whatever costs they dream up and requires forced arbitration by a corporate friendly firm when any dispute arises.
Because that's somehow normal in today's tech world.
Slightly OT, but I've always taken a dim view of this sort of thing for consumers because the parties are never at parity, either in their ability to understand the legalese they're agreeing to or in their ability to seek alternatives.
Legal contracts for consumers should be written at whatever the prevailing reading level is, and the government should step in more forcefully the more monopolistic a company's position is.
It infuriates me to no end how preferential government is towards corporations vs individuals.
In jurisdictions where bestiality is legal, then yes, from the libertarian perspective, that's all freedom of contract, baby. I'm not defending either bestiality or libertarianism, but the logic is that you don't want the government deciding what two private entities can and can't freely agree to.
We're pretty far from the Lochner era in the US, where even minimum wage laws were held to be unconstitutional violations of a very broad view of the freedom to contract. But it is still a principle in most legal systems.
My guess is that at least in Europe they would have a good chance fighting this in court and getting their money back, but it’s a pain having to go through such a lawsuit.
"We can either charge per tile, per job or on demand. Or you can have us on call for a year and get any of the former at a discounted rate."
"Per tile. Lay tiles until I say stop"
>you fall asleep
"Wtf why are you still laying tile"
"You said per tile and lay until you say stop. That'll be 50k please"
The cloud services wrote the contract and the UI for their console. They then encourage young developers to try out their tools and encourage a market environment where those skills are needed to secure employment. Some kid goes and tries to build their first web app, they follow instructions and tutorials but miss that a single default selection on a menu three nested layers down is going to cost $2,000 per month. This isn’t disclosed on the page. Sure, it can be determined by reading several different documents, but the provider chose to not show estimates for costs in the setup.
You hire a contractor and agree they'll bill you per tile, regardless of how many tiles there are. They bill you per tile. End of story.
For a more accurate comparison, consider a utility. You agree to pay your electric bill. It's not the utility's fault you invited all your friends over and they decided to run a crypto-mining LAN party, and the utility can't cut you off lightly because it might literally kill you (e.g. you live in a hot place and rely on AC to stay alive).
> The Gemini API supports monthly spend caps at both the billing account tier and project levels. These controls are designed to protect your account from unexpected overages, and the ecosystem to ensure service availability
The problem is it's specific to that API and defaults to uncapped so people who aren't using it and haven't heard about the issues with the Firebase API keys probably won't have set them.
Spend caps exist for Gemini (Maxious linked them) - they just default to OFF. For an API that can bill four figures per hour, opt-in safety isn't a UX choice, it's a billing strategy.
Except that Google's own statements are extremely clear that "leaked" (i.e. public) API keys should not be able to access the Gemini API in the first place: "We have identified a vulnerability where some API keys may have been publicly exposed. To protect your data and prevent unauthorized access, we have proactively blocked these known leaked keys from accessing the Gemini API. ... We are defaulting to blocking API keys that are leaked and used with the Gemini API, helping prevent abuse of cost and your application data." https://ai.google.dev/gemini-api/docs/troubleshooting#google...
For extra clarity on the exact so-called "vulnerability" that Google identified, see: https://news.ycombinator.com/item?id=47156925 This describes the very issue where some API keys were public by design (used for client-side web access), so the term "leaked" should be read in that unusually broad sense. Firebase keys are obviously covered, since they're also public by design.
(As for "Firebase AI Logic", it is explicitly very different: it's supposed to be implemented via a proxy service so the Gemini API key is never seen by the client: https://firebase.google.com/docs/ai-logic Clearly, just casually "enabling" something - which is what OP says they did! - should never result in abuse of cost on the scale OP describes.)
As a manager I avoid Google Cloud because of this kind of customer-service disaster; but as someone who has dealt with large-scale billing systems in the telecom world, probably similar to Google Cloud's, I am not surprised that it takes 10 minutes to consolidate all of a customer's usage logs for billing.
For telephony, it sometimes takes days when roaming is involved.
You have to imagine TB/sec of data, if not more, coming from thousands of potential sources, all queuing for aggregation to the proper company account and all having to be auditable. This is not a small engineering feat, and it can't be real-time.
With that said, telcos usually include in their business model around 2-3% of bad debt (i.e. revenue that won't get paid), which accounts for frauds like this one. Given that the customer seems in good faith and has taken measures upon being notified, Google should manage this bill shock a bit more elegantly.
Moreover, the fact that this happened immediately after this key opened the AI gates means that attackers permanently scan the permissions of every key they can gather. Google could and should detect that and act on it.
I was selling a house in a state I no longer lived in, and was under contract to close the sale, when I got an email from the water company. It told me they suspected, based on my water usage, that there was a leak on the property.
There had been a very cold February night (like -15F) and a pipe froze inside the walls, and it was just absolutely gushing out. They sent me the email after it had been leaking for a WEEK. I asked a friend to check it out and she said that the laminate floor went "squish" when she stepped in the front door.
Fortunately I was covered by homeowner's insurance since I could prove that my heat had been on, but that was a very unpleasant "warning" to receive!
Real time spend limits are probably never going to happen. Actual $ amounts are calculated by a centralized billing system offline in batch.
It sounds easy but it’s bonkers complicated, because of things like discounts, free tiers, committed usage, currency conversions and having to support every payment and deal structure in GCP.
Individual eng teams rarely actually think in dollar amounts, they think in the abstraction which is quotas.
These companies can sell your personal information in a microsecond in an advertising auction, but somehow can't figure out how to give you timely alerts that stop their cash flow.
The funny thing is that the website only has Firebase auth, without any AI features.
Someone got the default API key, created before the AI was even released a few years back, from the website and started using the Gemini API with it.
This is clearly set up for VC-backed companies whose shareholders don't care about spend as long as they can brag about investing in this cool startup at dinner parties. Normal, real businesses should stay away.
You mean openrouter.ai. And yes, on reading this blog post, I immediately reviewed my API keys in OpenRouter to make sure that they were capped. My prod key was capped at $20/day (phew!) but my dev key had no cap, which I just updated. What a horrible story.
You can set it to auto top up if it drops below a certain amount. If you do that, then it would definitely be wise to add a cap. They let you add daily/weekly caps, which is convenient.
> So much for the folks defending these three companies that refused to provide a hard spending cap ("but you can set a budget", "you're doing it wrong if you worry about billing", "a hard cap is technically impossible", etc.)
Yes, it's technically and commercially impossible. To implement a hard cap, a bill never to go over, they'd have to cut your service, but also delete all your data in databases, object storage, data lakes, etc. This is simply not an option, so they take the different option of authorising support to waive surprise surcharges / billing DDoSes.
Even if you manage to get your microservices to sync every penny spent to your payment account in real time (impossible), you still have to waive the excess, losing some money every time someone goes past their quota.
Sure, but 80 -> 28,000 -> 54,000 is a hell of a lot of slippage.
Trading platforms can guarantee a maximum slippage on stops, and often even offer guaranteed stops (with an attached premium), so I don’t see why Google and Firebase can’t do similar.
Yep. And cloud providers could eat any slippage cost (enforcing, say, every 5 minutes by stopping service) without even a rounding error on their balance sheets.
The fact that they don’t indicates that there’s no market reason to support small spenders who get mad about runaway overages, not that it’s technically or financially hard to do so.
> Trading platforms can guarantee a maximum slippage on stops
Yeah, no: physically impossible. If nobody is selling at that price, there is no guarantee your sell stop will execute near that price. They can sweep the market, find the best offer, and execute.
There might be a costly way to do it with microservices as I indicated, but your example easily falls apart.
They can take the other side of your order themselves, lose money sometimes, but make it up in the premium they charged you in the first place (or, in the old days, from your other trading fees or your monthly subscription payment).
Cloud providers would be taking way less risk interacting with their own services than a broker does interacting with the market. Perhaps they would be more at risk from bad actors, but it shouldn't be significant: they could reserve this behaviour for people who have already spent, say, $100 with them so you can't abuse it at scale.
If they are a market maker, they can buy/sell at or near your stop. It might be a bad idea for them, but if they have a guarantee, this is how they will do it. Or, it will be like the Amazon guarantee (refunding free shipping on your late order).
Not impossible to do: they can hedge and/or absorb the cost, hence the premium. They usually also specify a (fairly large) minimum distance for such stops.
That's exactly what I proposed in my response: big corp can waive the extra costs to match your limit. Glad we finally got to that part of my response. The question is: will they? Probably not. Do brokers do it? I haven't seen any. Maybe you know more.
I'm with you. And what do you even do when the quota is breached, nuke the resources? People will complain about that just as much as overspends.
I don't buy the 'evil corp screwing people' angle either. They are making far too much legit money to care about occasionally screwing people out of 20k or 50k.
If I set a limit, and you cut off my service because I reached the limit, I would definitely not "complain just as much" as if I set a limit and you allowed me to spend past it.
We're not talking about an EC2 or EBS volume here, this is access to an API.
> We had a budget alert (€80) and a cost anomaly alert, both of which triggered with a delay of a few hours. By the time we reacted, costs were already around €28,000.
I had a similar experience with GCP where I set a budget of $100 and was only emailed 5 hours after exceeding the budget by which time I was well over it.
It's mind-boggling that features like this aren't prioritized. Sure, it would probably make Google less money short term, but surely that's preferable to giving devs such a poor experience that they'd never recommend your platform to anyone else again.
I get furious every time this comes up and somehow there are bootlickers ready to defend big tech on it.
My ~2-person small business was almost put out of business by a runaway job. I had instrumented everything exactly according to the GCP instructions: the over-budget notification was hooked up to a kill switch, which fired the instant the notification arrived.
GCP sent the notification they offered as best practice 6 HOURS late. They did everything they could to not credit my account until they realized I had the receipts. They said an investigation revealed their pipeline was overwhelmed by the number of line items and that was the reason for the lag... the exact scenario it is supposed to function in. JFC.
I almost wish the people defending it were paid. It would almost be more intelligent to rush to the defense if there were a direct financial benefit.
Part of it is possibly the curse of knowledge. Someone in the 99th percentile of cloud configuration experts simply can't recall their junior dev days.
In my junior dev days I always paid for the resources I used. Just because you consume a lot of resources by accident that doesn't mean you shouldn't have to pay for it. Accidents do not absolve you from liability.
I know software is special. That's why software defects are acceptable while a crumbling bridge is not.
With that said, should this apply to other industries? If I clip a warehouse shelf on my first day driving a forklift, should my wages be garnished for life to cover the inventory? Or is the inherent nature of the logistics industry such that an accident does not always imply liability? (Or other)
Exactly my thoughts; I really cannot understand how delayed alerts are acceptable... Have you managed to settle the cost with Google? What was the outcome?
Back in 2020 I had a similar situation. I ended up being charged $500 due to an overnight TPU training run using egress bandwidth across zones.
Google support was surprisingly understanding, after I explained the issue. They asked some clarifying questions. Then they said that they can offer a one time refund for this case.
Since then I've been paranoid about accidentally doing it again. I don't know whether GCP would refund a second time.
GCP charging for interzone traffic is an interesting financial choice. They own all the infra and in many cases this is literally moving from building to building.
There's cross-region, and cross-zone. If both boxes are located within the same zone (e.g. both in us-east1-b), the bandwidth is free, since it's intra-zone traffic. Cross-zone egress traffic (e.g. us-east1-b to us-east1-c) is billed at a certain rate, and cross-region egress traffic (e.g. us-east1 to europe-west8) is billed at a significantly higher rate.
Amusingly enough, ingress traffic seems to always be free. So you can upload as much data as you want into their cloud, but good luck if you need to get it out.
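To make the egress tiers above concrete, here's back-of-the-envelope arithmetic in Python; the rates are placeholders, not Google's actual prices, so check the current network pricing page before relying on any number:

```python
RATES_PER_GIB = {
    "intra_zone": 0.00,    # same zone: free
    "cross_zone": 0.01,    # same region, different zones (placeholder rate)
    "cross_region": 0.05,  # different regions/continents (placeholder rate)
}

def egress_cost(gib: float, tier: str) -> float:
    return gib * RATES_PER_GIB[tier]

# e.g. replicating 10 TiB across zones vs. keeping it within one zone:
print(egress_cost(10 * 1024, "cross_zone"))   # 102.4 -> ~$102 at this rate
print(egress_cost(10 * 1024, "intra_zone"))   # 0.0
```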
I am referring to cross-zone within the same region, like us-central1-a to us-central1-b. These links are building to building and often never cross public land.
> Sure, it would probably make Google less money short term, but surely that's preferable to giving devs such a poor experience that they'd never recommend your platform to anyone else again.
Welcome to late-stage capitalism, where there is no long-term thinking, only short-term profit stealing, and Fuck You I Got Mine.
Considering the number of repositories on public GitHub with hard-coded Gemini API tokens in the shared source code (https://github.com/search?q=gemini+%22AIza%22&type=code), this hardly comes as a surprise. Google has also historically treated API keys as non-secrets, except with the introduction of the keys for LLM inference; now users are supposed to treat those as secret, but I'm not sure everyone got that memo yet.
Considering that the author didn't share which website this is about, I'd wager they either leaked the key accidentally themselves via their frontend, or they shared their source code with the credentials still in it.
> Google has also historically treated API keys as non-secrets, except with the introduction of the keys for LLM inference; now users are supposed to treat those as secret
This was reported a long time ago, and was supposed to be fixed by Google by making sure these legacy public keys would not be usable for Gemini or AI. https://news.ycombinator.com/item?id=47156925 https://ai.google.dev/gemini-api/docs/troubleshooting#google... "We are defaulting to blocking API keys that are leaked and used with the Gemini API, helping prevent abuse of cost and your application data." Why are we hearing about this again?
I think brand-new stuff is probably safe, but for old keys currently being used for both AI and non-AI stuff: if Google disables one for AI and it turns out it was not actually exposed publicly, that could disrupt a user's production service that relies on AI.
They messed up by allowing old keys to be used for both private and public APIs in the first place, but now it's difficult for them to undo that for existing keys.
A reply on OP's post states: "... We now generate Auth keys by default for new users (more secure key which didn’t exist when the Gemini API was originally created a few years ago) and will have more to share there soon. ..." So there is something new in that exact area but the details are forthcoming.
I know you're well within your rights to post this, but would you consider replacing your comment with something like "It's easy to find working keys on github if you search the appropriate terms"?
Think of it this way: although you're not to blame, HN drives a lot of traffic to your preconfigured github search. There are also bad actors who browse HN; I had a Firebase charge of $1k from someone who set up an automated script to hammer my endpoint as hard as possible, just to drive the price up. Point being, HN readers are motivated to exploit things like what you posted.
It's true that the github search is a "wall of shame", and perhaps the users deserve to learn the hard way why it's a good idea to secure API keys. But there's also no benefit in doing that. The world before and after your comment will be exactly the same, except some random Gemini users are harmed. (It's very unlikely that Google or Github would see your comment and go "Oh, it's time we do something about this right now".)
EDIT: I went through the search results and confirmed that the first several dozen keys don't work. They report as error code 403 "Your API key was reported as leaked. Please use another API key." or "Permission denied: Consumer 'api_key:xxx' has been suspended." So at least HN readers will need to work hard(er) to find a valid key.
I'm not opposed to even removing the comment outright.
That being said, GitHub does not even offer time-sorted code search, meaning most of the results are going to be quite old and useless.
Second, API keys being shared on GitHub is quite an old problem. People set up automated scans for this sort of stuff. Me removing my comment isn't going to help anyone who has already posted their API key online.
Google API keys have been used on the frontend for ages, for example in Google Maps embeds. Those are not possible without exposing a key to the frontend. They weren't secret, until Gemini arrived.
If one ignores 70% of the documentation, sure, it makes for a demonizing blog post.
"
API keys for Firebase services are not secret
API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
All Firebase-provisioned API keys are automatically restricted to Firebase-related APIs. If your app's setup follows the guidelines in this page, then API keys restricted to Firebase services do not need to be treated as secrets, and it's safe to include them in your code or configuration files.
Set up API key restrictions
If you use API keys for other Google services, make sure that you apply API key restrictions to scope your API keys to your app clients and the APIs you use.
Use your Firebase-provisioned API keys only for Firebase-related APIs. If your app uses any other APIs (for example, the Places API for Maps or the Gemini Developer API), use a separate API key and restrict it to the applicable API."
The only reasonable design is to have two kinds of API keys that cannot be used interchangeably: public API keys, that cannot be configured to use private APIs, and private API keys, that cannot be configured to use public APIs. There's no one who must use a single API key for both purposes, and almost all cases in which someone does configure an API key like that will be a mistake. It would be even better if the API keys started with a different prefix or had some other easy way to distinguish between the two types so that I can stop getting warnings about my Firebase keys being "public".
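A sketch of how little machinery that split would take, borrowing the pk_/sk_ prefix convention Stripe popularized; the helper names here are invented:

```python
import secrets

def mint_key(kind: str) -> str:
    """kind is 'pk' (public APIs only) or 'sk' (private APIs only)."""
    assert kind in ("pk", "sk")
    return f"{kind}_{secrets.token_urlsafe(24)}"

def key_allowed(key: str, endpoint_is_public: bool) -> bool:
    # the key's type is readable before any database lookup,
    # so a pk_ key can be rejected from private endpoints outright
    if key.startswith("pk_"):
        return endpoint_is_public
    if key.startswith("sk_"):
        return not endpoint_is_public
    return False
```

With the type encoded in the key itself, secret scanners, proxies, and the API gateway can all enforce the distinction without a lookup, and a "public" key simply cannot be misconfigured into a billing hazard.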
It'd be much better to call them something like "API usernames" or "API Client IDs". Though I also dislike the naming of "public keys" in asymmetric cryptography, for the same reasons, and I'm definitely not winning that fight!
Public by design: API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
I'm absolutely not defending Google here, to be clear: Retroactively expanding the scope of an API "key" explicitly designated as "public/non-sensitive" is very bad.
But the concept itself does make some sense, and I'm just noting that there's precedent both across Google and other companies.
In the frontend world you have client-side API keys talking directly to 3rd-party services from the client. Think things like Google Maps and similar.
Which is a stupid idea for anything where billing is involved... Anyone on the internet can take that key and scrape the Google Maps API (faking the referer header) and cost you $$$$$.
Google should simply have scoped this by origin URL if they wanted stuff to be open like that.
Public API keys are a thing. Arguably they are poorly named (it's really more of a client identifier), and modeling them as primarily a key instead of primarily as a non-secret identifier can go very wrong, as evidenced here.
As others have said, this is a "feature" for Google, not a bug. There is no easy way to set a hard cap on billing for a project. I spent the better part of an hour trying to find it in the billing settings in GCP, only to land on Reddit and figure out that you can set a budget alert to trigger a Pub/Sub message, which triggers a Cloud Function that disables billing for the project. Insanity.
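For anyone who wants the recipe without the Reddit spelunking, here's a minimal sketch of that kill switch, modeled on Google's documented disable-billing-with-notifications example; the env var name is a placeholder, and the function's service account needs permission to detach billing from the project:

```python
import base64
import json
import os

from google.cloud import billing_v1

PROJECT_ID = os.environ["GCP_PROJECT"]  # placeholder: however you inject the project id

def stop_billing(event, context):
    """Cloud Function entry point, subscribed to the budget's Pub/Sub topic."""
    data = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    # Budget notifications carry the current spend and the budget amount
    if data["costAmount"] <= data["budgetAmount"]:
        return  # still under budget, nothing to do

    client = billing_v1.CloudBillingClient()
    name = f"projects/{PROJECT_ID}"
    info = client.get_project_billing_info(name=name)
    if info.billing_account_name:  # only detach if billing is still attached
        client.update_project_billing_info(
            name=name,
            project_billing_info=billing_v1.ProjectBillingInfo(billing_account_name=""),
        )
```

Note the catch discussed elsewhere in this thread: detaching the billing account can destroy resources, and the Pub/Sub notification itself can lag far behind actual spend.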
This is presumably by design: How can it be the vendor's fault if your custom billing protection implementation failed you at a critical time? Much harder to defend against a switch on their dashboard allowing billing overshoot.
Having to glue Pub/Sub to a Cloud Function just to approximate a hard cap is the whole indictment. That's not a safety feature; that's you building your own brakes.
This is, from my experience, the same in AWS and Azure. I would love a kill switch for when usage goes above a critical threshold. Five hours of downtime will not kill my app, but a huge cloud bill might.
It's been a year since I last looked at this, but when I did, you could get near-realtime cost metrics for AWS Bedrock via CloudWatch (you get input and output token counts and have to compute the actual price yourself).
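If that's still the shape of it, a polling sketch would look roughly like this; the AWS/Bedrock namespace and token-count metric names are from memory of the CloudWatch docs, and the per-1K-token prices are placeholders to swap for the current rate card:

```python
import datetime

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"   # example model id
PRICE_PER_1K_IN, PRICE_PER_1K_OUT = 0.003, 0.015          # placeholder prices

def token_sum(metric_name: str, minutes: int = 60) -> float:
    """Sum a Bedrock token-count metric over the trailing window."""
    end = datetime.datetime.now(datetime.timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/Bedrock",
        MetricName=metric_name,  # "InputTokenCount" or "OutputTokenCount"
        Dimensions=[{"Name": "ModelId", "Value": MODEL_ID}],
        StartTime=end - datetime.timedelta(minutes=minutes),
        EndTime=end,
        Period=300,
        Statistics=["Sum"],
    )
    return sum(dp["Sum"] for dp in resp["Datapoints"])

cost = (token_sum("InputTokenCount") / 1000) * PRICE_PER_1K_IN \
     + (token_sum("OutputTokenCount") / 1000) * PRICE_PER_1K_OUT
print(f"~${cost:.2f} estimated Bedrock spend in the last hour")
```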
I read the following [0] and immediately went to my firebase project to downgrade my plan. This is horrific.
> Yes, I’m looking at a bill of $6,909 for calls to GenerativeLanguage.GenerateContent over about a month, none of which I made. I had quickly created an API key during a live Google training session. I never shared it with anyone and it’s not pushed to any public (or private) repo or website.
It is scary building on the public cloud as a solo dev or small team. No real safety net, possibly unbounded costs, etc. A large portion of each personal project I do is spent thinking about how to prevent unexpected costs, detect and limit them, and react to them. I used to just chuck everything onto a droplet or VPS, but a lot of the projects I am doing lately need services from Google or AWS. I tend to prefer GCP at this point because at least I can programmatically disconnect the billing account when they get around to tripping the alert.
There are very few countries where consumer rights apply to B2B transactions, especially if it’s multiple people operating as a “small team”.
A solo dev however might be able to present themselves as a retail consumer, and leverage some trading standards related rules for unclear pricing or something similar.
These are all poorly designed systems from a CX perspective (the billing systems).
Billing is usually event driven. Each spending instance (e.g. API call) generates an event.
Events go to queues/logs, aggregation is delayed.
You get alerts when aggregation happens, which if the aggregation service has a hiccup, can be many hours later (the service SLA and the billing aggregator SLA are different).
Even if you have hard limits, the limits trigger on the last known good aggregate, so a spike can make you overshoot the limit.
All of these protect the company, but not the customer.
If they really cared about customer experience, once a hard limit hits, that limit sets how much the customer pays until it is reset, period, regardless of any lags in billing event processing.
That pushes the incentive to build a good billing system onto the provider. Any delay in aggregation potentially costs the provider money, so they will make it good (it's in their own best interest).
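A toy simulation of that overshoot, with made-up numbers: enforcement only ever sees the lagging aggregate, so a spike sails well past the limit before the cutoff trips.

```python
LIMIT = 100.0        # user's hard limit, in dollars
AGG_DELAY = 5        # aggregation lag, in ticks

events = [1.0] * 20 + [50.0] * 10   # steady usage, then a spike
pending = []                        # (tick, cost) events awaiting aggregation
aggregated = 0.0                    # what enforcement is allowed to see
billed = 0.0                        # what the customer actually owes

for t, cost in enumerate(events):
    if aggregated >= LIMIT:
        break                       # cutoff finally trips, too late
    billed += cost                  # usage lands before aggregation sees it
    pending.append((t, cost))
    # the aggregator only reflects events at least AGG_DELAY ticks old
    aggregated = sum(c for ts, c in pending if ts <= t - AGG_DELAY)

print(f"limit={LIMIT:.0f}, actually billed={billed:.0f}")  # 100 vs 370
```

Shrinking the lag shrinks the overshoot but never eliminates it, which is why a "customer never pays more than the limit" policy is the only version that actually protects the customer.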
It's not typically a problem that usage is event driven. At least not for prepaid phone plans. Or debit cards. Or mailboxes. Or any myriad of prepaid or quota'd services. It's not rocket science, just a bad business practice on the part of Google.
For personal projects, is there a cloud service that has actual working spend caps? I would perhaps try using a cloud service if I wasn't exposing myself to a risk of losing my yearly income by a small mistake. Or is renting a VPS the only sensible option?
Hey folks, I just wanted to drop a quick note here that there's a way to stop billing in an emergency that's officially documented on the Google Cloud documentation site: https://docs.cloud.google.com/billing/docs/how-to/disable-bi... . You can see the big red warning that this could destroy resources you can't get back even if you reconnect a billing account, but it's a way to stop things before they get out of control. Disconnecting the billing account is a full-on "emergency hand brake" that "unplugs the thing from the wall" (or whatever analogy you prefer) without you having to build anything yourself.
The billing account disconnect obviously shouldn’t be used for any production apps or workloads you’re using to serve your own customers or users, since it could interrupt them without warning, but it’s a great option for internal workloads or test apps or proof of concept explorations.
>There's a delay between incurring costs and receiving budget notifications, so you might incur additional costs for usage that hasn't arrived at the time that all services are stopped.
This delay may be hours or days. I managed to spend $400 in 5 minutes.
Forgive my ignorance, but what's the payoff for fraudsters in getting access to a generative AI service for a short-ish period of time before they get cut off?
With EC2 / GCE credentials, I could understand going all out on bitcoin mining, but what are they asking the AI to do here that's worth setting up some kind of botnet or automation to sift the internet for compromised keys?
Early Generative AI was popular with spammers before it became mainstream because it could be used to write infinite variations of spam messages. Making each message unique is more likely to bypass spam filters.
There are also a lot of AI use cases that require a lot of token spend to brute force a problem. Someone might want to search for security exploits in a codebase but they don’t want to spend the $50,000 in tokens from their own money. Finding someone’s key and using it as hard as possible until getting locked out could move these projects forward.
Totally speculating here, but maybe they provide some sort of LLM as a service, and they rotate stolen API keys in the background so they don't have to pay anything ?
Or they use the LLMs for criminal purposes (like automated social engineering) and so the API key can't be traced to their personal info (but they could also use a local model for this, so I don't know).
There are plenty of services offering AI inference at a discount. Some of these will be using your data for future distillation; others might be making use of bulk discounts and passing these through to a number of individual users (while taking on billing, support etc. risk) – and maybe some are just selling tokens falling off the back of a truck?
Surprised they don't have usage limits, e.g. how you can't get many IPs from AWS for your region until you request a limit increase. The UX for these kinds of things seems like it should default to low limits and allow easy increases.
Slightly off-topic, but Backblaze B2 has usage caps that actually work. I have a $0 cap on API requests, and yesterday, when litestream burned through the free tier (it defaults to replicating every second), I got a notice and requests stopped working until I raised my cap.
Two things that should be default on any GCP project touching generative-AI APIs:
1. API-key restrictions by HTTP referrer AND by API (`generativelanguage.googleapis.com` only);
2. a billing budget with a Pub/Sub "cap" action, not just an email alert.
Neither is on by default, and almost nobody sets them before shipping. 13 hours is actually fast for detection; most teams find out at end-of-month reconciliation.
The spend-cap discussion is the right instinct but misses a more fundamental fix available to Firebase projects: restricting the API key itself. In Google Cloud Console → APIs & Services → Credentials, you can edit your Firebase browser key and set API restrictions to only allow specific Firebase services (Firestore, Authentication, Storage, etc.). This prevents the key from being usable with Gemini or any other GCP API entirely, so even if the key is exposed, it can't incur AI billing costs.
Most Firebase 'add AI to your app' tutorials skip this step because Firebase's initialization flow doesn't prompt you to configure it, and Firebase Security Rules only gate Firebase-specific services, not the key's broader GCP API access scope.
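As a side note, the same restriction can be applied programmatically. This is a hedged sketch using the google-cloud-api-keys client as I understand its surface, with placeholder key and project names and an illustrative pair of Firebase services; verify the field names against the current docs before relying on it:

```python
from google.cloud import api_keys_v2
from google.protobuf import field_mask_pb2

client = api_keys_v2.ApiKeysClient()
# placeholder resource name of the Firebase browser key
KEY_NAME = "projects/my-project/locations/global/keys/my-browser-key"

key = api_keys_v2.Key(
    name=KEY_NAME,
    restrictions=api_keys_v2.Restrictions(
        # allow only the Firebase services the app actually uses,
        # notably NOT generativelanguage.googleapis.com
        api_targets=[
            api_keys_v2.ApiTarget(service="firestore.googleapis.com"),
            api_keys_v2.ApiTarget(service="identitytoolkit.googleapis.com"),
        ],
        # and only requests from the app's own origin
        browser_key_restrictions=api_keys_v2.BrowserKeyRestrictions(
            allowed_referrers=["https://example.com/*"]
        ),
    ),
)
op = client.update_key(
    key=key,
    update_mask=field_mask_pb2.FieldMask(paths=["restrictions"]),
)
op.result()  # long-running operation; blocks until the update lands
```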
It's "implied" throughout the whole post (or more like assumed that the reader understands this, because it's the basic premise of the problem). It's why they link to a post that explains the basic concept after a remark that "This describes our issue in more detail".
> tl;dr Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini.
From Google themselves, in the Firebase docs:
> API keys for Firebase services are not secret. Firebase uses API keys only to identify your app's Firebase project to Firebase services, and not to control access to database or Cloud Storage data, which is done using Firebase Security Rules. For this reason, you do not need to treat API keys for Firebase services as secrets, and you can safely embed them in client code.
... or at least that's what it used to say, until they quietly updated the docs to say this:
> API keys for Firebase services are not secret. API keys for Firebase services only identify your Firebase project and app to those services. Authorization is handled through Google Cloud IAM permissions, Firebase Security Rules, and Firebase App Check.
> All Firebase-provisioned API keys are automatically restricted to Firebase-related APIs. If your app's setup follows the guidelines in this page, then API keys restricted to Firebase services do not need to be treated as secrets, and it's safe to include them in your code or configuration files.
Followed later by (in different section):
> Use your Firebase-provisioned API keys only for Firebase-related APIs. If your app uses any other APIs (for example, the Places API for Maps or the Gemini Developer API), use a separate API key and restrict it to the applicable API.
Yeah, the number of people creating, running, and maintaining websites who don't understand how websites actually work in practice is very high, and it seems we haven't even come close to the ceiling yet.
This story is almost quaint. The version we're about to see is a coding agent running in CI with an API key, hitting a transient 429, retrying in a tight loop because the prompt told it to "be persistent." Firebase had at least a human typing the query. Caps aren't a nice-to-have once the caller is autonomous.
I think calculating cost in real time is logistically extremely hard. I don't think a single big cloud service provider has hard limits instead of alerts.
As long as they revert the charge when notified of scenarios like this, and they have historically done so in many cases, it's fine. It's an acceptable workaround for a hard problem and a cost of doing business (just like credit cards accept a certain amount of loss to fraud as part of business).
Cutting off at the exact cent is difficult, but a hard limit that triggers within one dollar of the actual limit should really be possible.
If for some resources you can't sample measurements fast enough, you could weaken it to "triggers within one dollar or five minutes after cost overrun, whichever comes later". But LLM APIs are one of those cases where time isn't a factor; your only issue is that if you only check quota before each inference, a given query might still bring you over.
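Concretely, the weakest useful version is just a pre-call gate; this illustrative sketch (the class and names are invented) caps the worst-case overshoot at the cost of one admitted query:

```python
class SpendGate:
    """Admit a call only while recorded spend is under the cap; the
    worst-case overshoot is the cost of the single admitted call."""

    def __init__(self, cap_usd: float):
        self.cap = cap_usd
        self.spent = 0.0

    def admit(self) -> bool:
        return self.spent < self.cap

    def record(self, cost_usd: float) -> None:
        self.spent += cost_usd

gate = SpendGate(cap_usd=100.0)
if gate.admit():
    cost = 0.42            # whatever the completed inference actually cost
    gate.record(cost)
else:
    raise RuntimeError("LLM spend cap reached")
```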
Why would it be hard to calculate cost? Multiply a fixed price by requests over time. It doesn't have to be exact in real time; it just has to report something approximately useful in real time.
It's absolutely not fine to be at the mercy of other people. That's what we buy cloud products, or really any products, for: so that we are not at the mercy of hardware faults, bad weather, bad teeth, hunger, thirst, [insert anything].
I'm guessing the answer is simply money. It's less expensive to deal with people like this than it probably was to prevent it. Right now the checks seem to run very sparsely; ramping that up (going from every 3 hours to every 5 minutes is a 36x increase) would probably cost more than employing people to return credits, even counting the fear of people leaving.
It sucks, but that's unfortunately the world we live in until something changes.
The US could rely on an agency like the CFPB to prevent this, but that was gutted under the current admin.
Ridiculous. They are clearly not trying at all. A hard wall preventing going over budget by 100x in a couple hours is not some devilishly complicated decentralized system problem.
Don't toe the party line.
Same reason Azure AI only has easy rate limits by minute, not by day, week, or month. Open-source proxy projects do it easily, though. Think about the incentives.
Going over a hard cap by 3% would be a reasonable failure to make, not by 30000%.
Google responded to your post, so that's good news. We all know the nature of APIs, but a secure transaction system is non-negotiable for Google and its peers when it comes to LLM API use. Right now LLM API keys are like unencrypted credit card numbers floating around.
Does the blog post explain how this happened exactly? Did he leak his API key in frontend code somehow, or was his project itself vulnerable to misuse? I'm curious how someone racked up 30k in a few hours.
Unfortunately, yet another story like this. One of these unexpected usage charges in the thousands appears every month, with the same automatic denial too. This is one of the reasons I stopped using these kinds of pay-per-usage cloud services long ago. At best, I still use services with hard-bounded usage, like EC2 from AWS, where one instance can never go beyond 24h/day of usage and is always capped, with shutdowns when limits are exceeded, plus limited credit cards.
It's super frustrating that this is the only realistic way to deal with this issue, since all these stories end the same way: the cloud company saying "f* you, we don't care, pay up", and legal fees are always expensive :(
> At best, I still use services with hard-bounded usage, like EC2 from AWS, where one instance can never go beyond 24h/day of usage and is always capped, with shutdowns when limits are exceeded, plus limited credit cards.
Is this possible on AWS today? I'm the same way: if I cannot set a hard limit on billing so I know for a fact what the maximum monthly cost will be, I'm not interested in using that service for anything. That's one of the top reasons I've stayed clear of AWS; they used to have only billing alerts, and you couldn't actually set limits. I guess it's one step forward if they've finally implemented that now.
It's incredible that in 2026 your best bet for getting support from Google is still posting to HN and hoping a Product Owner at Google takes pity on you (or feels shamed...)
Not if it's publicly called from JavaScript, as your users' browsers will make those requests. You neither know their IP addresses, nor is the referer or origin header a safe choice, as it can be spoofed outside of a browser.
There are plenty of API keys distributed like this by design. For example, Google Maps requires it, else your (anonymous) users can't use an embedded Google map on your website. And a public Firebase app needs some kind of API key, too.
On the one hand, if you play with petrol you can't complain about burning down your garage.
On the other hand, Hetzner sells IPv4 instances with no security on by default, just raw Ubuntu 24.x.
Within 3-4 days of deploying one, it will be hacked and have crypto miners installed unless additional special config is added. I do wonder what % of Hetzner VPS instances are compromised.
There's a brand-new, Gemini-specific feature for that (as new as March 23), but historically the answer has tended to be "no" from all the cloud providers. Most giants and indies alike have always been strongly opposed to implementing this feature for business reasons. (When you run across something that does let you do things that way, it's one of a handful of exceptions.) Their response is to tell you to set up budget alerts, which is not a solution, as described in this post.
Does Google allow a privacy card that lets you control whether an account is connected to it or not? That wouldn't help if someone racked up a ton of charges and Google bills daily, though.
A failure to pay does not extinguish the underlying debt owed. While the US seems pretty dysfunctional (or customer friendly, depending on how you see it) when it comes to collecting on debts, this is not the case globally.
And even in the US, you could presumably easily find all your Google accounts (including personal ones) locked until you pay the outstanding sum. Not something I'd risk, personally.
I doubt most cloud providers are even technically ready for true prepaid billing (which requires things such as estimating and reserving funds prior to paid operations, corresponding real-time two-way interfaces instead of just eventually consistent billing event aggregation etc).
In early mobile networks, the feature set for prepaid used to always lag behind, since real-time billing wasn't really a design consideration from the beginning.
I suppose that rather than taking on that extra work, offering a reduced feature set, or building something best-effort and taking financial responsibility for its failures, cloud providers will just make this the user's problem as long as they can get away with it; why wouldn't they?
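For what it's worth, the reserve-then-commit flow that true prepaid billing requires isn't exotic; here's a toy in-memory version (a real system would need durable storage, distributed coordination, and expiry of stale reservations):

```python
import threading
import uuid

class PrepaidAccount:
    def __init__(self, balance: float):
        self.balance = balance
        self.reserved = {}            # reservation id -> earmarked amount
        self.lock = threading.Lock()

    def reserve(self, estimate: float) -> str:
        """Earmark funds before the paid operation; refuse if they aren't there."""
        with self.lock:
            available = self.balance - sum(self.reserved.values())
            if estimate > available:
                raise RuntimeError("insufficient prepaid balance")
            rid = uuid.uuid4().hex
            self.reserved[rid] = estimate
            return rid

    def commit(self, rid: str, actual: float) -> None:
        """Settle at the metered cost, never more than was reserved."""
        with self.lock:
            estimate = self.reserved.pop(rid)
            self.balance -= min(actual, estimate)

acct = PrepaidAccount(balance=10.0)
rid = acct.reserve(estimate=0.05)   # before calling the paid API
acct.commit(rid, actual=0.03)       # after the call completes
```

Because funds are reserved before the operation runs, the balance can never go negative, which is exactly the guarantee eventually-consistent aggregation can't give.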
> When your Prepay credit balance on the billing account hits $0, all API keys in all projects linked to that billing account will stop working simultaneously. Prepay credits apply only to Gemini API usage costs; you can't use them to pay for other Google Cloud services.
That's fucking bonkers that nothing in the system could see this as unusual and worthy of throttling. The embarrassment of it -- a company LITERALLY SELLING machine-learning services and expertise cannot spot such a thing... This should have led them to deal with it internally and refund it. Just... wow, Google.
And the notifications can be delayed because the spending system is not updated in real time, so even if you have a Cloud Task triggered on spending to disable the project, it may be too slow, and several thousand may already be spent.
I thought the pricing model was meant to be a benefit of the cloud? All of a sudden, shock horror, paying by the minute turns out to be no cheaper, and maybe even more expensive, than just doing it yourself.
Anthropic and Claude are running circles around Google / Gemini for me these days. Gemini was quite helpful for a while, but strange limit issues started popping up, and the final straw was a bug that essentially broke my ability to develop. I moved over to Claude Code full time and haven't looked back. Opus 4.6 is awesome for accelerating probabilistic programming!
It's terrible that giant cloud providers such as Google or AWS don't allow a hard cap at the project level, or prepaid billing. Especially because alerts are delayed, as the author stated: "We had a budget alert (€80) and a cost anomaly alert, both of which triggered with a delay of a few hours. By the time we reacted, costs were already around €28,000."
I said this when this finding was originally posted and I'll say it again: This is by far the worst security incident Google has ever had, and that's why they aren't publicly or loudly responding to it. It's deeply embarrassing. They can't fix it without breaking customer workflows. They really, really want it to just go away and six months from now they'll complete their warning period to their enterprise contracts and then they can turn off this automated grant. Until then they want as few people to know about it as possible, and that means if you aren't on anyone's big & important customer list internally, and you missed the single 40px blurb they put on a buried developer documentation site, you're vulnerable and this will happen to you.
It's actually much more than a billing leak [1]; again, most people don't know how bad this is, because Google is trying to keep it hush-hush. These keys don't just grant access to Gemini completions; they grant access to any endpoint on the generative AI google cloud product. This includes: seeing all of the files that google cloud project has uploaded to gemini, and interacting with the gemini token cache.
Billing control is security, to be clear, but beyond that: The key permissions that enable anyone to generate text also grant access to all GCP Generative AI endpoints in the project they were provisioned in. That includes things like Files that your system might have uploaded to Gemini for processing, and querying the Gemini context caches for recent Gemini completions your system did. Both of these are likely to contain customer-facing data, if your organization & systems use them.
If you're hearing this and your gut reaction is This can't be real; We're on the same page. Its a staggering issue that Google has categorically failed to respond to. They automatically added this permission to existing keys that they knew their customers were publishing publicly on the internet, because the keys are legitimately supposed to be public for things like client-side Firebase access & Google Maps tile rendering.
They did not notify customers that they were doing this. They did not notify customers after this issue was reported to them months later by Truffle. They did not automatically remove the additional key grants for customers. They continue to push guidance targeted at novices like "just put the Gemini key behind a proxy (that's also publicly exposed on the internet)", which might solve the unintentional files and caching endpoint leaks but doesn't solve the billing issue. They denied that Truffle's initial report was even valid, until Truffle used the Internet Archive to find a Google-internal key from 2023, published for a Google Maps widget or something, before Gemini was even released, that was still active, and used it to demonstrate to Google that, "hey, anyone can use this key to get Gemini completions on the house, is there anyone driving this ship??" Google fixed the permissions on that specific key. And did nothing else.
At this point its much more polite to write badly than use an LLM to rewrite your content. The form tells me you do not care to interact with me in a genuine way.
I am going to ignore the form comments -- I guess I am not sure how I feel about being called an LLM (good or bad ?), not sure only time will tell. If LLMs turn out to be the turd of the universe -- bad or maybe good ?
-- Emphasis on the '--' for comic interlude
Important to note that, even to this day, Google's AI Studio Build Mode still recommends getting around this "client visible by design with very low enforceable protections" by publicly exposing an AI proxy with zero protection [1]. They don't care.
You are getting downvoted, but the first thing I thought when I read the comment you replied to was that it was written by an LLM as well. It has all the stylings of one: word choice, sentence structure, phrasing, metaphors, etc.
Implementing this in any meaningful manner quickly begins to look like every read becoming a globally synchronised write. Of course it doesn't have to be perfect, but even approximating perfection doesn't look much different. Also, can you imagine the kind of downtimes and complaints that would inevitably originate from a fully synchronous billing architecture?
Prepaid only is a fantastic idea, until your site goes (desirably) viral and then gets shut off right as traffic is picking up, or you grow steadily and forget to increase your deposit amount and suddenly production is down. Billing alerts are a much better solution IMHO.
Let me choose. This common talking point seems more like a rationalization of the hyperscalers' default behavior. AWS isn't avoiding prepaid out of concern for my site's virality; it's just that prepaid = less money.
Oh please no. And the "alternatives" to API keys aren't going to help much either; they'll just add friction to getting started (as a reference, see the pain involved in writing a script that hits the Gmail or Calendar API).
With AI there is NO justification for NOT DOING IT YOURSELF. Why use Firebase or <technology-x> if you can generate <the-thing> yourself and deploy it to hardware you own or rent?