Cat's out of the bag now, and it seems they'll probably patch it, but:
Use other flows under standard billing to do iterative planning, spec building, and resource loading for a substantive change set. E.g., something 5k+ LOC, 10+ files.
Then throw that spec document as your single prompt to the Copilot per-request-billed agent. Include in the prompt a caveat like: "We are being billed per user request. Go as far as possible given the prompt. If you encounter difficult, underspecified decision points, implement multiple options where feasible and indicate in the completion document where selections must be made by the user. Implement the specified test structures, and run them against your implementation until they fully pass."
Most of my major chunks of code are written this way, and I never manage to use up the 100 available prompts.
This is basically my workflow. Claude Code for short edits/repairs, VSCode for long generations from spec. Subagents can work for literally days, generating tens of thousands of lines of code from one prompt that costs 12 cents. There's even a summary of tokens used per session in Copilot CLI, telling me I've used hundreds of millions of tokens. You can calculate the eventual API value of that.
For $10 flat, with each request allowing up to 128k tokens, they’re losing money. 100 requests * 100k tokens is 10M tokens. At current API pricing that’s $50 of input tokens, not even accounting for output!
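A quick back-of-envelope in Python (the $5/1M input-token rate is my assumption to match the arithmetic above; real per-model prices vary, and output tokens cost several times more):

    # rough sketch; the price assumption is mine, not a quoted rate
    requests = 100                 # monthly premium-request allotment
    tokens_per_request = 100_000   # conservative; the cap is 128k
    usd_per_million_input = 5.0    # assumed input-token price

    total_tokens = requests * tokens_per_request            # 10,000,000
    api_value = total_tokens / 1e6 * usd_per_million_input  # 50.0
    print(f"{total_tokens:,} tokens ~= ${api_value:.0f} of input alone")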
It might be a gym-type situation, where the average of all users just ends up being profitable. Of course it could be bait-and-switch to get people committed to their platform.
Having worked for some time in huge businesses, I can assure you that there are many corporate Copilot subscribers who never use it; that's where they earn money.
In the past we had to buy an expensive license for some niche software, used by a small team, for a VP "in case he wanted to look".
It's worse in many gov agencies: whenever they buy software, if it's relatively cheap, everyone gets it.
I've had a single prompt to Opus consume as many as 13 premium requests. The Copilot harness is gimped precisely so they can abstract tokens away behind messages. Every person I know who started with Copilot and then tried CC was amazed at the power difference. Stepping out of a golf cart and into <your favorite fast car>.
It hasn't done that to me. It's worked according to their docs:
> Copilot Chat uses one premium request per user prompt, multiplied by the model's rate.
> Each prompt to Copilot CLI uses one premium request with the default model. For other models, this is multiplied by the model's rate.
> Copilot coding agent uses one premium request per session, multiplied by the model's rate. A session begins when you ask Copilot to create a pull request or make one or more changes to an existing pull request.
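Those rules are simple enough to model; a minimal sketch (the rate values are placeholders I made up, not GitHub's published multipliers):

    # one premium request per prompt/session, scaled by the model's rate;
    # multiplier values below are invented for illustration
    MODEL_RATE = {"default": 1.0, "expensive-model": 10.0}  # hypothetical

    def premium_requests(prompts: int, model: str) -> float:
        return prompts * MODEL_RATE[model]

    print(premium_requests(1, "default"))          # 1.0
    print(premium_requests(1, "expensive-model"))  # 10.0 -- one prompt, many requests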
> Note: Initially submitted this to MSRC (VULN-172488), MSRC insisted bypassing billing is outside of MSRC scope and instructed me multiple times to file as a public bug report.
We use a “Managed Azure DevOps Pool”. This allows you to use Azure VM types of your choosing for build agents, while still using the exact same images as the regular managed build agents. That works well for us: we have no desire to manage the OS of our agent (doing updates, etc.), but we get to choose beefier hardware specs.
An annoying limitation though is that Microsoft’s images only work on “Gen 1” VMs, which limits available VM types.
Someone posted on one of Microsoft’s forums or GitHub repositories asking them to please update the images to also work on Gen 2 VMs. I can’t remember for sure which forum it was; probably the “Azure Managed DevOps Pools” one.
The reply was “we can’t do anything about this, go post in the forum for the other team, issue closed”.
As far as I’m concerned, they’re all Microsoft Azure. Why should people have to make another post? At the very least move the issue to the correct place, or better yet, take it up internally with the other team, since it’s severely crippling your own “product”.
The "premium request" billing model where you pay per invocation and not for usage is very obviously not a sustainable approach and creates skewed incentives (e.g. for microsoft to degrade response quality), especially with the shift towards longer running agentic sessions as opposed to simple oneshot chat questions, which the system was presumably designed for. Its just a very obvious fundamental incompatibility and the system is in increasing need of replacement. Usage linked (pay per token) is probably the way to go, as is industry standard.
The last comment is a person pretending to be a Microsoft maintainer. I have a gut feeling that these kinds of people will only increase, and we'll have vibe engineers scouring popular repositories to ""contribute"" (note that the suggested fix is vague).
I completely understand why some projects are in whitelist-contributors-only mode. It's becoming a mess.
On the other hand ... I recently had to deal with official Microsoft Support for an Azure service degradation / silent failure.
Their email responses were broadly all like this -- fully drafted by GPT. The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included pointed to a service degradation and failure on Microsoft's side. A purely human mind would not have so readily conceded the point without some hedging or dilly-dallying, or keeping some options open to avoid accepting blame.
> The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included pointed to a service degradation and failure on Microsoft's side.
Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that.” in response to my issues.
The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme.
Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?
Better than actual human customer agents who give an obviously scripted “I’m sorry about that” when you explain a problem. At least the computer isn’t being forced to lie to me.
We need a law that forces management to be regularly exposed to their own customer service.
I knew someone would respond with this. HN is rampant with this sort of contrarian defeatism, and I just responded the other day to a nearly identical comment on a different topic, so:
No, it is not better. I have spent $AGE years of my life developing the ability to determine whether someone is authentically providing me sympathy, and when they are, I actually appreciate it. When they aren’t, I realize that that person is probably being mistreated by some corporate monstrosity or having a shit day, and I give them the benefit of the doubt.
> At least the computer isn’t being forced to lie to me.
Isn’t it though?
> We need a law that forces management to be regularly exposed to their own customer service.
Yeah, we need something. I joke with my friends about creating an AI concierge service that deals with these chatbots and alerts you when a human is finally somehow involved in the chain of communication. What a beautiful world, where we’ll be burning absurd amounts of carbon in some sort of antisocial AI arms race to try to maximize shareholder profit.
The world would not actually be improved by having thousands of customer service reps genuinely, authentically feel sorry. You're literally demanding that real people experience real negative emotions over some IT problem you have.
They don't have to be, but they can at least try to help. When dealing with automated response units the outcome is always the same: much talk, no solution. With a rep you can at least see what's available within their means, and if you are nice to them they might actually be able to help you, or at least make you feel less bad about it.
Lying means to make a statement that you believe to be untrue. LLMs don’t believe things, so they can’t lie.
I haven’t had the pleasure of one of these phone systems yet. I think I’d still be more irritated by a human fake apology because the company is abusing two people for that.
At any rate, I didn’t mean for it to be some sort of contest, more of a lament that modern customer service is a garbage fire in many ways and I dream of forcing the sociopaths who design these systems to suffer their own handiwork.
I wholly agree; the response screams “copied from ChatGPT” to me. “Contributions” like these comments and drive-by PRs are a curse on open source and software development in general.
As someone who takes pride in being thorough and detail-oriented, I cannot stand when people provide the bare minimum of effort in response. Earlier this week I created a bug report for an internal software project on another team. It was a bizarre behavior, so out of curiosity and a desire to be truly helpful, I spent a couple hours whittling the issue down to a small, reproducible test case. I even had someone on my team run through the reproduction steps to confirm it was reproducible in at least one other environment.
The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool. I was offended on so many levels. For one, I wasn’t using the CLI tool in the way it described, and even if I had been, it wouldn’t affect the bug. But the bigger problem is that this person thinks a screenshot of an AI conversation is an acceptable response. Is this what talking to semi-technical roles is going to be like from now on? I get to argue with an LLM by proxy of another human? Fuck that.
>> The next day, the PM of the other team responded with a _screenshot of an AI conversation_ saying the issue was on my end for misusing a standard CLI tool.
You still have time to coach a model into creating a reply saying they are completely wrong, and send back a screenshot of that reply :-)) Bonus points for having the model include disparaging comments...
Yes, of course I think they lied, because a trustworthy person would never consider 0-effort regurgitated LLM boilerplate as a useful contribution to an issue thread. It's that simple.
Let me slop an affirmative comment on this HIGH TRAFFIC issue so I get ENGAGEMENT on it and EYEBALLS on my vibed GitHub PROFILE and get STARS on my repos.
Exactly. I have seen these know-it-all comments on my own repos and also on tldraw's issue tracker. They add nothing to the conversation; they just paste the thread into some coding tool and spit out the info.
> issues auto-close after 1 week of inactivity, meanwhile PRs submitted 10 years ago remains open.
It's definitely a mess, but based on the massive decline in signal vs noise of public comments and issues on open source recently, that's not a bad heuristic for filtering quality.
Everyone is a maintainer of Microsoft. Everyone is testing their buggy products as they leak information like a wire-only umbrella. It is sad that more people who use Copilot don't know that they are training it, at a cost of millions of gallons of fresh drinking water.
It was a mess before, and it will only get worse, but at least I can get some work done 4 times a day.
Copilot fairly recently added support for running sub-agents using different models from the one that invoked them.
If this report is to be believed, they didn't implement billing correctly for the sub-agents, allowing more costly models to be run for free as sub-agents.
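If that's accurate, the bug is easy to picture. A hypothetical sketch, not Copilot's actual code (names and rates are invented):

    # if billing is keyed only on the invoking session's model, sub-agent
    # model rates never enter the calculation -- purely illustrative
    RATE = {"base": 0.0, "premium": 10.0}  # invented multipliers

    def bill_session(session_model, subagent_models):
        return RATE[session_model]  # BUG: subagent_models is never consulted

    # a free base-model session fanning out to premium sub-agents bills 0
    print(bill_session("base", ["premium", "premium"]))  # 0.0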
Some part of me says, let their vibing have a cost, since clearly "overall product quality going to shit" hasn't had a visible effect on their trajectory
They don't care; they would rather let you use pirated MS software than have you move to Linux. There is a repo on GH with PowerShell scripts for activating Windows/Office, and they let it sit there. Just checked: the repo has 165K stars.
This could be the same: they know devs mostly prefer Cursor and/or Claude over Copilot.
Home users are icing on the cake. Suing them for piracy is a bad look (see the RIAA), and using Windows and Office at home reinforces using them at work.
On the other hand, since they own GitHub they can (in theory) monitor the downloads, check for IPs belonging to businesses, and use it as evidence in piracy cases.
Vibes all the way down. "Please check out this other slop issue with 500-600 other tickets pointed at it" -- I was going to ask how anyone is supposed to make sense of such a mess, but I guess the answer is "no human is supposed to".
Microsoft notoriously tolerated pirated Windows and Office installations for about a decade and a half, to solidify their usage as the de facto, expected standard. Tolerating unofficial free usage of their latest products is standard procedure for MS.
I think C# and .Net are objectively better to use than Java or C++.
But the tooling and documentation is kind of a mess. Do you build with the "dotnet" command, or the "msbuild" command? When should you prefer "nuget restore" over "dotnet restore"? Should you put "<RestorePackagesConfig>true</RestorePackagesConfig>" in the .csproj instead? What's the difference between a reference and using Nuget to install a package? What's the difference between "Framework" and "Core"? Why, in 2026, do I still need to tell it not to prefer 32-bit binaries?
It's getting better, but there's still 20 years of documentation, how-to articles, StackOverflow Q&A, blogs, and books telling you to do old, broken, out-of-date stuff, and finding good information about the specific version you're using can be difficult.
Admittedly, my perspective is skewed because I had never used C# and .Net before jumping into a large .Net Framework project with hundreds of sub-projects developed over 15-20 years.
Thinking back, you're probably correct, but it seems like they were actively trying to create something good back then. That might just be me only seeing the good parts, with .Net and SQL Server. Azure was never good, and we've known why for over a decade: their working conditions suck and people don't stay long, resulting in things being held together by duct tape.
I do think some things in the Microsoft ecosystem are salvageable; they just aren't trendy. The Windows kernel, .Net and their C++ runtime, Win32/WinForms, Active Directory, Exchange (on-prem), and Office are all still fixable and will last Microsoft a long time. It's just boring, and Microsoft apparently won't do it, because: no subscription.
Every time I see something about trying to control an LLM by sending instructions to the LLM, I wonder: have we really learned nothing of the pitfalls of in-band signaling since the days of phreaking?