I really dislike these AI middleman plans. The value-add that Microsoft brings to Github Copilot is near zero compared to buying directly from Anthropic or OpenAI, which deliver 99% of the value. I don't understand why anyone would want to deal with Microsoft as a vendor if they don't have to. The short period of discounted usage was always setting up the obvious rug pull.
I would also add that the models they supply through Azure Foundry are covered under my employer's existing customer agreement, under which MS is not allowed to train models on our data (which might include IP of the company or its clients). For organizations worried about that, it's nice & cozy.
Bingo. Github Copilot is mostly for organizations that have an existing Azure bill and would rather see that go up then get a new vendor bill. Professional middlemen.
If you’ve ever had to be part of the frankly batshit insane procurement process that some organizations force you to run the gauntlet of, this becomes a very obvious and appealing option
It technically does indeed matter, because "then" means a totally different thing in that sentence, but using "then" in that way would be an odd enough way to construct that sentence that it's blindingly obvious that they meant "than".
What reasonable interpretation of the sentence is there if "then" is applied literally? I can only find validity using "than", and therefore the use of "then" doesn't matter as the author's intent isn't lost. That said, carrying the assumption that it does matter forward, how are you certain "then" isn't the correct interpretation of the author's intent?
Ah, the AWS Marketplace procurement model, where products mostly exist so that you can line item things through Amazon rather than going through a lengthy procurement process
I exclusively use prepaid OAI tokens when doing copilot work in visual studio. It's really easy to set up a "custom" model. The consistency is hard to beat and I can use the latest model on day one. I also get to see how the magic happens in my provider logs. Every token accounted for.
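For anyone curious, a minimal sketch of the kind of setup being described, using the OpenAI Python SDK; the model name and prompt are placeholders, not the commenter's actual config:

```python
# Minimal sketch of the direct-API setup described above, using the
# OpenAI Python SDK (pip install openai). Model name and prompt are
# placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; pick whatever your prepaid plan covers
    messages=[{"role": "user", "content": "Explain this regex: ^\\d{4}-\\d{2}$"}],
)
print(response.choices[0].message.content)

# "Every token accounted for": the usage block on each response matches
# what shows up in the provider's billing logs.
print(response.usage.prompt_tokens, response.usage.completion_tokens)
```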
I disagree. I like the standard interface, being able to easily switch models as things invariably change from week to week, and having a relationship with one company. That's why I'm a big fan of openrouter and Cursor. Not too much experience with Copilot, but I think there's a huge value add in AI middlemen.
Because if you’re a vscode user, up until a couple days ago you could hammer Opus 4.6 all day, every day, and pay nowhere close to the Claude Max plan. Many people exploited this, and the subsidy is ending.
A suggestion: Don't invest in any new hardware to run an LLM locally until you've tried the model for a while through OpenRouter.
The Qwen models are cool, but if you're coming from Opus you will be somewhere between mildly and very disappointed, depending on the complexity of your work.
Been having a ton of fun with copilot cli directed to local qwen 3.6. If you’re willing to increase the specificity of your prompts, then delegating from GPT-5.4 or Opus to local qwen has been great so far.
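For context, a hedged sketch of what that delegation can look like, assuming an Ollama-style local server with an OpenAI-compatible endpoint; the model tag and file names are hypothetical:

```python
# Sketch of delegating a tightly scoped task to a local model, assuming
# an Ollama-style server exposing an OpenAI-compatible API on
# localhost:11434. The model tag and file names are hypothetical.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# The added specificity lives in the prompt: exact file, exact function,
# exact boundaries, so the smaller model has nothing left to guess.
task = (
    "In src/parser.py, rewrite parse_row() to return a dataclass "
    "instead of a tuple. Do not modify any other function, and "
    "preserve the existing docstring."
)

reply = local.chat.completions.create(
    model="qwen3",  # hypothetical tag for a local Qwen build
    messages=[{"role": "user", "content": task}],
)
print(reply.choices[0].message.content)
```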
The Anthropic Pro plan cost double and gave you, I don't know, a tenth the usage, depending on how efficiently you used Copilot requests, and no access to a large set of models including GPT and Gemini and free ones.
Well, they charge per prompt, but with usage limits it's a mix of token- and prompt-based. If the prompt multiplier is higher, the tokens are also multiplied, so the limit is reached sooner.
It is basically token-based pricing, but you also get a limit on prompts (you can't just randomly ask the models questions; you have to optimize to make them do the most work for, e.g., an hour or more without you replying, or ask them to use the question tool).
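A toy illustration of that interaction, with every number invented:

```python
# Toy illustration of a mixed prompt/token limit; every number here
# is invented for the example.
monthly_token_budget = 10_000_000
tokens_per_prompt = 50_000  # what one agentic run actually consumes

for multiplier in (1.0, 3.0, 7.5):
    charged = int(tokens_per_prompt * multiplier)  # multiplier scales the charge
    print(f"{multiplier}x -> {monthly_token_budget // charged} prompts before the limit")
# 1.0x -> 200, 3.0x -> 66, 7.5x -> 26: higher multiplier, sooner limit
```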
Yes, I loved my $10 a month personal subscription for light coding tasks; it worked great. I'd use Claude Code Max for heavy lifting, but the $10 a month Copilot plan kept me off Cursor for the IDE-centric things.
Opus 4.6 is no longer available and Opus 4.7 chews through monthly limits with reckless abandon. The value-add of GH Copilot is basically gone (at least for individuals on the Pro or Pro+ plans).
Copilot was there first in AI-based development, with tab completions.
Now, it may be the right call to immediately give up and shut down after Opus 4.5, but models and subscriptions are in flux right now, so the right call is not at all obvious to me.
Agentic AI models could become commoditized; some model may excel in one area of SWE while others are good for another area; local models may be at least good enough for 80%, and cloud usage could fall to 20%; etc., etc.
Staying in the market and providing multi-model and harness options (Claude and Codex usable in Copilot) is good for the market, even if you don't use it.
I found the Copilot harness generally more buggy/dysfunctional. After seeing a "long" agent response get dropped (it still counts against usage, of course) too many times, I gave up on the product.
It doesn't matter how competent the actual model is, or how long it's able to operate independently, if the harness can't handle it and drops responses. It made me wonder: are they even using their own harness?
At least Anthropic is obviously dogfooding on Claude Code which keeps it mostly functional.
I don't know what they have done to Claude, but when used through Copilot it's truly awful compared to using it straight from the API.
I have always just used the API, but I decided to give Copilot a go on the weekend because of the cheap price. And I am seeing weird behavior like I have never seen before... It will somehow fail to use the file-editing tool and then spend an absolutely huge amount of time/tokens building a Python script to apply the edit in a subprocess... And it will spin its wheels on stuff the API routinely gets right in one shot.
It was so much cheaper! I subscribed to the monthly plan instead of the yearly one, thinking that the deal wouldn’t last. It has lasted a bit longer than expected.
1. They heavily subsidized their plans vs. paying for API.
2. They allowed me to use the subscription in every tool I wanted.
3. It covered both Anthropic and OpenAI.
Except Copilot doesn't bill you per token like all those companies do; they bill you per prompt, at least Copilot in Visual Studio 2026, which is insane to me. Are they just hosting all those models themselves and able to reduce the costs of doing so?
No, they are taking the massive L. That's why they paused new sign-ups.
Just for context on the insanity: they allow recursive subagents down to, I believe, 5 levels deep.
You can write a prompt telling Copilot to dig through a codebase, with one subagent per file and one recursive subagent per function, to do some complex codebase-wide audit. If you use Opus 4.7 to do this, it consumes a grand total of 0.5% of a Pro+ plan.
That's why this paragraph is here:
> it’s now common for a handful of requests to incur costs that exceed the plan price
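A back-of-envelope sketch of that fan-out, with invented file and function counts:

```python
# Back-of-envelope fan-out for the recursive-subagent audit described
# above; the file and function counts are invented.
files = 200
functions_per_file = 10

# One top-level prompt, one subagent per file, one recursive subagent
# per function: a single billed prompt fans out into thousands of runs.
invocations = 1 + files + files * functions_per_file
print(invocations)  # 2201 model invocations behind one premium request
```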
I was accounting for that in the 1% of value. I don't see a ton of value in this for development; you end up just always using the smartest model, maybe tuning subagents to a slightly dumber but much faster model. You really only need one subscription to the provider of the smartest model, with maybe 30 minutes of setup time to switch over if SOTA ever swings back to OpenAI.
I have thought about making a product out of something I'm building and trying to make the cost of my product a percentage on top of whatever I could resell Anthropic or OpenAI (or whatever) tokens for. I get this may be unpopular, maybe I should just stick with BYO-key.
Great. I, a small consultancy, have just spent the last month working out a workload that uses Opus 4.6 via VS Code to prep horrible, inconsistent, survey data for upload to a proprietary platform. Worked a treat with some light babysitting.
It's the sort of messy job that agents excel at. Decisions need to be made on free text data, translations done into multiple languages, ambiguity handled.
I now need to recheck it still works with another model, which involves a lot of manual verification; and potentially move to Claude Code and pay more money I can ill afford right now.
I'm not even clear from the post when this kicks in; I'm guessing it's effective immediately.
This really hammers home for me the point that we should not be renting our tools.
My own dumb fault for trusting them, I will make sure to learn from this.
I have a GitHub Pro subscription, renewed for the 2nd year, and I just found out I can no longer use Opus with it. Opus was one of the reasons I had a subscription in the first place.
Opus 4.6 had a 3x multiplier in Pro. Now the new Opus 4.7 model has 7.5x in Pro+, which offers 5x more requests, but costs 4x more than Pro. So now Opus is essentially 2x the price it used to be.
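Working the arithmetic through, assuming Pro is $10 for 300 premium requests (the 3x, 7.5x, 5x, and 4x ratios are from the comment above):

```python
# The $10 / 300-request Pro figures are assumptions; the 3x, 7.5x, 5x,
# and 4x ratios come from the comment above.
requests, price = 300, 10.0

opus_old = price / (requests / 3.0)            # Pro, 3x multiplier
opus_new = (4 * price) / (5 * requests / 7.5)  # Pro+, 7.5x multiplier
print(f"${opus_old:.2f} -> ${opus_new:.2f} per Opus request, "
      f"{opus_new / opus_old:.1f}x")           # $0.10 -> $0.20, 2.0x
```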
Reading the comments here drives home an industry-wide problem with these tools: people are just using the latest and most expensive models because they can, and because they’re cargo-culting. This is perhaps the first time that software has had this kind of problem, and coders are not exactly demonstrating great discretionary decision making.
I’ve been using Anthropic models exclusively for the last month on a large, realistic codebase, and I can count the number of times I needed to use Opus on one hand. Most of the time, Haiku is fine. About 10% of the time I splurge for Sonnet, and honestly, even some of those are unnecessary.
Folks are complaining because they lost unlimited access to a Ferrari, when a bicycle is fine for 95% of trips.
Haiku is most definitely not fine for the code bases that I work on. Sonnet is probably fine for most daily tasks, but Opus is still needed to find that pesky bug you've been chasing, or to thoroughly review your PR.
Most of the people using these models aren't skilled enough to make that determination. It seems rough to sell the tool as the thing that means you don't need to understand what you're doing, while also insisting that you understand what you're doing well enough to select an appropriate model.
> Haiku is most definitely not fine for the code bases that I work on. Sonnet is probably fine for most daily tasks, but Opus is still needed to find that pesky bug you've been chasing, or to thoroughly review your PR.
Yeah, I hear that a lot, but it never comes with proof. Everyone is special.
I’m sure you’d find that Haiku is pretty functional if there were a constraint on your use.
I use models from Opus through Haiku and down to locally hosted Qwen models.
I don't know how anyone could believe that Haiku is useful for most engineering tasks. I often try to have it take on small tasks in the codebase with well-defined boundaries to try to conserve my plan limits, but half the time I end up disappointed and feeling like I wasted more time than I should have.
The differences between the models are vast. I'm not even sure how you could conclude that Haiku is usable for most work, unless you have a very different type of workload than what I work on.
More information required. What are you working on? What languages? How do you define “small tasks”? What are “well-defined boundaries”? What is your workflow?
Most importantly, define your acceptance criteria. What do you mean by “disappointed” - this word is doing most of the heavy lifting in your anecdote. (i.e. I know plenty of coders who are “disappointed” by any code that they didn’t personally write, and become reflexively snobby about LLM code quality. Not saying that’s you, but I can’t rule it out, either.)
The models are not the same, but Haiku is definitely not useless, and without a lot more detail, I just ignore anecdotal statements with this sort of hyperbole. Just to illustrate the larger point, I find something wrong with nearly everything Haiku writes, but then again, I don’t expect perfection. I’d probably get a “better” end result for most individual runs with the more expensive models, but at vastly higher cost that doesn’t justify the difference.
> I don't think it's really helpful to tell people they're holding it wrong
I’m not saying that. If anything, it really doesn’t matter much what model you use, and it’s only a case of “you’re holding it wrong” in the sense that you have to use your brain to write code, and that if you outsource your thinking to a machine, that’s the fundamental mistake.
In other words, it’s a tool, not a magic wand. So yeah, you do have to understand how to use it, but in a fairly deterministic way, not in a mysterious woo-woo way.
It’s not snarky. It’s literally the argument people are making: I am special, my use case is exceptional, therefore I need to use the special tool, even if you don’t need to.
>> Yeah, I hear that a lot, but it never comes with proof. Everyone is special.
You were the one who made the claim that Haiku is fine most of the time. To any reasonable person, the burden of proof is on you. Maybe you should share some high level details about your codebase, like its stack, size, problem domain, and so on? Maybe they are so generic that Haiku indeed does fine for you.
Of course you don't NEED the better models, but figuring out what model you need can waste a lot of time and effort.
Even when a cheap model is capable of a task it needs a lot more guidance than a more expensive one.
They are also less reliable. You can waste a lot of time cleaning up after them.
Judging whether something is good enough is hard work and rerolling with a more expensive model is painful.
Judging the difficulty of a task ahead of time is very hard. Judging how good a model is for a given task even harder, especially when models and harnesses keep changing all the time.
The real productivity boost LLMs provide is already modest and when you start tinkering with models it can easily evaporate.
AI should decide the level of model needed, and fallback if it fails.
It mostly is a UX problem. Why do I need to specify the level of model beforehand?
Many problems don't allow that decision to be made before implementation.
This is the approach of Auto in Cursor and I've not been impressed with it at all. I think I'm always getting Composer, and while it's fast, it wastes my time. GLM 5.1 in OpenCode is far better and less expensive; it can do both planning and implementation very effectively. Opus is still the best, but GPT 5.4 (in Codex) is good enough too, and way more affordable.
This would require LLMs being good at knowing when they are doing a bad job, which they are still terrible at. With a good testing and verification harness set up, sure, then it could just go to a more powerful model if it can't make tests pass. But not a lot of usage is like this.
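Something like this sketch, where the verifier is simply "do the tests pass" and the model-calling helper is a hypothetical stand-in:

```python
# Sketch of cheap-first escalation with a test suite as the verifier.
# `call_model` and the model names are hypothetical stand-ins.
import subprocess

LADDER = ["haiku", "sonnet", "opus"]  # cheapest to most capable

def tests_pass() -> bool:
    # Verification harness: "good enough" means the suite is green.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def call_model(model: str, task: str) -> None:
    ...  # send the task to the given model and apply its edits

def solve(task: str) -> str:
    for model in LADDER:
        call_model(model, task)
        if tests_pass():
            return model  # stop at the cheapest model that works
    raise RuntimeError("even the strongest model could not pass the tests")
```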
That’s certainly an opinion. Not one I agree with, but sure, if you entirely outsource all of your thinking to the magic box, then you probably want the box to have the strongest possible magic.
I think it heavily depends on how you're using it. If you understand your codebase and you're using it like "build a function that does x in y file" then smaller/cheaper models are great. But if you're saying "hey build this relatively complex feature following the 30,000 foot view spec in this markdown doc" then Haiku doesn't work (unless your "complex feature" is just an api endpoint and some UI that consumes it).
I largely agree. But that goes back to my point (albeit with mixed metaphors): there are lots of people who are just hitting things with a jackhammer in lieu of understanding how to properly use a hammer.
I basically never just yolo large code changes, and use my taste and experience to guide the tools along. For this, Haiku is perfectly fine in nearly all circumstances.
> people are just using the latest and most expensive models because they can, and because they’re cargo-culting. This is perhaps the first time that software has had this kind of problem, and coders are not exactly demonstrating great discretionary decision making.
> I’ve been using Anthropic models exclusively for the last month on a large, realistic codebase, and I can count the number of times I needed to use Opus on one hand. Most of the time, Haiku is fine. About 10% of the time I splurge for Sonnet, and honestly, even some of those are unnecessary.
You and I couldn't have more different experiences. Opus 4.7 on the max setting still gets lost and chokes on a lot of my tasks.
I switch to Sonnet for simpler tasks like refactoring where I can lay out all of the expectations in detail, but even with Opus 4.7 I can often go through my entire 5-hour credit limit just trying to get it to converge on a reasonable plan. This is in a medium size codebase.
For the people putting together simple web apps using Sonnet with a mix of Haiku might be fine, but we have a long way to go with LLMs before even the SOTA models are trustworthy for complex tasks.
I don’t use Haiku for planning of big tasks, so we basically agree on that. But even just Sonnet 4.6, on a fairly large codebase, only truly goes into the weeds maybe 10% of the time for me. I also write pretty specific initial prompts, and have a good idea of how I want the code to work before I start prompting. For example, sometimes I will spend several hours writing a spec before even picking up the power tools.
I have never had the situation you describe, where Opus won’t come up with “a reasonable plan”, but your definition of “reasonable” might be very different than mine, and of course, running through your credit limit is an entirely tangential problem.
> people are just using the latest and most expensive models because they can,
While I agree with the sentiment, I think that might have been initially driven by older models being nerfed and/or newer ones being better at tokens/$. And there is this notion that those labs don't constrain the model in the first days after its release.
- If you pay for unlimited trips, will you choose the Ferrari or the old VW? Both are waiting outside your door, ready to go.
- Providers that let you choose models don't really put much price difference between the lower-class models. On my grandfathered Cursor plan I pay 1x request to use Composer 2 or 2x request to use Opus 4.6. Until the pricing is differentiated enough that people can say "OK, yes, Opus is smarter, but paying 10x more when Haiku would do the same isn't worth it", it won't happen.
Agreed on both points. We’re dealing with a cost/benefit analysis, and to this point, coders have been subsidized, coerced…maybe even mandated into using the most expensive option as if it was a limitless resource. Clearly not true, and so of course we’re going to see nerfing of the tools over time.
Obviously we’re a long way away from being able to rationally evaluate whether the value of X tokens in model Y is better than model Z, let alone better in terms of developer cost, but that’s kind of where we need to get to, otherwise the model providers are selling magic beans rated in ineffable units of magicalness. The only rational behavior in such a world is to gorge yourself.
Model selection for day to day tasks based on vibes is not very scientific. Micromanaging the model doesn't seem like a great idea when doing real professional work with professional goals/deadlines/pressures.
> Micromanaging the model doesn't seem like a great idea when doing real professional work with professional goals/deadlines/pressures.
Remember that it's not only the cost per token, but also speed. Some tasks are done faster with simpler/less-thinking models, so it might actually make sense to micromanage the model when you have deadlines.
It’s deeply ironic that the folks who want to outsource as much thought to the model as possible are saying that my stance - use your brain to decide the right tool for the job - is tantamount to “vibes”.
You are being deeply reductive, and that's against the spirit of Hacker News. The issue is that models are difficult to objectively benchmark. The benchmarks don't always align with real-world performance. It's not easy and clear-cut to determine which model will work best in a given situation; it boils down to loose experiences/anecdotes. Do you have objective criteria for model selection that you have tested to be effective with reproducible tests?
Claude Code doesn't have an option to use Opus 4.6 any more for me. It was great, but I guess now I have to use it half as much or upgrade my subscription again.
85% of my coding tasks can be handled by either GLM or Sonnet. The truth of the matter is that most software isn't that complicated. Even more hilarious is that people were running Opus on their OpenClaw setups. I'm glad Anthropic kicked them to the curb.
> I’ve been using Anthropic models exclusively for the last month on a large, realistic codebase, and I can count the number of times I needed to use Opus on one hand. Most of the time, Haiku is fine. About 10% of the time I splurge for Sonnet, and honestly, even some of those are unnecessary.
I mean at some point some people learn...
I was doing Opus for nasty stuff or otherwise at most planning and then using Sonnet to execute.
Buuuuut I'm dealing with a lot of nonstandard use cases and/or sloppy codebases.
Also, at work, Haiku isn't an enabled model.
But also, if I or my employer are paying for premium requests, then they should be served appropriately.
As it stands this announcement smells of "We know our pricing was predatory and here is the rug pull."
My other lesser worry isn't that Opus 4.7 has a 7.5x multi, it's that the multiplier is quoted as an 'introductory' rate.
Haiku is complete crap compared to sonnet in GHCP. A basic task in Haiku takes 3 prompts with a lot of correction. 1 prompt in sonnet. It isn't worth a third of the price if I have to fix it twice.
I'm in the same boat as you. Wish I had known this before my subscription renewed. There's no longer any value in paying them for this service when I can cut them out of the equation and pay the model providers directly.
> This whole thing is a massive asshole move, probably illegal in all countries with a minimum set of consumer protections
Why would it be illegal in any country? Did you pay for a year upfront? Even if so, they're offering a pro-rated refund according to the linked blog post:
> If you hit unexpected limits or these changes just don’t work for you, you can cancel your Pro or Pro+ subscription and receive a refund for the time remaining on your current subscription by visiting your Billing settings before May 20
Not sure where the expectation came from that a business should continue serving you at a given price till the end of time, no matter what.
This thread is pretty quiet for what strikes me as a substantial set of changes with, presumably, more substantial changes still to come for anyone not grandfathered into a Pro plan.
I get the impression that the intersection of HN posters and Copilot users is quite small in practice; that Claude Code and Codex suck up all the oxygen in this room. But it seems plausible we’ll see similar “true costs greatly exceed our current subscription pricing” from Anthropic and OpenAI someday soon…
Using Copilot Pro with Pi, way better and smarter than using Claude Code. I hadn't gotten a single e-mail; I just wanted to use Opus (I use Sonnet 95% of the time, with Opus for issues where Sonnet is struggling) and got an error message. No prior warning, nothing. I'm pissed. They just rugpulled all paying customers, man. I liked Copilot because I can plan my usage over a whole month and I'm not "forced" to use it for a week before hitting limits, unlike Claude and Codex.
Do you have a citation on this? I have a Claude Pro subscription and looked at the comparison page and it says this under Pro:
Everything in Free and:
Claude Code directly in your codebase
Power through tasks with Cowork
Higher usage limits
Deep research and analysis
Memory that carries across conversations
> If you hit unexpected limits or these changes just don’t work for you, you can cancel your Pro or Pro+ subscription and you will not be charged for April usage. Please reach out to GitHub support between April 20 and May 20 for a refund.
> But it seems plausible we’ll see similar "true costs greatly exceed our current subscription pricing" from Anthropic and OpenAI someday soon
Enterprise might stick around, but individually, I reckon developers will flock to OpenCode + open weights (Qwen/GLM/Codestral). The problem then is, if the open-weight models impress these new adopters, they will shout about it from the rooftops (conferences, social media, blogs) in unison, which might result in an exodus. That's especially troublesome considering developers are a major market for both frontier labs (Anthropic & OpenAI) and their IPO ambitions.
Speaking as someone where the only 'real' option we have at work is the Copilot Plugin, but I also use the Copilot Plugin at home...
This is a shitty shitty shitty move.
As a personal user, I can now only use Opus 4.7 at a 7.5x 'Introductory' multiplier if I upgrade to pro+, but at work I can still apparently do Opus 4.6 at a 3x Multiplier on my work 'enterprise' account.
Honestly it strikes me as though someone at Github Copilot took Palantir's manifesto to heart; Screw the individual, consolidate power to companies on every level.
>it’s now common for a handful of requests to incur costs that exceed the plan price!
I think this is really telling. The cost of AI has really been masked HUGELY to drive adoption. The true cost is likely to be unsustainable for the big complex tasks (agents running for hours+) that companies have been pushing.
I was skeptical, then quietly bullish on AI, but I'm now seeing signs the market is cracking and that availability is going to recede and costs balloon.
Claude Code is definitely token-based; it's been discussed extensively on Hacker News and the related GitHub threads. A large context-cache miss can easily take half your usage in just one request... "max" just means more reasoning tokens. I've also run out of usage during a single request in Cowork. It's definitely token-based.
Yesterday, Opus 4.6 cost three credits. You can no longer use 4.6 or 4.5.
Opus 4.7 is available today for 7.5 credits per prompt.
They have also suspended new signups.
After testing all of the major IDEs/tools that integrate with LLMs over the last four weeks, I was happy to settle on Copilot. I, and others, seem to be a lot less confident in that decision now. Especially since there seems to be no refund path for people who prepaid for a year.
In my 30+ years online, I've never seen an industry change so much in terms of pricing, service levels, etc, as I have the last two months.
I'm really curious where all of this lands, and if AI coding tools will be something that only a small percentage can genuinely afford at a competitive level.
> In my 30+ years online, I've never seen an industry change so much in terms of pricing, service levels, etc, as I have the last two months.
Warning: baseless speculation/theorizing ahead.
This is the consequence of LLM inference being really expensive to run, and LLM inference companies being really attractive to VCs. The VC silly money means their costs are totally decoupled from revenue for a while, but I guess eventually people look at incomings vs outgoings and start asking questions.
Previous big trends like SaaS apps, NFTs, blockchain etc were similarly attractive to VCs (for a period of time at least for the last two, the first one is still pretty attractive to VCs), but nowhere near as expensive to run so the behaviour of the companies running them wasn't quite the same.
AI is still in the "VCs subsidizing everything" phase.
So:
- DO use AIs to build tools for yourself faster. If the AI goes away, the dashboard and scripts you made will still work.
- DO NOT build your business on top of 3rd-party AI services with no way of swapping the backend easily (a minimal sketch of what that can look like follows below). The question isn't whether there's going to be a "rug-pull", but when it happens. It might be sudden like this one, or gradual, where they just pump up the price like boiling a frog.
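A minimal sketch of "swapping the backend easily" in practice, with all names illustrative:

```python
# Minimal sketch of a swappable backend: business logic depends only on
# this interface, never on a specific vendor SDK. Names are illustrative.
from typing import Protocol

class LLMBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedBackend:
    """Today's subsidized vendor, kept behind the interface."""
    def complete(self, prompt: str) -> str:
        ...  # call the hosted API here

class LocalBackend:
    """The escape hatch for when the rug gets pulled."""
    def complete(self, prompt: str) -> str:
        ...  # call a locally hosted model here

def summarize(backend: LLMBackend, text: str) -> str:
    # Swapping vendors is now a one-line change at the call site.
    return backend.complete(f"Summarize:\n{text}")
```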
Good thing I had just finished migrating all of my workflows to OpenCode for the time being!
It's a shame, because the VS Code Copilot experience is quite good out of the box compared to all of the other harnesses I've used. But with the typical lack of transparency and sudden, harsh changes... what are they thinking?
After the restrictive rate limiting they've already instituted, I'm simply cancelling and continuing by using providers directly.
Note that the 7.5x multiplier is only for the promotional period (until end of April), then it'll get even worse. If I had to guess it'll be priced at 10x.
So this is pretty devastating to my general workflows [1] right now, and poorly timed to boot, with no wind-down at all.
It was clear (see the linked post from 70 days ago) that the current offering was unsustainable, but I'm a bit taken aback at how sharp the clawback is.
Yes, Github's per-request pricing was insane; anyone suggesting using CC instead or asking if any other provider is as cheap just doesn't understand the insanity. They were clearly losing a lot of money on the people making good use of it.
I was actually hoping they would change it to something that more closely tracks their actual costs, so that they wouldn't have to rug-pull this badly. What was particularly bad about it was that sending prompts to agents while they were working (to give them corrections) cost extra, so I stopped doing that (initially OpenCode didn't trigger billing for that, until they became official).
Microslop, 'xcuse me, Microsoft is working hard to make github less and less appealing. It's a bit weird how an initially fairly good idea, over time becomes ... worse.
I guess it makes more sense for me to just get Claude Pro instead. I was using my Copilot license only because of Opus 4.6 access as all other models seemed crippled in comparison in Copilot; does not even make sense to upgrade to Pro+ which goes from $10/mo to $40/mo and only gives you access to a model that has 7x the rate - 5x the limit at 7x the rate for 4x the price does not seem appealing at all.
A test to see if they could get away with it. I think we're really in the thick of token rationing right now and the fallout is going to be funny to watch.
I wouldn't mind this change that much if opus-4.7 worked properly in copilot cli. It keeps stopping mid-thought or task and forces me to waste more prompts for no observable reason.
Looks like I'm ending my subscription, good (likely too good, no way my account was even remotely within profitable range) access to opus-4.6 was the only reason I used this at all.
Are you using it through regular Copilot (the 'local' agent type), or through the separate Claude agent type (which I believe you have to activate in your repository settings on GitHub)?
I had the exact same issues with the latter - randomly stops working, wipes chat history, just generally seems to be totally broken. But the former works totally fine and still lets you select sonnet/opus. My experience was before this recent 4.6 -> 4.7 change though.
Regular local agent. Seems like as soon as the context fills up (and it only has about 160k of context so that doesn't take much) it starts to fall to pieces. I even tried using opencode as a harness instead and it causes opus 4.7 to lose all memory every time I hit a compaction step.
Welp. I already added a $20 Claude Pro subscription to complement my $10 Github Copilot Pro subscription and $10 DuckDuckGo Plus. That was partly to show support for Anthropic after the OpenAI/DOD episode, but also because I've been using Opus 4.5 exclusively with Copilot and I figured I should try Claude Code eventually.
Now it's going to cost me an upgrade to $39 Github Pro+ to keep using Opus, and even then it's with much higher multipliers. I don't fully understand the extent to which this reflects actual costs for Opus versus Microsoft leveraging network effects to discourage the usage of a competitor.
I didn't really want to wander outside of VSCode just yet because I was happy with VSCode/Copilot/Opus-4.5 and I don't want to spend all my time experimenting when stuff is changing so fast. But I guess my hand has been forced.
You can also use Claude Code in a VS Code terminal window, which I much prefer for reasons I can’t quite put my finger on. Granted, I’ve moved to Zed in the past few months. I’m doing the same there.
Oh nuts, I forgot I was on Copilot. I used to use it for auto-complete and so on. I haven't used it in over a year and I'm still paying for it. If you're like me you'll find it here: https://github.com/settings/billing/licensing
And you can then cancel it. I have no idea what a premium request is and it's all just too complicated to use.
I cannot describe how disappointing it is to be switching to this insane time-window-based pricing. I absolutely abhor that I'll be subjected to 5-hour chunks of time where I'll be limited at some point in that window and be told I'll have to wait. And then there is a weekly limit.
That's not how my creative energy works. I have time that I want to solve problems, and I want to solve them. I don't want a cooldown timer applied to solving a problem. Not to mention the anxiety of realizing that while I sleep I could have burned tokens in that time.
I was incredibly disappointed when I sat down to my hobbyist programming time and realized Copilot had suddenly and dramatically changed in a way that is incredibly disheartening.
Meter my token usage, DON'T tell me when I can use them! ARGH.
> I was incredibly disappointed when I sat down to my hobbyist programming time and realized Copilot had suddenly and dramatically changed in a way that is incredibly disheartening.
Guess it’s time to rediscover the lost art of programming without an LLM.
I'm not surprised at all. This was one of the most generous plans out there, offering frankly ridiculous pricing based on a single prompt regardless of turns taken or tokens used. I was subscribed for a month around Christmas and got a shitload of tokens out of Opus 4.5 for a measly $10.
It's quite cheap: $10 for 1000 premium requests (1 request is something like a plan mode + implementation + tests + commit & push). The only problem is I have already used it all, but I was billed on the 3rd day of the month and have to wait till next month to use it again.
I cannot understand people still using Anthropic models on Copilot when GPT 5.4 is better and 3 to 7 times cheaper. Anthropic quite obviously raised their licensing to the max. You can probably still get a taste of it for a few minutes before being limited on their own subscription.
Simple, for what I'm doing Opus 4.6 (and before that, Opus 4.5) are just much better at following my instructions and achieve consistently better results.
From what I've been gathering, this split in success seems to depend a lot on the types of tasks, the domains / programming languages / frameworks used, and style of prompting.
I couldn't get 5.2 to follow instructions for the life of me, even when repeating multiple times to do / not do something. 5.3-codex was an improvement, and 5.4, while _usually_ decent, still regularly forgets, goes on unnecessary tangents, or otherwise repeatedly stops just to ask for continuation.
Sure, I'm paying 3x more per request, but I'm also doing 5x fewer requests.
Or well, used to. Still bummed about them dropping 4.6.
My experience is similar. Opus, especially Opus 4.5, understands my intentions better even when poorly phrased, and more consistently follows my instructions to do only what's necessary and no more.
As far as I can tell, the distinctive feature of my workflow is that I'm giving it small, contained single-commit-sized tasks and limited context. For instance: "For all controller `output()` functions under `Controller/Edit/` and `Controller/Report/`, ensure that they check `Auth::userCanManage`." Others seem to be taking bigger swings.
Anecdotally, I experimented with GPT-5.4 xhigh, and something about the code it wrote just didn't vibe with me.
It felt like I constantly had to go back and either fix things or I just didn't like the results. Like the forward momentum/progress on my projects overall wasn't there over time. Even though it's cheaper, it just doesn't feel worth it, to the point that I start to feel negative emotions.
I'm actually a bit worried that I've somehow come to feel more negative emotions with agentic coding. Quicker to feel frustrated somehow when things aren't working.
GPT's output is awful and it gets even more awful when you try to work out a solution "together" because it shits out 10 paragraphs with 20 options instead of focusing and getting things done.
Same for me. I would still be happy with my Copilot Pro subscription if I could use 5.4 with 1x coefficient (and 5.4 mini with 0.33x).
But seeing that they have stopped taking new subscriptions, and the rumours/evidence that they plan to increase the coefficients of the remaining models, it seems they want us to see "the writing on the wall"
> it’s now common for a handful of requests to incur costs that exceed the plan price
Pricing per turn/request was/is an idiotic model, and I'm glad they are paying for it. It just forces you into a workflow built around the business model. Heck, the best laugh would be to create a plan outside vscode with interactive CC/Codex, then copy-paste it into GH Copilot to do a single-session burn of a few M tokens.
So far they did not change it, and none of this applies to business and enterprise accounts. My guess is that it can still be viable there, as most businesses will have plenty of minimally used licenses with just a few power users abusing the request model.
Damn, it was good while it lasted, but it was obvious the previous per-request pricing scheme was misaligned with their actual costs. MS's product people must be seriously detached from their technical and financial people for it to have even lasted this long (or they're willing to burn a lot of money for the typical "make customers happy and then rug pull" cycle, but hey, Hanlon's razor).
Given that they've already silently had session + weekly rate limits for at least the past couple of weeks (I've hit them), I wonder if this change just makes them visible to the user, or if it actually tightens them too.
If it's the former then I can say they're still significantly more generous than claude pro (on the pro+ plan), so this might be okay. If it's the latter, and the new limits are similar to claude pro then copilot is going to be significantly less useful to me.
Demand is increasing exponentially but supply is increasing linearly. NIMBYs inventing lies about datacenters sucking up water or making noise are going to drive prices through the roof.
Noise in residential areas is already a huge problem, and data centers do in fact make it worse. They may be able to carve out exceptions in laws or push non-enforcement, but none of this changes the impact on human health.
This is some shit, coming with 0 notice at the start of a work week. My exposure to Claude is only via Copilot which has worked very well for my purposes. I didn't have to learn a ton for it to just start working. I guess I'll look into other options now as I really want to continue using Opus, but don't have a need to 4x my spend on Copilot quite yet.
I'm a paying customer and I did not receive ANY communication about this. Was using Opus this afternoon and then it disappeared.
Microsoft really can't stop being Microsoft. I don't dispute the need to charge more for those models, but there is a basically decent way to do things, and as usual the Big Tech fuckery and complete lack of morals make them do this in a way that generates total mistrust where there could have been just annoyance.
I'll see how Sonnet handles the most difficult problems, but I foresee a subscription cancellation soon.
It's quite telling, they've paused new signups because Microsoft doesn't have enough compute, and they moved Opus to only being accessible on a higher tier because Anthropic doesn't have enough compute either.
They're all operating at a loss, enshittification is coming for us all.