So they had a security-critical header whose fields are set by their internal authentication service.
And that same field can also contain arbitrary strings passed by the end user with git push -o.
I know it's easy to say after the fact but still, wtf
They hint at their AI-augmented reversing methodology, which demonstrates one of the core strengths of current LLM agents. These models, trained extensively on code, can immensely speed up the process of understanding complex system internals.
Security research historically has two difficult components that build on one another:
1. Understanding complex system internals: uncovering the inner workings hidden by abstractions or interfaces
2. Finding vulnerabilities in these uncovered mechanisms
Sometimes both steps are equally hard. But often, finding the vulnerability is trivial once the real mechanisms are uncovered rather than merely assumed.
CVE-2026-3854 is a case where the vulnerability is not plainly obvious after understanding the internals. Still, I am confident that this command injection would have been found quickly had it been exposed to a more traditional or accessible attack surface.
Yep, there was a signal that these models could help reverse engineer C++, as they could have been good at helping mass-port C++ to plain and simple C.
But recently this signal got somewhat scrambled, or sabotaged by C++ fanboys (those coding AIs would help get rid of the dev/vendor lock-in built on C++ syntax complexity).
Anyone in here work at Wiz? Seems like they do pretty good work. Tool itself has survived extreme growth/feature bloat and still does pretty well. Security team has found some really cool stuff.
> When babeld forwards a push request, one of the internal requests includes push options in the X-Stat header. Git push options are arbitrary strings that users can pass with git push -o. They are a standard git protocol feature, intended for server-side hints. babeld encodes them as numbered fields - push_option_0, push_option_1, and so on - alongside a push_option_count.
> babeld copies git push option values directly into the X-Stat header - without sanitizing semicolons. Since ; is the X-Stat field delimiter, any semicolon in a push option value breaks out of its designated field and creates new, attacker-controlled fields.
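To make the failure mode concrete, here's a minimal sketch of that encoding in Python. The push_option_* field names and the ';' delimiter come from the article; the encoding function is my guess at babeld's behavior, not its actual code, and "internal_field" is a made-up name for whatever downstream field an attacker would target:

    def encode_x_stat(fields):
        # Naive encoding: join key=value pairs with ';', copying
        # push option values in verbatim with no escaping.
        return ";".join("%s=%s" % (k, v) for k, v in fields.items())

    # Benign push: git push -o ci.skip
    print(encode_x_stat({"push_option_count": "1",
                         "push_option_0": "ci.skip"}))
    # -> push_option_count=1;push_option_0=ci.skip

    # Malicious push option carrying the delimiter:
    #   git push -o 'x;internal_field=attacker_value'
    print(encode_x_stat({"push_option_count": "1",
                         "push_option_0": "x;internal_field=attacker_value"}))
    # -> push_option_count=1;push_option_0=x;internal_field=attacker_value
    # A parser that splits on ';' now sees an extra, attacker-controlled
    # field ("internal_field" is hypothetical). Classic delimiter
    # injection: nothing escapes the delimiter, so the value escapes
    # its field.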
They managed to literally do the simplest possible thing wrong. The fruit was hanging so low it might have been underground.
Why do they need to stir up needless fear in their X.com post with words like "BREAKING", "unauthorized access", or "millions of repositories" about a vulnerability that they caught before it was exploited?
> GitHub Enterprise Server customers should upgrade immediately - at the time of this writing, our data indicates that 88% of instances are still vulnerable
GHES is essentially unmaintained (perhaps “on life support” would be more charitable since they are certainly accepting payment for it) and has been so for about a decade. It requires a multi-hour downtime to apply even a patch-level release. They do not have any supported mechanism for HA upgrades. So even the most conscientious GHES customers lag the latest version because they can’t afford the downtime.
They are constantly telling all their GHES customers who complain about the severe flaws with the self-hosted appliance product to move to GitHub Enterprise Cloud, which is just regular GitHub.com, but who in their right mind would make that move nowadays??? At least GHES stays up during the daily github.com outages.
Until GHES can do zero-downtime upgrades nothing will get better. Not on their roadmap because as far as I’m aware the GHES team doesn’t actually exist or is entirely focused on KLTO. It’s a dead product that they wish didn’t exist.
It sure isn’t! GitHub Enterprise Cloud is simply an enterprise plan on the regular multitenant github.com. Your repositories are on disk right next to everyone else that uses github.com. There is no segregated storage or compute.
I wish they had a plan to literally host GHES for you because then more people in the company would be forced to reckon with how terrible GHES is from an operational perspective. It is stuck ca. 15-20 years ago conceptually.
> X-Stat header that controls whether the server operates in enterprise mode.
Perhaps this header mentioned in the article is related; maybe that's the toggle for enterprise mode? Seems there are at least traces of an "enterprise mode" on the normal github servers.
I assume a fair amount of these on-prem customers restrict access to their GHES instance to be behind corporate VPN or something similar and are planning a date to upgrade their instance that won't affect operations.
Any public instance should update immediately though, it's not very hard to put together how to repro the vulnerability on your own from what they provide in the article and the fact that GitHub Enterprise source is publicly available.
I suppose so. The company invested pretty heavily in security tooling, though I think it wouldn't have been hard to do something to bypass the security for internal servers.
If you're in the enterprise, you can update something outside of the normal schedule and be guaranteed to blow up everything (and be blamed), or you can stick with the schedule and hope for the best.
I guess I would say you're fortunate to have not worked in a "we cannot use github.com because we take security very seriously" environment. Because that always tells me you'll be running an on-prem product that might get updated once a year.
On-prem beats the heck out of GitHub post-Microsoft though... At least you know how to get it working again when someone breaks it. These days with GitHub you expect a weekly 500, a rainbow unicorn error, build failures due to unavailability errors, etc. Last I checked the third-party status trackers, GitHub's services were barely pushing one 9 of reliability.
Question is how fragile the upgrade process is in large installations. In other enterprise software that shuffles around large amounts of data, I've seen the smallest things break the install and leave the ops team rolling back. It was like that with SharePoint in the past: you were rolling dice when upgrading it.
The GitHub blog had an article saying that all patches must pass the github.com tests before merge, but failing GitHub Enterprise tests have a three-day window to be rectified.
A "reasonable" answer is probably a primary self-hosted Forgejo instance as the canonical forge, while using GitHub as a mirror solely to take advantage of its free CI, while that lasts, while hosting secrets with a dedicated secret-hosting provider (I don't know what the provider du jour for this is these days).
If the primary forge's only job is to host the actual Git infrastructure (the code, the MRs, the issues, maybe a wiki), it's a lot simpler than GitHub, and probably more within the scope of what people can reasonably administer themselves.
I hosted the first "java.apache.org". I was an early employee at CollabNet and was in on the first discussions around starting Subversion. I worked on Cloud Foundry.
This stuff isn't easy and I'm more than happy letting someone else do it at the expense of some downtime.
Will I have to patch machines, keep packages updated, deal with SSL certs, maintain action runner infra, deal with billing for the machines, add monitoring, alerts, logging, etc.?
No, I don't want to be in the business of running my own Github clone. That's what I pay Github for.
Why do you pay salary to employees to buy food when you can just run a farm next to the office and save money by operating the farm and giving the employees food directly? You'd save money by not having to pay as high of salaries, and farms don't even need 24/7 devops teams.
Don't you think the farm example was a bit too extreme to make sense? A tech company probably does not have expertise in farming, but DevOps is something they already know how to do and can easily manage in-house. Also, how fast do you think farms produce food, such that you could drip-feed it to employees constantly?
> solely to take advantage of its free CI, while that lasts
Eh, if you want to be able to continue working, deploying, and whatnot as normal during weekdays, I'd suggest also moving to Forgejo Actions if you're moving anyway. It's not 100% compatible, but more or less the same, and paying the same for dedicated hardware you'd get way faster runners.
For companies with resources for infrastructure, sure.
For OSS, the unlimited free minutes of multiplatform CI offered by GitHub are literally impossible to replace. Maintaining runners yourself to do the same things would be somewhere between a part- and full-time job.
"Codeberg is a non-profit, community-led effort that provides services to free and open-source projects, such as Git hosting (using Forgejo), Pages, CI/CD and a Weblate instance."
Never say impossible.
Github is still "new" to a lot of us. OSS existed well before it, and will continue to exist well after.
If Codeberg starts offering Mac and Windows runners alongside their Linux ones for free (or at an achievable price point) for a modest OSS project I'll certainly look at it very closely. If all I needed was a Linux runner, I'd probably be on there already.
And yes, if we make OSS just about hosting the code, things are much simpler. If you're a piece of desktop software though, and you have users, they'll typically (and reasonably) want auditable signed binaries on all the platforms you support, which requires multiplatform CI.
We moved from GitHub to a self-hosted Forgejo instance about 6 months ago; it works like a charm. Still can't believe how snappy Forgejo is / how laggy GitHub has become.
I am personally now drawing a clear delineation between projects for my internal consumption (e.g. ansible scripts) and projects that have potential use for the general populace. For the former, I now host a private Forgejo instance. For the latter, I'll put it on GitHub but mirror it to my Forgejo instance.
I was pleasantly shocked that Forgejo is literally a single binary with a relatively easy config. All my internal services reference my Forgejo instance so, if I need to bail on GitHub, it's low friction for me.
The all-in-one Docker image and a couple of GitLab runners are all small to medium-sized teams need. (Don't overcomplicate it with the Kubernetes version unless you really need it.)
If you could only choose from GitHub, GitLab, and Atlassian, then I suppose... But really, anything newer that stays in existence has to be focused on quality from early enough on to avoid being defined by path-dependence problems and bad choices like those 3.
I was impressed enough by AI finding vulnerabilities in source code, but doing it in binary executables is just amazing. This has so much potential, good and bad.
And yet another lesson to not treat data as instructions. Sanitize all user input!
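The boring, standard fixes both work at the trust boundary, before user data enters the structured format. A quick sketch of the two usual options (my own illustration, not GitHub's actual patch):

    def encode_field_strict(key, value):
        # Option 1: reject values containing reserved characters.
        if ";" in value or "=" in value:
            raise ValueError("push option contains reserved characters")
        return "%s=%s" % (key, value)

    def encode_field_escaped(key, value):
        # Option 2: percent-escape reserved characters so the value
        # stays inside one field ('%' first, so escapes round-trip).
        safe = value.replace("%", "%25").replace(";", "%3B").replace("=", "%3D")
        return "%s=%s" % (key, safe)

    print(encode_field_escaped("push_option_0", "x;evil=1"))
    # -> push_option_0=x%3Bevil%3D1  -- the ';' can no longer split fields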
Transformers were literally designed for translation.
As we have known for a while, they ended up being really good at translating source to source or text to source. It shouldn't be too surprising that they are also really good at understanding asm.
Doesn't make it any less impressive, but maybe less surprising.
My read is that this vulnerability is exploitable by an anonymous user. They absolutely have HTTP/git-protocol logs that would indicate whether this was exploited, but if it was, they won't have logging about what actually got accessed and who did it, since the exploit was capable of standalone execution on the git servers, which would by definition be capable of evading any logging.
It's good to add information about what the vulnerability actually was, but please don't do it in the key of putdown. We're trying for something else here.