In 2019 I saw a Fortune 500 tech company put in place their own internal vulnerability-scanner application, which included this feature for our enterprise GitHub repos. The tool was built and deployed on an old Linux Docker image that was never updated, so it was itself a target for the very attacks it was meant to prevent... they never vetted the random version they started with either. I guess one can still use a zip bomb, or even the xz backdoor for extra irony points, when attacking that system.
Anyway, the people signing the GitHub checks also get promoted by pretending to implement that feature internally.
For their fix, they disabled debug logs... but didn't answer whether they changed the temp token's permissions to something more appropriate for a code analysis engine.
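For reference, the per-workflow knob for that is the permissions block on the GITHUB_TOKEN. A minimal sketch for a CodeQL-style job, assuming the usual code scanning setup (verify the exact scopes your workflow actually needs):

    permissions:
      contents: read          # enough to check out the code
      security-events: write  # upload code scanning results
      actions: read           # only needed on private repositories

Once you set the block, anything not listed defaults to no access, which is the point.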
As a nit: RBAC is applied on top of an object-based permissions system rather than being one. Put simply, RBAC is a simplification of permission management in whatever underlying auth system you have.
The IAM admin persona is the one who gets a bunch of additional information. That's accessible through the AWS IAM policy builder, access logs, etc.
And no, it's not feasible to determine whether the initial caller is an appropriate IAM admin persona and vary the initial response accordingly.
https://docs.aws.amazon.com/cli/latest/reference/sts/decode-...
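For anyone who hasn't used it, the call looks roughly like this; the encoded blob comes out of the AccessDenied error, and the caller needs the sts:DecodeAuthorizationMessage permission:

    aws sts decode-authorization-message --encoded-message <blob-from-the-error>

The decoded JSON spells out which action and resource were denied, which is the information the non-admin caller doesn't get in the original response.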
...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.
Which is terrible, btw. You don't "technically" have to do that; you really cannot add roles to custom roles, you can only add permissions to custom roles. Which makes it really hard to maintain the correctness of custom roles, since those permissions can and do change.
> ...If only we could do something like: dry run and surface all the required permissions, then grant them in one fell (granular) sweep.
GCP even has something like that, but I honestly think that standard roles are usually fine. Sometimes making things too fine-grained is not good either. Semantics matter.
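If it helps anyone, the closest concrete thing I know of on the GCP side is Policy Troubleshooter, which at least answers "does this principal have this permission on this resource, and via which binding". A rough sketch, assuming current gcloud syntax and with made-up project and principal names:

    gcloud policy-troubleshoot iam \
        //cloudresourcemanager.googleapis.com/projects/my-project \
        --principal-email=ci-runner@my-project.iam.gserviceaccount.com \
        --permission=storage.objects.get

It doesn't dry-run a whole workload for you, but it beats guessing from role descriptions.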
The problem with that is that it can be difficult to know what you need, and it may be impossible to simulate in any practical sense. Like, sure, I can stand up a pair of test systems and fabricate every scenario I can possibly imagine, but my employer does want me to do other things this month. And what happens when one of the systems involves a third party?
Really, the need is to be able to provision access after the relationship is established. It's weird that you need a completely new secret to change access. Imagine if this were Linux and, in order to access a directory, you had to provision a new user to do it. How narrow do you really think user access would be in practical terms then?
Could you go into more detail? At a base-level interpretation, this is how it already works (you need a principal to provision access for...), but you presumably mean something more interesting?
It's not difficult, but it's a much bigger pain in the ass than just changing access or changing a role on a user.
(Which, OK, for an external-facing system is fine.)
I'd bet the huge prevalence of "system says no, and nothing tells you why" helps a lot with creating vulnerable systems.
Systems need a "let X person do Action" option instead of having people wade through 10 options like SystemAdminActionAllow that don't mean anything to an end user.
Historically the only choice was permissive by default, so this is unfortunately the setting used by older organizations and repos.
When a new repo is created, the default is inherited from the parent organization, so this insecure default tends to stick around if nobody bothers to change it. (There is no user-wide setting, so new repos owned by a user will use the restricted default. I believe newly created orgs use the better default.)
[0]: https://docs.github.com/en/actions/security-for-github-actio...
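If the org default is still permissive, you don't have to click through every repo; there's a REST endpoint for the default workflow permissions. A rough sketch via the gh CLI, with a made-up org/repo (the org-wide equivalent lives under /orgs/{org}/actions/permissions/workflow):

    gh api -X PUT repos/my-org/my-repo/actions/permissions/workflow \
        -f default_workflow_permissions=read \
        -F can_approve_pull_request_reviews=false

Individual workflows that genuinely need more can then request it explicitly with a permissions block.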
> Read and write permissions
> Workflows have read and write permissions in the repository for all scopes.
If you read this line of the documentation (https://docs.github.com/en/actions/security-for-github-actio...) you might think otherwise:
> If the default permissions for the GITHUB_TOKEN are restrictive, you may have to elevate the permissions to allow some actions and commands to run successfully.
But I can confirm that in our GitHub organization "Read and write permissions" was the default, and thus that line of documentation makes no sense.
For their quick fix, hopefully not for their final fix.
1: https://docs.github.com/en/actions/security-for-github-actio...
fun claims: https://github.com/github/actions-oidc-debugger#readme
Edit: Success is not the absence of vulnerability, but introduction, detection, and response trends.
(GitHub Enterprise comes out of my budget and I am responsible for appsec training and code IR; thoughts and opinions always my own.)
Having your CI/CD pipeline and your git repository service be so tightly bound creates security implications that do not need to exist.
Further, half the point of physical security is tamper evidence, something entirely lost here.
Don’t forget limitation of blast radius.
When shit hits the proverbial fan, it’s helpful to limit the size of the room.
You mean not finding the vulnerability in the first place?
This would allow an attacker to:
- Compromise intellectual property by exfiltrating the source code of all private repositories using CodeQL.
- Steal credentials within GitHub Actions secrets of any workflow job using CodeQL, and leverage those secrets to execute further supply chain attacks.
- Execute code on internal infrastructure running CodeQL workflows.
- Compromise GitHub Actions secrets of any workflow using the GitHub Actions Cache within a repo that uses CodeQL.
>> Success is not the absence of vulnerability, but introduction, detection, and response trends.
This isn’t a philosophy, it’s PR spin to reframe failure as progress...
As a customer, I'm not going to lose sleep over it. I'm going to document it for any audits or other governance processes and carry on. I operate within a "commercially reasonable" context for this work. Security is just very hard in a Sisyphean sort of way. We cannot not do it, but we also cannot be perfect, so there is always going to be vigorous debate over what "enough" is.
[1] _and failing_.
So my opinion is that anybody who writes code that is used by others should feel a certain danger-tingle whenever a secret or real user data is put literally anywhere.
To all beginners, that just means that when handling secrets, instead of pressing on, you should pause and make an exhaustive list of who would have read/write access to the secret, under which conditions, and whether that is intended. And with things that are world-readable, like a public repo, this is especially crucial.
Another one may or may not be your shell's history, the contents of your environment variables, whatever you copy-paste into the browser search bar/application/LLM/chat/comment section of your choice, etc.
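A quick way to see what is already sitting in those places, assuming bash and a pattern list you would obviously want to extend:

    grep -iE 'token|secret|passw|api[_-]?key' ~/.bash_history
    env | grep -iE 'token|secret|key'

If either of those prints something real, it is at least worth asking who else can read that file or that process environment.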
If you absolutely have to store secrets/private user data in files within a repo it is a good idea to add the following to your .gitignore:
*.private
*.private.*
And then every such file has to have ".private." within the filename (e.g. credentials.private.json); this not only marks it for yourself, it also prevents you from mixing up critical and mundane configuration.
But better is to spend a day thinking about where secrets/user data really should be stored and how to manage them properly.
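And if you do go the .gitignore route, a quick sanity check before committing is to ask git whether the pattern actually matches (the filename here is just the example from above):

    git check-ignore -v credentials.private.json

With -v it prints the .gitignore rule that matched; no output (and a non-zero exit) means the file would be committed like any other.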
¹: a non-exhaustive list of other such mistakes: mistaking XOR for encryption, storing passwords in plaintext, using hardcoded credentials, relying on obscurity for security, sending data unencrypted over HTTP, not hashing passwords, using weak hash functions like MD5 or SHA-1, no input validation of stuff that goes into your database, trusting user input blindly, buffer overflows due to unchecked input, lack of access control, no user authentication, using default admin credentials, running all code as administrator/root without dropping privileges, relying on client-side validation for security, using self-rolled cryptographic algorithms, mixing authentication and authorization logic, no session expiration or timeout, predictable session IDs, no patch management or updates, wide-open network shares, exposing internal services to the internet, trusting data from cookies or query strings without verification, etc.
I'd put "conflating input validation with escaping" on this list, and then the list fails the list because the list conflates input validation with escaping.
Luckily it was quickly remedied at least.