Edit: Success is not the absence of vulnerability, but introduction, detection, and response trends.
(GitHub Enterprise comes out of my budget and I am responsible for appsec training and code IR; thoughts and opinions always my own)
So my opinion is that anybody who writes code that is used by others should feel a certain danger-tingle whenever a secret or real user data is put literally anywhere.
For beginners, that just means: when handling secrets, instead of pressing on, pause and make an exhaustive list of who would have read/write access to the secret, under which conditions, and whether that is intended. With anything world-readable, like a public repo, this is especially crucial.
Other such places may or may not be your shell's history, the contents of your environment variables, whatever you copy-paste into the browser search bar/application/LLM/chat/comment section of your choice, etc.
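One cheap habit that helps with the shell-history case: prompt for the secret at runtime instead of passing it as a command-line argument. A minimal Python sketch (the prompt and variable names are just illustrative):

    import getpass

    # Prompting keeps the token out of shell history and out of the
    # process list (argv is visible to other users via `ps`).
    token = getpass.getpass("API token: ")  # input is not echoed

    # ...use token, but never print or log it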
If you absolutely have to store secrets/private user data in files within a repo, it is a good idea to add the following to your .gitignore:
*.private
*.private.*
Then every such file has to have ".private." within the filename (e.g. credentials.private.json). This not only marks it for yourself, it also prevents you from mixing up critical with mundane configuration. But better is to spend a day thinking about where secrets/user data really should be stored and how to manage them properly.
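For illustration, loading such a file could look like this, assuming a credentials.private.json with an "api_key" field (the key name is just an example):

    import json
    from pathlib import Path

    # credentials.private.json matches the *.private.* rule above,
    # so it cannot be committed by accident.
    with Path("credentials.private.json").open() as f:
        creds = json.load(f)

    api_key = creds["api_key"]  # never log or print this value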
¹: a non-exhaustive list of other such mistakes: mistaking XOR for encryption, storing passwords in plaintext, using hardcoded credentials, relying on obscurity for security, sending data unencrypted over HTTP, not hashing passwords, using weak hash functions like MD5 or SHA-1, no input validation on stuff that goes into your database, trusting user input blindly, buffer overflows due to unchecked input, lack of access control, no user authentication, using default admin credentials, running all code as administrator/root without dropping privileges, relying on client-side validation for security, using self-rolled cryptographic algorithms, mixing authentication and authorization logic, no session expiration or timeout, predictable session IDs, no patch management or updates, wide-open network shares, exposing internal services to the internet, trusting data from cookies or query strings without verification, etc.
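To pick one item off that list: hashing passwords properly needs no third-party code at all. A minimal sketch using Python's built-in hashlib.scrypt, a memory-hard KDF (the cost parameters n=2**14, r=8, p=1 are commonly cited defaults; tune them for your hardware):

    import hashlib
    import hmac
    import secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt per password defeats rainbow tables.
        salt = secrets.token_bytes(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        # Constant-time comparison avoids leaking where digests differ.
        return hmac.compare_digest(candidate, digest)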
Having your CI/CD pipeline and your git repository service be so tightly bound creates security implications that do not need to exist.
Further, half the point of physical security is tamper evidence, something entirely lost here.
You mean not finding the vulnerability in the first place?
This would allow an attacker to:
- Compromise intellectual property by exfiltrating the source code of all private repositories using CodeQL.
- Steal credentials within GitHub Actions secrets of any workflow job using CodeQL, and leverage those secrets to execute further supply chain attacks.
- Execute code on internal infrastructure running CodeQL workflows.
- Compromise GitHub Actions secrets of any workflow using the GitHub Actions Cache within a repo that uses CodeQL.
>> Success is not the absence of vulnerability, but introduction, detection, and response trends.
This isn’t a philosophy, it’s PR spin to reframe failure as progress...
As a customer, I’m not going to lose sleep over it. I’m going to document it for any audits or other governance processes and carry on. I operate within a "commercially reasonable" context for this work. Security is just very hard in a Sisyphean sort of way: we cannot not do it, but we also cannot be perfect, so there is always going to be vigorous debate over what "enough" is.