
297 points cyberbender | 13 comments
junto ◴[] No.43527708[source]
They weren’t kidding on the response time. Very impressive from GitHub.
replies(1): >>43527835 #
1. belter ◴[] No.43527835[source]
Not very impressive to have an exposed public token with full write credentials...
replies(2): >>43527843 #>>43528012 #
2. 1a527dd5 ◴[] No.43527843[source]
Trying my best not to break the no snark rule [1], but I'm sure your code is 100% bulletproof against all current and future yet-to-be-invented attacks.

[1] _and failing_.

replies(2): >>43528097 #>>43528244 #
3. toomuchtodo ◴[] No.43528012[source]
Perfect security does not exist. Their security system (people, tech) operated as expected with an impressive response time. Room for improvement, certainly, but there always is.

Edit: Success is not the absence of vulnerability, but introduction, detection, and response trends.

(GitHub Enterprise comes out of my budget and I am responsible for appsec training and code IR; thoughts and opinions always my own)

replies(3): >>43528509 #>>43528711 #>>43528803 #
4. atoav ◴[] No.43528244[source]
Nobody is immune to mistakes, but a certain class of mistakes¹ should never, ever happen to anyone who should know better. And in my book that is anybody whose code is used by more people than themselves. I am not saying devs aren't allowed to make stupid mistakes, but if we let civil engineers have their bridges collapse with a "shit happens" attitude, trust in civil engineering would be questionable at best. So yes, shit happens to us devs too, but we should be ashamed if it was preventable by simply knowing the basics.

So my opinion is anybody who writes code that is used by others should feel a certain danger-tingle whenever a secret or real user data is put literally anywhere.

For beginners, that just means: when handling secrets, instead of pressing on, pause and make an exhaustive list of who would have read/write access to the secret, under which conditions, and whether that is intended. With anything world-readable, like a public repo, this is especially crucial.

Other danger zones include your shell's history, the contents of your environment variables, and whatever you copy-paste into the browser search bar/application/LLM/chat/comment section of your choice.

If you absolutely have to store secrets/private user data in files within a repo, it is a good idea to add the following to your .gitignore:

  *.private
  *.private.*
 
And then every such file has to have ".private." in its filename (e.g. credentials.private.json). This not only marks it for yourself, it also prevents you from mixing up critical and mundane configuration.

But it is better to spend a day thinking about where secrets/user data really should be stored and how to manage them properly.
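One common pattern, sketched minimally here and not tied to any particular secret manager, is to keep secrets out of repo files entirely and load them from the environment, failing loudly when one is missing (the name API_TOKEN is made up for illustration):

```python
import os

def require_secret(name: str) -> str:
    """Fetch a secret from the environment; fail loudly instead of
    silently falling back to an empty or hardcoded value."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required secret {name!r} is not set")
    return value

# Usage: export API_TOKEN=... in the deployment environment,
# never commit it to the repo:
#   token = require_secret("API_TOKEN")
```

The point of raising instead of defaulting is that a missing secret becomes an obvious deployment error rather than a quiet misconfiguration.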

¹: a non-exhaustive list of other such mistakes: mistaking XOR for encryption, storing passwords in plaintext, using hardcoded credentials, relying on obscurity for security, sending data unencrypted over HTTP, not hashing passwords, using weak hash functions like MD5 or SHA-1, no input validation of anything that goes into your database, trusting user input blindly, buffer overflows due to unchecked input, lack of access control, no user authentication, using default admin credentials, running all code as administrator/root without dropping privileges, relying on client-side validation for security, using self-rolled cryptographic algorithms, mixing authentication and authorization logic, no session expiration or timeout, predictable session IDs, no patch management or updates, wide-open network shares, exposing internal services to the internet, trusting data from cookies or query strings without verification, etc.
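To pick one item from the footnote: the gap between a weak hash and a proper password hash is small in code. A minimal sketch using only the Python standard library (the iteration count here is illustrative, not a tuned recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Salted, slow PBKDF2 instead of a bare MD5/SHA-1 digest."""
    salt = os.urandom(16)  # unique per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

The per-password salt defeats precomputed tables, the iteration count makes brute force expensive, and the constant-time compare avoids leaking where a guess diverges.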

replies(2): >>43528506 #>>43528516 #
5. koolba ◴[] No.43528509[source]
> Success is not the absence of vulnerability, but introduction, detection, and response trends.

Don’t forget limitation of blast radius.

When shit hits the proverbial fan, it’s helpful to limit the size of the room.

replies(1): >>43528615 #
6. immibis ◴[] No.43528516{3}[source]
> no input validation of anything that goes into your database

I'd put "conflating input validation with escaping" on this list, and then the list fails the list because the list conflates input validation with escaping.

replies(1): >>43528564 #
7. atoav ◴[] No.43528564{4}[source]
Good point; as I mentioned, this is a non-exhaustive list. Input validation and related topics like encodings, escaping, etc. could fill a list single-handedly.
8. toomuchtodo ◴[] No.43528615{3}[source]
Yeah, I agree: compartmentalization, least privilege, and sound architecture decisions all reduce the pain when you get popped. It's never if, just when.
9. timewizard ◴[] No.43528711[source]
> Perfect security does not exist.

Having your CI/CD pipeline and your git repository service be so tightly bound creates security implications that do not need to exist.

Further, half the point of physical security is tamper evidence, something entirely lost here.

replies(1): >>43529074 #
10. belter ◴[] No.43528803[source]
> Their security system (people, tech) operated as expected

You mean not finding the vulnerability in the first place?

This would have allowed an attacker to:

- Compromise intellectual property by exfiltrating the source code of all private repositories using CodeQL.

- Steal credentials within GitHub Actions secrets of any workflow job using CodeQL, and leverage those secrets to execute further supply chain attacks.

- Execute code on internal infrastructure running CodeQL workflows.

- Compromise GitHub Actions secrets of any workflow using the GitHub Actions Cache within a repo that uses CodeQL.

>> Success is not the absence of vulnerability, but introduction, detection, and response trends.

This isn’t a philosophy, it’s PR spin to reframe failure as progress...

replies(1): >>43528852 #
11. toomuchtodo ◴[] No.43528852{3}[source]
This is not great based on the potential exposure, but also not the end of the world. You’re free to your opinion of course wrt severity and impact, but folks aren’t going to leave GitHub over this in any material fashion imho. They had a failure, they will recover from it and move on. It’s certainly not PR from me, I don’t work for nor have any financial interest in GH or MS. I am a security person though, these are my opinions based on doing this for ~10 years (I am consistently exposed to security gore in my work), and we likely have an expectations disconnect.

As a customer, I’m not going to lose sleep over it. I’m going to document for any audits or other governance processes and carry on. I operate within "commercially reasonable" context for this work. Security is just very hard in a Sisyphus sort of way. We cannot not do it, but we also cannot be perfect, so there is always going to be vigorous debate over what enough is.

12. Aeolun ◴[] No.43529074{3}[source]
I find that this is always easy to say from the perspective of the security team. Sure, it would be more secure to develop like that, but also tons more painful for both dev and user.
replies(1): >>43532721 #
13. timewizard ◴[] No.43532721{4}[source]
I don't code anymore. I like making devs suffer. And this is all good for the user. ;)