mmsc ◴[] No.45670037[source]
>after having received a lukewarm and laconic response from the HackerOne triage team.

A slight digression, but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for .. people who understand difficult problems and difficult technology. The runaround isn't worth the time for people who work on hard problems; they have better things to do.

At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.

replies(5): >>45670153 #>>45670225 #>>45670462 #>>45672569 #>>45672910 #
cedws ◴[] No.45670153[source]
IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre. And that’s assuming the company even has a responsible disclosure program; otherwise you're putting your ass on the line.

I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

replies(3): >>45670437 #>>45670670 #>>45671921 #
bri3d ◴[] No.45670437[source]
> We need Good Samaritan laws that legally protect and reward white hats.

What does this even mean? How is a government going to do a better job of valuing and scoring exploits than the existing market?

I'm genuinely curious about how you suggest we achieve

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.

And the industry and governments have tried punitive regulation: "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, since it increases pay for in-house security and makes work for consulting firms. The notion might be worth expanding in some areas, but just like financial regulation it is a double-edged sword: it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."

replies(2): >>45670517 #>>45671615 #
cedws ◴[] No.45671615[source]
For the protections part: it means creating a legal framework in which white hats can ethically test systems even when the company has no responsible disclosure program. The problem with responsible disclosure programs is that the companies with the worst security don't give a shit and won't have one. They may even threaten such Good Samaritans for reporting issues in good faith; there have been many such cases.

For the rewards part: again, the companies that don't give a shit won't incentivise white-hat pentesting. If a company has a security hole that leads to the disclosure of sensitive information, it should be fined, and those fines can be used to fund rewards.

This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.

replies(3): >>45671938 #>>45671968 #>>45672142 #
bri3d ◴[] No.45671938[source]
Sure, I'm all for protection for white hats, although I don't think this is at all relevant here, and I don't see it as a particularly prominent practical problem in the modern day.

> If a company has a security hole that leads to disclosure of sensitive information, it should be fined

What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry standard library, like Heartbleed, or the Log4J vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?

> such fines can be used for rewards.

So we're back to the reward allocation problem.

> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.

Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.

replies(1): >>45672288 #
cedws ◴[] No.45672288[source]
You have correctly identified that there is more complexity to this than is addressable in an HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?

It's pretty clear that whatever security 'strategy' we're using right now doesn't work. I'm subscribed to Troy Hunt's breach feed, and it's basically weekly now that another 10M or 100M records are leaked. It seems foolish to continue like this. If governments want to take threats seriously, a new strategy is needed, one that mobilises security experts and dishes out proper penalties.

replies(1): >>45672719 #
bri3d ◴[] No.45672719[source]
> You have correctly identified there is more complexity to this than is addressable in a HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?

My goal was to learn whether there was an insight beyond "we should take the thing that doesn't work and move it into the government, where it can continue to not work," because I'd find that interesting.