159 points botanica_labs | 18 comments
mmsc ◴[] No.45670037[source]
>after having received a lukewarm and laconic response from the HackerOne triage team.

A slight digression, but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for... people who understand difficult problems and difficult technology. The runaround isn't worth the time for people who try to solve difficult problems, because they have better things to do.

At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology." It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.

replies(5): >>45670153 #>>45670225 #>>45670462 #>>45672569 #>>45672910 #
1. cedws ◴[] No.45670153[source]
IMO it's no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre. And that's assuming the company even has a responsible disclosure program; without one, you risk putting your ass on the line.

I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

replies(3): >>45670437 #>>45670670 #>>45671921 #
2. bri3d ◴[] No.45670437[source]
> We need Good Samaritan laws that legally protect and reward white hats.

What does this even mean? How is a government going to do a better job of valuing and scoring exploits than the existing market?

I'm genuinely curious how you suggest we achieve:

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.

And the industry and governments have tried punitive regulation: "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double-edged sword: it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."

replies(2): >>45670517 #>>45671615 #
3. jacquesm ◴[] No.45670517[source]
Legal protections have absolutely nothing to do with 'the existing market'.
replies(1): >>45670547 #
4. bri3d ◴[] No.45670547{3}[source]
Yes, and my question is both genuine and concrete:

What proposed regulation could address a current failure to value bugs in the existing market?

The parent post suggested regulation as a solution for:

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

I don't know how this would work and am interested in learning.

5. lenerdenator ◴[] No.45670670[source]
> IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre.

Show me the incentives, and I'll show you the outcomes.

We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that will make you reconsider security as a priority.

Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.

replies(2): >>45671704 #>>45672156 #
6. cedws ◴[] No.45671615[source]
For the protections part: it means creating a legal framework in which white hats can ethically test systems even when the company has no responsible disclosure program. The problem with responsible disclosure programs is that the companies with the worst security don't give a shit and won't have one. They may even threaten such Good Samaritans for reporting issues in good faith; there have been many such cases.

For the rewards part: again, the companies that don't give a shit won't incentivise white hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used to fund rewards.

This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.

replies(3): >>45671938 #>>45671968 #>>45672142 #
7. dpoloncsak ◴[] No.45671704[source]
I love this idea, but I feel like it just devolves into arguments over whether a specific exploit is or isn't technically a 0-day, so the company can or can't be held liable.
8. bongodongobob ◴[] No.45671921[source]
Companies get hacked because Bob in finance doesn't have MFA and got a phishing email. In my experience working for MSPs, it's always been phishing and social engineering. I have never seen a company compromised by some obscure bug in software. This may be different for super large organizations that are international targets, but for the average person or business, you're better off spending time just MFAing everything you can and using common sense.
replies(1): >>45672109 #
9. bri3d ◴[] No.45671938{3}[source]
Sure, I'm all for protection for white hats, although I don't think it's at all relevant here, and I don't see it as a particularly prominent practical problem in the modern day.

> If a company has a security hole that leads to disclosure of sensitive information, it should be fined

What's a "security hole"? How do you determine the fines? Where do you draw the line for burden of responsibility? If someone discovers a giant global issue in a common industry-standard library, like Heartbleed or the Log4j vulnerability, and uses it against you first, were you responsible for not discovering that vulnerability and mitigating it ahead of time? Why?

> such fines can be used for rewards.

So we're back to the award allocation problem.

> This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate.

Yes, if you can figure out how to determine the value of a vulnerability, the value of a breach, and the value of a reward.

replies(1): >>45672288 #
10. tptacek ◴[] No.45671968{3}[source]
None of this has anything to do with the story we're commenting on; this kind of vulnerability research has never been legally risky.
11. akerl_ ◴[] No.45672109[source]
Just to clarify: if Bob in Finance doesn't have phishing-resistant MFA, that's an organizational failure that's squarely homed in the IT and Infosec world.
replies(1): >>45672566 #
12. akerl_ ◴[] No.45672142{3}[source]
You're (thankfully) never going to get a legal framework that allows "white hats" to test another person's computer without their permission.

There's a reason Good Samaritan laws are built around rendering aid to injured humans: there is no equivalent if you go down the street popping people's car hoods to refill their windshield wiper fluid.

13. akerl_ ◴[] No.45672156[source]
Why?

Why is it inherently desirable that society penalize companies that get hacked, above and beyond people choosing not to use their services, selling off their shares, etc.?

replies(1): >>45673737 #
14. cedws ◴[] No.45672288{4}[source]
You have correctly identified there is more complexity to this than is addressable in a HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?

It's pretty clear whatever security 'strategy' we're using right now doesn't work. I'm subscribed to Troy Hunt's breach feed, and it's basically weekly now that another 10M or 100M records are leaked. It seems foolish to continue like this. If governments want to take threats seriously, a new strategy is needed, one that mobilises security experts and dishes out proper penalties.

replies(1): >>45672719 #
15. bongodongobob ◴[] No.45672566{3}[source]
Absolutely. It's extremely common with small and midsize businesses that don't have any IT on staff.
16. bri3d ◴[] No.45672719{5}[source]
> You have correctly identified there is more complexity to this than is addressable in a HN comment. Are you asking me to write the laws and design a government-operated pentesting platform right here?

My goal was to learn whether there was an insight beyond "we should take the thing that doesn't work and move it into the government where it can continue to not work," because I'd find that interesting.

17. lenerdenator ◴[] No.45673737{3}[source]
Because they were placed in a position of trust and failed. Typically, the failure stems from a lack of willingness to expend the resources necessary to prevent the failure.

It'd be one thing if these were isolated incidents, but they're not.

Furthermore, the methods you mention simply aren't effective. Our economy is now so consolidated that many markets have only a handful of participants offering goods or services, and those players often all have data and computer security issues. As for divestiture, most people don't own shares, and those who do typically don't know they own shares of a specific company. Most shareholders in the US are retirement or pension funds, run by people who would rather make it impossible for the average person to bring real consequences to their holdings for data breaches than have the company spend money on fixing the issues that allow the breaches in the first place. After all, it's "cheaper".

replies(1): >>45674122 #
18. akerl_ ◴[] No.45674122{4}[source]
I feel like this kind of justification comes up every time this topic is on HN: that the reason companies aren't being organically penalized for bad IT/infosec/privacy behavior is because the average person doesn't have leverage or alternatives.

It's never made sense to me.

I can see that being true in specific instances: many people in the US don't have great mobility for residential ISPs, or utility companies. And there's large network effects for social media platforms. But if any significant plurality of users cared about the impact of service breaches, or bad privacy policies, surely we'd see the impact somewhere in the market? We do in some related areas: Apple puts a ton of money into marketing about keeping people's data and messages private. WhatsApp does the same. But there are so many companies out there, lots of them have garbage security practices, lots of them get compromised, and I'm struggling to remember any example of a consumer company that had a breach and saw any significant impact.

To pick an example: in 2014 Home Depot had a breach of payment data. Basically everywhere that has a Home Depot also has a Lowe's and other options that sell the same stuff. In most places, if you're pissed at Home Depot for losing your card information, you can literally drive across the street to Lowe's. But it doesn't seem like that happened.

Is it possible that, outside of tech circles where we care about The Principle Of The Thing, the market is actually correct in its assessment of how much value the average consumer business gets out of putting more money into security?