
114 points BenjaminN | 20 comments

Ahoy Hacker News! I'm Ben, founder of Riot (https://tryriot.com), a tool that sends phishing emails to your team to get them ready for real attacks. It's like a fire drill, but for cybersecurity.

Prior to Riot, I was the co-founder and CTO of a fintech company processing hundreds of millions of euros in transactions every year. We were under attack continuously. I ran an hour-long security training once a year, but I was always curious whether my team was really ready for an attack. In fact, it kept me up at night that we were spending a lot of money on protecting our app, but nothing on preparing the employees for social engineering.

So I started a side project at that previous company to test this out. On the first run, 9% of all the employees got scammed. I was pissed, but it convinced me we needed a better way to train employees for cybersecurity attacks. This is what grew into Riot.

For now we are only training for phishing, but our intention is to grow this into a tool that will continuously prepare your team for good practices (don't reuse passwords for example) and upcoming attacks (CEO fraud is next), in a smart way.

Your questions, feedback, and ideas are most welcome. Would love to hear your war stories on phishing scams, and how you train your teams!

1. jedberg ◴[] No.22676967[source]
> Would love to hear your war stories on phishing scams, and how you train your teams!

I was working on anti-phishing in 2003, before it had the name phishing. We were trying to teach our users not to fall for the scams.

It didn't work. People will fall for the same scam over and over.

The conclusion we came to was that the only solution to phishing was education, and that it was nearly impossible to get 100% coverage with education.

I wish you luck, but don't get discouraged if it doesn't work. We've been trying to educate people about phishing for 17+ years. :)

We shifted our focus to tracking the phishing sites and then tying that back to which user accounts were hacked, and disabling the hacked accounts and notifying the users before damage could be done.

PayPal actually holds the patent on what we built, along with a ton of other anti-phishing and phishing site tracking patents.

replies(5): >>22677184 #>>22677438 #>>22678979 #>>22679434 #>>22683925 #
2. BenjaminN ◴[] No.22677184[source]
I actually started coding in 2000 trying to hack my brother, so I can relate: phishing has been a never-ending story.

It's still worth trying though!

replies(1): >>22677256 #
3. jedberg ◴[] No.22677256[source]
Definitely worth trying! Just want to help you set expectations. :)
replies(2): >>22677581 #>>22677674 #
4. derision ◴[] No.22677438[source]
According to Wikipedia, the term phishing (or fishing) originated in the mid-1990s
replies(1): >>22677487 #
5. jedberg ◴[] No.22677487[source]
The term was coined in the 90s, but didn't get widespread usage until the mid-2000s. So yes, technically it had that name already, but no one used it then.
6. BenjaminN ◴[] No.22677581{3}[source]
Thanks!
7. johnwheeler ◴[] No.22677674{3}[source]
Did you try punitive disincentives?
replies(3): >>22678126 #>>22678316 #>>22680265 #
8. rwmurrayVT ◴[] No.22678126{4}[source]
The company sends out fake phishing emails. The same people keep falling for it... I suppose the outlined punishments are not strictly enforced.
9. brobinson ◴[] No.22678316{4}[source]
A better approach is to turn it into a game: reward those who report suspected phishing emails, security breaches, tailgating into secure areas, USB devices left around, etc. and have red teams doing this stuff periodically. Punitive measures don't really work. Friendly competition with rewards does work, though.
replies(1): >>22679960 #
10. nothrabannosir ◴[] No.22678979[source]
> The conclusion we came to was that the only solution to phishing was education, and education was also nearly impossible to get 100% coverage.

A friend works for a company that fires employees who fail three phishing tests.

It doesn’t solve the problem for those people, but it does work for that company. What has priority depends on your management style :)

replies(2): >>22679536 #>>22680243 #
11. swamifin ◴[] No.22679434[source]
If you wouldn't mind I'd really like to get your opinion on this proposed hardware solution I posted a while back:

https://news.ycombinator.com/item?id=22343786

replies(1): >>22680291 #
12. closeparen ◴[] No.22679536[source]
The only way to pass the phishing tests at my employer is to never click links in email. But then we also have a number of official systems sending emails with links in them (bug tracking, code review, Zoom invites, HR portal, etc).

The only way this kind of policy makes sense is if you have to actually give the phishing site some kind of credential in order to fail, vs. merely opening it.

If someone has a Chrome zero-day, we're done anyway. Just post it on HN.

replies(1): >>22680404 #
13. johnwheeler ◴[] No.22679960{5}[source]
that's a good point :D
14. jedberg ◴[] No.22680243[source]
Then I would have gotten fired. That's a ridiculous policy. Do they fire people for making mistakes too?

As a security engineer in a previous life, I always open the links in phishing emails (in an isolated and secure VM). I would fail the tests at work every time, but luckily the person in charge of them knew what I was doing and didn't care.

15. jedberg ◴[] No.22680265{4}[source]
In our case we were educating and protecting our customers. It's usually bad policy to take punitive measures against your customers. :)

In fact, the worst offenders were actually rewarded. They were the only ones who had two factor auth for their eBay accounts. Back then we didn't have soft tokens -- the only way to do 2 factor was to get a physical RSA token, which cost about $10 at the time. So only the "best" customers were worth the cost.

16. jedberg ◴[] No.22680291[source]
I'd have to think about it more, but it feels overly complex. You've essentially taken the idea of a DMZ network and put it in an individual computing device.

DMZ networks are hard to get right and hard to admin, and almost always end up getting some sort of exception for certain business needs.

Asking a user to admin that, or having no admin at all, feels almost impossible.

17. anitil ◴[] No.22680404{3}[source]
This is my major concern. Heaps of legitimate companies send emails with links to things like 'http://dh380.<third party server>.com'. We're being trained to accept this sort of silliness
replies(1): >>22681683 #
18. closeparen ◴[] No.22681683{4}[source]
I don't think it's realistic to live in constant fear of browser sandbox escapes, or to consider visiting an arbitrary URL "silliness." If your threat model includes people willing to burn Chrome 0-days on you, you need an air gap.

The much more relevant battle is preventing credential theft, which you can solve completely at the technical level with U2F. And if you can't, user education on "check the URL before typing your password" is a little more realistic than "don't open links from email ever."
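(To illustrate the origin-binding property being described: a toy sketch of why U2F/WebAuthn credentials can't be phished. Real U2F uses public-key signatures over a challenge plus the browser-reported origin; the HMAC and key names below are stand-ins to keep the sketch dependency-free, not the actual FIDO protocol.)

```python
import hashlib
import hmac

DEVICE_KEY = b"token-secret"  # stand-in for the security key's private key

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    """The authenticator mixes the requesting origin into every signature."""
    origin_hash = hashlib.sha256(origin.encode()).digest()
    return hmac.new(DEVICE_KEY, origin_hash + challenge, hashlib.sha256).digest()

def server_verify(expected_origin: str, challenge: bytes, assertion: bytes) -> bool:
    """The real site only accepts signatures bound to its own origin."""
    expected = sign_assertion(expected_origin, challenge)
    return hmac.compare_digest(expected, assertion)

challenge = b"server-random-nonce"

# Legitimate login: the browser reports the real origin, so it verifies.
good = sign_assertion("https://example.com", challenge)
assert server_verify("https://example.com", challenge, good)

# Phishing: the browser truthfully reports the look-alike origin, so the
# resulting assertion is useless at the real site -- even though the user
# "logged in" on the phishing page.
phished = sign_assertion("https://examp1e.com", challenge)
assert not server_verify("https://example.com", challenge, phished)
```

The point is that the user never has a secret they can be tricked into typing: the browser, not the user, decides which origin gets signed.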

replies(1): >>22699402 #
19. anitil ◴[] No.22699402{5}[source]
While I agree with you, I'm far less concerned for my family/friends/colleagues about a sandbox escape compared to accidentally putting information in to a malicious site
replies(1): >>22722210 #
20. closeparen ◴[] No.22722210{6}[source]
Yes, and "consider the URL and how you got there before typing in your password or credit card" is a lot more realistic than "don't click links." Still, clicking the link fails the phishing test all by itself.