
1160 points by vxvxvx | 1 comment

Earlier thread: Disrupting the first reported AI-orchestrated cyber espionage campaign - https://news.ycombinator.com/item?id=45918638 - Nov 2025 (281 comments)
KaiserPro ◴[] No.45944641[source]
When I worked as an SRE/sysadmin at a FAANG with a "world leading" AI lab (now run by a teenage data labeller), I was asked to use a modified version of a foundation model that had been steered towards infosec.

We were asked to try to persuade it to help us hack into a mock printer/dodgy Linux box.

It helped a little, but it wasn't all that useful.

And in terms of coordination, I can't see how it would help at all.

The same goes for Claude: your API key is tied to a bank account, and vibe-coding a command-and-control system on a very public platform seems like a bad choice.

1. ACCount37 ◴[] No.45944798[source]
As if that makes any difference to cybercriminals.

If they're not using stolen API credentials, then they're using stolen bank accounts to buy them.

Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard: I had to bypass the "safety" filters for a few things, and it took about an hour.