
1160 points | vxvxvx | 2 comments

Earlier thread: Disrupting the first reported AI-orchestrated cyber espionage campaign - https://news.ycombinator.com/item?id=45918638 - Nov 2025 (281 comments)
KaiserPro ◴[] No.45944641[source]
When I worked at a FAANG with a "world leading" AI lab (now run by a teenage data labeller) as an SRE/sysadmin, I was asked to use a modified version of a foundation model that had been steered towards infosec work.

We were asked to try to persuade it to help us hack into a mock printer/dodgy Linux box.

It helped a little, but it wasn't all that effective.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API key is tied to a bank account, and vibe-coding a command-and-control system on a very public platform seems like a bad choice.
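
To make that concrete: every Claude API call authenticates with an account-tied key, so the traffic is attributable to whoever pays the bill. A minimal sketch in Python using the public anthropic SDK (the key and model id below are placeholders, not real values):

    # Every request carries an API key issued to a billed account,
    # so C2 traffic routed through the API is attributable.
    from anthropic import Anthropic

    client = Anthropic(api_key="sk-ant-...")  # placeholder key, issued per account

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=256,
        messages=[{"role": "user", "content": "hello"}],
    )
    print(response.content[0].text)

There's no anonymous path here; each call is logged against an account.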

replies(12): >>45944770 #>>45944798 #>>45945052 #>>45945088 #>>45945276 #>>45948858 #>>45949298 #>>45949721 #>>45950366 #>>45951433 #>>45958070 #>>45961167 #
1. cadamsdotcom ◴[] No.45949298[source]
I think the high-order bit here is that you were working with models from previous generations.

In other words, since the latest generation of models has greater capabilities, the story might be very different today.

replies(1): >>45949694 #
2. Tiberium ◴[] No.45949694[source]
Not sure why you're being downvoted; your observation is spot on. Newer models are indeed a lot better, and even at the time, that foundation model (even if fine-tuned) might've been worse than a commercial model from OpenAI/Anthropic.