
1160 points by vxvxvx | 1 comment

Earlier thread: Disrupting the first reported AI-orchestrated cyber espionage campaign - https://news.ycombinator.com/item?id=45918638 - Nov 2025 (281 comments)
KaiserPro ◴[] No.45944641[source]
When I worked as an SRE/sysadmin at a FAANG with a "world leading" AI lab (now run by a teenage data labeller), I was asked to use a modified version of a foundation model that had been steered towards infosec stuff.

We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

It helped a little, but not all that much.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API account is tied to a bank account, and vibe coding a command and control system on a very public platform seems like a bad choice.

replies(12): >>45944770 #>>45944798 #>>45945052 #>>45945088 #>>45945276 #>>45948858 #>>45949298 #>>45949721 #>>45950366 #>>45951433 #>>45958070 #>>45961167 #
jgalt212 ◴[] No.45945088[source]
> now run by a teenage data labeller

sick burn

replies(2): >>45945472 #>>45946517 #
y-curious ◴[] No.45945472[source]
I don’t know anything about him, but if he is running a department at Meta, he is at the very least a political genius and a teenage data labeller.
replies(4): >>45945566 #>>45946125 #>>45946592 #>>45949334 #
tomrod ◴[] No.45945566[source]
It's a simple heuristic that will save a lot of time: something that seems too good to be true usually is.