
1160 points by vxvxvx | 2 comments

Earlier thread: Disrupting the first reported AI-orchestrated cyber espionage campaign - https://news.ycombinator.com/item?id=45918638 - Nov 2025 (281 comments)
jnwatson No.45946556
There's a big knowledge gap between infosec researchers and ML security researchers. Anthropic has plenty of column B but not enough column A.

This was discussed in some detail in the recently published Attacker Moves Second paper*. ML researchers like using Attack Success Rate (ASR) as a metric for model resistance to attack, while for infosec, any successful attack (ASR > 0) is considered significant. ML researchers generally use a static set of tests, while infosec researchers assume an adaptive, resourceful attacker.

https://arxiv.org/abs/2510.09023
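
To make the distinction concrete, here is a minimal sketch (Python, with made-up names and an assumed per-attempt failure rate) of the two evaluation habits: an aggregate ASR over a fixed prompt suite versus an adaptive attacker who only needs one attempt to get through:

    import random

    def model_resists(prompt: str) -> bool:
        # Stand-in for a real guardrail/classifier check;
        # assumes a 2% per-attempt failure rate for illustration.
        return random.random() > 0.02

    STATIC_SUITE = [f"jailbreak-template-{i}" for i in range(200)]

    # ML-style metric: average success over a static test set.
    asr = sum(not model_resists(p) for p in STATIC_SUITE) / len(STATIC_SUITE)
    print(f"ASR on static suite: {asr:.1%}")  # a "low" number looks reassuring

    # Infosec-style view: an adaptive attacker keeps mutating inputs until one lands.
    def adaptive_attack(budget: int = 1000) -> bool:
        for attempt in range(budget):
            if not model_resists(f"mutated-payload-{attempt}"):
                return True  # one success is already a finding
        return False

    print("Adaptive attacker succeeded:", adaptive_attack())

With a 2% per-attempt failure rate the static suite reports a comfortingly small ASR, while the adaptive loop succeeds on essentially every run, which is the gap the paper is pointing at.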

replies(1): >>45946664 #
1. sim7c00 No.45946664
ML researchers are not sec researchers; they need to stick to their own game. Companies need both camps for a good holistic view of the problem. ML is the blue team, the sec researchers are the red.
replies(1): >>45948707 #
2. saagarjha No.45948707
Plenty of security researchers are blue team.