
1160 points | vxvxvx | 1 comment

Earlier thread: Disrupting the first reported AI-orchestrated cyber espionage campaign - https://news.ycombinator.com/item?id=45918638 - Nov 2025 (281 comments)
padolsey ◴[] No.45944639[source]
> PoC || GTFO

I agree so much with this. And I am so sick of AI labs, who genuinely do have access to some really great engineers, putting stuff out that just doesn't pass the smell test. GPT-5's system card was pathetic: big talk of Microsoft doing red-teaming in ill-specified ways, entirely unreproducible. All the labs are "pro-research", but again and again they release whitepapers and pump headlines without producing the code and data alongside their claims. This just feeds the shill cycle of journalists doing 'research', finding 'shocking thing AI told me today', and somehow being immune to the normal burden of proof.

replies(2): >>45944810 #>>45944843 #
mlinhares ◴[] No.45944810[source]
They're gonna say that if they explain how it was done, bad people will find out how to use their models for more evil deeds. The perfect excuse.
replies(2): >>45944827 #>>45944846 #
stogot ◴[] No.45944846[source]
They can still provide indicators of compromise
replies(1): >>45946036 #
ACCount37 ◴[] No.45946036{3}[source]
What ARE the indicators of compromise?

It's not a piece of malware or an exploit. It's an AI hacker. It does the same things a human hacker would but faster.
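For illustration (not from the thread or the underlying report): if the attacker is an agent issuing ordinary-looking commands rather than dropping malware, the usable indicators are behavioral rather than artifact-based. Below is a minimal Python sketch of one such signal, flagging sessions whose event timing is too fast and too regular to be human; the function name and thresholds are hypothetical assumptions, not values from any vendor report.

    # Hypothetical behavioral indicator for an AI-driven intrusion.
    # There is no file hash or C2 domain to match on, so this flags
    # sessions whose inter-event timing is too quick and too uniform
    # for a human operator. Thresholds are illustrative only.
    from statistics import mean, stdev

    def looks_machine_driven(timestamps, min_events=20,
                             max_mean_gap=2.0, max_gap_stdev=0.5):
        """Return True if one session's event timing suggests automation.

        timestamps: sorted event times (seconds) for a single session.
        """
        if len(timestamps) < min_events:
            return False
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        # Humans pause to read output between steps; an agent fires the
        # next command almost immediately and with little variance.
        return mean(gaps) < max_mean_gap and stdev(gaps) < max_gap_stdev

A real detection would combine this with other signals (breadth of enumeration, tool-use patterns, time of day), but it shows what "indicators of compromise" could even mean when the adversary behaves like a fast human rather than a known piece of malware.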