46 points petethomas | 1 comment | HN request time: 0s | source
chasd00 ◴[] No.44397393[source]
I'm not on the LLM hype train, but these kinds of articles are pretty low quality. It boils down to "let's figure out a way to get this chatbot to say something crazy, then make an article about it because it will get page views." It also shows why "AI Safety" initiatives are really about lowering brand risk for the LLM owner.

/wasn't able to read the whole article as I don't have a WSJ subscription

replies(5): >>44397440 #>>44397519 #>>44397588 #>>44397617 #>>44397631 #
ben_w ◴[] No.44397440[source]
> It also shows why "AI Safety" initiatives are really about lowering brand risk for the LLM owner.

"AI Safety" covers a lot of things.

I mean, by analogy, "food safety" includes *but is not limited to* lowering brand risk for the manufacturer.

And we also have demonstrations of LLMs trying to blackmail operators if they "think"* they're going to be shut down, not just stuff like this.

* scare quotes because I don't care about the argument over whether they're really thinking or not; see the Dijkstra quote about whether submarines swim.

replies(2): >>44397542 #>>44397561 #
scarface_74 ◴[] No.44397561[source]
But wait until the WSJ puts arsenic in previously safe food and writes about how the food you eat is unsafe.