
46 points by petethomas | 1 comment
chasd00 No.44397393
I'm not on the LLM hype train, but these kinds of articles are pretty low quality. It boils down to "let's figure out a way to get this chatbot to say something crazy and then write an article about it, because it will get page views." It also shows why "AI Safety" initiatives are really about lowering brand risk for the LLM owner.

/wasn't able to read the whole article as I don't have a WSJ subscription

1. mock-possum No.44397519
Nothing surprising here. "Let's figure out a way to get this human to say something crazy" is pretty standard bottom-of-the-barrel content too; people wallow in it like pigs in shit.