443 points by jaredwiener | 1 comment
slg ◴[] No.45032427[source]
It says a lot about HN that a story like this has so much resistance getting any real traction here.
replies(4): >>45032449 #>>45032468 #>>45032863 #>>45037578 #
dkiebd ◴[] No.45032863[source]
This sucks but the only solution is to make companies censor the models, which is a solution we all hate, so there’s that.
replies(2): >>45033001 #>>45036127 #
slg ◴[] No.45033001[source]
Thank you, “we just have to accept that these systems will occasionally kill children” is a perfect example of the type of mindset I was criticizing.
replies(8): >>45033028 #>>45033147 #>>45033587 #>>45035669 #>>45036009 #>>45036650 #>>45037010 #>>45047038 #
jackjeff ◴[] No.45036009[source]
Don’t cars and ropes and drills occasionally kill people too? Society seems to have accepted that fact long ago.

Somehow we expect the digital world to be devoid of risks.

Cryptography that only the good guys can crack is another example of this mindset.

Now I’m not saying ClosedAI looks good on this: their safety layer clearly failed, and the sycophantic BS did not help.

But I reckon this kind of failure mode will always exist in LLMs. Society will have to learn this, just like we learned cars are dangerous.

replies(2): >>45041197 #>>45041558 #
kunley ◴[] No.45041558{3}[source]
Cars and ropes don't "talk" and don't give the impression of being a human being.