42 points | Liwink | 1 comment
DaveZale No.45088683
why is this stuff legal?

there should be a "black box" warning prominent on every chatbox message from AI, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."

replies(5): >>45088906 #>>45089060 #>>45089481 #>>45089639 #>>45092721 #
mpalmer No.45092721
Your solution for protecting mentally ill people is to expect them to be rational and correctly interpret/follow advice?

Just to be safe, we better start attaching these warnings to every social media client. Can't be too careful

replies(2): >>45092826 #>>45106559 #
1. DaveZale No.45106559
good idea! Something along the lines of "this so-called social medium will suck away your time and energy, while your true human interactions wither into borderline nothingness and you join the crowd of cynical, angry incels, or worse". Trying to have a sense of humor here.

I used AI today to find the location of an item in a large supermarket. It guessed and was wrong, but the first human I saw inside knew the exact location and the quantity remaining.

Why am I wasting my time? That should be a nagging question whenever we're online.