
54 points by amai | 1 comment
freeone3000 ◴[] No.42161812[source]
I find it very interesting that “aligning with human desires” somehow includes preventing a human from bypassing the safeguards to generate “objectionable” content (whatever that is). I think the “safeguards” are a bigger obstacle to aligning with my desires.
replies(4): >>42162124 #>>42162181 #>>42162295 #>>42162664 #
threeseed ◴[] No.42162295[source]
The safeguards stem from a desire to make tools like Claude accessible to a very wide audience, since use cases such as education are very important.

And so it seems like people such as yourself who do have an issue with the safeguards should seek out LLMs catered to adult audiences rather than trying to remove safeguards entirely.

replies(3): >>42162675 #>>42163652 #>>42165642 #
Zambyte ◴[] No.42162675[source]
How does making it harder for the user to extract the information they are looking for make it safer for a wider audience?
replies(2): >>42162977 #>>42163153 #
Drakim ◴[] No.42163153[source]
That's like asking why we should have porn filters on school computers; after all, all they do is prevent the user from finding what they are looking for, which is bad.
replies(1): >>42172066 #
1. ◴[] No.42172066[source]