
54 points by amai | 1 comment
freeone3000 No.42161812
I find it very interesting that “aligning with human desires” somehow includes preventing a human from bypassing the safeguards to generate “objectionable” content (whatever that is). I think the “safeguards” are a bigger obstacle to aligning with my desires.
replies(4): >>42162124 #>>42162181 #>>42162295 #>>42162664 #
threeseed No.42162295
The safeguards stem from a desire to make tools like Claude accessible to a very wide audience, since use cases such as education are very important.

So it seems that people such as yourself who do have an issue with safeguards should seek out LLMs catered to adult audiences, rather than trying to remove the safeguards entirely.

replies(3): >>42162675 #>>42163652 #>>42165642 #
1. freeone3000 No.42165642
If you are making an LLM for children, I have no problem with that! I’m not sure that keeping kids completely removed from the adult world until they’re suddenly dumped into it is a great way to build an integrated society, but sure, you do you. Build your LLM with safeguards for educational use, and best of luck to you!

I do not think it should be the default. I do not think that “adults” wanting “adult things”, like some ideas on how to secure a computer system against social engineering, should have to seek out some detuned, “jailbroken”, lower-quality model.

And I don’t think that assuming everyone is a child aligns with “human desires”, or should be couched in that language.