
586 points mizzao | 3 comments
rivo No.40668263
I tried the model the article links to, and it was so refreshing not to be denied answers to my questions. At the end it even asked, "Is this a thought experiment?" I replied "yes," and it said, "It's fun to think about these things, isn't it?"

It felt very much like hanging out with your friends, having a few drinks, and pondering big, crazy, or weird scenarios. Imagine your friend saying, "As your friend, I cannot provide you with this information," and completely ruining the night. That's not going to happen. Even my kids would ask me questions when they were younger: "Dad, how would you destroy Earth?" It would be of no use to anybody to deny answering that question, and answering it does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.

Sure, there are dangers, as others are pointing out in this thread. But I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

replies(6): >>40668938, >>40669291, >>40669447, >>40671323, >>40683221, >>40689216
1. bossyTeacher No.40689216
> I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

Are you saying that you want to pay to be provided with harmful text (racist, sexist, homophobic, violent, and all sorts of other terrible content)?

For you, it might be freedom for freedom's sake, but for the 1% of people out there, it will lower the barrier to doing bad things.

This is not the same as a super-violent game showing 3D limb dismemberment. It's a limitless, realistic, detailed, and helpful guide to committing horrible acts or describing horrible scenarios.

Inb4 "you can just google that": your Google searches get monitored for this kind of stuff. Your conversations with LLMs won't be.

It's very disturbing to see adults on here arguing against censorship of a public tool.

replies(2): >>40714904, >>40722902
2. autoexec No.40714904
> in4 you can google that, your google searches get monitored for this kind of stuff. Your convos with llms won't.

Not sure why you'd think that. Unless you run the AI locally and 100% offline, you shouldn't expect any privacy at all.
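
For example, a minimal sketch of fully local, offline inference, assuming the third-party llama-cpp-python package and a GGUF model file you have already downloaded; the model path and prompt below are placeholders, not a recommendation of any particular model:

    # Runs entirely on your own machine; no network calls are made
    # once the model file is on disk. Path and prompt are illustrative.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/your-model.gguf")
    reply = llm("Q: How would you destroy Earth? A:", max_tokens=256)
    print(reply["choices"][0]["text"])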

3. sattoshi No.40722902
> Are you saying that you want to pay to be provided with harmful text

The notion of "harmful text" is a bit silly, but let's not dwell on it.

The answer to your question is that I want to be able to generate whatever the technology is capable of. Imagine if Microsoft Word threw an error when you tried to write something against modern dogmas.

If you wish to avoid seeing harmful text, I think that market is well-served today. I can’t imagine there not being at the very least a checkbox to enable output filtering for any ideas you think are harmful.
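
As an illustration, a minimal sketch of what such opt-in filtering could look like on the client side, assuming the official openai Python package and its moderation endpoint; the screening here just uses the endpoint's default "flagged" verdict, not any particular vendor's actual checkbox implementation:

    # Screen model output client-side and only show it if the
    # moderation endpoint does not flag it. Opt-in by design.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    text = "...model output to screen..."
    verdict = client.moderations.create(input=text)
    if verdict.results[0].flagged:
        print("[output withheld by your own filter]")
    else:
        print(text)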