
Alignment is capability

(www.off-policy.com)
106 points by drctnlly_crrct | 1 comment
js8 ◴[] No.46192518[source]
I am not sure if this is what the article is saying, but the paperclip maximizer examples have always struck me as extremely dumb (lacking intelligence): even a child understands that if I ask them to make paperclips, they shouldn't go around killing people.

I think superintelligence will turn out not to be a singularity but something with diminishing returns. They will be nice returns, just as a Britannica set is nice to have at home but, strictly speaking, not required for your well-being.

replies(8): >>46192693 #>>46192721 #>>46192946 #>>46193471 #>>46193491 #>>46193694 #>>46193737 #>>46194236 #
1. theptip ◴[] No.46194236[source]
The point with clippy is just that an AGI’s goals might be completely alien to you. For context, it was first coined in the early ‘10s (if not earlier), before LLMs were invented and when RL looked like the way forward.

If you wire up RL to a goal like “maximize paperclip output”, then you are likely to get inhuman desires, even if the agent also understands humans more thoroughly than we understand nematodes.
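
A toy sketch of the point (hypothetical code, not from the article or this thread): if the reward signal is nothing but paperclip count, the objective literally contains no term for anything else, so “don’t kill people” isn’t something the optimizer is ever asked to care about.

```python
# Minimal sketch, assuming a made-up toy environment: the reward is
# purely the number of paperclips produced per step, with no term for
# any side effects the designer implicitly cares about.
import random

def paperclip_reward(state: dict) -> float:
    # Reward = paperclips made this step. Nothing else appears here.
    return state["paperclips_made_this_step"]

def run_episode(policy, steps: int = 100) -> float:
    """Roll out a toy episode and sum the proxy reward."""
    total = 0.0
    state = {"paperclips_made_this_step": 0}
    for _ in range(steps):
        action = policy(state)
        # Toy dynamics: more aggressive actions convert more of the
        # environment into paperclips, whatever that environment was.
        state["paperclips_made_this_step"] = action
        total += paperclip_reward(state)
    return total

# A policy that maximizes this reward has no reason to stop at a
# "reasonable" level of production, because the objective never says so.
greedy = lambda state: 10                   # always the most aggressive action
modest = lambda state: random.randint(0, 2) # occasionally makes a few clips

print(run_episode(greedy), run_episode(modest))  # greedy dominates
```

The gap between what the reward function scores and what the designer actually wanted is the whole thought experiment, independent of how smart the agent is.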