584 points by Alifatisk
kgeist No.46182122
> The model uses this internal error signal (the gradient) as a mathematical equivalent of saying, "This is unexpected and important!" This allows the Titans architecture to selectively update its long-term memory only with the most novel and context-breaking information

So one can break a model by consistently feeding it random, highly improbable junk? Everything would register as a surprise and get stored, impacting future interactions.
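
Roughly, the write rule being described might look like this toy PyTorch sketch. To be clear, this is my own simplification: the class, the threshold, and the learning rate are illustrative assumptions, not the Titans paper's actual formulation.

    import torch
    import torch.nn as nn

    class SurpriseGatedMemory(nn.Module):
        # Long-term memory as a small learnable map, written at test time.
        # (Toy sketch; names and hyperparameters are illustrative.)
        def __init__(self, dim, lr=0.1, threshold=1.0):
            super().__init__()
            self.mem = nn.Linear(dim, dim, bias=False)
            self.lr = lr                # inner-loop write rate
            self.threshold = threshold  # minimum surprise needed to write

        def write(self, key, value):
            # Surprise = gradient of the recall error w.r.t. memory weights:
            # "this is unexpected and important."
            loss = ((self.mem(key) - value) ** 2).mean()
            (grad,) = torch.autograd.grad(loss, self.mem.weight)
            surprise = grad.norm().item()
            if surprise > self.threshold:  # only novel inputs update memory
                with torch.no_grad():
                    self.mem.weight -= self.lr * grad
            return surprise

On uniformly random junk the recall error never shrinks, so `surprise` stays above any fixed threshold and the gate effectively stops gating, which is exactly the failure mode being asked about.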

idiotsecant No.46182410
This is the start of what I always thought an AI should have: a limbic system. Humans don't store memories based on novelty; they store them based on emotional content. This is where I was afraid of the tiger, this is where I smelled delicious food, this was what it felt like when I was victorious in the hunt.

AI needs an internal emotional state because that's what drives attention and memory. AI needs to want something.
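
For concreteness, the "limbic" write gate being proposed could be as simple as this toy sketch. Everything here (the Event type, the scalar valence, the threshold) is a hypothetical illustration, not an existing system:

    from dataclasses import dataclass

    @dataclass
    class Event:
        content: str
        valence: float  # -1.0 (fear/pain) .. +1.0 (reward); assumed given

    class LimbicMemory:
        # Stores events by emotional salience, not novelty.
        def __init__(self, write_threshold=0.5):
            self.store = []
            self.write_threshold = write_threshold

        def observe(self, event):
            # High-arousal events (strongly positive or negative) are kept;
            # affectively neutral ones are never written.
            if abs(event.valence) >= self.write_threshold:
                self.store.append(event)

    mem = LimbicMemory()
    mem.observe(Event("saw a tiger", valence=-0.9))     # stored: fear
    mem.observe(Event("counted pebbles", valence=0.0))  # dropped: neutral

The point is only that the storage criterion is affective salience rather than surprise; where the valence signal itself would come from is the hard part.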

luckydata No.46182665
That would be the biggest mistake anyone could make. I hope nobody goes down this route. An AI that "wants" things is an enormous risk to alignment.
idiotsecant No.46185152
At some point I think we'll have to face the idea that any AI more intelligent than ourselves will, by definition, be able to evade our alignment tricks.
luckydata No.46185869
Equating greater intelligence with "wanting things" is a fallacy. You can have a hyper-intelligent computer that simply waits for you to ask it to do a job, or you can endow it with the digital equivalent of hunger and reproductive instincts, and it will behave completely differently.

We would be INSANE to pursue giving that kind of instinct to AIs.

sayamqazi No.46190982
You are making the claim that "intelligence" is separable from the other things found in humans and other animals. There is no proof or example supporting this.

I have come to believe that we will only be able to truly replicate intelligence if the system is trying to preserve itself. It's the biggest incentive ever to do intelligent things.