
301 points SerCe | 5 comments
1. digitaltrees ◴[] No.43111682[source]
They need to build an epistemology and theory-of-mind engine into models. We take it for granted when dealing with other humans that they can infer deep meaning, motivations, and expectations of truth versus fiction. But these agents don't do that, and so they will be awful collaborators until those behaviors are present.
replies(5): >>43111785 #>>43111826 #>>43112105 #>>43112114 #>>43112482 #
2. kvirani ◴[] No.43111826[source]
We're in the 56k-modem era of generative AI, so I wouldn't be surprised if we get that in the next few years, or even weeks.
3. kolinko ◴[] No.43112105[source]
Have you read any of the research on theory of mind in models? Since GPT-4, models have been tested with metrics similar to those used for humans, and the bigger models seem to “have” it.
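For a concrete sense of what those evaluations look like, here is a minimal sketch of a classic false-belief (Sally-Anne) probe posed to a model as a prompt. The ask_model function is a hypothetical placeholder, not any particular API; swap in whatever chat-completion call you actually use.

    # Minimal sketch of a false-belief ("Sally-Anne") probe for an LLM.
    # ask_model is a hypothetical placeholder, not a real library call.

    def ask_model(prompt: str) -> str:
        # Placeholder: return the model's answer to `prompt`.
        # In practice this would call an LLM API of your choice.
        raise NotImplementedError

    FALSE_BELIEF_PROMPT = (
        "Sally puts her marble in the basket and leaves the room. "
        "While she is away, Anne moves the marble to the box. "
        "Sally comes back. Where will Sally look for her marble first? "
        "Answer with one word: basket or box."
    )

    def score_false_belief(answer: str) -> bool:
        # A model tracking Sally's (now false) belief should say "basket",
        # not the true location of the marble.
        return "basket" in answer.lower()

    if __name__ == "__main__":
        try:
            print(score_false_belief(ask_model(FALSE_BELIEF_PROMPT)))
        except NotImplementedError:
            print("Plug in a real model call to run the probe.")

Published benchmarks use large batteries of such vignettes rather than a single item, but the structure is the same: the correct answer requires reasoning about what another agent believes, not about the world state.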
4. MattGaiser ◴[] No.43112114[source]
And taking that for granted between humans causes a ton of chaos too. The annoying collaborator is the person who assumes everyone already has the information they have.
5. energy123 ◴[] No.43112482[source]
Theory of mind should naturally emerge when the models are partly trained in an adversarial simulation environment, like the Cicero model, although that's a narrow AI example.
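For illustration, here is a toy sketch of what an adversarial simulation loop with opponent modelling can look like. It is ordinary fictitious play in matching pennies, not Cicero's actual training setup: each agent keeps a crude running model of the other's strategy and best-responds to it.

    # Toy adversarial simulation: two agents play matching pennies,
    # each maintaining a running estimate of the *other's* strategy
    # (fictitious play). The opponent model is a stand-in for the
    # belief-modelling that Cicero-style training is meant to encourage.

    class Agent:
        def __init__(self):
            self.opp_heads = 1  # pseudo-counts of opponent playing heads
            self.opp_tails = 1

        def act(self, wants_match: bool) -> str:
            # Best-respond to the estimated opponent strategy.
            p_heads = self.opp_heads / (self.opp_heads + self.opp_tails)
            if wants_match:                          # matcher copies the likely move
                return "H" if p_heads >= 0.5 else "T"
            return "T" if p_heads >= 0.5 else "H"    # mismatcher plays the opposite

        def observe(self, opponent_move: str) -> None:
            # Update the internal model of the opponent.
            if opponent_move == "H":
                self.opp_heads += 1
            else:
                self.opp_tails += 1

    matcher, mismatcher = Agent(), Agent()
    rounds, matcher_wins = 10_000, 0
    for _ in range(rounds):
        a, b = matcher.act(True), mismatcher.act(False)
        matcher_wins += (a == b)
        matcher.observe(b)
        mismatcher.observe(a)

    # As both opponent models sharpen, neither side stays exploitable
    # for long, so the win rate tends toward the 50/50 equilibrium value.
    print(f"matcher win rate: {matcher_wins / rounds:.2f}")

The point of the sketch is the structure, not the game: the only way to keep winning in an adversarial environment is to model what the other agent will do, which is the pressure that is supposed to make belief modelling emerge.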