
301 points | SerCe | 1 comment
digitaltrees No.43111682
They need to build an epistemology and theory-of-mind engine into models. When dealing with other humans, we take it for granted that they can infer deep meaning, motivations, and expectations of truth vs. fiction. But these agents don’t do that, and so they will be awful collaborators until those behaviors are present.
replies(5): >>43111785 >>43111826 >>43112105 >>43112114 >>43112482
energy123 No.43112482
Theory of mind should emerge naturally when models are partly trained in an adversarial simulation environment, as with the Cicero model, although that's a narrow-AI example.
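
A minimal sketch of that self-play idea (purely illustrative: the matching-pennies game, the REINFORCE-style update, and all names here are assumptions, not Cicero's actual training setup). Two agents learn only from playing against each other, so each is implicitly pushed to model the other's tendencies:

    # Hypothetical self-play sketch: two agents trained adversarially
    # on a tiny zero-sum game (matching pennies). Not Cicero's method.
    import math
    import random

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]

    def sample(probs):
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return len(probs) - 1

    def train(steps=20000, lr=0.05):
        # One logit per action ("heads", "tails") for each agent.
        logits_a = [0.0, 0.0]
        logits_b = [0.0, 0.0]
        for _ in range(steps):
            pa, pb = softmax(logits_a), softmax(logits_b)
            ia, ib = sample(pa), sample(pb)
            # Zero-sum payoff: A wins on a match, B wins on a mismatch.
            reward_a = 1.0 if ia == ib else -1.0
            reward_b = -reward_a
            # REINFORCE-style policy-gradient update for each agent.
            for j in range(2):
                logits_a[j] += lr * reward_a * ((1.0 if j == ia else 0.0) - pa[j])
                logits_b[j] += lr * reward_b * ((1.0 if j == ib else 0.0) - pb[j])
        return softmax(logits_a), softmax(logits_b)

    if __name__ == "__main__":
        pa, pb = train()
        # The policies oscillate around the 50/50 mixed equilibrium,
        # each agent continually adapting to the other's current strategy.
        print("agent A policy:", [round(p, 2) for p in pa])
        print("agent B policy:", [round(p, 2) for p in pb])

Cicero-style training is of course far richer (language, negotiation, long-horizon planning), but the core loop is the same shape: the "environment" an agent must predict includes another adaptive agent, which is where something like opponent modeling has to emerge.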