
301 points SerCe | 1 comment
digitaltrees No.43111682
They need to build an epistemology and theory-of-mind engine into models. We take it for granted when dealing with other humans that they can infer deep meaning, motivations, and expectations of truth vs. fiction. These agents don't do that, and so they will be awful collaborators until those behaviors are present.
replies(5): >>43111785 >>43111826 >>43112105 >>43112114 >>43112482
kolinko No.43112105
Have you read any of the research on theory of mind in models? Since GPT-4, models have been tested with metrics similar to those used on humans, and the bigger models seem to "have" it.