> “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing.
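For a concrete (and heavily simplified) picture of what a decoder like that does, here's a toy sketch. The article only says "an AI algorithm", so the ridge-regression model, the 96-channel count, and the synthetic spike data below are all assumptions for illustration, not the study's actual setup:

```python
# Toy sketch of the "neural decoder" idea: map per-electrode firing rates in a
# short time window to speech features (pitch, voicing). Not the model from the
# article -- just an illustrative stand-in on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

n_windows, n_electrodes = 2000, 96                  # assumed: short time windows, ~96 channels
X = rng.poisson(5, size=(n_windows, n_electrodes)).astype(float)  # spike counts per electrode

# Synthetic "ground truth" speech features per window: pitch and voicing.
W_true = rng.normal(size=(n_electrodes, 2))
Y = X @ W_true + rng.normal(scale=0.5, size=(n_windows, 2))

# Ridge regression decoder: W = (X^T X + lambda I)^-1 X^T Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

decoded = X @ W_hat                                  # decoded pitch/voicing per window
print("decoding correlation per feature:",
      [round(np.corrcoef(decoded[:, j], Y[:, j])[0, 1], 3) for j in range(2)])
```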
For example, I see a tree and my brain generates a unique signal/encoding/storage representing the tree. Another person sees the tree and generates a unique signal/encoding/storage representing the tree. How would my brain communicate "tree" to his brain since both our "trees" are unique to our brains?
My brain device reads my brain signal "1010101" for tree. My friend's device reads brain signal "1011101" for tree. How could we possibly map 1010101 to 1011101? Or is the assumption that human brains have identical signals/encodings for each thought?
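One hedged answer: you don't need the two brains to use identical codes, you need paired data. If both devices record while both people attend to the same stimuli ("tree", "dog", ...), you could in principle fit a translation between the two representation spaces. The sketch below is entirely synthetic (made-up codes, a made-up linear "translation") and only illustrates the alignment idea, not anything a real BCI does:

```python
# Learn a mapping from person A's (made-up) neural codes to person B's, using
# paired recordings of the same concepts. Purely synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)

n_concepts, dim = 500, 64
A = rng.normal(size=(n_concepts, dim))            # A's codes for 500 shared concepts
T = rng.normal(size=(dim, dim))                   # unknown "translation" between the two idiolects
B = A @ T + rng.normal(scale=0.1, size=A.shape)   # B's codes for the same concepts

# Fit the translation by least squares on the paired data.
T_hat, *_ = np.linalg.lstsq(A, B, rcond=None)
print("relative fit error:", round(float(np.linalg.norm(A @ T_hat - B) / np.linalg.norm(B)), 4))

# Translate a new "thought" from A's space into B's space.
new_thought_A = rng.normal(size=(1, dim))
predicted_B = new_thought_A @ T_hat
```

Of course this only works to the extent the two brains' representational geometries are relatable at all, which is an open empirical question, so treat it as a sketch of the idea rather than a claim that it would work.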
And machines would of course also use the Universal Common Embedding to communicate, as man and machine meld into a seamless distributed whole.
It all seems a little bit too inevitable for my liking at this point.
https://en.wikipedia.org/wiki/Strange_Days_(film)
Had no idea till I looked it up just now that James Cameron (of Avatar) did the story for Strange Days, and Avatar shares a lot of tech influences with it. They could even be in the same cinematic universe, though many years apart.
The only way I see is by a textual or auditory mechanism between people who speak the same language (standards agreed upon a priori). But that wouldn't be brain to brain. It would be brain to text/speech to eyes/ears to brain.
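As a toy illustration of that path (a shared standard in the middle, private codes at both ends), using the made-up bit strings from upthread:

```python
# Each side keeps its own private internal code; only the shared symbol
# ("tree") ever crosses between them. The dictionaries and bit strings here
# are invented for illustration.
PRIVATE_CODE_A = {"tree": "1010101", "dog": "1100110"}   # A's internal encodings (hypothetical)
PRIVATE_CODE_B = {"tree": "1011101", "dog": "0110011"}   # B's internal encodings (hypothetical)

def a_speaks(internal_code: str) -> str:
    """A maps a private code to the shared symbol (the agreed 'language' standard)."""
    return next(word for word, code in PRIVATE_CODE_A.items() if code == internal_code)

def b_hears(word: str) -> str:
    """B maps the shared symbol back into B's own private code."""
    return PRIVATE_CODE_B[word]

# "brain -> speech -> ears -> brain": only the word travels, never the raw code.
print(b_hears(a_speaks("1010101")))   # -> 1011101
```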