120 points LorenDB | 12 comments
    pingou ◴[] No.44441760[source]
Telepathy is on its way. The next step is to skip the conversion of brain signals to words and send the signals directly to another brain. But I think some conversion/translation would still be necessary.
    replies(11): >>44441794 #>>44441799 #>>44441825 #>>44442005 #>>44442059 #>>44442154 #>>44442271 #>>44442579 #>>44442793 #>>44443580 #>>44443730 #
    1. Cthulhu_ ◴[] No.44441799[source]
I think (futurology / science fiction) that they will make some kind of brain link, but there won't be any translation happening in between: just raw brain signals from one to the other, like an extra sensory input. There won't be any encoding or data that can be translated to speech or images; instead, the connected brains will learn to comprehend the incoming signals, send their own, and communicate that way.
    replies(5): >>44441927 #>>44441943 #>>44442006 #>>44445289 #>>44447353 #
    2. falcor84 ◴[] No.44441927[source]
That sort of connection would be very susceptible to psychic attacks: I'm thinking of the telepaths in Babylon 5 being trained for offensive capabilities, as well as plain old spam advertising. So while "defaulting to trust" is often considered societally useful, I believe it would be better for everyone if cross-brain messages were sent in a format that can be analyzed (and entirely blocked) by a filter on the receiving side.
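
As a toy sketch of what "analyzable and blockable" could mean (the message type, field names, and filter here are all made up for illustration, not any real protocol):

    from dataclasses import dataclass
    from typing import Callable, Optional

    # Hypothetical wire format: structured, inspectable messages instead of
    # raw neural signals, so the receiving side can filter before delivery.
    @dataclass
    class CrossBrainMessage:
        sender: str
        intent: str      # e.g. "speech", "imagery", "ad"
        payload: bytes

    def deliver(msg: CrossBrainMessage,
                allow: Callable[[CrossBrainMessage], bool]) -> Optional[CrossBrainMessage]:
        # Blocked messages never reach the brain at all.
        return msg if allow(msg) else None

    # A receiver-side policy: drop advertising and untrusted senders.
    no_spam = lambda m: m.intent != "ad" and m.sender not in {"psicorp"}
    print(deliver(CrossBrainMessage("psicorp", "ad", b"..."), no_spam))  # None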
    3. z3t4 ◴[] No.44441943[source]
We are so different, but I guess with a lot of training we could interpret each other's thoughts. A first step would be to record your own thoughts and then replay them to see if you experience the same thing you did when the thoughts were recorded. It's possible that our brains are constantly reconfiguring themselves, so that even your own recorded thoughts would make no sense.
    replies(1): >>44450291 #
    4. voidUpdate ◴[] No.44442006[source]
I think the main problem with that is that different people think in different ways. I think in full sentences and 3D images, whereas other people might think without images at all. How do you translate that?
    replies(2): >>44442101 #>>44443529 #
    5. yieldcrv ◴[] No.44442101[source]
    If statements

    for how the brain chip chooses to function

    6. suspended_state ◴[] No.44443529[source]
It is very likely that this device works by recording and interpreting the activity of individual neurons. Actually, from the article:

    > “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing.
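
Not the actual model from the article, but as a minimal sketch of that pipeline's shape (the array sizes and the linear map are invented stand-ins for a trained network), the decoding stage is essentially a learned function from single-neuron activity to acoustic features:

    import numpy as np

    rng = np.random.default_rng(0)

    N_NEURONS, N_FEATURES = 256, 2                # hypothetical sizes
    W = rng.normal(size=(N_FEATURES, N_NEURONS))  # stands in for a trained decoder

    # One time-step of single-neuron firing rates -> speech features.
    firing_rates = rng.poisson(5.0, size=N_NEURONS).astype(float)
    pitch, voicing = W @ firing_rates             # real decoders are nonlinear nets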

    7. hearsathought ◴[] No.44445289[source]
I still fail to see how that's possible, since it is assumed every brain "encodes" data uniquely. Communication between computers is possible because we have agreed-upon standards. If every computer encoded characters differently, no communication would be possible; without agreed-upon ports, or an agreed-upon mechanism for agreeing on ports, one computer could not communicate with another. So how can brain-to-brain communication work, given that encoding/communication "standards" are impossible when each brain is different?

    For example, I see a tree and my brain generates a unique signal/encoding/storage representing the tree. Another person sees the tree and generates a unique signal/encoding/storage representing the tree. How would my brain communicate "tree" to his brain since both our "trees" are unique to our brains?

My brain device reads my brain signal "1010101" for tree; my friend's device reads brain signal "1011101" for tree. How could we possibly map 1010101 to 1011101? Or is the assumption that human brains have identical signals/encodings for each thought?
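
A toy sketch of the problem in code (reusing the made-up bit patterns above): without a shared codebook, neither device can invert the other's signal:

    # Each "brain" has its own private signal for the same concept.
    brain_a = {"tree": 0b1010101}
    brain_b = {"tree": 0b1011101}

    signal = brain_a["tree"]              # what A's device reads out

    # B's device tries to decode A's signal against B's own codebook...
    decode_b = {v: k for k, v in brain_b.items()}
    print(decode_b.get(signal))           # None: no agreed-upon mapping exists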

    replies(1): >>44445685 #
    8. goopypoop ◴[] No.44445685[source]
I already learned to interpret touch, taste, smell, vision, etc. when I was just a baby. How hard can a new one be?
    replies(1): >>44455354 #
    9. lambdaone ◴[] No.44447353[source]
There would probably be a Universal Common Embedding used as an intermediate representation between people's individual private neural representations, likely a distant descendant of today's open-source neural models.

    And machines would of course also use the Universal Common Embedding to communicate, as man and machine meld into a seamless distributed whole.
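
A minimal sketch of that data flow, with random linear maps standing in for what would have to be learned per-person encoders and decoders (everything here is hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)

    DIM_PRIVATE, DIM_COMMON = 8, 4                       # hypothetical dimensions
    enc_a = rng.normal(size=(DIM_COMMON, DIM_PRIVATE))   # A's private code -> common embedding
    dec_b = rng.normal(size=(DIM_PRIVATE, DIM_COMMON))   # common embedding -> B's private code

    thought_a = rng.normal(size=DIM_PRIVATE)  # A's private representation
    common = enc_a @ thought_a                # the Universal Common Embedding
    thought_b = dec_b @ common                # B's private rendering of the same thought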

    It all seems a little bit too inevitable for my liking at this point.

    10. aspenmayer ◴[] No.44450291[source]
    This is sort of explored in the film Strange Days, which is probably not very well known generally but perhaps would have a large fan base on HN.

    https://en.wikipedia.org/wiki/Strange_Days_(film)

I had no idea until I looked it up just now that James Cameron wrote the story. Cameron also made Avatar, which shares a lot of tech influences with Strange Days; they could even be in the same cinematic universe, though many years apart.

    11. hearsathought ◴[] No.44455354{3}[source]
Did you even read my comment? I'm not talking about your brain's ability. I'm talking about how a device can interpret your brain signals and transfer them to another brain to achieve actual communication when both brains essentially have their own internal "language". How your brain "stores" the idea of a tree is entirely different from how I "store" it: different neurons, different locations in the brain, different signals. Do you know how computers, networks, and communications work? It's all bound by artificial standards we agreed upon a priori.

The only way I see is by a textual or auditory mechanism between people who speak the same language (standards agreed upon a priori). But that wouldn't be brain to brain; it would be brain to text/speech to eyes/ears to brain.

    replies(1): >>44456722 #
    12. goopypoop ◴[] No.44456722{4}[source]
    I'm saying it's clearly possible to develop from scratch the framework with which to learn to interpret each individual remote machine's raw data stream.

    Your intermediate protocol woes are a red herring.