
    625 points lukebennett | 23 comments
    1. thousand_nights ◴[] No.42139132[source]
    Not long ago these people would have you believe that a next-word predictor trained on Reddit posts would somehow lead to artificial general superintelligence
    replies(4): >>42139199 #>>42139241 #>>42139443 #>>42141632 #
    2. leosanchez ◴[] No.42139199[source]
    If you look around, people still believe that a next-word predictor trained on Reddit posts would somehow lead to artificial general superintelligence
    replies(2): >>42139530 #>>42139835 #
    3. ◴[] No.42139241[source]
    4. SpicyLemonZest ◴[] No.42139443[source]
    I don't understand why you'd be so dismissive about this. It's looking less likely that it'll end up happening, but is it any less believable than getting general intelligence by training a blob of meat?
    replies(4): >>42139778 #>>42140287 #>>42141772 #>>42142958 #
    5. esafak ◴[] No.42139530[source]
    Because the most powerful solution to that task is to have intelligence: a model that can reason. People should not get hung up on the task; it's the model that generates the prediction that matters.
    6. JohnMakin ◴[] No.42139778[source]
    > is it any less believable than getting general intelligence by training a blob of meat?

    Yes, because we understand the rough biological processes that cause this, and they are not remotely similar to this technology. We can also observe it. There is no evidence that current approaches can make LLMs achieve AGI, nor do we even know what processes would cause that.

    replies(1): >>42141685 #
    7. mrguyorama ◴[] No.42139835[source]
    People believed ELIZA was sentient too. I bet you could still get 10% or more of people, today, to believe it is.
    replies(1): >>42141861 #
    8. namaria ◴[] No.42140287[source]
    This is a bad comparison. Intelligence didn't appear in some human brain. Intelligence appeared in a planetary ecosystem.
    replies(1): >>42140374 #
    9. aniforprez ◴[] No.42140374{3}[source]
    Also, it took hundreds of millions of years to get here. We're basically living in an atomic sliver on the fabric of history. Expecting AGI from five years of scraping at most 30 years of online data, plus a minuscule fraction of what has been written over the past couple of thousand years, was always a pie-in-the-sky dream to raise obscene amounts of money.
    replies(2): >>42141514 #>>42144791 #
    10. Zopieux ◴[] No.42141514{4}[source]
    I can't believe this still needs to be laid out years after the start of the GPT hype. Still, thanks!
    11. in_a_society ◴[] No.42141632[source]
    Expecting AGI from Reddit training data is peak "pray Mr Babbage".
    replies(1): >>42145309 #
    12. kenjackson ◴[] No.42141685{3}[source]
    > because we understand the rough biological processes that cause this

    We don't have a rough understanding of the biological processes that cause this, unless you literally mean just the biological process and not how it actually impacts learning/intelligence.

    There's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.

    replies(1): >>42142139 #
    13. mvdtnz ◴[] No.42141772[source]
    I feel like accusing people of being "so dismissive" was strongly associated with NFTs and cryptocurrency a few years ago, and now it's widely deployed against anyone skeptical of very expensive, not very good word generators.
    replies(1): >>42143107 #
    14. 77pt77 ◴[] No.42141861{3}[source]
    ELIZA was probably more effective than most therapists.

    Definitely cheaper.

    15. JohnMakin ◴[] No.42142139{4}[source]
    > We don't have a rough understanding of the biological processes that cause this,

    Yes we do. We know how neurons communicate, we know how they are formed, and we have great evidence and clues as to how this evolved and how our various neurological systems interact with the world. Is it a fully solved problem? No.

    > unless you literally mean just the biological process and not how it actual impacts learning/intelligence.

    Of course we have some understanding of this as well. There's tremendous bodies of study around this. We know which regions of the brain correlate to reasoning, fear, planning, etc. We know when these regions are damaged or removed what happens, enough to point to a region of the brain and say "HERE." That's far, far beyond what we know about the innards of LLM's.

    > There's no evidence that we (brains) have achieved AGI, unless you tautologically define AGI as our brains.

    This is extremely circular because the current definition(s) of AGI always define it in terms of human intelligence. Unless you're saying that intelligence comes from somewhere other than our brains.

    Anyway, the brain is not like a LLM, in function or form, so this debate is extremely silly to me.

    replies(1): >>42143143 #
    16. BobaFloutist ◴[] No.42142958[source]
    Yes, because that already happened.
    17. SpicyLemonZest ◴[] No.42143107{3}[source]
    I'm not sure what point you're making. It's true that people, including myself, were dismissive of cryptocurrency a few years ago; I think it's clear at this point that we were wrong, and it's not actually the case that the industry is a Ponzi scheme propped up by scammers like FTX.
    18. kenjackson ◴[] No.42143143{5}[source]
    > Yes we do. We know how neurons communicate, we know how they are formed, and we have great evidence and clues as to how this evolved and how our various neurological systems interact with the world. Is it a fully solved problem? No.

    It's not even close to fully solved. We're still figuring out basic things like the purpose of dreams. We don't understand how memories are encoded, or even how we process basic emotions like happiness. We're way closer to understanding LLMs than we are to understanding the brain, and we still don't understand LLMs all that well either. For example, look at the Golden Gate Bridge interpretability work for LLMs -- we have no equivalent for brains today. We've done much more advanced introspection work on LLMs in this short amount of time than we've done on the human brain.

    19. danielbln ◴[] No.42144791{4}[source]
    We built planes, which work quite differently from birds, in the span of what, 100 years? I think we long ago left evolution behind when building machines, thinking or otherwise, so I'm not sure why the powerful but inefficient evolutionary process is held up as some gold standard here.
    replies(1): >>42145542 #
    20. kreyenborgi ◴[] No.42145309[source]
    If you are one of today's ten thousand, this is a reference to the original garbage-in, garbage-out quote: https://en.wikiquote.org/wiki/Charles_Babbage#Passages_from_...
    21. namaria ◴[] No.42145542{5}[source]
    It's not a gold standard. It just shows how difficult the problem really is.

    Flying machines rest on the excess power of internal combustion. They have nothing to do with bird evolution.

    replies(1): >>42148991 #
    22. danielbln ◴[] No.42148991{6}[source]
    The fact that it has nothing to do with evolution is exactly my point. We built something that can fly but has nothing to do with how birds fly. So we might be able to build an AGI that isn't based on biological mechanism and/or evolutionary principles.
    replies(1): >>42154431 #
    23. aniforprez ◴[] No.42154431{7}[source]
    Planes don't fly radically differently than birds. Birds can flap their wings because they're light and small, but birds don't fly by flapping their wings; they flap their wings to fly. The flapping is for gaining and maintaining height, but beyond that they use the same principle to stay aloft. Birds also expend massive amounts of energy to flap and eat a lot of food to compensate; large predatory birds try their best to glide as much as possible as a consequence. To carry a human, you need a proportionally larger machine, and the square-cube law would stop us from flapping plane-sized wings. Aside from that, birds and planes both fly on the same Bernoulli's principle of fluid motion, and to compensate for being unable to take off from rest with wings alone, we made engines that provide thrust.

    If AGI doesn't take the form of human-ish intelligence, then we'd never know it was intelligence. This means the target is always a "visible", human-like intelligence, and that was gained through evolution and millions of years of experimentation and records. It will most certainly not take that long for human-like intelligence to form given our current progress, but we would not recognise anything else.