    334 points mooreds | 15 comments
    dathinab ◴[] No.44484445[source]
    I _hope_ AGI is not right around the corner; for sociopolitical reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

    But also, just taking what we have now, with some major power-usage reductions and minor improvements here and there, already seems like something which can be very usable/useful in a lot of areas (and to some degree we aren't really ready even for that, but I guess that's normal with major technological change).

    It's just that for the companies creating foundational models it's quite unclear how they can recoup their already-spent costs without either a major breakthrough or forcefully (or deceptively) pushing the technology into a lot more places than it fits.

    replies(6): >>44484506 #>>44484517 #>>44485067 #>>44485492 #>>44485764 #>>44486142 #
    1. pbreit ◴[] No.44484517[source]
    Must "AGI" match human intelligence exactly or would outperforming in some functions and underpformin in others qualify?
    replies(6): >>44484575 #>>44484600 #>>44484769 #>>44484956 #>>44488494 #>>44489281 #
    2. saubeidl ◴[] No.44484575[source]
    Where would you draw the line? Any ol' computer outperforms me in doing basic arithmetic.
    replies(2): >>44484645 #>>44484735 #
    3. crooked-v ◴[] No.44484600[source]
    For me, "AGI" would mean being able to reliably perform simple open-ended tasks without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.

    For a specific example of what I mean, there's Vending-Bench - even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of that, despite seeming very 'smart' if all you pay attention to is their eloquence.
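
    To make that concrete, here's a toy long-horizon loop in the same spirit (my own simplification, not the actual Vending-Bench; the `agent` callable is a stand-in for whatever model or person is being tested). No single step is hard; the challenge is staying coherent for hundreds of them.

    ```python
    # Toy version of a Vending-Bench-style long-horizon task (not the real benchmark):
    # each "day" the agent sees the machine's state and decides how much to restock.
    # The hard part isn't any single decision; it's making sane ones indefinitely.

    def run_vending_machine(agent, days: int = 365) -> float:
        cash, stock, price = 100.0, 20, 2.0
        for day in range(days):
            sold = min(stock, 5)                        # fixed, boring daily demand
            stock -= sold
            cash += sold * price
            # The agent only has to decide how many units to reorder at $1 each.
            order = int(agent(f"day={day} cash={cash:.2f} stock={stock}"))
            order = max(0, min(order, int(cash)))       # clamp to what it can afford
            stock += order
            cash -= order
            if cash <= 0:                               # went broke: the run failed
                return cash
        return cash

    if __name__ == "__main__":
        restock_what_sold = lambda state: 5             # a 'dumb' but reliable policy
        print(run_vending_machine(restock_what_sold))   # stays solvent all year
    ```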

    replies(2): >>44485912 #>>44485936 #
    4. hkt ◴[] No.44484645[source]
    I'd suggest anything able to match a professional doing knowledge work: original research from recognisably equivalent cognition, or abilities equal to those of a skilled practitioner of, e.g., medicine.

    This sets the bar high, but I think there's something to the idea of being able to pass for human in the workplace. That's the real, consequential outcome here: AGI genuinely replacing humans, without need for supervision. At the moment we aren't there (pre-first-line-support doesn't count).

    5. kulahan ◴[] No.44484735[source]
    This is a question of how we quantify intelligence, and there aren’t many great answers. Still, basic arithmetic is probably not the right guideline. My guess has always been that it lies somewhere in the ability to think critically, which still hasn’t even been attempted, because it doesn’t really work with LLMs as they’re structured today.
    6. root_axis ◴[] No.44484769[source]
    At the very least, it needs to be able to collate training data, design, code, train, fine-tune, and "RLHF" a foundational model from scratch, on its own, and have it show improvements over the current SOTA models, before we can even begin to have the conversation about whether we're approaching what could be AGI at some point in the future.
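
    Very roughly, the shape of that loop would be something like the sketch below. Every name in it is a made-up placeholder, not a real API; the point is just that the model has to drive every stage itself, with no human anywhere in the loop.

    ```python
    # Hypothetical shape of the bar described above: the model drives every stage
    # itself, end to end. Every name below is an invented placeholder, not a real API.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        benchmark_score: float

    class SelfImprovingModel:
        # Stubs only; a system meeting the bar would have to implement all of these.
        def collate_training_data(self) -> list[str]: ...
        def design_and_write_training_code(self) -> str: ...
        def pretrain(self, data: list[str], code: str) -> Candidate: ...
        def fine_tune_and_rlhf(self, candidate: Candidate) -> Candidate: ...

    def improvement_loop(model: SelfImprovingModel, sota_score: float) -> Candidate | None:
        data = model.collate_training_data()             # 1. gather its own corpus
        code = model.design_and_write_training_code()    # 2. design + implement
        candidate = model.pretrain(data, code)           # 3. train from scratch
        candidate = model.fine_tune_and_rlhf(candidate)  # 4. fine-tune / "RLHF"
        # 5. Only counts if the result actually beats the current state of the art.
        return candidate if candidate.benchmark_score > sota_score else None
    ```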
    replies(1): >>44485361 #
    7. OJFord ◴[] No.44484956[source]
    That would be human; I've always understood the General to mean 'as if it's any human', i.e. perhaps not absolute mastery, but trained expertise in any domain.
    8. kadushka ◴[] No.44485361[source]
    I cannot do all that. Am I not generally intelligent?
    9. ◴[] No.44485912[source]
    10. carefulfungi ◴[] No.44485936[source]
    If someone handed you an envelope containing a hidden question, and your life depended on a correct answer, would you rather pick a random person out of the phone book or an LLM to answer it?

    On one hand, LLMs are often idiots. On the other hand, so are people.

    replies(2): >>44486310 #>>44486689 #
    11. crooked-v ◴[] No.44486310{3}[source]
    That's not at all analogous to what I'm talking about. The comparison would be picking an LLM or a random person out of the phone book to, say, operate a vending machine... and we already know LLMs are unable to do that, given the results of Vending-Bench.
    replies(1): >>44490448 #
    12. bookman117 ◴[] No.44486689{3}[source]
    I'd learn as much as I could about the nature of the question beforehand and pay a human with a great track record of handling such questions.
    13. ◴[] No.44488494[source]
    14. dathinab ◴[] No.44489281[source]
    No, it doesn't have to; it just has to be "general",

    as in, it can learn by itself to solve any kind of generic task it can practically interface with (at least any that isn't way too complicated).

    To some degree LLMs can theoretically do so, but:

    - learning (i.e. training them) is way too slow and costly

    - domain adaptation (later learning) often has a ton of unintended side effects (like forgetting a bunch of important previously learned things)

    - it can't really learn by itself in an interactive manner

    - "learning" by e.g. retrieving data from knowledge data base and including it into answers (e.g. RAG) isn't really learning but just information retrieval, also it has issues with context windows and planing

    In the not-too-distant future I could imagine OpenAI putting together multiple LLMs + RAG + planning systems etc. to create something which technically could be called AGI, but which isn't really the breakthrough people associate with AGI.

    15. carefulfungi ◴[] No.44490448{4}[source]
    More than 10% of the global population is illiterate. Even in first world countries, numeracy rates are 75-80%. I think you overestimate how many people could pass the benchmark.

    Edit: rereading this, my comment sounds far too combative. I mean it only as an observation that AI is catching up quickly with what we manage to teach humans generally. Soon, if not already, LLMs will be “better educated” than the average global citizen.