    204 points tdchaitanya | 12 comments
    andrewflnr ◴[] No.45094933[source]
    Is this really the frontier of LLM research? I guess we really aren't getting AGI any time soon, then. It makes me a little less worried about the future, honestly.

    Edit: I never actually expected AGI from LLMs. That was snark. I just think it's notable that the fundamental gains in LLM performance seem to have dried up.

    replies(7): >>45094979 #>>45094995 #>>45095059 #>>45095198 #>>45095374 #>>45095383 #>>45095463 #
    1. kenjackson ◴[] No.45094995[source]
    First, I don't think we will ever get to AGI. Not because we won't see huge advances still, but because AGI is a moving, ambiguous target that we'll never get consensus on.

    But why does this paper impact your thinking on it? It's about budgets and recognizing that different LLMs have different cost structures. It's not really an attempt to improve LLM performance in absolute terms.

    replies(3): >>45095489 #>>45096115 #>>45099679 #
    2. _heimdall ◴[] No.45095489[source]
    So you don't expect AGI to be possible, ever? Or is your concern mainly with the wildly different definitions people use for it, and that we'll continue moving the goalposts rather than agree we got there?
    replies(1): >>45095729 #
    3. nutjob2 ◴[] No.45095729[source]
    There's no concrete evidence AGI is possible, mostly because it has no concrete definition.

    It's mostly hand waving, hype and credulity, and unproven claims of scalability right now.

    You can't move the goal posts because they don't exist.

    replies(3): >>45096070 #>>45097847 #>>45101489 #
    4. ashirviskas ◴[] No.45096070{3}[source]
    Well, if a human is GI, we just need to make it Artificial. Easy.
    replies(1): >>45098866 #
    5. ACCount37 ◴[] No.45096115[source]
    I can totally see "it's not really AGI because it doesn't yet consistently outperform those three top 0.000001% outlier human experts working together".

    It'll be a while until the ability to move the goalposts of "actual intelligence" is exhausted entirely.

    replies(1): >>45096696 #
    6. 9dev ◴[] No.45096696[source]
    Well, right now my 7-year-old niece outperforms all LLM contenders at drawing a pelican on a bicycle.
    replies(2): >>45097149 #>>45103451 #
    7. kenjackson ◴[] No.45097149{3}[source]
    I know this was a joke, but LLMs are quite good at this now. If your niece draws better, then she's a good artist.
    8. _heimdall ◴[] No.45097847{3}[source]
    Got it, and yeah, I agree with you there. I've been frustrated by a different aspect of it, though: many people seem to have a definition, and those definitions are often wildly different.
    9. abalashov ◴[] No.45098866{4}[source]
    I like to say that it's not AI -- it's just A.
    10. baq ◴[] No.45099679[source]
    Given OpenAI's definition, I'd expect AGI to be around in a decade or two. I don't expect Skynet, though; maybe a more realistic vision of the outcome is just droids mixing with humans.
    11. dahcryn ◴[] No.45101489{3}[source]
    Even AI does not have a concrete definition.

    Doesn't mean there aren't practical definitions depending on the context.

    In essence, a system you could teach using resources meant for humans, and nothing more, would be considered AGI. That could be a practical definition, without needing much more rigour.

    There is indeed no evidence we'll get there. But there is also no evidence that LLMs should work as well as they do.

    12. neuronexmachina ◴[] No.45103451{3}[source]
    I tried it in Gemini just now, and it seems to have done a decent job: https://g.co/gemini/share/b6fef8398c01