
265 points by ctoth | 6 comments
mellosouls ◴[] No.43745240[source]
The capabilities of AI post-GPT-3 have become extraordinary and, in many cases, clearly superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or how (or even whether) we can get there from here.

What there is is a growing and often naïve excitement that anticipates it as coming into view, and unfortunately that will be accompanied by the hype-merchants desperate to be first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

replies(13): >>43745398 #>>43745959 #>>43746159 #>>43746204 #>>43746319 #>>43746355 #>>43746427 #>>43746447 #>>43746522 #>>43746657 #>>43746801 #>>43749837 #>>43795216 #
jjeaff ◴[] No.43745959[source]
I suspect AGI will be one of those things that you can't describe exactly, but you'll know it when you see it.
replies(7): >>43746043 #>>43746058 #>>43746080 #>>43746093 #>>43746651 #>>43746728 #>>43746951 #
NitpickLawyer ◴[] No.43746058[source]
> but you'll know it when you see it.

I agree, but with the caveat that it's getting harder and harder with all the hype / doom cycles and all the goalpost moving that's happening in this space.

IMO if you took gemini2.5 / claude / o3 and showed it to people from ten / twenty years ago, they'd say that it is unmistakably AGI.

replies(4): >>43746116 #>>43746460 #>>43746560 #>>43746705 #
Jensson ◴[] No.43746116[source]
> IMO if you took gemini2.5 / claude / o3 and showed it to people from ten / twenty years ago, they'd say that it is unmistakably AGI.

No they wouldn't, since those still can't replace human white collar workers even at many very basic tasks.

Once AGI is here most white collar jobs are gone, you'd only need to hire geniuses at most.

replies(1): >>43746249 #
zaptrem ◴[] No.43746249[source]
Which part of "General Intelligence" requires replacing white collar workers? A middle schooler has general intelligence (they know about and can do a lot of things across a lot of different areas) but they likely can't replace white collar workers either. IMO GPT-3 was AGI, just a pretty crappy one.
replies(2): >>43746254 #>>43746322 #
Jensson ◴[] No.43746254[source]
> A middle schooler has general intelligence (they know about and can do a lot of things across a lot of different areas) but they likely can't replace white collar workers either.

Middle schoolers replace white collar workers all the time; it takes 10 years for them to do it, but they can do it.

No current model can do the same, since none of them can learn over time the way a middle schooler does.

replies(1): >>43746692 #
sebastiennight ◴[] No.43746692[source]
Compared to someone who graduated middle school on November 30th, 2022 (2.5 years ago), would you say that today's gemini 2.5 pro has NOT gained intelligence faster?

I mean, if you're a CEO or middle manager and you have the choice of hiring this middle schooler for general office work, or today's gemini-2.5-pro, are you 100% saying the ex-middle-schooler is definitely going to give you the best bang for your buck?

Assuming you can either pay them $100k a year, or spend the $100k on gemini inference.

replies(1): >>43746742 #
Jensson ◴[] No.43746742{3}[source]
> would you say that today's gemini 2.5 pro has NOT gained intelligence faster?

Gemini 2.5 pro, the model, has not gained any intelligence, since it is a static model.

New models are not the old models learning; they are humans creating new models. The trained models have access to all the same material and knowledge a middle schooler has as they go on to learn how to do a job, yet they fail to learn the job while the kid succeeds.
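
To make concrete what I mean by "static", here's a rough sketch (just illustrative Python; Model, infer and train are made-up names, not any real Gemini or Claude API): the deployed model only ever reads its weights, and "learning" is a separate training run, done by humans, that produces a new model instead of changing the old one.

    from dataclasses import dataclass

    # Illustrative only: these names are made up for the example.

    @dataclass(frozen=True)
    class Model:
        weights: tuple  # fixed once training is finished

    def infer(model: Model, prompt: str) -> str:
        # Deployment: the weights are read, never written.
        # A million prompts later, model.weights is still identical.
        return f"answer from {len(model.weights)} frozen weights to {prompt!r}"

    def train(old: Model, new_data: list[str]) -> Model:
        # "Learning" happens here, offline, run by humans,
        # and it returns a *new* artifact rather than updating the old one.
        return Model(weights=old.weights + tuple(hash(x) for x in new_data))

    kid_2022 = Model(weights=(1, 2, 3))
    infer(kid_2022, "do this job")            # read-only, no matter how often you call it
    kid_2025 = train(kid_2022, ["3 more years of school"])  # a different model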

replies(3): >>43747033 #>>43747197 #>>43749355 #
ben_w ◴[] No.43747033{4}[source]
> Gemini 2.5 pro, the model, has not gained any intelligence, since it is a static model.

Surely that's an irrelevant distinction, from the point of view of a hiring manager?

If a kid takes ten years from middle school to being worth hiring, then the question is "what new AI do you expect will exist in 10 years?"

How the model comes to be doesn't matter. Is it a fine-tune on more training data from your company docs and/or an extra decade of the internet? A different architecture? A different lab in a different country?

Doesn't matter.

Doesn't matter for the same reason you didn't hire the kid immediately out of middle school, and hired someone else who had already had another decade to learn more in the meantime.

Doesn't matter for the same reason that different flesh humans aren't perfectly substitutable.

You pay to solve a problem, not to specifically have a human solve it. Today, not in ten years when today's middle schooler graduates from university.

And that's even though I agree that AI today doesn't learn effectively from as few examples as humans need.

replies(1): >>43749385 #
1. Jensson ◴[] No.43749385{5}[source]
> Surely that's an irrelevant distinction, from the point of view of a hiring manager?

Stop moving the goalposts closer: that you think humans might make an AGI in the future doesn't mean the current AI is an AGI just because it uses the same interface.

replies(1): >>43749669 #
2. ben_w ◴[] No.43749669[source]
Your own comment was a movement of the goalposts.

Preceding quotation to which you objected:

> A middle schooler has general intelligence (they know about and can do a lot of things across a lot of different areas) but they likely can't replace white collar workers either.

Your response:

> Middle schoolers replace white collar workers all the time; it takes 10 years for them to do it, but they can do it.

So I could rephrase your own words here as "Stop moving the goalposts closer: that you think a middle schooler might become a General Intelligence in the future doesn't mean the current middle schooler is a General Intelligence just because they use the same name".

replies(1): >>43749800 #
3. Jensson ◴[] No.43749800[source]
It's the same middle schooler; nobody gave a time limit for how long it takes the middle schooler to solve the problem. These AI models won't solve it no matter how much time is spent; you have to make new models, like making new kids.

Put one of these models in a classroom with middle schoolers and make it go through all the same experiences, and it still won't replace a white collar worker.

> a middle schooler might become a General Intelligence in the future

Being able to learn anything a human can means you are a general intelligence now. Having a skill is narrow intelligence; being able to learn is what we mean by general intelligence. No current model has demonstrated the ability to learn arbitrary white collar jobs, so no current model has done what it takes to be considered a general intelligence. The biological model Homo sapiens has demonstrated that ability, thus we call Homo sapiens generally intelligent.

replies(1): >>43756845 #
4. ben_w ◴[] No.43756845{3}[source]
> It's the same middle schooler; nobody gave a time limit for how long it takes the middle schooler to solve the problem.

Yeah they do. If a middle schooler takes 40 hours to solve a maths exam, they fail the exam.

> These AI models won't solve it no matter how much time is spent; you have to make new models, like making new kids.

First: it doesn't matter; in "white collar jobs", companies aren't paying for seat warmers, they're paying for problems solved, and not the kinds of problems 11 year olds can do.

Second: so far as I can tell, LLMs ace every written exam that 11 year olds, 16 year olds, and in many cases even 21 year olds take; the problem is coming up with new tests that describe the stuff we want which models can't do but humans can. This means that while I agree these models do have gaps, I can't actually describe those gaps in a systematic way; they just "vibe", like my own experience of continuing to misunderstand German as a Brit living in Berlin.

Third: going from 11 years old to adulthood, most or all atoms in your body will be replaced, and your brain architecture changes significantly. IIRC something like half of synapses get pruned by puberty.

Fourth: taking a snapshot of a model and saying that snapshot can't learn is like taking a sufficiently detailed MRI scan of a human brain and saying the same thing about the human you've imaged; training cut-offs are kinda arbitrary.

> No current model has demonstrated the ability to learn arbitrary white collar jobs, so no current model has done what it takes to be considered a general intelligence.

Both "intelligence" and "generality" are continuums, not booleans. It's famously hard for humans to learn new languages as they get older, for example.

All AIs (not just LLMs) need a lot more experience than I do, which means my intelligence is higher. When sufficient training data exists, that doesn't matter, because the AI can just make up for being stupid by being stupid really fast, which is how they can read and write in more languages than I know the names of.

On the other hand, LLMs have so far demonstrated, at the junior level of a fresh graduate of 21, let alone an 11 year old, algebra, physics, chemistry, literature, coding, a hundred or so languages, medicine, law, politics, marketing, economics, and customer support. That's pretty general, even if "fresh graduate" isn't a high standard for employment.

It took reading a significant fraction of the internet to get to that level because of their inefficiency, but they're superhumanly general, "Jack of all trades, master of none".

Well, superhuman compared to any individual. LLM generality only seems mediocre when compared to the entire human species at once; these models vastly exceed any single human, because no single human speaks as many languages as these things do, let alone all the other stuff.

replies(1): >>43756967 #
5. Jensson ◴[] No.43756967{4}[source]
I think you are off topic here. You agree these models can't replace those humans, hence you agree they aren't AGI; the rest of your post somehow got into whether companies would hire 11 year olds or not.

The point is, if we had a model as smart as a 10 year old, we could put that model through school and then it would be able to do white collar jobs like a 25 year old. But no model can do that, hence the models aren't as smart as 10 year olds, since the biggest part of being smart is being able to learn.

So until we have a model that can do those white collar jobs, we know they aren't as generally smart as 10 year olds, since they can't replicate the same learning process. If they could replicate that learning process, we would do it, and we would have that white collar worker.

replies(1): >>43757118 #
6. ben_w ◴[] No.43757118{5}[source]
Reread it; I edit stuff while composing, and hadn't finished until at least 13 minutes after your comment.

Employability is the core issue, as you brought up the white collar worker comparison:

"""No they wouldn't, since those still can't replace human white collar workers even at many very basic tasks.

Once AGI is here most white collar jobs are gone, you'd only need to hire geniuses at most.""" - https://news.ycombinator.com/item?id=43746116

Key thing you likely didn't have in the comment you replied to: G and I are not bool.