
265 points ctoth | 7 comments
mellosouls No.43745240
The capabilities of AI post gpt3 have become extraordinary and clearly in many cases superhuman.

However (as the article admits) there is still no general agreement on what AGI is, or how (or even whether) we can get there from here.

What there is instead is a growing, often naïve excitement that anticipates AGI coming into view, and that will unfortunately be accompanied by hype-merchants desperate to be the first to "call it".

This article seems reasonable in some ways but unfortunately falls into the latter category with its title and sloganeering.

"AGI" in the title of any article should be seen as a cautionary flag. On HN - if anywhere - we need to be on the alert for this.

jjeaff No.43745959
I suspect AGI will be one of those things that you can't describe exactly, but you'll know it when you see it.
NitpickLawyer No.43746058
> but you'll know it when you see it.

I agree, but with the caveat that it's getting harder and harder, with all the hype/doom cycles and all the goalpost-moving happening in this space.

IMO if you took gemini2.5 / claude / o3 and showed it to people from ten / twenty years ago, they'd say that it is unmistakably AGI.

Jensson No.43746116
> IMO if you took gemini2.5 / claude / o3 and showed it to people from ten / twenty years ago, they'd say that it is unmistakably AGI.

No they wouldn't, since those still can't replace human white-collar workers even at many very basic tasks.

Once AGI is here, most white-collar jobs are gone; you'd only need to hire geniuses, at most.

zaptrem No.43746249
Which part of "General Intelligence" requires replacing white collar workers? A middle schooler has general intelligence (they know about and can do a lot of things across a lot of different areas) but they likely can't replace white collar workers either. IMO GPT-3 was AGI, just a pretty crappy one.
Jensson No.43746254
> A middle schooler has general intelligence (they know about and can do a lot of things across a lot of different areas) but they likely can't replace white collar workers either.

Middle schoolers replace white-collar workers all the time; it takes them ten years to do it, but they can do it.

No current model can do the same, since models aren't able to learn over time the way a middle schooler does.

sebastiennight No.43746692
Compared to someone who graduated middle school on November 30th, 2022 (2.5 years ago), would you say that today's gemini 2.5 pro has NOT gained intelligence faster?

I mean, if you're a CEO or middle manager and you have the choice of hiring this middle schooler for general office work, or today's gemini-2.5-pro, are you 100% saying the ex-middle-schooler is definitely going to give you the best bang for your buck?

Assuming you can either pay them $100k a year, or spend the $100k on gemini inference.
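For concreteness, here's a back-of-envelope sketch of what that $100k of inference buys. The per-token prices and the input:output mix below are assumptions for illustration, not quoted rates for any real model.

```python
# Back-of-envelope: how many tokens does a $100k/year inference budget buy?
# Prices are hypothetical, roughly in line with frontier-model API pricing.
ANNUAL_BUDGET_USD = 100_000
INPUT_PRICE_PER_M = 1.25    # assumed $ per 1M input tokens
OUTPUT_PRICE_PER_M = 10.0   # assumed $ per 1M output tokens

# Assume a 4:1 input:output token mix, plausible for office-style tasks.
blended_price_per_m = (4 * INPUT_PRICE_PER_M + 1 * OUTPUT_PRICE_PER_M) / 5

tokens_per_year_m = ANNUAL_BUDGET_USD / blended_price_per_m  # in millions
print(f"~{tokens_per_year_m:,.0f}M tokens/year")
print(f"~{tokens_per_year_m / 365:,.1f}M tokens/day")
```

Under those assumptions the budget works out to roughly 33 billion tokens a year, i.e. on the order of 90 million tokens a day, which is the scale the comparison is really about.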

Jensson No.43746742
> would you say that today's gemini 2.5 pro has NOT gained intelligence faster?

Gemini 2.5 pro, the model, has not gained any intelligence, since it is a static model.

New models are not the models learning; it is humans creating new models. The trained models have access to all the same material and knowledge a middle schooler has as they go on to learn how to do a job, yet they fail to learn the job while the kid succeeds.

sebastiennight No.43749355
This argument needlessly anthropomorphizes the models. They are not humans nor living entities, they are systems.

So, fine, the gemini-2.5-pro model hasn't gotten more intelligent. What about the "Google AI Studio API" as a system? Or the "OpenAI chat completions API" as a system?

This system has definitely gotten vastly smarter based on the input it's gotten. Would you now concede that, if we look at the API level (which, by the way, is how you as the employer interact with it), this entity has gotten smarter far faster than the middle-schooler over the last 2.5 years?

Jensson No.43749376
But it's the AI researchers that made it smarter; it isn't a self-contained system like a child. If you fired the people maintaining it and it just interacted with people, it would stop improving.
sebastiennight No.43750455
1. The child didn't learn algebra on its own either. Aside from Blaise Pascal, most children learned those skills by having experienced humans teach them.

2. How likely is it that we're going to fire everyone maintaining those models in the next 7.5 years?

Jensson No.43750596
> The child didn't learn algebra on its own either. Aside from Blaise Pascal, most children learned those skills by having experienced humans teach them.

That is them interacting with an environment. We don't go and rewire their brain to make them learn math.

If you made an AI that you could put in a classroom and that learns everything needed to do any white-collar job that way, then it would be an AGI. Just like a human, different jobs would require different classes, but just like a human, you could still make it learn anything.

> How likely is it that we're going to fire everyone maintaining those models in the next 7.5 years?

If you stop making new models? Zero chance the model will replace such high-skill jobs. If not? Then that has nothing to do with whether current models are general intelligences.

ben_w No.43752312
The brain of a child is not self-contained either. Neither is the entire complete child themselves — "It takes a village to raise a child", to quote the saying.

The entire reason we have a mandatory education system that doesn't stop with middle school (for me, middle school ended at age 11) is that it's a way to improve kids.

int_19h No.43754392
Your brain does rewire itself as you learn.

Here's a question for you. If we take a model with open weights - say, LLaMA or Qwen - and give it access to learning materials as well as tools to perform training runs on its weights and dynamically reload those updated weights - would that constitute learning, to you? If not, then why not?
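A minimal sketch of that load/train/save/reload loop, with a toy 1-D linear model standing in for the LLM. Every name here is illustrative, not any real LLaMA/Qwen tooling; the point is only the loop structure: load current weights, run a training pass over new material, persist the result, then dynamically reload it.

```python
import json, os, tempfile

def predict(weights, x):
    return weights["w"] * x + weights["b"]

def training_run(weights, data, lr=0.01, epochs=200):
    """One 'training run': per-sample gradient descent on squared error."""
    w, b = weights["w"], weights["b"]
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return {"w": w, "b": b}

def save_weights(weights, path):
    with open(path, "w") as f:
        json.dump(weights, f)

def load_weights(path):
    with open(path) as f:
        return json.load(f)

# The loop: load weights, train on "learning materials", persist, reload.
path = os.path.join(tempfile.mkdtemp(), "weights.json")
save_weights({"w": 0.0, "b": 0.0}, path)

materials = [(x, 2 * x + 1) for x in range(-5, 6)]  # target: y = 2x + 1
weights = load_weights(path)
weights = training_run(weights, materials)
save_weights(weights, path)

weights = load_weights(path)       # the "dynamic reload" step
print(predict(weights, 10))        # converges toward 21
```

Whether scaling this loop up to an open-weights LLM counts as "learning" in the sense being debated is exactly the question the comment poses; mechanically, nothing stops the system from updating itself this way.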

Jensson No.43756933
> Here's a question for you. If we take a model with open weights - say, LLaMA or Qwen - and give it access to learning materials as well as tools to perform training runs on its weights and dynamically reload those updated weights - would that constitute learning, to you? If not, then why not?

It does constitute learning, but it won't make it smart, since it isn't intelligent about its learning the way human brains are.