Also, humans can reason. LLMs currently can't do this in any useful way, and every attempt to make them do so is severely limited by their context. Not to mention that their ability to create genuinely new things where none exist (as opposed to made-up nonsense) is very limited.
You're basically ignoring all the experts saying "LLMs suck at all these things that even beginning domain experts don't suck at" to generate your claim & then ignoring all evidence to the contrary.
And you're ignoring the ways in which LLMs fall on their face at creativity that isn't language-based. Creative problem solving of kinds they haven't been trained on is outside their domain, while squarely within the domain of human intelligence.
> You can claim that that's not intelligence until the cows come home, but any person able to do that would be considered a savant
Computers can do arithmetic really quickly, but that's not intelligence, even though a person computing that quickly would be considered a savant. You've built up an erroneous dichotomy in your head.
Sure, for any domain expert, you can easily get an LLM to trip up on something. But the sheer number of things it is above average at easily puts it into the top echelon of humans.
> You're basically ignoring all the experts saying "LLMs suck at all these things that even beginning domain experts don't suck at" to generate your claim & then ignoring all evidence to the contrary.
Domain expertise is not the only form of intelligence. The most interesting things often lie at the intersections of domains. As I said in another comment, there are a variety of ways to judge intelligence, and no single quantifiable metric. It's like asking if Einstein is better than Mozart. I don't know... their fields are so different. However, I think it's pretty safe to say that the modern slate of LLMs falls into the top 10% of human intelligence, simply for their breadth of knowledge and ability to synthesize ideas at the intersection of any number of fields.
But they're not. People who are extremely competent in many fields will still outperform LLMs in those fields. The LLM can basically only outperform a complete beginner in an area & makes up for that weakness by scaling up the amount it can output, which a human can't match. That doesn't take away from the fact that the output is complete garbage when given anything it doesn't know the answer to. As I noted elsewhere, ask it to provide an implementation of the S3 ListObjects operation (i.e., the actual backend) & see what BS it outputs, to the point where you have to spend a good amount of time convincing it not to just give you an example of calling the S3 ListObjects API.
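To make the distinction concrete, here's a toy sketch of what "implement ListObjects" means: server-side, you're the thing producing the XML response, not calling it. The dict-as-bucket and the function name are mine, and this skips delimiters, continuation tokens, owner metadata, and everything else real S3 handles; contrast it with the client-side call the model keeps reaching for, shown in the trailing comment:

    # Toy sketch only: a server-side handler that *produces* a ListObjects
    # response. The dict-as-bucket and function name are illustrative; real
    # S3 also handles delimiters, continuation tokens, versioning, etc.
    from xml.sax.saxutils import escape

    def handle_list_objects(bucket: dict, prefix: str = "", max_keys: int = 1000) -> str:
        matching = sorted(k for k in bucket if k.startswith(prefix))
        keys = matching[:max_keys]
        contents = "".join(
            f"<Contents><Key>{escape(k)}</Key><Size>{len(bucket[k])}</Size></Contents>"
            for k in keys
        )
        truncated = "true" if len(matching) > max_keys else "false"
        return (
            '<?xml version="1.0" encoding="UTF-8"?>'
            f"<ListBucketResult><Prefix>{escape(prefix)}</Prefix>"
            f"<IsTruncated>{truncated}</IsTruncated>{contents}</ListBucketResult>"
        )

    # What you tend to get instead: client-side *usage* of the API, e.g.
    # import boto3
    # boto3.client("s3").list_objects_v2(Bucket="my-bucket", Prefix="photos/")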
> I think it's pretty safe to say that the modern slate of LLMs falls into the top 10% of human intelligence, simply for their breadth of knowledge and ability to synthesize ideas at the intersection of any number of fields.
Again, that's evidence assumed rather than submitted. Please provide an indication of any truly novel ideas being synthesized by LLMs at the intersection of fields.
The problem here is that you expect something akin to relativity, the Poincaré conjecture, and the like. The vast majority of humans are not able to do this.
If you restrict yourself to the sorts of creativity that average people are good at, the models do extremely well.
I'm not sure how to convince you of this. Ideally, I'd get a few people of above average intelligence together, and give them an hour (?) to work on some problem / creative endeavor (we'd have to restrict their tool use to the equivalent of whatever we allow GPT to have), and then we can compare the results.
EDIT: Here's what ChatGPT thinks we should do: https://chatgpt.com/share/673b90ca-8dd4-8010-a1a0-61af699a44...
I want to be clear: I'm talking about the intelligence of AI systems available today, and today only. There are lots of reasons to be enthusiastic about the future, but similarly to be very cautious about understanding what is available today, & what is available today isn't human-like.
This is a common fallacy. The average human ingests a few dozen GB of data a day [1] [2].
GPT-4 was reportedly trained on 13 trillion tokens. Say a token is 4 bytes (it's more like 3, but we're being conservative). That's 52 trillion bytes, or 52 terabytes.
Say the average human only consumes the lower estimate of 30 GB a day. That means it would take a human about 1,733 days, or roughly 4.7 years, to consume the number of tokens ChatGPT was trained on. Assuming humans and the LLM start from the same spot [3], the proper question is: is ChatGPT smarter than a 4.7-year-old? If we use the higher estimate, then we have to ask if ChatGPT is smarter than a 2-year-old. Does ChatGPT hallucinate more or less than the average toddler?
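For anyone who wants to check the division, here's the arithmetic as a runnable sketch. The 70 GB/day "higher estimate" is my assumption, back-solved from the ~2-year figure, not a number taken from the links below:

    # Back-of-the-envelope check of the figures above. The 13T-token count
    # is the commonly reported GPT-4 number; 70 GB/day is NOT from [1]/[2],
    # it's back-solved from the ~2-year claim as an assumed higher estimate.
    tokens = 13e12
    bytes_per_token = 4                      # conservative; ~3 is more typical
    corpus_bytes = tokens * bytes_per_token  # 5.2e13 bytes = 52 TB

    for gb_per_day in (30, 70):
        days = corpus_bytes / (gb_per_day * 1e9)
        print(f"{gb_per_day} GB/day -> {days:.0f} days ({days / 365:.1f} years)")
    # 30 GB/day -> 1733 days (4.7 years)
    # 70 GB/day -> 743 days (2.0 years)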
The cognitive bias I've seen everywhere is the idea that humans are trained on a small amount of data. Nothing could be further from the truth. Humans require training on an insanely large amount of data. A 40-year-old human has been trained on orders of magnitude more data than I think we even have available as datasets. If you prevent a human from being trained on this amount of data through sensory deprivation, they go crazy (and hallucinate very vividly, too!).
No argument about energy, but this is a technology problem.
[1] https://www.tech21century.com/the-human-brain-is-loaded-dail...
[2] https://kids.frontiersin.org/articles/10.3389/frym.2017.0002...
[3] this is a bad assumption since LLMs are randomly initialized whereas humans seem to be born with some biases that significantly aid in the acquisition of language and social skills
A student consumes only ~6 hours of relevant material a day, on various subjects in textual form (textbooks), with minimal guidance from a domain expert and some guidance from peers.
Have you read the studies backing your links? The methodology they use to arrive at that estimate is highly questionable on its own, let alone when it's used to compare with LLMs. Domain experts in the field are pretty confident that LLMs are trained on more actual information than humans are.
> If you prevent a human from being trained on this amount of data through sensory deprivation they go crazy (and hallucinate very vividly too!).
People who are deaf & blind experience a significant amount of sensory deprivation compared with the typical human, but they do not go crazy or start hallucinating. This suggests that your analysis is flawed. For humans, communication is the important bit: as long as we have some kind of communication mechanism, we can achieve quite a lot.