Also, humans can reason; LLMs currently can't do this in any useful way, and every attempt to make them do so is severely limited by their context. Not to mention that their ability to create genuinely new things that don't already exist (as opposed to completely made-up nonsense) is very limited.
You're basically ignoring all the experts saying "LLMs suck at all these things that even beginning domain experts don't suck at" to generate your claim & then ignoring all evidence to the contrary.
And you're ignoring the ways in which LLMs fall on their face when asked to be creative in ways that aren't language-based. Creative problem solving of kinds they haven't been trained on is outside their domain, while it sits squarely within the domain of human intelligence.
> You can claim that that's not intelligence until the cows come home, but any person able to do that would be considered a savant
Computers can do arithmetic really quickly, but that's not intelligence, even though a person who could compute that quickly would be considered a savant. You've built up an erroneous dichotomy in your head.
Sure, any domain expert can easily get an LLM to trip up on something. But the sheer number of things it is above average at easily puts it into the top echelon of humans.
> You're basically ignoring all the experts saying "LLMs suck at all these things that even beginning domain experts don't suck at" to generate your claim & then ignoring all evidence to the contrary.
Domain expertise is not the only form of intelligence. The most interesting things often lie at the intersections of domains. As I said in another comment, there are a variety of ways to judge intelligence, and no single quantifiable metric. It's like asking whether Einstein is better than Mozart. I don't know... their fields are so different. However, I think it's pretty safe to say that the modern slate of LLMs falls into the top 10% of human intelligence, simply for their breadth of knowledge and their ability to synthesize ideas at the intersection of a wide range of fields.
How many LLMs have created companies entirely on their own? Or done anything unprompted, for that matter? You can go on about it, but the fact that they require human prompting means the intelligence comes from the human using them, not the LLM itself. Tools are not intelligent.