
114 points cmcconomy | 6 comments
anon291 ◴[] No.42174879[source]
Can we all agree that these models far surpass human intelligence now? I mean, they process hours' worth of audio in less time than it would take a human to even listen. I think the singularity has passed and we didn't even notice (which is what you'd expect).
replies(11): >>42174949 #>>42174987 #>>42175002 #>>42175008 #>>42175019 #>>42175095 #>>42175118 #>>42175171 #>>42175223 #>>42175324 #>>42176838 #
elashri ◴[] No.42174987[source]
Processing speed is not the metric for measuring intelligence. In the same way, some people of above-average intelligence take longer to think about a problem and come up with better ideas. One can argue that speed is useful in some respects, but humans span types of intelligence that an LLM lacks. Also, are you comparing against the average person, people at the top of their fields, or people working in science?

Also, humans can reason; LLMs currently can't do this in any useful way, and every attempt to make them do so is sharply limited by their context. Not to mention that their ability to create genuinely new things (as opposed to made-up nonsense) is very limited.

replies(1): >>42175020 #
anon291 ◴[] No.42175020[source]
You've hit on the idea that intelligence is not quantifiable by one metric. I completely agree. But you're holding AI to a much different standard than average people. Modern LLMs produce insights much faster and more accurately than most people (do you think you could pass the retrieval tasks the way the LLMs do, by reading the whole text? I really encourage people to try). By that metric (insights/speed), I think they far surpass even the most brilliant. You can claim that that's not intelligence until the cows come home, but any person able to do it would be considered a savant.
replies(2): >>42175080 #>>42175136 #
1. elashri ◴[] No.42175080[source]
I would argue the opposite, actually. We don't call someone a genius for doing arithmetic calculations very fast if they can't think in more useful mathematical ways and construct novel ideas. The same thing is happening here: these tools are useful for retrieving and processing existing information at high speed, but intelligence is not the ability to process data quickly and then recall it. That is what we actually call a savant. The ability to build on top of that retrieved knowledge and use reason to create new ideas is a closer definition of intelligence, and would be a better goal.
replies(1): >>42175152 #
2. anon291 ◴[] No.42175152[source]
Let's step back.

1. The vast majority of people never come up with a truly new idea. Those who do are considered exceptional, and their names go down in history books.

2. Most 'new ideas' are rehashes of old ones.

3. If you turn the temperature up on an LLM, it will absolutely come up with new ideas. Expecting an LLM to make a scientific discovery à la Einstein is... a bit much, don't you think [1]? When it comes to 'everyday' creativity, such as short poems, songs, recipes, vacation itineraries, etc., ChatGPT is more capable than the vast majority of people. Literally, ask ChatGPT to write you a song about _____, and it will come up with something creative. Ask it for a recipe with ridiculous ingredients and see what it does. It'll make things you've never seen before, generate an image for you, and even come up with a neologism if you ask it to. It's insanely creative.

[1] Although I have walked ChatGPT through various theoretical physics scenarios, and it will create new math for you.
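
For the curious, 'temperature' is just a scaling factor applied to the model's output logits before sampling: higher values flatten the distribution, so rarer, more surprising tokens get picked more often. A minimal sketch in Python (the logits are made up for illustration):

    import numpy as np

    def sample_with_temperature(logits, temperature=1.0):
        # T < 1 sharpens the distribution; T > 1 flattens it.
        scaled = np.array(logits) / temperature
        # Softmax, subtracting the max for numerical stability.
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Draw one token index from the resulting distribution.
        return np.random.choice(len(probs), p=probs)

    # Toy logits for four candidate tokens.
    logits = [2.0, 1.0, 0.5, 0.1]
    print(sample_with_temperature(logits, 0.2))  # almost always token 0
    print(sample_with_temperature(logits, 2.0))  # far more varied

At temperature 0.2 the model is nearly deterministic; at 2.0 it wanders, which is where the 'new ideas' come from.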

replies(2): >>42176118 #>>42176286 #
3. vlovich123 ◴[] No.42176118[source]
> The vast majority of people never come up with a truly new idea. those that do are considered exceptional and their names go down in history books.

Depends on your definition of "truly" new, since any idea could be argued to be a mix of all past ideas. But I see truly new ideas all the time that never make the history books, because most new ideas build incrementally on what came before, or are extremely niche. Only a very few turn out to be massive turning points with broad impact, and even that is usually evident only in retrospect (e.g., blue LEDs came out of trial and error on an approach that was nearly abandoned; transistors were believed to be impactful, but not the revolution in computing they turned out to be).

replies(1): >>42176221 #
4. anon291 ◴[] No.42176221{3}[source]
> Depends on your definition of "truly" new since any idea could be argued to be a mix of all past ideas.

My personal feeling when I engage in these conversations is that we humans have a cognitive bias: we ascribe a human's remixing of an old idea to intelligence, but an AI model's remixing of an old idea to lookup.

Indeed, basically every revolutionary idea is a mix of past ideas if you look closely enough. AI is a great example. To the layperson, AI is novel! It's new. It can talk to you! It's amazing. But for people who've been in this field for a while, it's an incremental improvement on linear algebra, topology, function spaces, etc.

5. ehhehehh ◴[] No.42176286[source]
It is not about novelty so much as it is about reasoning from first principles and learning new things.

I don’t need to fine-tune on five hundred pictures of rabbits to recognize one. I need one look, and then I’ll know it for life and can use that knowledge in unimaginable and endless variety.

This is a simplistic example which you can naturally pick apart, but when you do, I’ll provide another such example. My point is, learning at human (or even animal) speed is definitely not solved, and I’d say we are not even attempting that kind of learning yet. There is “in-context learning” and there is “fine-tuning”, and judging from anything I’ve had access to, neither is going to result in human-level intelligence.

I think you are anthropomorphizing a clever text-randomization process. There is a bunch of information being garbled and returned in semi-legible fashion, and you imbue the process behind it with an intelligence that I don’t think it has. All these models stumble over simple reasoning unless specifically trained on those specific types of problems. Planning is one particularly famous example.

Time will tell, but I’m not betting on LLMs. I think other forms of AI are needed. Ones that understand substance, modality, time and space and have working memory, not just the illusion of it.

replies(1): >>42176414 #
6. anon291 ◴[] No.42176414{3}[source]
> I don’t need to finetune on five hundred pictures of rabbits to know one. I need one look and then I’ll know for life and can use this in unimaginable and endless variety.

So if you do use in-context learning and give ChatGPT a few images of your novel class, it will usually classify correctly. Fine-tuning is just so you can save on token cost.

Moreover, you don't typically need that many pictures to fine-tune. The studies show that the models extrapolate successfully once they've been pre-trained. This is similar to how my toddler insists that a kangaroo is a dog. She hasn't been exposed to enough data to know otherwise; 'dog' is a much more fluid category for her than it is in real life. If you talk with her about it for a while, she will eventually figure out that a kangaroo is a kangaroo and a dog is a dog. But if you ask her again next week, she'll go back to saying they're dogs. Eventually she'll learn.
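
To make the distinction concrete: in-context learning just means putting the labeled examples in the prompt, while fine-tuning bakes them into the weights. A rough sketch of the former with the OpenAI Python SDK (untested; the model name and image URLs are placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The labeled examples live in the prompt itself; no weights change.
    messages = [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Here are two examples of a 'quokka'."},
            {"type": "image_url", "image_url": {"url": "https://example.com/quokka1.jpg"}},
            {"type": "image_url", "image_url": {"url": "https://example.com/quokka2.jpg"}},
            {"type": "text", "text": "Is the next image a quokka? Answer yes or no."},
            {"type": "image_url", "image_url": {"url": "https://example.com/mystery.jpg"}},
        ],
    }]

    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(reply.choices[0].message.content)

Every query pays for those example tokens again, which is exactly the cost that fine-tuning amortizes.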

> All these models stumble over simple reasoning unless specifically trained for those specific types of problems. Planning is one particularly famous example.

We have extremely expensive programs called schools and universities designed to teach little humans how to plan and execute. If you look at cultures without American/Western biases (and there aren't many left, so we really have to look to history), we see that planning the way we do it is not universal.