56 points trott | 15 comments
makapuf ◴[] No.40714795[source]
Funny that it does not take that much data to train your average 20th-century human genius. I'd say that if we are dreaming about the future of AI, learning and reasoning seem like the greatest issues, not data. That said, the article title is about LLMs, so I guess that's what will need changing.
replies(3): >>40715430 #>>40715643 #>>40716666 #
jstanley ◴[] No.40715430[source]
Humans aren't just text interfaces, though. The majority of your input is not textual but sights, sounds, feelings, etc., which LLMs don't (yet?) have access to.

Humans receive an enormous amount of training data in forms not currently available to LLMs.

If you locked baby Einstein in a room with the collected works of humanity and left him there for a lifetime, I doubt he'd have even learnt to read on his own.

replies(6): >>40715609 #>>40715647 #>>40715822 #>>40715950 #>>40716247 #>>40716485 #
1. trott ◴[] No.40715822[source]
The stream of data from vision does NOT explain why humans learn 1000x faster: children who lost their sight early on can grow up to be intelligent. They can learn English, for example. They don't need to hear 200B words, as GPT-3 did.
replies(3): >>40716628 #>>40716999 #>>40720531 #
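A quick order-of-magnitude check on that gap. The child's word count is an assumption here, not a figure from the thread; estimates in the literature vary widely, and ~100 million words heard by adulthood is a common round figure.

```python
# Rough check of the ~1000x data-efficiency gap claimed above.
GPT3_TRAINING_WORDS = 200e9  # the 200B-word figure cited in the comment
CHILD_WORDS_HEARD = 100e6    # assumed; estimates vary widely
ratio = GPT3_TRAINING_WORDS / CHILD_WORDS_HEARD
print(f"GPT-3 saw ~{ratio:.0f}x more words")  # ~2000x
```

On these assumed numbers the gap is about three orders of magnitude, consistent with the "1000x" claim.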
2. bhickey ◴[] No.40716628[source]
The human brain isn't randomly initialized. It's undergone 500m years of pretraining.
replies(2): >>40717032 #>>40719440 #
3. LoganDark ◴[] No.40716999[source]
Humans use bottom-up reinforcement learning, but nearly all LLMs are trained with gradient descent. Not only are those completely different mechanisms (bottom-up in humans versus top-down in gradient descent) with completely different emergent behavior, but minimizing a loss is not part of a human's reward function, even if schools like to think it makes for an effective education. (I'd argue it doesn't.)
4. LoganDark ◴[] No.40717032[source]
This makes me wonder if human brains can be genetically predisposed to a particular dominant language. I'd imagine not since that isn't typically a factor in selection, but I still wonder.
replies(2): >>40717173 #>>40717313 #
5. bhickey ◴[] No.40717173{3}[source]
I doubt it. Language and human evolution operate on different time scales: we wouldn't be able to converse with someone from 13th-century England. If anything, I would expect selective pressure on the languages themselves: those that are easy to use are more likely to be adopted.

Second, I would expect this effect to be swamped by other factors (e.g. conquest).

6. Grimblewald ◴[] No.40717313{3}[source]
From what I have read and come to understand, it is more that we are predisposed to human language in general, with specific portions of our brain especially so.
replies(1): >>40717555 #
7. LoganDark ◴[] No.40717555{4}[source]
> it is more that we are generally predisposed to human language in general

I understand; that's not what I was wondering.

8. trott ◴[] No.40719440[source]
> The human brain isn't randomly initialized. It's undergone 500m years of pretraining.

All of the information accumulated by evolution gets passed on through DNA. For humans, that's well under 1 GB. Probably only a tiny fraction of that determines how the brain works at the algorithmic level. You should think of this information as the brain's "software", not as pretrained LLM weights (350 GB for GPT-3).
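The byte figures can be sanity-checked with quick arithmetic. The genome size and fp16 weight storage are assumptions here, not stated in the thread:

```python
# Upper bound on the information in human DNA vs. GPT-3's weight size.
# Assumes ~3.1 billion base pairs and fp16 (2-byte) weight storage.
GENOME_BASE_PAIRS = 3.1e9
BITS_PER_BASE = 2                        # 4 nucleotides -> 2 bits each
genome_mb = GENOME_BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
print(f"genome: ~{genome_mb:.0f} MB")    # ~775 MB, well under 1 GB

GPT3_PARAMS = 175e9
weights_gb = GPT3_PARAMS * 2 / 1e9       # 2 bytes per fp16 parameter
print(f"GPT-3 weights: {weights_gb:.0f} GB")  # 350 GB
```

On these assumptions the raw genome is roughly 450x smaller than GPT-3's weights, before even discounting the fraction unrelated to brain wiring.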

9. lostmsu ◴[] No.40720531[source]
Even audio is several orders of magnitude larger. Uncompressed stereo is roughly 100 kilobytes per second, so an hour is already about 0.36 gigabytes, and a year is ~3 TB.
replies(1): >>40720817 #
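The arithmetic behind those figures, for the record. The 100 KB/s rate is the commenter's round number; CD-quality stereo is actually closer to 176 KB/s (44100 Hz x 2 channels x 2 bytes per sample).

```python
# Yearly volume of uncompressed stereo audio at an assumed byte rate.
RATE = 100_000                         # bytes per second, assumed
hour_gb = RATE * 3600 / 1e9
year_tb = RATE * 3600 * 24 * 365 / 1e12
print(f"one hour: ~{hour_gb:.2f} GB")  # ~0.36 GB
print(f"one year: ~{year_tb:.1f} TB")  # ~3.2 TB
```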
10. trott ◴[] No.40720817[source]
> Uncompressed stereo is 100 kilobytes per second.

How much of that is cognitively useful for learning English? On top of the textual content, audio gives you emphasis and mood. Not a lot of information in that -- a few bits per sentence.

replies(1): >>40721824 #
11. lostmsu ◴[] No.40721824{3}[source]
Nearly all of it. You need a lot of pictures without cats to explain what a cat is.
replies(2): >>40722319 #>>40723538 #
12. makapuf ◴[] No.40722319{4}[source]
But you don't need millions of pictures of lions as a kid to know what a lion is.
replies(1): >>40723371 #
13. lostmsu ◴[] No.40723371{5}[source]
Neither do CNNs, so I don't quite see your point. You are throwing numbers around without good estimates. Get decent estimates for both children and NNs, then draw categorical conclusions.

Better yet, measure in bytes. And remember that kids look at video, not individual pictures (even if those are videos of pictures).

14. trott ◴[] No.40723538{4}[source]
> Nearly all of it.

Maybe you misunderstood me. I'm not talking about learning to understand spoken English.

You don't need hearing or vision at all to grow up to be intelligent (and able to write English).

replies(1): >>40725184 #
15. lostmsu ◴[] No.40725184{5}[source]
What is your point, exactly? Did you estimate the raw amount of data received by people before making claims about data efficiency?