The upshot of this is that LLMs are quite good at the stuff that he thinks only humans will be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th century people assumed.
I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...
I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.
LLMs have access to what we generate, but not to its source. So they embed how we use words, but not why we choose one word and not another.
There's no reason to think an LLM (a few generations down the line, if not now) cannot do that.
And we can distort quite far (see cartoons in drawing, dubstep in music, ...).