If anything, LLMs have surprised us with how much better they are than humans at understanding instructions for text-based activities. But they are MUCH worse than humans when it comes to creating images/videos.
That's demonstrably false, as shown by both OpenAI's own research [1] and countless independent studies by now.
What is fascinating is how some people cling to false ideas about what an LLM is and isn't.
It's a recurring fallacy that's bound to get its own name any day now.
Put it this way — I'm going to give you a text-based question to solve, and you have a choice: another human (randomly selected from adults in the US) or ChatGPT, with each given 30 minutes to read and solve the problem. Which would you choose?
You wouldn't randomly select an arbitrary adult from the USA to perform brain surgery on you, so this argument is pure sophistry.
But if you were to go back to 2020 and ask whether you'd take a random human over the state-of-the-art AI to answer a text question, you'd take the random human every time except for arithmetic (and you'd have to write it in math notation, not plain English).
And if you were to ask AI experts when they'd choose the AI instead, they'd say not for at least a decade or two, if ever.