> I think it's now clear that higher level capabilities are emerging in these models and we need to take this seriously as a social/economic/political challenge.
It is a hard truth to face. I admit I always feel a little bit of happiness when someone shows me a stupid error ChatGPT made, as if it would somehow invalidate all the awesome things it can do and the impact it will certainly have on all of us. But what does it matter whether ChatGPT is conscious or not when it can clearly automate a lot of work we previously considered creative?
Last year I started taking a serious look at AI and learning about LLMs. Until a few days ago I hadn't bought the explanation that these things are just predicting the next word, but I accepted it once I began running Alpaca/Llama locally on my computer.
The concept of predicting words based on statistics seems simple, but complex behavior clearly emerges from it. Maybe our own intelligence emerges from simple primitives too?
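
For the curious, here is a minimal sketch of that next-word loop in Python using Hugging Face's `transformers` library, with GPT-2 as a small stand-in model (an assumption for illustration; the mechanism is the same for Alpaca/Llama): encode the text so far, take the model's most probable next token, append it, and repeat.

```python
# A minimal sketch of next-token prediction.
# GPT-2 stands in for Alpaca/Llama here purely because it is small;
# the prediction loop itself is the same idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The concept of predicting words based on"
for _ in range(10):
    # Encode the running text and get logits over the vocabulary
    # at the final position.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Greedily pick the most probable next token (real samplers add
    # temperature and randomness, but this is the core of it)...
    next_id = logits[0, -1].argmax()
    # ...append it, and repeat. That is the whole "prediction" loop.
    text += tokenizer.decode(next_id)

print(text)
```

That's all the model does at inference time, over and over; everything impressive it produces comes out of that one repeated step.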