It seems to boil down to:
1. LLMs aren't actually "learning" the way humans do, so we shouldn't be worried
2. LLMs don't actually "understand" anything, so we shouldn't be worried
3. Technology has always been advancing and we've always been freaking out about that, so we shouldn't be worried
4. If your job is automatable, it probably should be eliminated anyway
What's scary isn't that these models are smarter than us, but that we are dumb enough to deploy them in critical contexts and trust the output they generate.
What's scary isn't that these models are so good they'll replace us, but that despite how limited they are, someone will make the decision to replace humans anyway.
What's scary isn't that LLMs will displace good developers, but that LLMs put the power of development in the hands of people who have no idea what they're wielding.
> Sure, with millions upon millions of training examples, of course you can mimic intelligence. If you already know what’s going to be on the test, common patterns for answers in the test, or even the answer key itself, then are you really intelligent? Or are you just regurgitating information from billions of past tests?
How different are humans from this description, really? What are we if not the result of a process optimized over millions upon millions of iterations across long stretches of time?