There are better ways to argue the same position, but the position itself is probably indefensible.
Neural networks are fundamentally approximators, whether they are approximating long-range relations between concepts (as LLMs do) or denoising noise into images. They approximate thought and intelligence because we have encoded both in our writings. Our writings are a complete fingerprint of the thought process: we can pick up any line of reasoning from a book without ever seeing or otherwise having contact with the author. Therefore there is increasingly high-fidelity "real thought" in there.
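To ground the "approximator" claim, here is a minimal, hypothetical sketch in plain numpy (the width, learning rate, and target function are all made up for illustration): a one-hidden-layer network fit to sin(x) by hand-written gradient descent. This is the universal-approximation property in its smallest form.

```python
import numpy as np

# Hypothetical toy example: a one-hidden-layer tanh network fit to sin(x)
# by full-batch gradient descent, illustrating "networks as approximators".
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(X)

H = 32                                # hidden width (arbitrary choice)
W1 = rng.normal(0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))
b2 = np.zeros(1)

lr = 0.01
for step in range(20000):
    # forward pass: tanh hidden layer, linear output
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                    # gradient of 0.5 * MSE w.r.t. pred

    # backward pass (chain rule by hand)
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)    # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("max abs error:", np.abs(np.tanh(X @ W1 + b1) @ W2 + b2 - y).max())
```

The same mechanism, scaled up in depth and width and pointed at text instead of sin(x), is the claim above.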
> that in and of itself is a monumental achievement but there is no real thought involved.
Pretty much anything that requires our current level of thought is therefore reachable with ANNs, now that we know how to scale their training in both depth and width.
The real question is whether this is enough. We want ASI, but our texts only contain AGI, and everything that comes from biology (including intelligence) scales logarithmically. There is zero evidence that language models will ever learn to create abstractions better than any collection of humans can. AI companies are advertising armies of PhD students, but we already have millions of PhD students, yet our most pressing problems have seen little progress for decades. That is what should worry us, not the fact that we will all lose our jobs.