A lot of LLM-based software is uneconomical because we don't have enough compute and electricity for what it's trying to do.
GPUs are different: unless things go very poorly, today's GPUs should be pretty much obsolete after ten years.
The ecosystem for GPGPU software, and the ability to design and manufacture new GPUs, might be like fiber. But there's a difference: an ecosystem isn't a useful asset at rest; it only works while Nvidia (or some successor) is still running.
I do think that ecosystem will stick around. Whatever the next thing after AI is, I bet Nvidia has a good enough stack at this point to pivot to it. They are the vendor for high-throughput devices: CPU vendors will never keep up with Nvidia's ability to just go wider, and coders today are good enough not to need the crutch of the lower latency that CPUs provide. (Well, really we just call frameworks written by cleverer people, but borrowing smarts is a form of cleverness.)
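The "go wider" trade-off can be sketched in a toy Python example (mine, not from the post), with NumPy standing in for the kind of framework written by cleverer people: the same computation expressed as a latency-oriented scalar loop and as one wide call that delegates to an optimized kernel.

```python
# Toy illustration of latency-oriented vs throughput-oriented code.
# NumPy here is a stand-in for "a framework written by cleverer people";
# on a GPU the same idea would be a single wide kernel launch.
import numpy as np

def scalar_sum_of_squares(xs):
    # Latency-oriented: one element at a time, each step cheap and fast.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def wide_sum_of_squares(xs):
    # Throughput-oriented: hand the whole array to one wide operation.
    return float(np.dot(xs, xs))

xs = np.arange(100_000, dtype=np.float64)
# Same answer either way, up to floating-point accumulation order.
assert np.isclose(scalar_sum_of_squares(xs), wide_sum_of_squares(xs), rtol=1e-9)
```

The point is that the second form doesn't care how slow any single step is, only how many run at once, which is exactly the bet a throughput device makes.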
But we do need somebody to keep releasing new versions of CUDA.