AI is one of those topics. Either AI is a total fraud, completely useless (at least for programming), or it's a deus ex machina.
But reality has more than one bit of information with which to answer a question like "Is AI just hype?"
Despite only recently becoming a father and feeling like I'm in my prime, I've seen many hype cycles.
And IT is an eternal cycle of hype. Every few years a new holy cow is driven through the village, promising to bring salvation to all of us and rid us of every problem (un)imaginable.
To give a few examples:
client-server, SPAs, Industry 4.0, machine learning, Agile, blockchain, cloud, managed languages.
To me LLMs are nice, though no revelation.
I can use them fine to generate JS or Python code, because apparently the training sets were big enough, and they help me by writing boilerplate code I was gonna write anyway.
When I ask them to help me write Rust or Zig, though, they fall extremely short.
LLMs are extremely overhyped. They made a few people very rich by promising too much.
They are not AI in any meaningful sense; calling them that is marketing.
But they are a tool, and they should be treated as such. Use them when appropriate, but don't hail them as saviors...
It is genuinely a useful technology. But it can't do everything, and we will have to figure out where it works well and where it doesn't.
For myself, I am not a huge user of it. But on my personal projects I have:
1) built graphing solutions in JavaScript in a day despite not really knowing the language or the libraries. This would have taken me weeks (elapsed) rather than one Saturday.
2) finished a large test suite, again in a day; that would have been weeks of elapsed effort for me.
3) figured out how to intercept messages to alter default behaviour in a Java Swing UI. Googling didn't help.
So I have found it to be a massive productivity boost when exploring things I'm not familiar with, or automating boring tests. That's why I'm surprised the study says developers were slower using it. Maybe they were holding it wrong ;)
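For what it's worth, the usual way to intercept events before Swing's default handling in case (3) is a `KeyEventDispatcher` registered on the `KeyboardFocusManager`, which sees every key event before any component does. This is only a minimal sketch of that mechanism, not the original project's code; the class name and the choice to block F1 (Swing's default help key) are made up for illustration.

```java
import java.awt.KeyEventDispatcher;
import java.awt.KeyboardFocusManager;
import java.awt.event.KeyEvent;
import javax.swing.JLabel;

public class SwingIntercept {
    // A KeyEventDispatcher runs before normal event delivery.
    // Returning true marks the event as handled, so no component
    // receives it and the default behaviour never fires.
    static final KeyEventDispatcher BLOCK_F1 =
        e -> e.getKeyCode() == KeyEvent.VK_F1;

    public static void main(String[] args) {
        // Install the interceptor globally, ahead of all components.
        KeyboardFocusManager.getCurrentKeyboardFocusManager()
                            .addKeyEventDispatcher(BLOCK_F1);

        // Quick headless check: F1 is swallowed, a plain 'a' is not.
        JLabel dummy = new JLabel();
        KeyEvent f1 = new KeyEvent(dummy, KeyEvent.KEY_PRESSED, 0L, 0,
                                   KeyEvent.VK_F1, KeyEvent.CHAR_UNDEFINED);
        KeyEvent a  = new KeyEvent(dummy, KeyEvent.KEY_PRESSED, 0L, 0,
                                   KeyEvent.VK_A, 'a');
        System.out.println("F1 intercepted: " + BLOCK_F1.dispatchKeyEvent(f1));
        System.out.println("A intercepted:  " + BLOCK_F1.dispatchKeyEvent(a));
    }
}
```

For per-component overrides instead of a global hook, overriding `processKeyEvent` in a component subclass is the other common route.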
You were previously talking about AI being both a bubble and useful. For reference, Wikipedia defines a bubble as "a period when current asset prices greatly exceed their intrinsic valuation". I find that hard to reason about.

One way to think about it: what matters economically is the value AI creates, and for it to be useful it would have to create more economic value than it destroys. But that's hard to judge without knowing the actual economics of the business, which the labs are not very transparent about.

On the other hand, I would imagine that all the infrastructure building by all the big players shows some level of confidence that we are way past "is it going to be useful enough?". That is not what reasonable people do when they think there's a bubble; at the least, it would be unprecedented.
And that's why I was asking.
AI is currently being treated as if it's a multi-trillion dollar market. What if it turns out to be more of a, say, tens of billions of dollars market?
If it was treated as a multi-trillion-dollar market, and that valuation was necessary to justify the current investments, then its turning out to be a tens-of-billions-of-dollars market would make it not useful.
We can go to the most extreme example: human life, which is presumably invaluable. That would seem to mean that, no matter what, if we have an effective treatment for a life-threatening disease, the treatment is useful. But that clearly isn't so: if a single treatment cost the GDP of the entire country, we should clearly not do it, even if we technically could. The treatment is simply not useful.
For AI the case is much simpler: if the AI we are currently building turns out, in effect, to have destroyed economic value, then it will not have been useful (because, as far as I can tell, the minimum promise of AI is positive economic value).
I prefer working with AI, but it ain't perfect, for sure.