Me, I hate the externalities, but I love the thing. I want to use my own AI, hyper optimized and efficient and private. It would mitigate a lot. Maybe some day.
It's weird how AI-lovers are always trying to shoehorn an unsupported "it does useful things" into some kind of criticism sandwich where only the solvable problems can be acknowledged as problems.
Just because some technologies have both upsides and downsides doesn't mean that every technology automatically has upsides. GenAI is good at generating these kinds of hollow statements that mimic the form of substantial arguments, but anyone who actually reads it can see how hollow it is.
If you want to argue that it does useful things, you have to explain at least one of those things.
It's bad at
- Actually knowing things / being correct
- Creating anything original
It's good at
- Producing convincing output fast and cheap
There are lots of applications where correctness and originality matter less than "can I get convincing output fast and cheap". Other commenters have mentioned being able to vibe-code up a simple app, for example. I know an older man who is not great at writing in English (but otherwise very intelligent) who uses it for correspondence.
Being wrong or lying is almost universally bad and unproductive. But making money has nothing to do with being productive - you can actively make the world worse and make money. Ask RJ Reynolds.
Sure, but there are cases where rightness isn’t a thing.
Don’t get me wrong, I’m not an AI stan; it has real problems. But it’s also not going anywhere. Eventually the bubble will pop and we’ll see which applications of AI turned out to be useful and which didn’t.