Trying to claim victory against AI/US Companies this early is a dangerous move.
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."
If my comment can be characterized as flamebait, it has to be to a lesser degree than the parent, right?
And I'm not even claiming that the situation applies. If you take the strongest plausible interpretation of my comment, it says that if this whole AI bubble is indeed hubris, and if there is indeed a huge fallout, then the leaders of this merry adventure must, right now, feel like Napoleon entering Moscow.
But well, anyways, cheers dang, it's a tough job.
[1]: the strongest possible interpretation of "This is how America ends up being ahead of the rest of the world with every new technology breakthrough" is arrogance, right?
But yeah, color me surprised. I was certain I was replying to arrogance rather than (possibly misplaced) admiration. I guess that puts me on the wrong side of the moderation fence. Sorry.
I totally get how the GP comment landed the way you describe, but that's why we have guidelines like these:
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
and (repeating this one) "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
Applying those to the GP comment (https://news.ycombinator.com/item?id=44974675), while it's true that the first sentence could sound like chest-beating, the rest of the comment was making an interesting point about risk tolerance.
The 'strongest plausible interpretation' might go something like this: "Even if the article is correct that 95% of companies are seeing zero return on AI spend so far, that by no means proves that they're on the wrong track. With a major technical wave like AI, it's to be expected that early efforts will involve a lot of losses. Long-term success may require taking early risk, and those with lesser risk tolerance, who aren't willing to sustain the losses associated with these pathfinding efforts, may find themselves losing out in the long run."
I have no idea whether that's right or not, but it would make for a more interesting and less hostile conversation! That's basically what we're shooting for here.