
418 points by speckx | 1 comment
eldenring ◴[] No.44974675[source]
This is how America ends up being ahead of the rest of the world with every new technology breakthrough. They spend a lot of money, lose a lot of money, take risks, and then end up being too far ahead for others to catch up.

Trying to claim victory against AI/US companies this early is a dangerous move.

replies(5): >>44974781 #>>44974815 #>>44975537 #>>44976039 #>>44976360 #
vdupras ◴[] No.44974815[source]
[flagged]
replies(2): >>44974936 #>>44975892 #
dang ◴[] No.44975892[source]
"Eschew flamebait. Avoid generic tangents."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize."

https://news.ycombinator.com/newsguidelines.html

replies(1): >>44976248 #
vdupras ◴[] No.44976248[source]
I have trouble understanding how that guideline applies here. The original article shows how it's possible that we're about to see an AI bubble pop, the parent comment shows generic American arrogance[1], and I came up with a historical example of such a mix of hubris and arrogance.

If my comment can be characterized as flamebait, it has to be to a lesser degree than the parent, right?

And I'm not even claiming that the situation applies. If you take the strongest plausible interpretation of my comment, it says that if indeed this whole AI bubble is hubris, if indeed there's a huge fallout, then the leaders of this merry adventure, right now, must feel like Napoleon entering Moscow.

But well, anyways, cheers dang, it's a tough job.

[1]: the strongest possible interpretation of "This is how America ends up being ahead of the rest of the world with every new technology breakthrough" is arrogance, right?

replies(2): >>44976886 #>>44981003 #
dang ◴[] No.44981003{3}[source]
By generic tangent I just meant that we ended up arguing about Napoleon of all things! And the flamebait part was the sarcastic/snarky bit.

I totally get how the GP comment landed the way you describe, but that's why we have guidelines like these:

"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."

and (repeating this one) "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

Applying those to the GP comment (https://news.ycombinator.com/item?id=44974675), while it's true that the first sentence could sound like chest-beating, the rest of the comment was making an interesting point about risk tolerance.

The 'strongest plausible interpretation' might go something like this: "Even if the article is correct that 95% of companies are seeing zero return on AI spend so far, that by no means proves that they're on the wrong track. With a major technical wave like AI, it's to be expected that early efforts will involve a lot of losses. Long-term success may require taking early risk, and those with lesser risk tolerance, who aren't willing to sustain the losses associated with these pathfinding efforts, may find themselves losing out in the long run."

I have no idea whether that's right or not but it would make for a more interesting and less hostile conversation! which is basically what we're shooting for here.