Facebook? "Steal your data"
Google? "Kill your favourite feature"
Apple? "App Store is enemy of the people"
OpenAI? "More like ClosedAI amirite"
They apparently didn't read the article, or didn't understand it, or disregarded it. (Why, why, why?)
And they fail to realize that they don't know what they're talking about, yet keep talking anyway. Much like an overconfident AI.
On a discussion about hallucinating AIs, the humans start hallucinating.
If we (humans) make confident guesses but are wrong, others look at us disappointedly, thinking "oh, they don't know what they're talking about; I'm going to trust them a bit less from now on." And we tend to feel shame and want to withdraw.
That's a pretty strong punishment for being confidently wrong. Not so odd, then, that humans say "I'm not sure" more often than AIs do.
Altman simping all over.