and it is wilful, they know full well it has no concept of truthfulness, yet they serve up its slop output directly into the faces of billions of people
and if this makes "AI" nonviable as a business? tough shit
And even if they did, it wouldn't really matter. The way Google search is overwhelmingly used in practice, misinformation spread by it is a public hazard and needs to be treated as such.
They certainly don't make hyperspecific claims like "this YouTuber traveled to Israel and changed his mind about the war there, as documented in a video he posted on August 18".
So you accept that all of this is just a quibble over what the disclaimer says? Rather than "AI generated, might contain mistakes", it should just say "for entertainment purposes only" and they'll be in the clear?
If a fortune teller published articles claiming false things about random people, gave dangerous medical advice, claimed to be a Nigerian prince, or convinced you to put all your savings into bitcoin, the "entertainment purposes" shield would dissolve quite quickly.
Google makes an authoritative statement on top of the world's most used search engine, in a similar way to what they previously did with Wikipedia for relevant topics.
The little disclaimer should not shield them from liability for doing real, tangible harm to people.
The Google disclaimer should probably be upfront and say something more like, "The following statements are fictional, provided for entertainment purposes only. Any resemblance to persons living or dead is purely coincidental."