That's deeply concerning, especially when these two companies control almost all the content we access through their search engines, browsers and LLMs.
This needs to be regulated. These companies should be held accountable for spreading false information or rumours, as it can have unexpected consequences.
Regulated how? Held accountable how? If we start fining LLM operators for every piece of incorrect information, you might as well stop serving the LLM to that country.
> since it can have unexpected consequences
Generally you hold the person who takes action accountable. Claiming an LLM told you bad information isn’t any more of a defense than claiming you saw the bad information on a Tweet or Reddit comment. The person taking action and causing the consequences has ownership of their actions.
I recall the same hand-wringing over early search engines: there were debates about them indexing bad information and calls to hold them accountable for incorrect results. Same reasoning: there could be consequences. The outrage died out as people realized search engines were tools to be used with caution, not fact-checked and carefully curated encyclopedias.
> I'm worried about this. Companies like Wikipedia spent years trying to get things right,
Would you also endorse the same regulations against Wikipedia? Should Wikipedia get fined every time incorrect information is found on the site?
EDIT: The parent comment was edited while I was replying to add the point about outside the US. I welcome some country trying to regulate LLMs and hold them accountable for inaccurate results, so we have a precedent for how bad an idea that is and how quickly citizens would switch to VPNs to reach the LLM providers that shut off service to their country in response.
Other companies have been fined for misleading customers [0] after a product launch. So why make an exception for Big Tech outside the US?
And why is the EU the only bloc actively fining US Big Tech? We need China, the rest of Asia and South America to follow its lead.
[0] https://en.m.wikipedia.org/wiki/Volkswagen_emissions_scandal
But how do we know they're telling the truth? How do we know it wasn't intentional? And more importantly, who's held accountable?
While Google's AI made the mistake, Google deployed it, branded it, and controls it. If this kind of error causes harm (defamation, reputational damage, or interference with public opinion), intent doesn't necessarily matter for accountability.
So while it's not illegal to be wrong, the scale and influence of Big Tech mean they can't hide behind "it was the AI, not us."