Death by AI

(davebarry.substack.com)
583 points by ano-ther | 3 comments
devinplatt No.44619933
This reminds me a lot of the special policies Wikipedia has developed through experience about sensitive topics, like biographies of living persons, deaths, etc.
replies(2): >>44619994 #>>44620034 #
pyman No.44619994
I'm worried about this. Organizations like Wikipedia spent years developing policies to get things right, and now suddenly Google and Microsoft (which backs OpenAI) are using GenAI to generate content that, frankly, can't be trusted because it's often made up.

That's deeply concerning, especially when these two companies control almost all the content we access through their search engines, browsers, and LLMs.

This needs to be regulated. These companies should be held accountable for spreading false information or rumours, as it can have unexpected consequences.

replies(3): >>44620081 #>>44621388 #>>44621928 #
Aurornis No.44620081
> This needs to be regulated. They should be held accountable for spreading false information or rumours,

Regulated how? Held accountable how? If we start fining LLM operators for every piece of incorrect information, they might as well stop serving the LLM in that country.

> since it can have unexpected consequences

Generally you hold the person who takes action accountable. Claiming an LLM told you bad information isn't any more of a defense than claiming you saw it in a tweet or a Reddit comment. The person taking action and causing the consequences owns their actions.

I recall the same hand-wringing over early search engines: there were debates about them indexing bad information and calls to hold them accountable for incorrect results. Same reasoning: there could be consequences. The outrage died out as people realized they were tools to be used with caution, not fact-checked and carefully curated encyclopedias.

> I'm worried about this. Companies like Wikipedia spent years trying to get things right,

Would you also endorse the same regulations against Wikipedia? Should Wikipedia be fined every time incorrect information is found on the site?

EDIT: The parent comment was edited while I was replying to add the point about countries outside the US. I welcome some country trying to regulate LLMs and hold them accountable for inaccurate results, so we get a precedent for how bad an idea that is, and for how many citizens would switch to VPNs to reach the LLM providers that get turned off for their country in response.

replies(2): >>44620301 #>>44620631 #
pyman No.44620301
If Google accidentally generates an article claiming a politician in XYZ country is corrupt the day before an election, then quietly corrects it after the election, should we NOT hold them accountable?

Other companies have been fined for misleading customers [0] after a product launch. So why make an exception for Big Tech outside the US?

And why is the EU the only bloc actively fining US Big Tech? We need China, the rest of Asia, and South America to follow its lead.

[0] https://en.m.wikipedia.org/wiki/Volkswagen_emissions_scandal

replies(1): >>44620565 #
1. jdietrich No.44620565
Volkswagen intentionally and persistently lied to regulators. In this instance, Google confused one Dave Barry with another Dave Barry. While it is illegal to intentionally deceive for material gain, it is not generally illegal to merely be wrong.
replies(1): >>44620652 #
2. pyman No.44620652
This is exactly why we need to regulate Big Tech. Right now, they're saying: "It wasn't us, it was our AI's fault."

But how do we know they're telling the truth? How do we know it wasn't intentional? And more importantly, who's held accountable?

While Google's AI made the mistake, Google deployed it, branded it, and controls it. If this kind of error causes harm (defamation, reputational damage, or interference in public opinion), intent doesn't necessarily matter for accountability.

So while it's not illegal to be wrong, the scale and influence of Big Tech means they can't hide behind "it was the AI, not us."

replies(1): >>44621924 #
3. No.44621924