
Death by AI

(davebarry.substack.com)
583 points by ano-ther | 19 comments
devinplatt ◴[] No.44619933[source]
This reminds me a lot of the special policies Wikipedia has developed through experience about sensitive topics, like biographies of living persons, deaths, etc.
replies(2): >>44619994 #>>44620034 #
1. pyman ◴[] No.44619994[source]
I'm worried about this. Companies like Wikipedia spent years trying to get things right, and now suddenly Google and Microsoft (including OpenAI) are using GenAI to generate content that, frankly, can't be trusted because it's often made up.

That's deeply concerning, especially when these two companies control almost all the content we access through their search engines, browsers and LLMs.

This needs to be regulated. These companies should be held accountable for spreading false information or rumours, as it can have unexpected consequences.

replies(3): >>44620081 #>>44621388 #>>44621928 #
2. Aurornis ◴[] No.44620081[source]
> This needs to be regulated. They should be held accountable for spreading false information or rumours,

Regulated how? Held accountable how? If we start fining LLM operators for pieces of incorrect information you might as well stop serving the LLM to that country.

> since it can have unexpected consequences

Generally you hold the person who takes action accountable. Claiming an LLM told you bad information isn’t any more of a defense than claiming you saw the bad information on a Tweet or Reddit comment. The person taking action and causing the consequences has ownership of their actions.

I recall the same hand-wringing over early search engines: there was a debate about search engines indexing bad information, and calls for holding them accountable for incorrect results, with the same reasoning that there could be consequences. The outrage died out as people realized these were tools to be used with caution, not fact-checked and carefully curated encyclopedias.

> I'm worried about this. Companies like Wikipedia spent years trying to get things right,

Would you also endorse the same regulations against Wikipedia? Wikipedia gets fined every time incorrect information is found on the website?

EDIT: The parent comment was edited while I was replying to add the remark about outside the US. I welcome some country trying to regulate LLMs to hold them accountable for inaccurate results, so we have some precedent for how bad an idea that would be and how many citizens would switch to using VPNs to reach the LLM providers that get turned off in their country in response.

replies(2): >>44620301 #>>44620631 #
3. pyman ◴[] No.44620301[source]
If Google accidentally generates an article claiming a politician in XYZ country is corrupt the day before an election, then quietly corrects it after the election, should we NOT hold them accountable?

Other companies have been fined for misleading customers [0] after a product launch. So why make an exception for Big Tech outside the US?

And why is the EU the only bloc actively fining US Big Tech? We need China, Asia and South America to follow their lead.

[0] https://en.m.wikipedia.org/wiki/Volkswagen_emissions_scandal

replies(1): >>44620565 #
4. jdietrich ◴[] No.44620565{3}[source]
Volkswagen intentionally and persistently lied to regulators. In this instance, Google confused one Dave Barry with another Dave Barry. While it is illegal to intentionally deceive for material gain, it is not generally illegal to merely be wrong.
replies(1): >>44620652 #
5. blibble ◴[] No.44620631[source]
> If we start fining LLM operators for pieces of incorrect information you might as well stop serving the LLM to that country.

sounds good to me?

replies(1): >>44620664 #
6. pyman ◴[] No.44620652{4}[source]
This is exactly why we need to regulate Big Tech. Right now, they're saying: "It wasn't us, it was our AI's fault."

But how do we know they're telling the truth? How do we know it wasn't intentional? And more importantly, who's held accountable?

While Google's AI made the mistake, Google deployed it, branded it, and controls it. If this kind of error causes harm (like defamation, reputational damage, or interference in public opinion), intent doesn't necessarily matter in terms of accountability.

So while it's not illegal to be wrong, the scale and influence of Big Tech means they can't hide behind "it was the AI, not us."

replies(1): >>44621924 #
7. pyman ◴[] No.44620664{3}[source]
+1

Fines, when backed by strong regulation, can lead to more control and better quality information, but only if companies are actually held to account.

8. Timwi ◴[] No.44621388[source]
Wikipedia is not a company, it's a website.

The organization that runs the website, the Wikimedia Foundation, is also not a company. It's a nonprofit.

And the Wikimedia Foundation has not “spent years trying to get things right”, assuming you're referring to facts posted on Wikipedia. That was in fact a bunch of unpaid volunteer contributors, many of whom are anonymous and almost all of whom are unaffiliated with the Wikimedia Foundation.

replies(1): >>44622714 #
9. ◴[] No.44621924{5}[source]
10. weatherlite ◴[] No.44621928[source]
> I'm worried about this. Companies like Wikipedia spent years trying to get things right,

Did they? Lots of people (and some research verifies this) think it has a major left-leaning bias, so while editors usually aren't making up facts, they still cherry-pick whatever facts fit the narrative and leave everything else aside.

replies(2): >>44622185 #>>44622505 #
11. decimalenough ◴[] No.44622185[source]
This is indeed a problem, but it's a different problem from just making shit up, which is an AI specialty. If you see something that's factually wrong on Wikipedia, it's usually pretty straightforward to get it fixed.
replies(2): >>44622724 #>>44622977 #
12. fake-name ◴[] No.44622505[source]
To be fair, Wikipedia generally tries to represent reality, which _also_ has a "left leaning bias", so maybe it's just you?
replies(2): >>44622986 #>>44622992 #
13. pyman ◴[] No.44622714[source]
Yes, Wikipedia is an organisation, not a company (my bad). It spent years improving its tools and building a strong community. Volunteers review changes, and some edits get automatically flagged or even reversed if they look suspicious or come from anonymous users. When there's a dispute, editors use "Talk" pages to discuss what should or shouldn't be included.

You can't really argue with those facts.

14. pyman ◴[] No.44622724{3}[source]
Exactly
15. weatherlite ◴[] No.44622977{3}[source]
> This is indeed a problem, but it's a different problem from just making shit up, which is an AI specialty

It's a bigger problem than AI errors imo; there are so many Wikipedia articles that are heavily biased. AI makes up silly nonsense maybe once in 200 queries, not 20% of the time. Also, people are perhaps more careful and skeptical with AI results but take Wikipedia as a source of truth.

replies(1): >>44624933 #
16. card_zero ◴[] No.44622986{3}[source]
The article about it is Ideological Bias on Wikipedia:

https://en.wikipedia.org/wiki/Ideological_bias_on_Wikipedia

17. weatherlite ◴[] No.44622992{3}[source]
Reality has no biases, reality is just reality. A left-leaning world view can be beneficial or detrimental depending on many factors. What makes you trust that a couple of Wikipedia editors with tons of editing power will be fair?
18. Tijdreiziger ◴[] No.44624933{4}[source]
[citation needed]
replies(1): >>44627125 #
19. weatherlite ◴[] No.44627125{5}[source]
"Larry Sanger, co-founder of Wikipedia, has been critical of Wikipedia since he was laid off as the only editorial employee and departed from the project in 2002.[28][29][30] He went on to found and work for competitors to Wikipedia, including Citizendium and Everipedia. Among other criticisms, Sanger has been vocal in his view that Wikipedia's articles present a left-wing and liberal or 'establishment point of view'"

https://en.wikipedia.org/wiki/Ideological_bias_on_Wikipedia