
423 points | sohkamyung
MangoToupe No.45669488
Now let's run this experiment against the editorial boards in newsrooms.

Obviously, AI isn't an improvement, but people who blindly trust the news have always been credulous rubes. It's just that the alternative is being completely ignorant of the worldviews of everyone around you.

Peer-reviewed science is as close as we can get to good consensus, and there are a lot of reasons this doesn't work for reporting.

replies(4): >>45669508 #>>45669515 #>>45669649 #>>45669813 #
vidarh No.45669813
> Now let's run this experiment against the editorial boards in newsrooms.

Or against people in general.

It's a pet peeve of mine that we get these kinds of articles without an established baseline for how people perform on the same measure.

Is misrepresenting news content 45% of the time better or worse than the average person would do? I don't know.

By extension: would a person misrepresent the news more or less after reading an AI assistant's summary of it? I don't know that either.

When the article has a "Why this distortion matters" section, these questions matter. The authors haven't established whether AI summaries will make things better or worse.

(the cynic in me wants another question answered too: How often do reporters misrepresent the news? Would it be better or worse if AI reviewed the facts and presented them vs. letting reporters do it? Again: no idea)

replies(2): >>45670353 #>>45671489 #
JumpCrisscross No.45670353
> It's a pet peeve of mine that we get these kinds of articles without an established baseline for how people perform on the same measure

I don’t have a personal human news summarizer?

The comparison is between a human reading the primary source and the same human reading LLM hallucinations mixed with LLM references to the primary source.

> cynic in me wants another question answered too: How often do reporters misrepresent the news?

The fact that you mark as cynical a question that has been answered pretty reliably for most countries sort of tanks the point.

replies(2): >>45671567 #>>45675252 #
vidarh No.45671567
> I don’t have a personal human news summarizer?

Not a personal one. You do, however, have reporters sitting between you and the source material a lot of the time, and sometimes multiple levels of reporters playing games of telephone with the source material.

> The comparison is between a human reading the primary source and the same human reading LLM hallucinations mixed with LLM references to the primary source.

In modern news reporting, a fairly substantial proportion of what we digest is not primary sources. It's not at all clear whether an LLM summarising primary sources would be better or worse than reading a reporter passing them on. And in fact, in many cases the news is not even a secondary source; e.g., a wire service report on primary sources getting rewritten by a reporter is not uncommon.

> The fact that you mark as cynical a question that has been answered pretty reliably for most countries sort of tanks the point

Within the context of this article, it's a cynical point to note that reporting on the accuracy of AI in isolation is meaningless, because it's not clear that human reporting is better for us. I find it kinda funny that you dismiss this here after having downplayed, earlier in your reply, the games of telephone that news reporting often is; that makes it quite clear I am in fact being a lot more cynical about it than you.

replies(1): >>45672865 #
JumpCrisscross No.45672865
> You do, however, have reporters sitting between you and the source material a lot of the time

In cases where a reporter is just summarising e.g. a court case, sure. Stock market news has been automated since the 2000s.

More broadly, AI assistants summarising news content may sometimes directly reference a court case. But often they don't. Even if they always did, that covers only a small fraction of the news; for the rest, the AI has to rely on reporters detailing the primary sources they're interfacing with.

Reporter error is somewhat orthogonal to AI assistants' accuracy.

replies(1): >>45675263 #
MangoToupe No.45675263
> Reporter error is somewhat orthogonal to AI assistants' accuracy.

It is not at all. Journalists are wrong all the time, but you still treat the news like a record and not a sample. In fact I'd put money on AI mischaracterizing events at a LOWER rate than journalists do: narratives shift over time, and journalists are more likely to succumb to this shift.

replies(1): >>45676576 #
JumpCrisscross No.45676576
> Journalists are wrong all the time, but you still treat the news like a record and not a sample

Straw man. Everyone educated constantly argues over sourcing.

> I'd put money on AI mischaracterizing events at a LOWER rate than journalists do

Maybe it does. But an AI sourcing journalists is demonstrably worse. Source: TFA.

> narratives shift over time, and journalists are more likely to succumb to this shift

Lol, we’ve already forgotten about MechaHitler.

At the end of the day, a lot of people consume news to be entertained. They’re better served by AI. The risk is that folks of consequence start doing that, at which point I suppose the system self-resolves by making them, in the long run, of no consequence compared to those who own and control the AI.