421 points sohkamyung | 17 comments
1. MangoToupe No.45669488
Now let's run this experiment against the editorial boards in newsrooms.

Obviously, AI isn't an improvement, but people who blindly trust the news have always been credulous rubes. It's just that the alternative is being completely ignorant of the worldviews of everyone around you.

Peer-reviewed science is as close as we can get to good consensus, and there are a lot of reasons this doesn't work for reporting.

replies(4): >>45669508 #>>45669515 #>>45669649 #>>45669813 #
2. n4r9 No.45669508
I guess the claim is not that rubes didn't use to exist, but rather that technology is increasingly encouraging and streamlining rubism.
replies(2): >>45669613 #>>45669667 #
3. raincole No.45669515
Yep.

How could a candidate who yells "Fake News" like an idiot get elected? Because of the state of journalism.

How could people turn to AI slop? Because of the state of human slop.

4. MangoToupe No.45669613
I agree with that assessment, or at least that this is indeed the claim.

But technology also gave us the internet and social media. Yes, both are used to propagate misinformation, but they also laid bare how bad traditional media was at both a) representing the world competently and b) representing the opinions and views of our neighbors. Manufacturing consent has never been so difficult (or, I suppose, so irrelevant to the actions of the states that claim to represent us).

replies(1): >>45670016 #
5. falcor84 No.45669649
> Peer-reviewed science is as close as we can get to good consensus

I think we're on the same side of this, but I just want to say that we can do a lot better. Per studies of the Replication Crisis over the last decade [0], and particularly this 2016 survey by Monya Baker in Nature [1]:

> 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), and more than half have failed to reproduce their own experiments.

We need to expect better, which means both better incentives and better evaluation, and I think that AI can help with this.

[0] https://en.wikipedia.org/wiki/Replication_crisis

[1] https://www.nature.com/articles/533452a

6. walkabout No.45669667
I decided about a decade ago that McLuhan was a prophet, and that the “message” of the Internet may not include compatibility with democracy, as it turns out.
7. vidarh No.45669813
> Now let's run this experiment against the editorial boards in newsrooms.

Or against people in general.

It's a pet peeve of mine that we get these kinds of articles without establishing a baseline for how people do on the same measure.

Is misrepresenting news content 45% of the time better or worse than the average person? I don't know.

By extension: Would a person using an AI assistant misrepresent news more or less after having read a summary of the news provided by an AI assistant? I don't know that either.

When they include a "Why this distortion matters" section, those things matter: they haven't established whether this will make things better or worse.

(The cynic in me wants another question answered too: how often do reporters misrepresent the news? Would it be better or worse if AI reviewed the facts and presented them vs. letting reporters do it? Again: no idea.)

replies(2): >>45670353 #>>45671489 #
8. intended No.45670016
Technology has been used to absolutely decimate the news media. Organizations like Fox have blazed the path forward for how news organizations succeed in the cable and later internet worlds.

You just give up on uneconomical efforts at accuracy and you sell narratives that work for one political party or the other.

It is a model that has been taken up the world over. It just works. “The world is too complex to explain, so why bother?”

And what will you or I do about it? Subscribe to the NYT? Most of us would rather spend that money on a GenAI subscription because that is bucketed differently in our heads.

9. JumpCrisscross No.45670353
> It's a pet peeve of mine that we get these kinds of articles without a baseline established of how people do on the same measure

I don’t have a personal human news summarizer?

The comparison is between a human reading the primary source and the same human reading LLM hallucinations mixed with the LLM referencing the primary source.

> cynic in me want another question answered too: How often does reporters misrepresent the news?

The fact that you mark as cynical a question answered pretty reliably for most countries sort of tanks the point.

replies(2): >>45671567 #>>45675252 #
10. n4r9 No.45671489
The difference is the ease with which AI can be rolled out, scaled up, and woven into the fabric of our interactions with society.
replies(1): >>45671612 #
11. vidarh No.45671567
> I don’t have a personal human news summarizer?

Not a personal one. You do however have reporters sitting between you and the source material a lot of the time, and sometimes multiple levels of reporters playing games of telephone with the source material.

> The comparison is between a human reading the primary source against the same human reading an LLM hallucination mixed with an LLM referring the primary source.

In modern news reporting, a fairly substantial proportion of what we digest is not primary sources. It's not at all clear whether an LLM summarising primary sources would be better or worse than reading a reporter passing on primary sources. And in fact, in many cases the news is not even a secondary source: a wire service report on primary sources getting rewritten by a reporter is not uncommon.

> The fact that you mark as cynical a question answered pretty reliably for most countries sort of tanks the point.

It's a cynical point, in the context of this article, to note that it is meaningless to report on the accuracy of AI in isolation because it's not clear that human reporting is any better for us. I find it kinda funny that you dismiss this here, after having downplayed the games of telephone that news reporting often involves earlier in your reply, which makes it quite clear I am in fact being a lot more cynical than you about it.

replies(1): >>45672865 #
12. vidarh No.45671612
That makes understanding the baseline all the more important. It could be a disaster, or it could in fact be a distinct improvement. Every time someone pushes a breathless headline about AI failure rates without comparing them to a human baseline, they are potentially misleading us: without that baseline we don't know whether it's better or worse.
replies(1): >>45671775 #
13. n4r9 No.45671775
I disagree. Comparison with a human baseline is basically irrelevant. AI will be used in so many more ways and at so much greater scale that its failure rate has to stand alone as extraordinarily low, regardless of human abilities.
14. JumpCrisscross No.45672865
> You do however have reporters sitting between you and the source material a lot of the time

In cases where a reporter is just summarising e.g. a court case, sure. Stock market news has been automated since the 2000s.

More broadly, AI assistants may sometimes directly reference a court case. But they often don't. And even if that's all they did, it would cover only a small fraction of the news; for much of the rest, the AI will need to rely on reporters detailing the primary sources they're interfacing with.

Reporter error is somewhat orthogonal to AI assistants' accuracy.

replies(1): >>45675263 #
15. MangoToupe No.45675252
> I don’t have a personal human news summarizer?

Is this not the editorial board and the journalists? I'm not sure what the gripe is here.

16. MangoToupe No.45675263
> Reporter error is somewhat orthogonal to AI assistants' accuracy.

It is not at all. Journalists are wrong all the time, but you still treat news like a record and not a sample. In fact I'd put money on AI mischaracterizing events at a LOWER rate than journalists do: narratives shift over time, and journalists are more likely to succumb to this shift.

replies(1): >>45676576 #
17. JumpCrisscross No.45676576
> Journalists are wrong all the time, but you still treat news like a record and not a sample

Straw man. Everyone educated constantly argues over sourcing.

> I'd put money on AI mischaracterizing events at a LOWER rate than journalists do

Maybe it does. But an AI sourcing journalists is demonstrably worse. Source: TFA.

> narratives shift over time, and journalists are more likely to succumb to this shift

Lol, we’ve already forgotten about MechaHitler.

At the end of the day, a lot of people consume news to be entertained. They're better served by AI. The risk is that folks of consequence start doing that, at which point I suppose the system self-resolves by making them, in the long run, of no consequence compared to those who own and control the AI.