
678 points georgemandis | 6 comments
1. timerol ◴[] No.44378334[source]
> Is It Accurate?

> I don’t know—I didn’t watch it, lol. That was the whole point. And if that answer makes you uncomfortable, buckle up for this future we're hurtling toward. Boy, howdy.

This is a great bit of work, and the author accurately summarizes my discomfort.

replies(2): >>44381178 #>>44384572 #
2. BHSPitMonkey ◴[] No.44381178[source]
As if human-generated transcriptions of audio ever came with guarantees of accuracy?

This kind of transformation has always come with flaws, and I think that will continue to be expected implicitly. Far more worrying is the public's trust in _interpretations_ and claims of _fact_ produced by gen AI services, or at least the popular idea that "AI" is more trustworthy/unbiased than humans, journalists, experts, etc.

replies(1): >>44383420 #
3. angst ◴[] No.44383420[source]
At least with human-generated transcriptions there are entities we can hold responsible...
replies(1): >>44383704 #
4. _kb ◴[] No.44383704{3}[source]
That still holds true for gen-AI. Organisations that provide transcription services can’t offload responsibility to a language model any more than they can to steno keyboard manufacturers.

If you are the one feeding content to a model then you are that responsible entity.

5. raincole ◴[] No.44384572[source]
A lot of people read newspapers.

A newspaper is essentially just an inaccurate summary of what really happened, so I don't find this realization that uncomfortable.

replies(1): >>44388431 #
6. dmix ◴[] No.44388431[source]
That's why I find the idea of training on breaking news from Reddit or Twitter funny: wild exaggerations and targeted spin are exactly the sort of content that does best on those sites and generates the most comments. Half the output would be lies.