
693 points jsheard | 3 comments
AnEro ◴[] No.45093447[source]
I really hope this stays up, despite the politics involvement to a degree. I think this is a situation that is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward. A very nuanced and serious topic with lots of back and forth being distilled down to headlines by any source, it is a terrifying reality. Especially if we aren't able to communicate how these tools work to the public. (if they even will care to learn it) At least when humans did this they knew at some level at least they skimmed the information on the person/topic.
replies(8): >>45093755 #>>45093831 #>>45094062 #>>45094915 #>>45095210 #>>45095704 #>>45097171 #>>45097177 #
geerlingguy ◴[] No.45093831[source]
I've had multiple people copy and paste AI conversations and results into GitHub issues, emails, etc., and there is, I think, a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.

"Trust, but verify" is all the more relevant today. Except I would discount the trust, even.

replies(8): >>45093911 #>>45094040 #>>45094155 #>>45094750 #>>45097691 #>>45098969 #>>45100795 #>>45107694 #
Aurornis ◴[] No.45094040[source]
> I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc.,

A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.

When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.

replies(4): >>45094727 #>>45094762 #>>45095823 #>>45096892 #
novok ◴[] No.45096892[source]
LLM text walls are the new pasting a Google or Wikipedia result link, just more annoying.
replies(1): >>45098765 #
1. grg0 ◴[] No.45098765[source]
In the old days, when somebody asked a stupid question on a chat/forum that was just a search away, you would link them to "let me google it for you" (site seems down, but there is now a "let me google that for you"), where it'd take the search query in the URL and display an animation of typing the search in the box and clicking the "search" button.

Every time somebody pastes an LLM response at work, it feels exactly like that. As if I were too fucking stupid to look something up and the thought hadn't even occurred to me, when the whole fucking point of me talking to you is that I wanted a personal response and your opinion to begin with.

replies(1): >>45098996 #
2. rafram ◴[] No.45098996[source]
(It’s always been Let Me Google That For You.)
replies(1): >>45111414 #
3. grg0 ◴[] No.45111414[source]
I am getting old.