693 points jsheard | 49 comments
AnEro ◴[] No.45093447[source]
I really hope this stays up, despite the politics involved to a degree. I think this situation is a perfect example of how AI hallucinations and lack of accuracy could significantly impact our lives going forward. A very nuanced and serious topic, with lots of back and forth, being distilled down to headlines by any source is a terrifying reality, especially if we aren't able to communicate to the public how these tools work (if they even care to learn). At least when humans did this, they knew that at some level they had at least skimmed the information on the person or topic.
replies(8): >>45093755 #>>45093831 #>>45094062 #>>45094915 #>>45095210 #>>45095704 #>>45097171 #>>45097177 #
1. geerlingguy ◴[] No.45093831[source]
I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc., and I think there's a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.

"Trust, but verify" is all the more relevant today. Except I would discount the trust part, even.

replies(8): >>45093911 #>>45094040 #>>45094155 #>>45094750 #>>45097691 #>>45098969 #>>45100795 #>>45107694 #
2. add-sub-mul-div ◴[] No.45093911[source]
We all think of ourselves as understanding the tradeoffs of this tech and knowing how to use it responsibly. And we here may be right. But the typical person wants to do the least amount of effort and thinking possible. Our society will evolve to reflect this, it won't be great, and it will affect all of us no matter how personally responsible some of us remain.
replies(1): >>45094235 #
3. Aurornis ◴[] No.45094040[source]
> I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc.,

A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.

When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.

replies(4): >>45094727 #>>45094762 #>>45095823 #>>45096892 #
4. freeopinion ◴[] No.45094155[source]
prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal

answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()

Trust but verify?

replies(3): >>45094353 #>>45094358 #>>45096304 #
5. iotku ◴[] No.45094235[source]
I consider myself pretty technically literate, and not the worst at programming (though certainly far from the very best). Even so, I can spend plenty of time arguing with LLMs, which will give me plausible-looking but extremely broken answers to some programming problems.

In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)

replies(1): >>45094369 #
6. nielsbot ◴[] No.45094353[source]
what does this mean in this convo?
replies(2): >>45094514 #>>45094851 #
7. eszed ◴[] No.45094358[source]
I mean... Yes? That looks correct to me°, but it's been a minute since I worked with Temporal, so I'd run it myself and examine the output before I cut and paste.

Or have I missed your point?

---

°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.

replies(1): >>45094438 #
8. add-sub-mul-div ◴[] No.45094369{3}[source]
Even with code, "seeing" a block of code working isn't a guarantee there's not a subtle bug that will expose itself in a week, in a month, in a year under the right conditions.
replies(1): >>45095051 #
9. nosianu ◴[] No.45094438{3}[source]
I would also read the documentation. In the given example, you don't know whether the desired fixed format 'YYYY-MM-DD' might depend on some locale setting and only works because you happen to have the correct one in that test console.
replies(2): >>45094572 #>>45103572 #
10. freeopinion ◴[] No.45094514{3}[source]
If you were considering purchasing a biology textbook and spot-read two chapters, what if you found the following?

In the first chapter it claimed that most adult humans have 20 teeth.

In the second chapter you read that female humans have 22 chromosomes and male humans have 23.

You find these claims in the 24 pages you sample. Do you buy the book?

Companies are paying huge sums to AI companies with worse track records.

Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.

[edit: replace paragraph that somehow got deleted, fix typo]

11. freeopinion ◴[] No.45094572{4}[source]
Part of my point is this: If you have to read through the docs to see if the answer can be trusted, why didn't you just read the docs to begin with instead of asking the AI?
replies(2): >>45096193 #>>45124739 #
12. lawlessone ◴[] No.45094727[source]
See it with comments here sometimes: "I asked ChatGPT about Y." Really annoying; we all could have asked ChatGPT, we didn't.
replies(2): >>45096119 #>>45096987 #
13. leeoniya ◴[] No.45094750[source]
> but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.

https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect

14. tavavex ◴[] No.45094762[source]
> When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people to run Discord servers or open-source projects.

But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"

Most people don't care and don't want to care.

replies(1): >>45096203 #
15. freeopinion ◴[] No.45094851{3}[source]
Google distinguished itself early with techniques like PageRank that put more relevant content at the top of their search results.

Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?

16. giantrobot ◴[] No.45095051{4}[source]
I've pointed this out a lot and I often get replies along the lines of "people make mistakes too". While this is true, LLMs lack the institutional memory behind decisions. Even good reasoning models can't reliably tell you why they wrote the code they did when asked to review it. They can't even reliably run tests, since they'll hardcode passing values for the tests.

With the same code from an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better if they tend to learn and not make the same mistake again. LLMs will happily screw up randomly on every repeated prompt.

The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.

17. bboygravity ◴[] No.45095823[source]
Hi, I'm from 1 year in the future. None of what you typed applies anymore.
replies(2): >>45096395 #>>45101244 #
18. ljm ◴[] No.45096119{3}[source]
Have had some conversations where the other person goes into chatgpt to answer a question while I’m in the process of explaining a solution, and then says “GPT says this, look…”

Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.

If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.

19. niccl ◴[] No.45096193{5}[source]
It's just dawned on me that one possible reason is that you don't know which docs to read. I've recently been forced into learning some JavaScript. Given the original question, I wouldn't have known where to start. Now the AI has given me a bunch of things I can look at to see if it's the right thing.
replies(1): >>45097577 #
20. Aurornis ◴[] No.45096203{3}[source]
They’ll get there. Tech people have been exposed to it longer. They’ve been around long enough to see people embarrassed by LLM hallucinations.

People who are newer to it (most people) think it's so amazing that errors are forgivable.

replies(2): >>45096376 #>>45107709 #
21. meindnoch ◴[] No.45096304[source]

  > Temporal.Instant.fromEpochSeconds(0).toPlainDate()
  Uncaught TypeError: Temporal.Instant.fromEpochSeconds is not a function

Hmm, the docs [1] say it should be fromEpochMilliseconds(0). Let's try with that!

  > Temporal.Instant.fromEpochMilliseconds(0).toPlainDate()
  Uncaught TypeError: Temporal.Instant.fromEpochMilliseconds(...).toPlainDate is not a function

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
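
Reading those docs a bit further, Instant doesn't appear to have toPlainDate() at all; you seemingly have to hop through a ZonedDateTime with an explicit time zone. A sketch of what should work for the epoch-0 experiment above, going purely by the documented API (UTC assumed as the intended zone):

  // Instant -> ZonedDateTime (needs a time zone) -> PlainDate -> ISO string
  Temporal.Instant.fromEpochMilliseconds(0)
    .toZonedDateTimeISO('UTC')
    .toPlainDate()
    .toString() // expected: "1970-01-01"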
replies(1): >>45097224 #
22. simonw ◴[] No.45096376{4}[source]
If anything, I expect this to get worse.

The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.

I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.

replies(1): >>45096573 #
23. crashabr ◴[] No.45096395{3}[source]
I think you messed up something with your time-travelling setup. We're in the timeline where GPT-5 did not become the all-powerful sentient AI that AI boosters promised us. Which timeline are you from?
replies(1): >>45097173 #
24. LtWorf ◴[] No.45096573{5}[source]
I think it's more that Google is getting considerably worse.
25. novok ◴[] No.45096892[source]
LLM text walls are the new pasting a Google or Wikipedia result link, just more annoying.
replies(1): >>45098765 #
26. buu700 ◴[] No.45096987{3}[source]
I don't have an issue quoting LLMs in and of itself, but the context and how you present it both matter.

"ChatGPT says X" seems roughly equivalent to "some random blog I found claims X". There's a difference between sharing something as a starting point for investigation and passing off unverified information (from any source) as your own well researched/substantiated work which you're willing to stake your professional reputation on standing by.

Of course, quoting an LLM is also pretty different from merely collaborating with an LLM on writing content that's substantially your own words or ideas, which no one should care about one way or another, at least in most contexts.

27. tough ◴[] No.45097173{4}[source]
GPT-6 will save us!
replies(1): >>45097467 #
28. zdragnar ◴[] No.45097224{3}[source]
"The docs" also say that it's only available in firefox. If you're going to use the docs, you should use the docs.
replies(1): >>45100255 #
29. lioeters ◴[] No.45097467{5}[source]
Hi, I'm from 2 years in the future. Stop this before it's too late, GPT-7 will enslave humanity and aaargh..
replies(1): >>45097854 #
30. freeopinion ◴[] No.45097577{6}[source]
If you didn't know enough to provide the original prompt, the AI sends you down the Date path instead of Temporal. But you could have used a 20-year-old search engine that would lead you to the JavaScript docs. You don't need error-riddled AI-generated code to find your way to the docs.
replies(1): >>45124748 #
31. userbinator ◴[] No.45097691[source]
"Trust, but verify" is an oxymoron. AI is not to be trusted for information.
replies(1): >>45107441 #
32. tough ◴[] No.45097854{6}[source]
but we get time travel into the past???
replies(1): >>45098059 #
33. lioeters ◴[] No.45098059{7}[source]
Time travel was a minor side effect of achieving AGI. We got bigger problems now in the future, something to do with the multiverse, aargh..
34. grg0 ◴[] No.45098765{3}[source]
In the old days, when somebody asked a stupid question on a chat/forum that was just a search away, you would link them to "let me google it for you" (site seems down, but there is now a "let me google that for you"), where it'd take the search query in the URL and display an animation of typing the search in the box and clicking the "search" button.

Every time somebody pastes an LLM response at work, it feels exactly like that. As if I were too fucking stupid to look something up and the thought hadn't even occurred to me, when the whole fucking point of me talking to you is that I wanted a personal response and your opinion to begin with.

replies(1): >>45098996 #
35. abustamam ◴[] No.45098969[source]
Whenever I use AI in social settings to fact check or do research or get advice, I always trust but verify, but also disclaim it so that people know to trust but verify.

I think this is a good habit to get people into, even in casual conversations. Even if someone didn't directly get their info from AI and got it online, the content could have still been generated by AI. Like you said, the trust part of trust but verify is quickly dwindling.

36. rafram ◴[] No.45098996{4}[source]
(It’s always been Let Me Google That For You.)
replies(1): >>45111414 #
37. meindnoch ◴[] No.45100255{4}[source]
This output is from Firefox :)
38. account42 ◴[] No.45100795[source]
> I've had multiple people copy and paste AI conversations and results in GitHub issues, emails, etc., and I think there's a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

I like the term "echoborg" for these people. I hope it catches on.

39. imtringued ◴[] No.45101244{3}[source]
Why would people need discord if they can just talk to the AI directly?
replies(1): >>45104081 #
40. MobiusHorizons ◴[] No.45103572{4}[source]
I’m pretty sure toPlainDate() returns an object, not a string.
replies(1): >>45124655 #
41. Rohansi ◴[] No.45104081{4}[source]
Because arguing with people who are wrong on the internet. It's no fun doing the same with an LLM, because you're either actually wrong, or it will assume you're right without putting up a fight.
42. nielsbot ◴[] No.45107441[source]
AI = convincing garbage generator
43. fennecbutt ◴[] No.45107694[source]
It's just mass stupidity, really. Technology is just a lever for what already existed.

The same people blindly trusting AI nonsense are the same people who trusted nonsense from social media or talking heads on disreputable news channels.

Like, who actually reads the output of The Sun, etc.? Those people do, always have, and will continue to do so. And they vote, yaaay democracy: if your voter base lives in a fantasy world of fake news and false science, is democracy still sacrosanct?

44. fennecbutt ◴[] No.45107709{4}[source]
No, I don't believe they will.
45. grg0 ◴[] No.45111414{5}[source]
I am getting old.
46. nosianu ◴[] No.45124655{5}[source]
???

What does that have to do with my comment?

The OP explicitly wrote

> prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal

replies(1): >>45128498 #
47. nosianu ◴[] No.45124739{5}[source]
Because you skip the step of finding out which docs to read. You get presented with the specific functions and which parts of them you should check. Also, you only need to do this when it isn't already clear.
48. nosianu ◴[] No.45124748{7}[source]
Are you saying there is more than one way to solve most problems in life? I don't believe it! Obviously, only what you personally like is the one true way. /s
49. MobiusHorizons ◴[] No.45128498{6}[source]
> answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()

The answer ends in `toPlainDate()`, which returns an object with year, month, and day properties, i.e. it does not output the requested format.

This is in addition to the issue that `fromEpochSeconds(timestamp)` really should probably be `fromEpochMilliseconds(timestamp * 1000)`.
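
Putting both of those fixes together, plus the hop through ZonedDateTime noted upthread (Instant itself has no toPlainDate()), a corrected sketch for the original prompt would be roughly as follows, assuming the timestamp is in whole seconds and UTC is the intended time zone:

  // seconds -> milliseconds, then Instant -> ZonedDateTime -> PlainDate -> ISO string
  Temporal.Instant.fromEpochMilliseconds(timestamp * 1000)
    .toZonedDateTimeISO('UTC')
    .toPlainDate()
    .toString() // "YYYY-MM-DD", e.g. "1970-01-01" for timestamp 0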