693 points jsheard | 72 comments
1. AnEro ◴[] No.45093447[source]
I really hope this stays up, despite the political angle. I think this is a situation that is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward. A very nuanced and serious topic, with lots of back and forth, being distilled down to headlines by any source is a terrifying reality, especially if we aren't able to communicate to the public how these tools work (if they will even care to learn). At least when humans did this, they knew that at some level they had at least skimmed the information on the person/topic.
replies(8): >>45093755 #>>45093831 #>>45094062 #>>45094915 #>>45095210 #>>45095704 #>>45097171 #>>45097177 #
2. jaccola ◴[] No.45093755[source]
This story will probably become big enough to drown out the fake video, and the AI (which is presumably being fed the top n search results) will automatically describe this fake-video controversy instead...
replies(1): >>45093804 #
3. ◴[] No.45093804[source]
4. geerlingguy ◴[] No.45093831[source]
I've had multiple people copy and paste AI conversations and results into GitHub issues, emails, etc., and I think there's a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.

"Trust, but verify" is all the more relevant today. Except I would discount the trust part, even.

replies(8): >>45093911 #>>45094040 #>>45094155 #>>45094750 #>>45097691 #>>45098969 #>>45100795 #>>45107694 #
5. add-sub-mul-div ◴[] No.45093911[source]
We all think of ourselves as understanding the tradeoffs of this tech and knowing how to use it responsibly. And we here may be right. But the typical person wants to do the least amount of effort and thinking possible. Our society will evolve to reflect this, it won't be great, and it will affect all of us no matter how personally responsible some of us remain.
replies(1): >>45094235 #
6. Aurornis ◴[] No.45094040[source]
> I've had multiple people copy and paste AI conversations and results into GitHub issues, emails, etc.,

A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.

When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

The copied-and-pasted LLM wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.

replies(4): >>45094727 #>>45094762 #>>45095823 #>>45096892 #
7. freeopinion ◴[] No.45094155[source]
prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal

answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()

Trust but verify?

replies(3): >>45094353 #>>45094358 #>>45096304 #
8. iotku ◴[] No.45094235{3}[source]
I consider myself pretty technically literate, and not the worst at programming (though certainly far from the very best). Even so, I can spend plenty of time arguing with LLMs, which will give me plausible-looking but extremely broken answers to some of my programming problems.

In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)

replies(1): >>45094369 #
9. nielsbot ◴[] No.45094353{3}[source]
what does this mean in this convo?
replies(2): >>45094514 #>>45094851 #
10. eszed ◴[] No.45094358{3}[source]
I mean... Yes? That looks correct to me°, but it's been a minute since I worked with Temporal, so I'd run it myself and examine the output before I cut and paste.

Or have I missed your point?

---

°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.

replies(1): >>45094438 #
11. add-sub-mul-div ◴[] No.45094369{4}[source]
Even with code, "seeing" a block of code working isn't a guarantee there's not a subtle bug that will expose itself in a week, a month, or a year under the right conditions.
replies(1): >>45095051 #
12. nosianu ◴[] No.45094438{4}[source]
I would also read the documentation. In the given example, you don't know whether the desired fixed format "YYYY-MM-DD" might depend on some locale setting and only works because you happen to have the correct one in that test console.
replies(2): >>45094572 #>>45103572 #
13. freeopinion ◴[] No.45094514{4}[source]
If you were considering purchasing a biology textbook and spot-read two chapters, what if you found the following?

In the first chapter it claimed that most adult humans have 20 teeth.

In the second chapter you read that female humans have 22 chromosomes and male humans have 23.

You find these claims in the 24 pages you sample. Do you buy the book?

Companies are paying huge sums to AI companies with worse track records.

Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.

[edit: replace paragraph that somehow got deleted, fix typo]

14. freeopinion ◴[] No.45094572{5}[source]
Part of my point is this: If you have to read through the docs to see if the answer can be trusted, why didn't you just read the docs to begin with instead of asking the AI?
replies(2): >>45096193 #>>45124739 #
15. lawlessone ◴[] No.45094727{3}[source]
See it with comments here sometimes: "I asked ChatGPT about Y." Really annoying; we all could have asked ChatGPT, we didn't.
replies(2): >>45096119 #>>45096987 #
16. leeoniya ◴[] No.45094750[source]
> but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.

https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect

17. tavavex ◴[] No.45094762{3}[source]
> When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.

This doesn't seem to be universal. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people who run Discord servers or open-source projects.

But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"

Most people don't care and don't want to care.

replies(1): >>45096203 #
18. freeopinion ◴[] No.45094851{4}[source]
Google distinguished itself early with techniques like PageRank that put more relevant content at the top of their search results.

Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?
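
For readers who haven't seen it, the idea behind PageRank fits in a few lines. A minimal, illustrative sketch in JavaScript (a toy power iteration over a made-up link graph, nothing like Google's production system):

  // Rank flows from each page to the pages it links to, damped so
  // every page keeps a small baseline score.
  function pageRank(links, damping = 0.85, iterations = 50) {
    const n = links.length;
    let rank = new Array(n).fill(1 / n);
    for (let i = 0; i < iterations; i++) {
      const next = new Array(n).fill((1 - damping) / n);
      links.forEach((outlinks, page) => {
        for (const target of outlinks) {
          next[target] += damping * rank[page] / outlinks.length;
        }
      });
      rank = next;
    }
    return rank;
  }

  // Page 2 is linked to by both other pages, so it ends up ranked highest.
  pageRank([[1, 2], [2], [0]]); // ≈ [0.39, 0.21, 0.40]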

19. reaperducer ◴[] No.45094915[source]
> I think this is a situation that is a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.

This has been a Google problem for decades.

I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."

When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."

I found out about it when Joe lawyered up. That was a fun six months.

replies(1): >>45103285 #
20. giantrobot ◴[] No.45095051{5}[source]
I've pointed this out a lot, and I often get replies along the lines of "people make mistakes too". While this is true, LLMs lack the institutional memory behind decisions. Even good reasoning models can't reliably tell you, when asked to review it, why they wrote some code the way they did. They can't even reliably run tests, since they'll hardcode passing values.

With the same code out of an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better if they tend to learn and not make that same mistake again. LLMs will happily screw up randomly on every repeated prompt.

The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.

21. ants_everywhere ◴[] No.45095210[source]
Has anyone independently confirmed the accuracy of his claim?
22. haswell ◴[] No.45095704[source]
One of the arguments used to justify the mass-ingestion of copyrighted content to build these models is that the resulting model is a transformative work, and thus fair use.

If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.

These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.

replies(2): >>45100834 #>>45100992 #
23. bboygravity ◴[] No.45095823{3}[source]
Hi, I'm from 1 year in the future. None of what you typed applies anymore.
replies(2): >>45096395 #>>45101244 #
24. ljm ◴[] No.45096119{4}[source]
I've had some conversations where the other person goes into ChatGPT to answer a question while I’m in the process of explaining a solution, and then says “GPT says this, look…”

Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.

If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.

25. niccl ◴[] No.45096193{6}[source]
It's just dawned on me that one possible reason is that you don't know which docs to read. I've recently been forced into learning some JavaScript, and given the original question, I wouldn't have known where to start. Now the AI has given me a bunch of things I can look at to see if it's the right thing.
replies(1): >>45097577 #
26. Aurornis ◴[] No.45096203{4}[source]
They’ll get there. Tech people have been exposed to it longer. They’ve been around long enough to see people embarrassed by LLM hallucinations.

People who are newer to it (most people) think it’s so amazing that errors are forgivable.

replies(2): >>45096376 #>>45107709 #
27. meindnoch ◴[] No.45096304{3}[source]

  >Temporal.Instant.fromEpochSeconds(0).toPlainDate()
  Uncaught TypeError: Temporal.Instant.fromEpochSeconds is not a function
Hmm, docs [1] say it should be fromEpochMilliseconds(0). Let's try with that!

  Temporal.Instant.fromEpochMilliseconds(0).toPlainDate()
  Uncaught TypeError: Temporal.Instant.fromEpochMilliseconds(...).toPlainDate is not a function
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
replies(1): >>45097224 #
28. simonw ◴[] No.45096376{5}[source]
If anything, I expect this to get worse.

The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.

I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.

replies(1): >>45096573 #
29. crashabr ◴[] No.45096395{4}[source]
I think you messed up something with your time-travelling setup. We're in the timeline where GPT-5 did not become the all-powerful sentient AI that AI boosters promised us. Which timeline are you from?
replies(1): >>45097173 #
30. LtWorf ◴[] No.45096573{6}[source]
I think it's more that Google is getting considerably worse.
31. novok ◴[] No.45096892{3}[source]
LLM text walls are the new pasting of a Google or Wikipedia result link, just more annoying.
replies(1): >>45098765 #
32. buu700 ◴[] No.45096987{4}[source]
I don't have an issue quoting LLMs in and of itself, but the context and how you present it both matter.

"ChatGPT says X" seems roughly equivalent to "some random blog I found claims X". There's a difference between sharing something as a starting point for investigation and passing off unverified information (from any source) as your own well researched/substantiated work which you're willing to stake your professional reputation on standing by.

Of course, quoting an LLM is also pretty different from merely collaborating with an LLM on writing content that's substantially your own words or ideas, which no one should care about one way or another, at least in most contexts.

33. fsckboy ◴[] No.45097171[source]
>a perfect example of how AI hallucinations/lack of accuracy could significantly impact our lives going forward.

how about stop forming judgments of people based on their stance on Israel/Hamas, and stop hanging around people who do, and you'll be fine. if somebody misstates your opinion, it won't matter.

probably you'll have to drop bluesky and parts of HN (like this political discussion that you urge be left up) but that's necessary because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked, and AI is just flipping a coin which is just as good as an illegitimate opinion.

(if anybody would like to convince me that they are well informed on these topics, i'm all ears, but doing it here is imho a bad idea so it's on you if you try)

replies(3): >>45097350 #>>45097356 #>>45097546 #
34. tough ◴[] No.45097173{5}[source]
GPT-6 will save us!
replies(1): >>45097467 #
35. Gigachad ◴[] No.45097177[source]
Google should be held liable for this. They are the ones who published and hosted it, and they should be accountable for every bit of libel they publish.
replies(1): >>45101001 #
36. zdragnar ◴[] No.45097224{4}[source]
"The docs" also say that it's only available in firefox. If you're going to use the docs, you should use the docs.
replies(1): >>45100255 #
37. slg ◴[] No.45097350[source]
>because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked

Sure, there is plenty of misinformation being thrown in multiple different directions, but if you think literally "all legitimate opinions" are "misinformed/cherry picked", then odds are you are just looking at the issue through your own misinformed frame of reference.

replies(1): >>45097399 #
38. pimlottc ◴[] No.45097356[source]
This has very little to do with Israel/Hamas. It could be false information about a lewd act, a violent crime, a racist comment, an affair, gross incompetence, a medical condition, religious blasphemy, etc, etc, etc.

People make judgments about people based on secondhand information. That is just how people work.

39. fsckboy ◴[] No.45097399{3}[source]
>but if you think literally "all legitimate opinions" are "misinformed/cherry picked", then odds are

yes, i literally do think that, so there are no odds.

i think i am well informed on the related subjects to the extent that whatever point someone might want to make i'll probably have a counterpoint

replies(2): >>45097655 #>>45097687 #
40. lioeters ◴[] No.45097467{6}[source]
Hi, I'm from 2 years in the future. Stop this before it's too late, GPT-7 will enslave humanity and aaargh..
replies(1): >>45097854 #
41. Aeolun ◴[] No.45097546[source]
> how about stop forming judgments of people based on their stance on Israel/Hamas

I really don’t need to do much more than compare ‘number of children killed’ between Israel and Palestine to see who is on the right side of history here. I’ll absolutely form judgements of people based on how they feel about that.

replies(1): >>45097670 #
42. freeopinion ◴[] No.45097577{7}[source]
If you didn't know enough to provide the original prompt, the AI sends you down the Date path instead of Temporal. But you could have used a 20-year-old search engine that would lead you to the JavaScript docs. You don't need error-riddled AI-generated code to find your way to the docs.
replies(1): >>45124748 #
43. ◴[] No.45097655{4}[source]
44. dotancohen ◴[] No.45097670{3}[source]
I hope that you also consider "how many shelters" each side builds for its civilians. And "how many children are used as human shields" as well.
45. dotancohen ◴[] No.45097687{4}[source]
Whenever someone asks my opinion on something related to the conflict in the Holy Land, I ask them if they want the anti-Jewish answer, the anti-Muslim answer, the pro-Jewish answer, or the pro-Muslim answer.

And it took me decades of studying this to determine what to call the two sides.

replies(1): >>45100093 #
46. userbinator ◴[] No.45097691[source]
"Trust, but verify" is an oxymoron. AI is not to be trusted for information.
replies(1): >>45107441 #
47. tough ◴[] No.45097854{7}[source]
but we get time travel into the past???
replies(1): >>45098059 #
48. lioeters ◴[] No.45098059{8}[source]
Time travel was a minor side effect of achieving AGI. We got bigger problems now in the future, something to do with the multiverse, aargh..
49. grg0 ◴[] No.45098765{4}[source]
In the old days, when somebody asked a stupid question on a chat/forum that was just a search away, you would link them to "let me google it for you" (site seems down, but there is now a "let me google that for you"), where it'd take the search query in the URL and display an animation of typing the search in the box and clicking the "search" button.

Every time somebody pastes an LLM response at work, it feels exactly like that. As if I were too fucking stupid to look something up and the thought hadn't even occurred to me, when the whole fucking point of me talking to you is that I wanted a personal response and your opinion to begin with.

replies(1): >>45098996 #
50. abustamam ◴[] No.45098969[source]
Whenever I use AI in social settings to fact-check, do research, or get advice, I always trust but verify, and I disclaim it so that people know to trust but verify too.

I think this is a good habit to get people into, even in casual conversations. Even if someone didn't directly get their info from AI and got it online, the content could have still been generated by AI. Like you said, the trust part of trust but verify is quickly dwindling.

51. rafram ◴[] No.45098996{5}[source]
(It’s always been Let Me Google That For You.)
replies(1): >>45111414 #
52. dmurray ◴[] No.45100093{5}[source]
I also like "the Holy Land" as a less political name for that region.

It's out of fashion and perhaps identified with Christianity, and some people think I'm being tongue-in-cheek or gently trolling by using it. But IMO it's neutral and unambiguous: that's a part of the world that is sacred to all the major religions of the Western hemisphere, while not being tied to any particular set of boundaries.

replies(1): >>45101226 #
53. meindnoch ◴[] No.45100255{5}[source]
This output is from Firefox :)
54. account42 ◴[] No.45100795[source]
> I've had multiple people copy and paste AI conversations and results into GitHub issues, emails, etc., and I think there's a growing number of people who blindly trust the results of any of these models... including the 'results summary' posted at the top of Google search results.

I like the term "echoborg" for these people. I hope it catches on.

55. account42 ◴[] No.45100834[source]
I mean, that's what they have been getting already: the average Joe has had to deal with draconian copyright all this time, but now that it's inconvenient for big tech, they get to hand-wave it away. The social contract has already been broken.

And companies have always been able to get away with relatively minor fines for things that get individuals locked up until they rot.

56. pjc50 ◴[] No.45100992[source]
> These companies can’t have their cake and eat it too

I think you're underestimating the effect of billions of dollars on the legal system, and the likely impact of the Have Your Cake And Eat It Act 2026.

replies(1): >>45107727 #
57. pjc50 ◴[] No.45101001[source]
American attitudes to free speech mean that only the most dramatic, damaging libels can be held accountable, years after the effect. I think the only one I can think of that got justice was Alex Jones libelling Sandy Hook victims.

(no easy answers: UK libel law errs in the other direction)

58. dotancohen ◴[] No.45101226{6}[source]
I don't know about the Druze, but the Muslims and the Jews also use the term The Holy Land.
59. imtringued ◴[] No.45101244{4}[source]
Why would people need Discord if they can just talk to the AI directly?
replies(1): >>45104081 #
60. dieortin ◴[] No.45103285[source]
Sorry but I don’t see how what you mention is a problem. A search engine is just surfacing the content you have in your site. That is very different from it making stuff up.
replies(1): >>45116914 #
61. MobiusHorizons ◴[] No.45103572{5}[source]
I’m pretty sure toPlainDate() returns an object not a string.
replies(1): >>45124655 #
62. Rohansi ◴[] No.45104081{5}[source]
Because arguing with people who are wrong on the internet. It's no fun doing the same with an LLM, because either you're actually wrong or it will assume you're right without putting up a fight.
63. nielsbot ◴[] No.45107441{3}[source]
AI = convincing garbage generator
64. fennecbutt ◴[] No.45107694[source]
It's just mass stupidity, really. Technology is just a lever for what already existed.

The people blindly trusting AI nonsense are the same people who trusted nonsense from social media or talking heads on disreputable news channels.

Like, who actually reads the output of The Sun, etc.? Those people do, always have, and will continue to do so. And they vote, yaaay democracy. If your voter base lives in a fantasy world of fake news and false science, is democracy still sacrosanct?

65. fennecbutt ◴[] No.45107709{5}[source]
No, I don't believe they will.
66. fennecbutt ◴[] No.45107727{3}[source]
Yup, corporations have owned us and the government for a long time. Idk why people still act surprised about this.
67. grg0 ◴[] No.45111414{6}[source]
I am getting old.
68. reaperducer ◴[] No.45116914{3}[source]
No apology necessary. Read it again. More carefully this time.
69. nosianu ◴[] No.45124655{6}[source]
???

What does that have to do with my comment?

The OP explicitly wrote

> prompt> use javascript to convert a unix timestamp to a date in 'YYYY-MM-DD' format using Temporal

replies(1): >>45128498 #
70. nosianu ◴[] No.45124739{6}[source]
Because you skip the step of finding out which docs to read. You get presented with the specific functions and which parts of them you should check. Also, you only need to do this when it isn't already clear.
71. nosianu ◴[] No.45124748{8}[source]
Are you saying there is more than one way to solve most problems in life? I don't believe it! Obviously, only what you personally like is the one true way. /s
72. MobiusHorizons ◴[] No.45128498{7}[source]
> answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()

The answer ends in `toPlainDate()`, which returns an object with year, month, and day properties, i.e. it does not output the requested format.

This is in addition to the issue that `fromEpochSeconds(timestamp)` really should probably be `fromEpochMilliseconds(timestamp * 1000)`.
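
For reference, a working version might look like the sketch below (assuming the current Temporal proposal, in which an Instant carries no time zone, so one has to be chosen explicitly; UTC here):

  // Convert a Unix timestamp (in seconds) to a 'YYYY-MM-DD' string.
  function unixToISODate(timestamp) {
    return Temporal.Instant
      .fromEpochMilliseconds(timestamp * 1000) // fromEpochSeconds is no longer in the proposal
      .toZonedDateTimeISO('UTC')               // Instant has no zone; pick one explicitly
      .toPlainDate()                           // ZonedDateTime does have toPlainDate()
      .toString();                             // PlainDate.toString() yields 'YYYY-MM-DD'
  }

  unixToISODate(0); // '1970-01-01'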