Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
“Trust, but verify” is all the more relevant today. Except I would discount even the trust part.
A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.
When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.
answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()
Trust but verify?
In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)
Or have I missed your point?
---
°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.
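If I'm reading the current Temporal docs right (not certain; the proposal has shifted over the years), there is no default to remember: an Instant carries no time zone at all, and toPlainDate() doesn't even exist on it, so you're forced to name a zone before you can get a calendar date. A rough sketch of what I believe works today, assuming a runtime that ships Temporal and that `timestamp` is epoch seconds:

  // Assumption: runtime with Temporal available; `timestamp` is Unix epoch seconds.
  const instant = Temporal.Instant.fromEpochMilliseconds(timestamp * 1000);
  // No implicit zone on an Instant: you have to pick one (UTC here, or the system zone).
  const utcDate = instant.toZonedDateTimeISO('UTC').toPlainDate();
  const localDate = instant.toZonedDateTimeISO(Temporal.Now.timeZoneId()).toPlainDate();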
In the first chapter it claimed that most adult humans have 20 teeth.
In the second chapter you read that female humans have 22 chromosomes and male humans have 23.
You find these claims in the 24 pages you sample. Do you buy the book?
Companies are paying huge sums to AI companies with worse track records.
Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.
[edit: replace paragraph that somehow got deleted, fix typo]
This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people to run Discord servers or open-source projects.
But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"
Most people don't care and don't want to care.
Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
With the same code out of an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better if they tend to learn and not make the same mistake again. LLMs will happily screw up randomly on every repeated prompt.
The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.
If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.
These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.
Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.
If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.
People who are newer to it (most people) think it's so amazing that the errors are forgivable.
>Temporal.Instant.fromEpochSeconds(0).toPlainDate()
Uncaught TypeError: Temporal.Instant.fromEpochSeconds is not a function
Hmm, docs [1] say it should be fromEpochMilliseconds(0). Let's try with that!

>Temporal.Instant.fromEpochMilliseconds(0).toPlainDate()
Uncaught TypeError: Temporal.Instant.fromEpochMilliseconds(...).toPlainDate is not a function
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.
I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.
"ChatGPT says X" seems roughly equivalent to "some random blog I found claims X". There's a difference between sharing something as a starting point for investigation and passing off unverified information (from any source) as your own well researched/substantiated work which you're willing to stake your professional reputation on standing by.
Of course, quoting an LLM is also pretty different from merely collaborating with an LLM on writing content that's substantially your own words or ideas, which no one should care about one way or another, at least in most contexts.
how about stop forming judgments of people based on their stance on Israel/Hamas, and stop hanging around people who do, and you'll be fine. if somebody misstates your opinion, it won't matter.
probably you'll have to drop bluesky and parts of HN (like this political discussion that you urge be left up) but that's necessary because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked, and AI is just flipping a coin which is just as good as an illegitimate opinion.
(if anybody would like to convince me that they are well informed on these topics, i'm all ears, but doing it here is imho a bad idea so it's on you if you try)
Sure, there is plenty of misinformation being thrown in multiple different directions, but if you think literally "all legitimate opinions" are "misinformed/cherry picked", then odds are you are just looking at the issue through your own misinformed frame of reference.
People make judgments about people based on second hand information. That is just how people work.
yes, i literally do think that, so there are no odds.
i think i am well informed on the related subjects to the extent that whatever point someone might want to make i'll probably have a counterpoint
I really don’t need to do much more than compare ‘number of children killed’ between Israel and Palestine to see who is on the right side of history here. I’ll absolutely form judgements of people based on how they feel about that.
And it took me decades of studying this to determine what to call the two sides.
Every time somebody pastes an LLM response at work, it feels exactly like that. As if I were too fucking stupid to look something up and the thought hadn't even occurred to me, when the whole fucking point of me talking to you is that I wanted a personal response and your opinion to begin with.
I think this is a good habit to get people into, even in casual conversations. Even if someone didn't directly get their info from AI and got it online, the content could have still been generated by AI. Like you said, the trust part of trust but verify is quickly dwindling.
It's out of fashion and perhaps identified with Christianity, and some people think I'm being tongue-in-cheek or gently trolling by using it. But IMO it's neutral and unambiguous: that's a part of the world that is sacred to all the major religions of the Western hemisphere, while not being tied to any particular set of boundaries.
I like the term "echoborg" for these people. I hope it catches on.
And companies have always been able to get away with relatively minor fines for things that get individuals locked up until they rot.
(no easy answers: UK libel law errs in the other direction)
The people blindly trusting AI nonsense are the same people who trusted nonsense from social media or talking heads on disreputable news channels.
Like, who actually reads the output of The Sun, etc.? Those people do, always have done, and will continue to do so. And they vote, yaaay democracy - if your voter base lives in a fantasy world of fake news and false science, is democracy still sacrosanct?
The answer ends in `toPlainDate()`, which returns an object with year, month and day properties, i.e. it does not output the requested format.
This is in addition to the issue that `fromEpochSeconds(timestamp)` really should probably be `fromEpochMilliseconds(timestamp * 1000)`.
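For what it's worth, the closest working equivalent I could piece together (assuming a runtime with Temporal, and guessing that UTC was the intended zone) looks something like:

  // Hypothetical corrected version of the pasted answer; `timestamp` is epoch seconds.
  Temporal.Instant.fromEpochMilliseconds(timestamp * 1000)
    .toZonedDateTimeISO('UTC')  // an Instant has no zone, so one has to be named
    .toPlainDate()
    .toString();                // e.g. "1970-01-01" for timestamp = 0

which at least produces an ISO date string rather than a bare object.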