
579 points | paulpauper | 1 comment
aerhardt (No.43604214)
My mom told me yesterday that Paul Newman had massive problems with alcohol. I was somewhat skeptical, so this morning I asked ChatGPT a very simple question:

"Is Paul Newman known for having had problems with alcohol?"

All of the models up to o3-mini-high told me he had no known problems. Here's o3-mini-high's response:

"Paul Newman is not widely known for having had problems with alcohol. While he portrayed characters who sometimes dealt with personal struggles on screen, his personal life and public image were more focused on his celebrated acting career, philanthropic work, and passion for auto racing rather than any issues with alcohol. There is no substantial or widely reported evidence in reputable biographies or interviews that indicates he struggled with alcohol abuse."

There is plenty of evidence online that he struggled a lot with alcohol, including testimony from his long-time wife Joanne Woodward.

I sent my mom the ChatGPT reply and in five minutes she found an authoritative source to back her argument [1].

I use ChatGPT for many tasks every day, but I couldn't fathom that it would get something so simple so wrong.

Lesson(s) learned... including not doubting my mother's movie trivia knowledge.

[1] https://www.newyorker.com/magazine/2022/10/24/who-paul-newma...

stavros (No.43604447)
LLMs aren't good at being search engines; they're good at understanding things. Put an LLM on top of a search engine, and that's the appropriate tool for this use case.

I guess the problem with LLMs is that they're too usable for their own good, so people don't realize that they can't perfectly know all the trivia in the world, any more than a human could.
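For concreteness, that "LLM on top of a search engine" pattern is basically retrieval-augmented generation. A minimal sketch assuming the OpenAI Python client; search_web is a hypothetical stand-in for whatever search API you use, and the model name is only an example:

    # Minimal sketch: ground the LLM's answer in retrieved search
    # results instead of relying on its parametric memory.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def search_web(query: str) -> list[str]:
        # Hypothetical helper; swap in a real search API. Returns a
        # placeholder snippet so the sketch stays self-contained.
        return [f"(placeholder search result for: {query})"]

    def answer_with_sources(question: str) -> str:
        snippets = search_web(question)  # retrieve fresh evidence first
        context = "\n\n".join(snippets)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[
                {"role": "system",
                 "content": "Answer only from the provided search results; "
                            "say so if they are insufficient."},
                {"role": "user",
                 "content": f"Search results:\n{context}\n\n"
                            f"Question: {question}"},
            ],
        )
        return response.choices[0].message.content

    print(answer_with_sources(
        "Is Paul Newman known for having had problems with alcohol?"))

The point of the design is that the model's job shifts from recalling facts to reading and summarizing the evidence put in front of it, which is the part it's actually good at.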

MegaButts (No.43604471)
> LLMs aren't good at being search engines; they're good at understanding things.

LLMs are literally fundamentally incapable of understanding things. They are stochastic parrots and you've been fooled.
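(Mechanically, "stochastic parrot" refers to autoregressive sampling: the model assigns a score to every candidate next token and samples one from the resulting distribution, over and over. A toy sketch with a made-up vocabulary and made-up scores, just to show the mechanism:)

    import math
    import random

    # Toy next-token sampler: softmax over (made-up) model scores,
    # then a weighted random draw. Real models do this over a
    # vocabulary of ~100k tokens at every step of the reply.
    vocab = ["no", "yes", "maybe", "unknown"]
    logits = [2.4, 0.7, 1.1, 0.2]  # hypothetical scores, not from any model

    def sample_next_token(vocab, logits, temperature=1.0):
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(vocab, weights=weights, k=1)[0]

    print(sample_next_token(vocab, logits))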

bobsmooth (No.43604575)
What do you call someone that mentions "stochastic parrots" every time LLMs are mentioned?
MegaButts (No.43604599)
It's the first time I've ever used that phrase on HN. Anyway, what phrase do you think works better than 'stochastic parrot' to describe how LLMs function?
karn97 (No.43604796)
Try to come up with a way to prove humans aren't stochastic parrots; then maybe people will start taking you seriously. Right now this is just childish Reddit angst, nothing else.
throw4847285 (No.43612004)
I hate to be the burden-of-proof guy, but in this case I'll say it: the burden of proof is on you to prove that humans are stochastic parrots. For millennia, nobody thought to assert that the human brain was computational in nature, until people invented computers and all of a sudden started asserting that the human brain was just like a classical computer.

Of course, this turned out to be completely false as our understanding of neural networks advanced. Now, again with no evidence other than "we invented this thing that's useful to us," people are asserting that humans are just like this thing we invented. Why? What's the evidence? There never is any. It's high dorm-room behavior. "What if we're all just machines, man???" And the argument is always that if I disagree with you when you assert this, then I am acting unscientifically and arguing for some kind of magic.

But there's no magic. The human brain simply functions differently from the shiny new toys humans have invented: in its ability to model an external world, in the way emotions and sense experience are inseparable from our capacity to process information, in consciousness. The hardware is entirely different, and we're functionally different.

The closest things to human minds are already out there, and they've been out there for as long as we have: other animals. The truly unscientific move is to get high on your own supply and assert that some fake, creepily ingratiating Spock we made up (who is far less charming than Leonard Nimoy) is more like us than a chimp is.