579 points paulpauper
aerhardt ◴[] No.43604214[source]
My mom told me yesterday that Paul Newman had massive problems with alcohol. I was somewhat skeptical, so this morning I asked ChatGPT a very simple question:

"Is Paul Newman known for having had problems with alcohol?"

All of the models up to o3-mini-high told me he had no known problems. Here's o3-mini-high's response:

"Paul Newman is not widely known for having had problems with alcohol. While he portrayed characters who sometimes dealt with personal struggles on screen, his personal life and public image were more focused on his celebrated acting career, philanthropic work, and passion for auto racing rather than any issues with alcohol. There is no substantial or widely reported evidence in reputable biographies or interviews that indicates he struggled with alcohol abuse."

There is plenty of evidence online that he struggled a lot with alcohol, including testimony from his long-time wife Joanne Woodward.

I sent my mom the ChatGPT reply and in five minutes she found an authoritative source to back her argument [1].

I use ChatGPT for many tasks every day, but I couldn't fathom that it would get something so simple so wrong.

Lesson(s) learned... Including not doubting my mother's movie trivia knowledge.

[1] https://www.newyorker.com/magazine/2022/10/24/who-paul-newma...

replies(27): >>43604240 #>>43604254 #>>43604266 #>>43604352 #>>43604411 #>>43604434 #>>43604445 #>>43604447 #>>43604474 #>>43605109 #>>43605148 #>>43605609 #>>43605734 #>>43605773 #>>43605938 #>>43605941 #>>43606141 #>>43606176 #>>43606197 #>>43606455 #>>43606465 #>>43606551 #>>43606632 #>>43606774 #>>43606870 #>>43606938 #>>43607090 #
ilrwbwrkhv ◴[] No.43604352[source]
LLMs will never be good at specific knowledge unless they are specifically trained for it with narrow "if else" statements.

They're good for broad, general overviews, such as the most popular categories of books in the world.

replies(1): >>43604438 #
Rebuff5007 ◴[] No.43604438[source]
Really? OpenAI says PhD-level intelligence is just around the corner!
replies(2): >>43604495 #>>43604528 #
ludwik ◴[] No.43604528[source]
I don't think they mean "knowledge" when they talk about "intelligence." LLMs are definitely not knowledge bases. They can transform information given to them in impressive ways, but asking a raw (non-RAG-enabled) LLM to provide its own information will probably always be a mistake.
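To make that distinction concrete, here's a minimal sketch of raw recall versus a RAG-style prompt. `ask_llm` and `fetch_passage` are hypothetical placeholders, not any particular vendor's API; the idea is just that in the second call the model only has to work over text it was handed, rather than recall facts from its weights.

```python
# Hypothetical stand-in for a real chat-completion call (not a real API).
def ask_llm(prompt: str) -> str:
    return "<model answer goes here>"

# Hypothetical retrieval step; a real RAG setup would query a search index
# or vector store and return a passage from a trusted source.
def fetch_passage(query: str) -> str:
    return ("Joanne Woodward's accounts and Newman's own memoir describe "
            "his long struggle with heavy drinking.")

question = "Is Paul Newman known for having had problems with alcohol?"

# Raw recall: the model answers from whatever happens to be encoded in its weights.
raw_answer = ask_llm(question)

# RAG-style: supply a retrieved passage and ask the model to answer only from it,
# so it transforms information given to it instead of acting as a knowledge base.
passage = fetch_passage("Paul Newman alcohol")
grounded_answer = ask_llm(
    f"Using only the passage below, answer: {question}\n\nPassage: {passage}"
)

print(raw_answer)
print(grounded_answer)
```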
replies(1): >>43604630 #
AnimalMuppet ◴[] No.43604630[source]
They kind of are knowledge bases, just not in the usual way. The knowledge is encoded in the words they were trained on. They weren't trained on words chosen at random; they were trained on words written by humans to encode some information. In fact, that's the only thing that makes LLMs somewhat useful.