
579 points | paulpauper | 2 comments
aerhardt No.43604214
My mom told me yesterday that Paul Newman had massive problems with alcohol. I was somewhat skeptical, so this morning I asked ChatGPT a very simple question:

"Is Paul Newman known for having had problems with alcohol?"

All of the models up to o3-mini-high told me he had no known problems. Here's o3-mini-high's response:

"Paul Newman is not widely known for having had problems with alcohol. While he portrayed characters who sometimes dealt with personal struggles on screen, his personal life and public image were more focused on his celebrated acting career, philanthropic work, and passion for auto racing rather than any issues with alcohol. There is no substantial or widely reported evidence in reputable biographies or interviews that indicates he struggled with alcohol abuse."

There is plenty of evidence online that he struggled a lot with alcohol, including testimony from his long-time wife Joanne Woodward.

I sent my mom the ChatGPT reply and in five minutes she found an authoritative source to back her argument [1].

I use ChatGPT for many tasks every day, but I couldn't fathom that it would get something so simple so wrong.

Lesson(s) learned... Including not doubting my mother's movie trivia knowledge.

[1] https://www.newyorker.com/magazine/2022/10/24/who-paul-newma...

replies(27): >>43604240 #>>43604254 #>>43604266 #>>43604352 #>>43604411 #>>43604434 #>>43604445 #>>43604447 #>>43604474 #>>43605109 #>>43605148 #>>43605609 #>>43605734 #>>43605773 #>>43605938 #>>43605941 #>>43606141 #>>43606176 #>>43606197 #>>43606455 #>>43606465 #>>43606551 #>>43606632 #>>43606774 #>>43606870 #>>43606938 #>>43607090 #
stavros No.43604447
LLMs aren't good at being search engines; they're good at understanding things. Put an LLM on top of a search engine, and that's the appropriate tool for this use case.

I guess the problem with LLMs is that they're too usable for their own good, so people don't realize that they can't perfectly know all the trivia in the world, exactly like any human.
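The "LLM on top of a search engine" idea is just retrieval-augmented generation: fetch relevant documents first, then ask the model to answer from them rather than from its parametric memory. A minimal sketch, where the corpus, the word-overlap scoring, and the prompt format are all illustrative stand-ins rather than any real product's API:

```python
# Toy retrieval-augmented pipeline: retrieve snippets, then build a
# grounded prompt that would be sent to an LLM. Everything here is a
# hypothetical stand-in for a real search index and model call.

CORPUS = {
    "doc1": "Paul Newman discussed his struggles with alcohol in a posthumous memoir.",
    "doc2": "Paul Newman was celebrated for acting, philanthropy, and auto racing.",
    "doc3": "Joanne Woodward was married to Paul Newman for fifty years.",
}

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a real system
    would use BM25 or embeddings)."""
    q = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Ground the model in retrieved snippets instead of its memory."""
    context = "\n".join(f"- {s}" for s in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("Did Paul Newman have problems with alcohol?", CORPUS)
print(prompt)
```

With retrieval in front, the alcohol question from the parent comment is answered from sources the user can check, instead of from whatever the model happens to have memorized.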

replies(4): >>43604471 #>>43604558 #>>43606272 #>>43610103 #
MegaButts No.43604471
> LLMs aren't good at being search engines, they're good at understanding things.

LLMs are literally fundamentally incapable of understanding things. They are stochastic parrots and you've been fooled.
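Taken literally, a "stochastic parrot" is something like a Markov chain: it can only re-emit transitions it saw in training, sampled at random. A toy sketch of that caricature (which is precisely what the rest of the thread disputes applies to transformers):

```python
import random

# A literal "stochastic parrot": a bigram model that can only replay
# word transitions observed in its training text, chosen at random.

training_text = "pieces of eight pieces of gold pieces of silver"

# Map each word to the words observed to follow it.
transitions: dict[str, list[str]] = {}
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    transitions.setdefault(prev, []).append(nxt)

def parrot(start: str, length: int, seed: int = 0) -> str:
    """Generate text by randomly walking the observed transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:  # dead end: the parrot has nothing to say
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(parrot("pieces", 4))
```

Every output of this model is a recombination of observed bigrams; it can never produce a word pair it was not trained on, which is the behaviour the term accuses LLMs of.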

replies(5): >>43604573 #>>43604575 #>>43604616 #>>43604708 #>>43604736 #
bobsmooth No.43604575
What do you call someone that mentions "stochastic parrots" every time LLMs are mentioned?
replies(2): >>43604599 #>>43605079 #
MegaButts No.43604599
It's the first time I've ever used that phrase on HN. Anyway, what phrase do you think works better than 'stochastic parrot' to describe how LLMs function?
replies(2): >>43604796 #>>43604903 #
karn97 No.43604796
Try to come up with a way to prove humans aren't stochastic parrots; then maybe people will start taking you seriously. Right now it's just childish Reddit angst, nothing else.
replies(2): >>43605150 #>>43612004 #
bluefirebrand No.43605150
> Try to come up with a way to prove humans aren't stochastic parrots

Look around you

Look at Skyscrapers. Rocket ships. Agriculture.

If you want to make a claim that humans are nothing more than stochastic parrots then you need to explain where all of this came from. What were we parroting?

Meanwhile all that LLMs do is parrot things that humans created

replies(1): >>43606469 #
jodrellblank No.43606469
Skyscrapers: trees, mountains, cliffs, caves in mountainsides, termite mounds, humans knew things could go high, the Colosseum was built two thousand years ago as a huge multi-storey building.

Rocket ships: volcanic eruptions show heat and explosive outbursts can fling things high, gunpowder and cannons, bellows showing air moves things.

Agriculture: forests, plains, jungle, desert oases, humans knew plants grew from seeds, grew with rain, grew near water, and grew where animals trampled them into the ground.

We would need a list of every attempted idea, every invention and patent ever tried or conceived, and then we would see that inventions are the same random permutations on prior ideas, with Darwinian-style survivorship, as everything else. There were steel boats with multiple levels in them before skyscrapers; is the idea of a tall steel building really so magical when there were over a billion people on Earth in 1800 who could have come up with it?

replies(2): >>43606755 #>>43606817 #
bluefirebrand No.43606817
> when there were over a billion people on Earth in 1800 who could have come up with it

My point is that humans did come up with it. Humans did not parrot it from someone or something else that showed it to us. We didn't "parrot" splitting the atom. We didn't learn how to build skyscrapers from looking at termite hills and we didn't learn to build rockets that can send a person to the moon from seeing a volcano

You are just speaking absolute drivel

replies(1): >>43611484 #
jodrellblank No.43611484
It's obvious that humans imitate concepts and don't come up with things de novo from a blank slate of pure intelligence. So your claim hinges on LLMs parroting the words they are trained on. But they don't do that: their training makes them abstract over concepts and remix them in new ways to output sentences they weren't trained on, e.g.:

Prompt: "Can you give me a URL with some novel components, please?"

DuckDuckGo LLM returns: "Sure! Here’s a fictional URL with some novel components: https://www.example-novels.com/2023/unique-tales/whimsical-j..."

A living parrot echoing "pieces of eight" cannot do this; it cannot say "pieces of <currency>" or "pieces of <valuable mineral>" even if asked to. The LLM training has abstracted some concept of what it means for a text pattern to be a URL, what it means for things to be "novel", and what it means to swap out the components of a URL while keeping them individually valid. It can also give a reasonable answer when asked for a new kind of protocol. So your position hinges on the word "stochastic", used as a slur to mean "the LLM isn't innovating like we do, it's just a dice roll remixing parts it was taught". But if you are arguing that makes it a "stochastic parrot", then you need to consider splitting the atom in its wider context...

> "We didn't "parrot" splitting the atom"

That's because we didn't "split the atom" in one blank-slate experiment with no surrounding context. Rutherford and his team disintegrated the atom around 1914-1919, building on the surrounding scientific work happening at that time: in 1869 Johann Hittorf recognised that something was travelling in a straight line from or near the cathode of a Crookes vacuum tube; in 1876 Eugen Goldstein proved it was coming from the cathode and named it cathode rays (see: cathode-ray-tube computer monitors); and in 1897 J.J. Thomson proved the rays were much lighter than the lightest known element and named them electrons, the first proof that subatomic particles exist. He proposed the 'plum pudding' model of the atom (concept parroting). Guess who J.J. Thomson was an academic advisor of? Ernest Rutherford! In 1909 Rutherford demonstrated subatomic scattering and Millikan determined the charge on an electron; in 1911 Rutherford discovered the atomic nucleus. Eugen Goldstein also discovered the anode rays travelling the other way in the Crookes tube; Wilhelm Wien picked that up and it became mass spectrometry for identifying elements. In 1887 Heinrich Hertz was investigating the photoelectric effect, building on the work of Alexandre Becquerel, Johann Elster, and Hans Geitel. And behind all of it, Dalton's atomic theory of 1803.

Not to mention Rutherford's 1899 studies of radioactivity, following Henri Becquerel's work on Uranium, following Marie Curie's work on Radium and her suggestion of radioactivity being atoms breaking up, and Rutherford's student Frederick Soddy and his work on Radon, and Paul Villard's work on Gamma Ray emissions from Radon.

When Philipp Lenard was studying cathode rays in the 1890s he bought up all the supply of one phosphorescent material which meant Röntgen had to buy a different one to reproduce the results and bought one which responded to X-Rays as well, and that's how he discovered them - not by pure blank-sheet intelligence but by probability and randomness applied to an earlier concept.

That is, nobody taught humans to split the atom and then humans literally parroted the mechanism and did it; but presenting the splitting of the atom as a thing which appeared out of nowhere, not a remixing of existing concepts, is, in your terms, absolute drivel. There were literally a hundred-plus years of scientists and engineers investigating the subatomic world, proposing that atoms could be split, and trying to work out what was in them through small variations on the ideas, equipment and experiments seen before. You can find name after name on Wikipedia of people working on this stuff, being inspired by others' work and remixing the concepts in it. And we all know the 'science progresses one death at a time' idea: individual people pick up what they learned and stick with it until they die, and new ideas and progress need new people to do variations on the ideas which exist.

No, people didn't learn to build rockets from "seeing a volcano", but if you think there was no inspiration from fireworks, cannons, jellyfish squeezing out water to accelerate, no studies of the orbits of moons and planets, no chemistry experiments, no inspiration from thousands of years of flamethrowers (https://en.wikipedia.org/wiki/Flamethrower#History), no seeing explosions move large things, then you're living in a dream.

replies(1): >>43612967 #
bluefirebrand No.43612967
> fireworks, cannons, jellyfish squeezing out water to accelerate, no studies of orbits from moons and planets, no chemistry experiments, no inspiration from thousands of years of flamethrowers

Fireworks, cannons, chemistry experiments and flamethrowers are all human inventions

And yes, exactly! We studied the orbits of moons and planets. We studied animals like jellyfish. We chose to observe the world; we extracted data, we experimented, we saw what worked, refined, improved, and succeeded.

LLMs are not capable of observing anything. They can only regurgitate and remix the information they are fed by humans! By us, because we can observe

An LLM trained on 100% wrong information will always return wrong information for anything you ask it.

Say you train an LLM with the knowledge that fire can burn underwater. It "thinks" that the step-by-step instructions for building a fire are to pile wood and then pour water on the wood. It has no conflicting information in its model. It cannot go try to build a fire this way and observe that it is wrong. It is a parrot. It repeats the information that you give it. At best it can find some relationships between data points that humans haven't realized might be related.

A human could easily go attempt this, realize it doesn't work, and learn from the experience. Humans are not simply parrots. We are capable of exploring our surroundings and internalizing things without needing someone else to tell us how everything works

> That is, nobody taught humans to split the atom and then humans literally parroted the mechanism and did it, but you attempting to present splitting the atom as a thing which appeared out of nowhere and not remixing any existing concepts is, in your terms, absolute drivel

Building on the work of other humans is not parroting

You outlined the absolute genius of humanity building from first principles all the way to splitting the atom, and you still think we're just parroting.

I think we disagree entirely about what parroting is.

replies(1): >>43660214 #
karn97 No.43660214
Your point is contingent on sensor availability to an LLM. LLMs are a frozen human mind until they behave like live ML algorithms.