
An LLM is a lossy encyclopedia

(simonwillison.net)
509 points by tosh | 7 comments

(the referenced HN thread starts at https://news.ycombinator.com/item?id=45060519)
quincepie ◴[] No.45101219[source]
I totally agree with the author. Sadly, I feel like that's not how the majority of LLM users tend to view LLMs. And it's definitely not how AI companies market them.

> The key thing is to develop an intuition for questions it can usefully answer vs questions that are at a level of detail where the lossiness matters

the problem is that in order to develop an intuition for which questions LLMs can answer, the user needs to know at least something about the topic beforehand. I believe it's this lack of initial understanding on the user's part that can lead to taking LLM output as factual. If one side of the exchange knows nothing about the subject, the other side can use jargon and present random or lossy facts that are almost guaranteed to impress.

> The way to solve this particular problem is to make a correct example available to it.

My question is how much effort it would take to make a correct example available to the LLM before it can produce quality, useful output. If the effort I put in is more than what I get in return, then I feel it's best to reason through and write it myself.
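
For concreteness, "making a correct example available" can be as simple as pasting a known-good snippet into the prompt. A minimal sketch using the OpenAI Python client (the model name and the example snippet here are placeholders, not anything from the article):

    # Sketch: supply a known-good example in the prompt so the model imitates it
    # rather than guessing at an API. Model name and example are placeholders.
    from openai import OpenAI

    client = OpenAI()

    correct_example = '''
    import httpx
    resp = httpx.get("https://example.com/api", timeout=10.0)
    resp.raise_for_status()
    data = resp.json()
    '''

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "Follow the provided example exactly; do not invent APIs."},
            {"role": "user", "content": "Here is a correct example:\n" + correct_example +
                "\nNow write a function that fetches JSON from a given URL and returns it."},
        ],
    )
    print(response.choices[0].message.content)

Whether that up-front effort pays off is exactly the trade-off in question.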

replies(7): >>45102038 #>>45102286 #>>45103159 #>>45103931 #>>45104349 #>>45105150 #>>45116121 #
cj ◴[] No.45103159[source]
> the user will at least need to know something about the topic beforehand.

I used ChatGPT 5 over the weekend to double check dosing guidelines for a specific medication. "Provide dosage guidelines for medication [insert here]"

It spit back dosing guidelines that were an order of magnitude wrong (suggested 100mcg instead of 1mg). When I saw 100mcg, I was suspicious and said "I don't think that's right" and it quickly corrected itself and provided the correct dosing guidelines.
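
For scale, the gap is exactly one order of magnitude, since 1 mg is 1,000 mcg. A trivial unit check makes that obvious (purely illustrative sketch, not anything the product actually does):

    # Illustrative only: normalize doses to micrograms and compare.
    def to_mcg(value, unit):
        factors = {"mcg": 1, "mg": 1_000, "g": 1_000_000}
        return value * factors[unit]

    suggested = to_mcg(100, "mcg")   # what the model suggested
    correct = to_mcg(1, "mg")        # the correct guideline
    print(correct / suggested)       # 10.0 - off by a factor of ten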

These are the kind of innocent errors that can be dangerous if users trust it blindly.

The main challenge is that LLMs aren't able to gauge confidence in their answers, so they can't adjust how confidently they communicate information back to you. It's like compressing a photo and the photographer wrongly saying "here's the best quality image I have!" - do you take the photographer at their word, or do you challenge them to find a better quality image?

replies(12): >>45103322 #>>45103346 #>>45103459 #>>45103642 #>>45106112 #>>45106634 #>>45108321 #>>45108605 #>>45109136 #>>45110008 #>>45110773 #>>45112140 #
BeetleB ◴[] No.45108605[source]
> I used ChatGPT 5 over the weekend to double check dosing guidelines for a specific medication.

This use case is bad by several degrees.

Consider an alternative: using Google to search for it and relying on the AI-generated answer. That would be one degree less bad, but still bad.

What about using Google and clicking on one of the top results? Maybe healthline.com? That reduces the badness by one further degree, but it's still bad.

I could go on and on, but for this use case, unless it's some generic drug (ibuprofen or something), the only correct approach is going to the manufacturer's web site, ensuring you're looking at the exact same medication (not some newer version or a variant), and reading the dosage guidelines there.

No, not Mayo clinic or any other site (unless it's a pretty generic medicine).

This is just not a good example to highlight the problems of using an LLM. You're likely not that much worse off than using Google.

replies(1): >>45108739 #
cj ◴[] No.45108739[source]
The compound I was researching was [edit: removed].

Problem is it's not FDA approved, only prescribed by compounding pharmacies off label. Experimental compound with no official guidelines.

The first result on Google for "[edit: removed] dosing guidelines" is a random word document hosted by a Telehealth clinic. Not exactly the most reliable source.

Edit: Jeesh, what’s with the downvotes?

replies(2): >>45109196 #>>45114986 #
nonameiguess ◴[] No.45114986{4}[source]
I think this actually points at a different problem - a problem with LLM users, though only to the same extent that it's a problem with how people treat any source they consider an authority, on or off the Internet. No LLM, nor any other source on the Internet, nor any other source off the Internet, can give you reliable dosage guidelines for copper peptides because this is information that is not known to humans. There is some answer to the question of what response you might expect and how that varies by dose, but without the clinical trials ever having been conducted, it's not an answer anyone actually has. Marketing and popular misconceptions about AI lead to people expecting it to be able to conjure facts out of thin air, perhaps reasoning from first principles using its highly honed model of human physiology.

It's an uncomfortable position to be in trying to biohack your way to a more youthful appearance using treatments that have never been studied in human trials, but that's the reality you're facing. Whatever guidelines you manage to find, whether from the telehealth clinic directly, or from a language model that read the Internet and ingested that along with maybe a few other sources, are generally extrapolated from early rodent studies and all that's being extrapolated is an allometric scaling from rat body to human body of the dosage the researchers actually gave to the rats. What effect that actually had, and how that may or may not translate to humans, is not usually a part of the consideration. To at least some extent, it can't be if the compound was never trialed on humans.
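
For concreteness, that extrapolation is usually just a body-surface-area conversion of the rat dose. A sketch of the standard calculation (the Km factors are published FDA values; the rat dose is made up, and whether any particular guideline was actually derived this way is an assumption):

    # Sketch of rat-to-human allometric dose scaling (FDA body-surface-area method).
    # Km factors are the standard published values; the rat dose here is made up.
    KM_RAT = 6       # ~150 g rat
    KM_HUMAN = 37    # 60 kg adult

    def human_equivalent_dose(rat_dose_mg_per_kg):
        """Convert a rat dose (mg/kg) to a human-equivalent dose (mg/kg)."""
        return rat_dose_mg_per_kg * (KM_RAT / KM_HUMAN)

    rat_dose = 1.0                          # mg/kg given to the rats (made up)
    hed = human_equivalent_dose(rat_dose)   # ~0.16 mg/kg
    print(f"{hed * 60:.1f} mg total for a 60 kg adult")   # ~9.7 mg

Note that nothing in that arithmetic says anything about what the dose actually does in a human.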

You're basically just scaling a dosage up to human size that at least didn't kill the rats. Take that and it probably won't kill you. What it might actually do can't be answered, not by doctors, not by an LLM, not by Wikipedia, not by anecdotes from past biohackers who tried it on themselves. This is not a failure of information retrieval or compression. You're just asking for information that is not known to anyone, so no one can give it to you.

If there's a problem here specific to LLMs, it's that they'll generally give you an answer anyway and will not in any way quantify the extent to which it is probably bullshit and why.

replies(1): >>45115509 #
cj ◴[] No.45115509{5}[source]
> a problem with LLM users

I think the flaw here is placing blame on users rather than the service provider.

HN is cutting LLM companies slack because we understand the technical limitations that make it hard for an LLM to just say “I don’t know”.

In any other universe, we would be blaming the service rather than the user.

Why don’t we fix LLMs so they don’t spit out garbage when they don’t know the answer? Have we given up on that thought?

replies(2): >>45115686 #>>45117082 #
1. simonw ◴[] No.45115686{6}[source]
Current frontier LLMs - Claude 4, GPT-5, Gemini 2.5 - are massively more likely to say "I don't know" than last year's models.
replies(1): >>45115727 #
2. cj ◴[] No.45115727[source]
I don’t think I’ve ever seen ChatGPT 5 refuse to answer any prompt I’ve ever given it. I’m doing 20+ chats a day.

What’s an example prompt where it will say “idk”?

Edit: Just tried a silly one, asking it to tell me about the 8th continent on Earth, which doesn’t exist. How difficult is it for the model to just say “sorry, there are only 7 continents”? I think we should expect more from LLMs and stop blaming things on technical limitations. “It’s hard” is getting to be an old excuse considering the amount of money flowing into building these systems.

replies(1): >>45116250 #
3. simonw ◴[] No.45116250[source]
https://chatgpt.com/share/68b85035-62ec-8006-ab20-af5931808b... - "There are only seven recognized continents on Earth: Africa, Antarctica, Asia, Australia, Europe, North America, and South America."

Here's a recent example of it saying "I don't know" - I asked it to figure out why there was an octopus in a mural about mushrooms: https://chatgpt.com/share/68b8507f-cc90-8006-b9d1-c06a227850... - "I wasn’t able to locate a publicly documented explanation of why Jo Brown (Bernoid) chose to include an octopus amid a mushroom-themed mural."

replies(1): >>45116394 #
4. cj ◴[] No.45116394{3}[source]
Not sure what your system prompt is, but asking the exact same prompt word for word for me results in a response talking about "Zealandia, a continent that is 93% submerged underwater."

The 2nd example isn't all that impressive since you're asking it to provide you something very specific. It succeeded in not hallucinating. It didn't succeed at saying "I'm not sure" in the face of ambiguity.

I want the LLM to respond more like a librarian: When they know something for sure, they tell you definitively, otherwise they say "I'm not entirely sure, but I can point you to where you need to look to get the information you need."
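
A standing system prompt gets partway toward that tone, though it only shapes the wording and can't give the model real knowledge of what it knows (sketch only; model name is a placeholder):

    # Sketch: nudging the model toward "librarian" behavior with a system prompt.
    # This changes how answers are phrased, not whether the model actually knows.
    from openai import OpenAI

    client = OpenAI()
    librarian_prompt = (
        "If you are confident in an answer, state it plainly. "
        "If you are not, say 'I'm not entirely sure' and point to where the "
        "information can be verified instead of guessing."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": librarian_prompt},
            {"role": "user", "content": "What are the dosing guidelines for compound X?"},
        ],
    )
    print(response.choices[0].message.content)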

replies(1): >>45116537 #
5. simonw ◴[] No.45116537{4}[source]
I'm using regular GPT-5, no custom instructions and memory turned off.

Can you link to your shared Zealandia result?

I think that mural result was spectacularly impressive, given that it started with a photo I took of the mural with almost no additional context.

replies(1): >>45116627 #
6. cj ◴[] No.45116627{5}[source]
I can't link since it's in an enterprise account.

Interestingly, I tried the same question in a separate ChatGPT account and it gave a similar response to the one you got. Maybe it was pulling context from the (separate) chat thread where it was talking about Zealandia. Which raises another question: once it gets something wrong, will it just keep reinforcing the inaccuracy in future chats? That could lead to some very suboptimal behavior.

Getting back on topic, I strongly dislike the argument that this is all "user error". These models are on track to be worth a trillion dollars at some point in the future. Let's raise our expectations of them. Fix the models, not the users.

replies(1): >>45116910 #
7. simonw ◴[] No.45116910{6}[source]
I wonder if you're stuck on an older model like GPT-4o?

EDIT: I think that's likely what is happening here: I tried the prompt against GPT-4o and got this https://chatgpt.com/share/68b8683b-09b0-8006-8f66-a316bfebda...

My consistent position on this stuff is that it's actually way harder to use than most people (and the companies marketing it) let on.

I'm not sure if it's getting easier to use over time either. The models are getting "better" but that partly means their error cases are harder to reason about, especially as they become less common.