
361 points mseri | 1 comment
Y_Y No.46002975
I asked it if giraffes were kosher to eat and it told me:

> Giraffes are not kosher because they do not chew their cud, even though they have split hooves. Both requirements must be satisfied for an animal to be permissible.

HN will have removed the extraneous emojis.

This is at odds with my interpretation of giraffe anatomy and behaviour and of Talmudic law.

Luckily old sycophant GPT5.1 agrees with me:

> Yes. They have split hooves and chew cud, so they meet the anatomical criteria. Ritual slaughter is technically feasible though impractical.

replies(3): >>46004171 #>>46005088 #>>46006063 #
Flere-Imsaho No.46006063
Models should not have memorised whether animals are kosher to eat or not. This is information that should be retrieved from RAG or whatever.

If a model responded with "I don't know the answer to that", it would be far more useful. Is anyone actually working on models that are trained to admit when they don't know an answer?
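
For what it's worth, the "retrieve or say you don't know" behaviour can be approximated at the application layer rather than in the weights. Here is a minimal sketch of that idea; `embed`, `vector_store`, and `llm_answer` are hypothetical stand-ins supplied by the caller, not anything from a specific library:

```python
# Minimal sketch of retrieval-gated answering with abstention.
# `embed`, `vector_store`, and `llm_answer` are hypothetical stand-ins for
# whatever embedding model, vector index, and LLM call you actually use.

def answer_factual_query(question, embed, vector_store, llm_answer,
                         min_score: float = 0.75) -> str:
    """Answer only when retrieval finds sufficiently relevant grounding."""
    hits = vector_store.search(embed(question), top_k=5)

    # Keep only passages whose similarity clears a relevance threshold.
    grounded = [h for h in hits if h.score >= min_score]
    if not grounded:
        # Nothing relevant retrieved: abstain instead of trusting memorised facts.
        return "I don't know the answer to that."

    context = "\n\n".join(h.text for h in grounded)
    return llm_answer(
        "Answer using only the context below. If the context is insufficient, "
        f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
```

Of course this only pushes the problem into the retriever and the threshold; the model itself still doesn't know what it doesn't know.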

replies(4): >>46006191 #>>46009037 #>>46009499 #>>46010963 #
1. spmurrayzzz No.46006191
There is an older paper on something related to this [1], where the model outputs reflection tokens that trigger either retrieval or critique steps. The idea is that the model recognizes it needs to fetch some grounding after generating factual content, then reviews what it previously generated against the retrieved grounding.
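
Very roughly, the control flow looks something like the sketch below. This is only an illustration of the reflection-token idea as described above, not the paper's actual implementation; `generate`, `retrieve`, and `critique` are hypothetical placeholders for the fine-tuned model, the retriever, and the critique pass.

```python
# Illustrative sketch of a reflection-token loop in the spirit of Self-RAG [1].
# Not the paper's code: `generate`, `retrieve`, and `critique` are hypothetical
# callables standing in for the fine-tuned model, retriever, and critique pass.

RETRIEVE_TOKEN = "[Retrieve]"  # special token the model emits when it wants grounding

def self_rag_answer(prompt, generate, retrieve, critique, max_rounds: int = 3) -> str:
    draft = generate(prompt)

    for _ in range(max_rounds):
        if RETRIEVE_TOKEN not in draft:
            return draft  # model never asked for grounding; accept the draft as-is

        # The model flagged its own output as needing evidence: fetch passages
        # and regenerate conditioned on them.
        passages = retrieve(draft.replace(RETRIEVE_TOKEN, ""))
        candidate = generate(prompt, context=passages)

        # Critique step: check whether the regenerated answer is actually
        # supported by the retrieved grounding.
        if critique(candidate, passages).is_supported:
            return candidate
        draft = candidate  # not yet supported; try another round

    return draft
```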

The problem with this approach is that it does not generalize well at all out of distribution. I'm not aware of any follow-up to this, but I do think it's an interesting area of research nonetheless.

[1] https://arxiv.org/abs/2310.11511