
223 points benkaiser | 2 comments
Animats ◴[] No.42542976[source]
It's discouraging that an LLM can accurately recall a book. That is, in a sense, overfitting. The LLM is supposed to be much smaller than the training set, having in some sense abstracted the training inputs.

Did they try this on obscure Bible excerpts, or just ones likely to be well known and quoted elsewhere? Well-known quotes would be reinforced by all the copies.

replies(4): >>42543124 #>>42543534 #>>42544514 #>>42545640 #
1. kenjackson ◴[] No.42543534[source]
Does GPT now query in real time? If so, it should be able to reproduce anything searchable verbatim; it just needs to determine when verbatim quoting is appropriate for a given prompt.
replies(1): >>42544640 #
2. benkaiser ◴[] No.42544640[source]
Some services do layer retrieval on top (e.g., Bing), but in the article I'm making direct LLM calls without any external function calling, so any verbatim text has to come from the model itself.
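
For anyone curious what a "direct LLM call" looks like in practice, here is a minimal sketch using OpenAI's Python SDK (the model name and prompt are placeholders, not the article's actual setup). Because no tools or function-calling parameters are passed, the model has no way to search the web; any verse it reproduces verbatim has to come out of its weights.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Plain chat completion: no `tools` argument, so the model cannot call
    # out to search or any other external function. Whatever verse text
    # comes back is generated purely from the trained weights.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "user", "content": "Quote John 3:16 (KJV) verbatim."}
        ],
        temperature=0,  # reduce sampling noise when comparing against the source text
    )

    print(response.choices[0].message.content)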