
317 points thunderbong | 3 comments
1. djoldman No.42203262
One interesting property of LLMs is that for some tasks their precision is garbage but their recall is high (in essence: their top 5 answers are wrong, but the right answer is somewhere in the top 100).

As it relates to infinite context: if one pairs the above with some kind of intelligent "solution-checker," models might be able to provide value across absolutely monstrous text sizes where it's critical to tie together two facts that are worlds apart.
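The pattern described above can be sketched as a best-of-n loop with a verifier. This is a minimal, runnable illustration only: `generate_candidates` and `check_solution` are hypothetical stand-ins for an LLM sampler and a domain-specific checker (e.g. a unit test or proof checker), not any real API.

```python
import random

def generate_candidates(question, n=100):
    # Stand-in for sampling n answers from a model. Faked here so the
    # sketch runs: one correct answer hidden among many wrong ones,
    # mirroring "top 5 wrong, top 100 contains the right answer".
    answers = [f"wrong-{i}" for i in range(n - 1)] + ["correct"]
    random.shuffle(answers)
    return answers

def check_solution(question, answer):
    # Stand-in for the "intelligent solution-checker".
    return answer == "correct"

def best_of_n(question, n=100):
    # Precision is poor (most candidates are wrong) but recall is high
    # (the right answer is among the n samples), so filtering with the
    # checker recovers it.
    for candidate in generate_candidates(question, n):
        if check_solution(question, candidate):
            return candidate
    return None

print(best_of_n("tie two distant facts together"))  # → correct
```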

replies(1): >>42203274 #
2. mormegil No.42203274
This probably didn't belong here?
replies(1): >>42204059 #
3. djoldman No.42204059
It didn't! Thanks