272 points lermontov | 2 comments
nuz ◴[] No.41905984[source]
Seems like a non-pessimistic use for LLMs: mass analysis of old texts for new finds like this. If this one exists, surely there are many more just a mass analysis away.
replies(2): >>41906047 #>>41906058 #
steve_adams_86 ◴[] No.41906047[source]
I accidentally got Zed to parse way more code than I intended last night, and it cost close to $2 on the Anthropic API. All I can think is how incredibly expensive it would be to feed an LLM enough text to make those connections. I don’t think you’re wrong, though. This is the territory where their ability to find patterns can feel pretty magical. It would cost many, many, many $2, though.
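For a rough sense of scale, a back-of-the-envelope estimate can be sketched. The page count, tokens-per-page, and per-token price below are illustrative assumptions, not quoted rates from any provider:

```python
# Rough input-token cost of feeding a newspaper archive through a hosted LLM.
# All figures below are illustrative assumptions, not quoted API rates.

def archive_scan_cost(pages, tokens_per_page, usd_per_million_input_tokens):
    """Return the input-token cost (USD) of scanning an archive."""
    total_tokens = pages * tokens_per_page
    return total_tokens / 1_000_000 * usd_per_million_input_tokens

# e.g. 1,000,000 scanned pages, ~800 tokens of OCR text per page,
# at a hypothetical $3 per million input tokens:
cost = archive_scan_cost(1_000_000, 800, 3.0)
print(f"${cost:,.0f}")  # -> $2,400
```

Even under these optimistic assumptions the bill adds up quickly, and output tokens and retries would push it higher.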
replies(2): >>41906078 #>>41906144 #
1. pcthrowaway ◴[] No.41906078[source]
This is a pretty good case for just using a local model. Even if it's 50% worse than Anthropic, or whatever the gap is now between open models and the proprietary state of the art, it's still likely 'good enough' to categorize a story in an old newspaper as missing from an author's known bibliography.
replies(1): >>41907039 #
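As a sketch of how such a local-model pass might look, here is a minimal classifier against Ollama's REST API on its default localhost endpoint. The prompt wording, author name, and yes/no framing are illustrative assumptions, not anything from the thread:

```python
# Sketch: asking a locally served model (via Ollama's REST API, default
# http://localhost:11434) whether a digitized newspaper item looks like
# fiction by a given author. Prompt design here is a hypothetical example.
import json
import urllib.request

def build_prompt(author, snippet):
    """Build a yes/no classification prompt for one OCR'd newspaper item."""
    return (
        f"Does the following newspaper text appear to be a short story "
        f"by {author}, rather than news or advertising? Answer YES or NO.\n\n"
        f"{snippet}"
    )

def classify(author, snippet, model="llama3.1"):
    """Send one item to a local Ollama server and return its raw answer."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": model,
            "prompt": build_prompt(author, snippet),
            "stream": False,  # return one JSON object instead of a stream
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()
```

Running this over a million clippings costs only electricity, which is where the local-model argument gets its force.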
2. steve_adams_86 ◴[] No.41907039[source]
Good point. I use llama3.1 for a lot of small tasks and rarely feel like I need to use Claude instead. It’s fine. I’m even running a model a (big) step down from 70B, because I’ve only got 32GB of RAM. It’s a solid model that probably costs me next to nothing to run.
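The RAM constraint follows from a common rule of thumb: a quantized model needs roughly parameters × bits-per-weight ÷ 8 bytes, before KV cache and runtime overhead. A quick sketch of that arithmetic:

```python
# Rule-of-thumb memory footprint of a quantized model:
# roughly (parameters * bits_per_weight / 8) bytes,
# ignoring KV cache and runtime overhead.

def model_gb(params_billion, bits_per_weight=4):
    """Approximate weight memory in GB for a quantized model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"70B @ 4-bit: ~{model_gb(70):.0f} GB")  # ~35 GB: too big for 32GB RAM
print(f"8B  @ 4-bit: ~{model_gb(8):.0f} GB")   # ~4 GB: fits comfortably
```

That is why the 70B variant is out of reach on a 32GB machine while the smaller variants run with room to spare.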