
272 points lermontov | 2 comments
nuz No.41905984
Seems like a non-pessimistic use of LLMs: mass analysis of old texts for new finds like this. If this one exists, surely there are many more just a mass analysis away.
replies(2): >>41906047 #>>41906058 #
steve_adams_86 No.41906047
I accidentally got Zed to parse way more code than I intended last night, and it cost close to $2 on the Anthropic API. All I can think is how incredibly expensive it would be to feed an LLM enough text to make those connections. I don’t think you’re wrong, though. This is the territory where their ability to find patterns can feel pretty magical. It would cost many, many, many $2s, though.
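For scale, here’s a quick back-of-envelope sketch of what a mass analysis could run. Every number below is an assumption for illustration (per-token rate, book length, archive size), not a real price list:

    # All figures are illustrative assumptions.
    PRICE_PER_MTOK = 15.0      # USD per million input tokens (top-tier model rate)
    TOKENS_PER_TEXT = 130_000  # one long book, roughly 100k words
    NUM_TEXTS = 100_000        # a large archive

    total_mtok = TOKENS_PER_TEXT * NUM_TEXTS / 1_000_000
    print(f"~${total_mtok * PRICE_PER_MTOK:,.0f}")  # ~$195,000 at these numbers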
replies(2): >>41906078 #>>41906144 #
1. diggan No.41906144
> I accidentally got Zed to parse way more code than I intended last night, and it cost close to $2 on the Anthropic API

Is that one API call, or some out-of-control process slinging hundreds of requests?

Must have been a ton of data, as their most expensive model (Opus) seems to cost $15 per million input tokens. I guess if you just set it to use an entire project as the input, you’ll hit a million input tokens quickly.
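For a sense of scale, a rough calculation of how far $2 goes at that rate (input tokens only; output tokens bill at a higher rate and are ignored here):

    # How many input tokens $2 buys at $15 per million:
    cost_usd = 2.00
    price_per_mtok = 15.0
    tokens = cost_usd / price_per_mtok * 1_000_000
    print(f"~{tokens:,.0f} input tokens")  # ~133,333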

replies(1): >>41906402 #
2. steve_adams_86 No.41906402
Come to think of it, I’m not sure how Zed makes LLM requests with the inline assistant.

I wasn’t working in an enormous file, but I meant to highlight a block and accidentally highlighted the entire file, then asked it to do something that made no sense in that context. It did its best with the situation and eventually ran out of steam, haha. It’s possible that multiple requests needed to be made, or that I was around the 200k context window.
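For what it’s worth, a crude way to sanity-check whether a highlighted file alone could fill a 200k-token window is the common ~4-characters-per-token rule of thumb. This is an approximation, not a real tokenizer, and the file path below is just a placeholder:

    # Crude token estimate via the ~4 chars/token heuristic.
    def rough_tokens(path: str) -> int:
        with open(path, encoding="utf-8", errors="ignore") as f:
            return len(f.read()) // 4

    CONTEXT_WINDOW = 200_000  # assumed window size
    n = rough_tokens("the_file_i_highlighted.py")  # placeholder path
    print(f"~{n:,} tokens; fits in window: {n <= CONTEXT_WINDOW}")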

Before this, I’m fairly sure most of my requests cost fractions of a penny; my credit takes ages to decrease by any meaningful amount. Last night was the exception. It’s normally an extremely cost-effective tool for me.