
2127 points by bakugo | 2 comments
simonw No.43175433
I got this working with my LLM tool (new plugin version: llm-anthropic 0.14) and figured out a bunch of things about the model in the process. My detailed notes are here: https://simonwillison.net/2025/Feb/25/llm-anthropic-014/

One of the most exciting new capabilities is that this model has a 120,000 token output limit - up from just 8,000 for the previous Claude 3.5 Sonnet model and way higher than any other model in the space.

It seems to be able to use that output limit effectively. Here's my longest result so far, though it did take 27 minutes to finish! https://gist.github.com/simonw/854474b050b630144beebf06ec4a2...

replies(3): >>43175527 #>>43175552 #>>43183287 #
1. tedsanders No.43175552
No shade against Sonnet 3.7, but I don't think it's accurate to say "way higher than any other model in the space" — o1 and o3-mini go up to 100,000 output tokens.

https://platform.openai.com/docs/models#o1
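
The output-token limits quoted in this thread can be lined up in a quick sketch (figures are as stated in the comments above, not checked against current provider documentation):

```python
# Maximum output-token limits as quoted in this thread
# (not verified against current provider docs).
max_output_tokens = {
    "claude-3.5-sonnet": 8_000,    # previous Sonnet, per simonw
    "claude-3.7-sonnet": 120_000,  # per simonw's notes
    "o1": 100_000,                 # per tedsanders
    "o3-mini": 100_000,            # per tedsanders
}

# Print the models from largest to smallest quoted limit.
for model, limit in sorted(max_output_tokens.items(), key=lambda kv: -kv[1]):
    print(f"{model:>20}: {limit:>7,} tokens")
```

So the new Sonnet's quoted limit is still the largest here, but only by 20% over the OpenAI reasoning models, not by an order of magnitude.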

replies(1): >>43176556 #
2. simonw No.43176556
Huh, good call, thanks - I've updated my post with a correction.