
923 points by zh2408 | 1 comment
nitinram No.43767715
This is super cool! I attempted to use this on a project and kept running into "This model's maximum context length is 200000 tokens. However, your messages resulted in 459974 tokens. Please reduce the length of the messages." I used OpenAI o4-mini. Is there an easy way to handle this gracefully? Basically, do you have thoughts on how to make tutorials for really large codebases or project directories?
1. zh2408 No.43767986
Could you try Gemini 2.5 Pro? It's free for the first 25 requests each day, and it can handle 1M input tokens.
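
Beyond switching to a larger-context model, one way to handle the error gracefully is to pre-filter the codebase so the combined prompt stays under the model's context window. The sketch below is purely illustrative (the function names and the ~4-characters-per-token heuristic are assumptions, not part of the actual tool); the 200,000-token budget matches the error message above.

```python
MAX_TOKENS = 200_000  # context window from the error message above

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English/code."""
    return len(text) // 4

def select_files(files: list[tuple[str, str]], budget: int = MAX_TOKENS) -> list[str]:
    """Greedily keep files (smallest first) until the token budget is spent.

    `files` is a list of (path, contents) pairs; returns the paths that fit.
    This is a hypothetical helper, not the project's real API.
    """
    selected, used = [], 0
    for path, text in sorted(files, key=lambda f: len(f[1])):
        cost = estimate_tokens(text)
        if used + cost > budget:
            continue  # skip files that would overflow the context window
        selected.append(path)
        used += cost
    return selected
```

For a real pipeline you would likely want an exact tokenizer for the target model rather than the character heuristic, and perhaps summarize or chunk the skipped files in follow-up requests instead of dropping them.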