
113 points | sethkim | 1 comment
ramesh31 No.44458079
>By embracing batch processing and leveraging the power of cost-effective open-source models, you can sidestep the price floor and continue to scale your AI initiatives in ways that are no longer feasible with traditional APIs.

Context size is the real killer when you look at running open-source alternatives on your own hardware. Has anything even come close to the 100k+ token range yet?

ryao No.44458707
Mistral Small 3.2 has a 131,072-token (128k) context window.
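The hardware concern above is mostly about KV-cache growth: cache memory scales linearly with context length. A minimal back-of-the-envelope sketch below uses the standard KV-cache formula; the architecture numbers (layer count, KV heads, head dimension) are illustrative assumptions for a GQA model of this class, not confirmed specs for Mistral Small 3.2.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """Estimate KV-cache size: keys + values (the leading 2x), stored
    per layer, per KV head, per head dimension, per token position.
    bytes_per_elem=2 assumes fp16/bf16 cache entries."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical GQA configuration, NOT Mistral's published architecture.
total = kv_cache_bytes(num_layers=40, num_kv_heads=8, head_dim=128,
                       seq_len=131072)
print(f"{total / 2**30:.1f} GiB")  # -> 20.0 GiB
```

At these assumed dimensions, a full 131,072-token cache alone costs ~20 GiB on top of the model weights, which is why long contexts are hard to serve on consumer GPUs even when the weights themselves fit.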