
361 points | mseri | 1 comment
dangoodmanUT No.46005065
What are some of the real-world applications of small models like this? Is it only on-device inference?

In most cases, models like Sonnet have been only barely sufficient for the workloads I've handled historically. I'd love to know where others are finding uses for smaller models (gpt-oss-120B and below, and especially much smaller ones like this).

Maybe some really lightweight borderline-NLP classification tasks?
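That kind of lightweight classification is one place small local models do fit. A minimal sketch of the pattern, assuming an illustrative label set and a `complete` callable that would, in practice, wrap a local inference endpoint (the stand-in here just returns a canned answer so the sketch runs end to end):

```python
# Sketch: zero-shot ticket routing with a small local model.
# LABELS and fake_complete are illustrative stand-ins, not a real API.

LABELS = ["billing", "bug_report", "feature_request", "other"]

def build_prompt(text: str) -> str:
    """Constrain the model to answer with exactly one label."""
    return (
        "Classify the support message into exactly one of: "
        + ", ".join(LABELS)
        + ".\nAnswer with the label only.\n\nMessage: "
        + text
        + "\nLabel:"
    )

def parse_label(raw: str) -> str:
    """Map a raw completion back onto the closed label set."""
    cleaned = raw.strip().lower()
    for label in LABELS:
        if label in cleaned:
            return label
    return "other"

def fake_complete(prompt: str) -> str:
    # Stand-in for the actual model call.
    return " bug_report\n"

print(parse_label(fake_complete(build_prompt("App crashes on login"))))
```

The closed label set plus the `parse_label` fallback is what makes a small model workable here: even a noisy completion gets snapped back to a valid category.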

replies(3): >>46005122 #>>46005251 #>>46009108 #
fnbr No.46005251
(I’m a researcher on the post-training team at Ai2.)

7B models are mostly useful for local use on consumer GPUs; 32B could be used for a lot of applications. There are a lot of companies using fine-tuned Qwen 3 models that might want to switch to Olmo now that we have released a 32B base model.

replies(2): >>46005571 #>>46010965 #
littlestymaar No.46005571
May I ask why you went for 7B and 32B dense models instead of a small MoE like Qwen3-30B-A3B or gpt-oss-20b, given how successful those MoE experiments were?
replies(2): >>46005991 #>>46006040 #
riazrizvi No.46006040
7B runs on my Intel MacBook Pro. There's a broad practical use served here: developers can figure out a project on their own hardware, which improves the time/cost/effort economics, before committing to a bigger model for the same project.
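The hardware point can be made concrete with back-of-envelope arithmetic: weight memory scales as parameter count times bits per weight, which is why a 4-bit 7B model fits comfortably in laptop RAM while a 32B model does not at full precision. A quick sketch (weights only; the KV cache and runtime overhead add more on top):

```python
# Approximate weight memory for dense models at common quantizations.
def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """GiB needed to hold the weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

for params in (7, 32):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{weight_gib(params, bits):.1f} GiB")
```

At 4-bit, 7B weights are roughly 3.3 GiB, well within a laptop's RAM; a 32B model at 16-bit is near 60 GiB, which is why it lands on consumer GPUs only after aggressive quantization.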