
361 points mseri | 10 comments
1. dangoodmanUT ◴[] No.46005065[source]
What are some of the real-world applications of small models like this? Is it only on-device inference?

In most cases, I'm only seeing models like Sonnet being just barely sufficient for the workloads I've done historically. Would love to know where others are finding use for smaller models (gpt-oss-120B and below, especially smaller models like this).

Maybe some really lightweight borderline-NLP classification tasks?

replies(3): >>46005122 #>>46005251 #>>46009108 #
2. schopra909 ◴[] No.46005122[source]
I think you nailed it.

For us it’s classifiers that we train for very specific domains.

You’d think it’d be better to just finetune a smaller non-LLM model, but empirically we find the LLM finetunes (like 7B) perform better.
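Not our actual pipeline, but a rough sketch of the kind of finetune I mean: attach a sequence-classification head to a ~7B decoder-only base and train it like any other classifier. The base model, dataset files, and label count below are placeholders.

    # Hypothetical sketch: finetune a ~7B LLM as a domain classifier via a
    # sequence-classification head. Assumes `transformers` and `datasets`;
    # train.csv/eval.csv with "text" and "label" columns are made up.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_id = "Qwen/Qwen2.5-7B"   # placeholder 7B base model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_id, num_labels=4, torch_dtype="bfloat16")
    model.config.pad_token_id = tokenizer.pad_token_id or tokenizer.eos_token_id

    ds = load_dataset("csv", data_files={"train": "train.csv", "eval": "eval.csv"})
    ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="clf-out",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16,
                               num_train_epochs=1, bf16=True, logging_steps=10),
        train_dataset=ds["train"],
        eval_dataset=ds["eval"],
        tokenizer=tokenizer,   # gives Trainer a padding collator
    )
    trainer.train()

In practice you'd probably wrap this in LoRA/PEFT to keep memory sane, but that's the shape of it.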

replies(1): >>46005801 #
3. fnbr ◴[] No.46005251[source]
(I’m a researcher on the post-training team at Ai2.)

7B models are mostly useful for local use on consumer GPUs. 32B could be used for a lot of applications. There’s a lot of companies using fine tuned Qwen 3 models that might want to switch to Olmo now that we have released a 32B base model.
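For anyone who wants to try the swap, in a transformers pipeline it's basically a one-line change of checkpoint. The repo id below is illustrative only, so check the exact name on our Hugging Face org:

    # Illustrative only: verify the exact Hugging Face repo id on the Ai2 org.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "allenai/Olmo-3-1125-32B"   # was e.g. "Qwen/Qwen3-32B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="bfloat16", device_map="auto")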

replies(2): >>46005571 #>>46010965 #
4. littlestymaar ◴[] No.46005571[source]
May I ask why you went for 7B and 32B dense models instead of a small MoE like Qwen3-30B-A3B or gpt-oss-20b, given how successful those MoE experiments have been?
replies(2): >>46005991 #>>46006040 #
5. moffkalast ◴[] No.46005801[source]
I think it's no surprise that a model with a more general understanding of text performs better than some tiny ad-hoc classifier that blindly learns a couple of patterns and has no clue what it's looking at. The ad-hoc classifier is going to fail in much weirder ways that make no sense, like old CNN-based vision models did.
6. fnbr ◴[] No.46005991{3}[source]
MoEs have a lot of technical complexity and aren't well supported in the open source world. We plan to release a MoE soon(ish).

I do think that MoEs are clearly the future. I think we will release more MoEs moving forward once we have the tech in place to do so efficiently. For all use cases except local usage, I think that MoEs are clearly superior to dense models.

replies(1): >>46010921 #
7. riazrizvi ◴[] No.46006040{3}[source]
A 7B runs on my Intel MacBook Pro. There's a broad practical application here for developers who need to figure out a project on their own hardware, which improves the time/cost/effort economics, before committing to a bigger model for the same project.
8. thot_experiment ◴[] No.46009108[source]
I have Qwen3-30B-VL (an MoE model) resident in my VRAM at all times now because it's quicker than Google for answering most basic questions. Stuff like remembering how to force kill a WSL instance, which I don't do that often, is now frictionless because I can just type in the terminal (q is my utility):

    q how to force kill particular WSL
and it will respond with "wsl --terminate <distro-name>" much faster than Google.
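For the curious, q is just a thin wrapper around a locally served model. A minimal sketch of that pattern, assuming an OpenAI-compatible endpoint on localhost (llama.cpp's server, Ollama, LM Studio all expose one); the port, model name, and system prompt are stand-ins, not my exact setup:

    #!/usr/bin/env python3
    # Minimal sketch of a "q"-style helper: send the CLI args as a question to
    # a local model via an OpenAI-compatible /v1/chat/completions endpoint.
    # The URL, model name, and system prompt are stand-ins for your own setup.
    import json, sys, urllib.request

    question = " ".join(sys.argv[1:])
    payload = {
        "model": "qwen3-30b-vl",  # whatever the local server has loaded
        "messages": [
            {"role": "system",
             "content": "Answer terminal questions with the exact command, one line, no preamble."},
            {"role": "user", "content": question},
        ],
        "max_tokens": 200,
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"].strip())

Drop something like that on your PATH as q and the example above works as written.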

It's also quite good at tool calling: if you give it shell access it'll happily do things like "find me files over 10MB modified in the last day", where remembering the flags and command structure previously required a Google search or a peek at the manpage if you're not doing that action regularly.

I also use it to transcribe todo lists and notes and put them in my todo app, as well as for text manipulation. For example, if I have a list of API keys and URLs that I need to populate into a template, I can select the relevant part of the template in VSCode, put the relevant data in the context, and say "fill this out", and it does it faster than I could do the select-copy-select-paste loop, even with my hard-won Vim knowledge.

TL;DR

It's very fast (90 tok/s) with very low latency, which means it can perform a lot of mildly complex tasks that have an obvious solution faster than you can.

And FWIW I don't even think Sonnet 4.5 is very useful. It's a decent model, but it's very common for me to push it into a situation where it's subtly wrong and wastes a lot of my time (of course that's colored by it being slow and costing money).

9. trebligdivad ◴[] No.46010921{4}[source]
Even locally, MoEs are just so much faster, and they let you pick a larger/less quantized model and still get a useful speed.
10. kurthr ◴[] No.46010965[source]
Are there quantized (e.g. 4-bit) models available yet? I assume the training was done in BF16, but it seems like most inference models are distributed in FP8 until they're quantized further.

Edit: ah, I see it on Hugging Face: https://huggingface.co/mlx-community/Olmo-3-1125-32B-4bit
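That one loads in a couple of lines with mlx-lm on Apple silicon (prompt and token budget here are arbitrary):

    # Runs the linked 4-bit MLX quant locally; requires `pip install mlx-lm`.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Olmo-3-1125-32B-4bit")
    print(generate(model, tokenizer,
                   prompt="Explain mixture-of-experts models in two sentences.",
                   max_tokens=200))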