
577 points simonw | 1 comment | source
simonw ◴[] No.44727061[source]
There's a new model from Qwen today - Qwen3-30B-A3B-Instruct-2507 - that also runs comfortably on my Mac (using about 30GB of RAM with an 8bit quantization).
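The ~30GB figure lines up with a simple back-of-envelope estimate: at 8-bit quantization, each parameter takes about one byte, plus some overhead for the KV cache and activations. A rough sketch (the 10% overhead fraction is an assumption, not a number from this thread):

```python
def quantized_model_ram_gb(n_params_billion: float, bits_per_weight: int,
                           overhead_fraction: float = 0.1) -> float:
    """Rough RAM footprint in GB: weight bytes plus a fixed overhead fraction."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead_fraction) / 1e9

# 30B parameters at 8 bits per weight:
print(quantized_model_ram_gb(30, 8))  # 33.0 GB, close to the ~30GB observed
```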

I tried the "Write an HTML and JavaScript page implementing space invaders" prompt against it and didn't quite get a working game in a single shot, but it was still an interesting result: https://simonwillison.net/2025/Jul/29/qwen3-30b-a3b-instruct...

replies(1): >>44731755 #
pyman ◴[] No.44731755[source]
I was talking with a group of people yesterday about the new open models and how good they're getting. The big question is:

Can any company now compete with the big players? Or, even more interesting, as you showed in your research, are proprietary models becoming less relevant now that anyone can run these models locally?

This trend of better open models that run locally is really picking up. Do you think we'll reach a point where we won't need to buy AI tokens anymore?

replies(1): >>44734106 #
simonw ◴[] No.44734106[source]
The problem is cost. A machine that can run a decent local model costs thousands of dollars to buy, and its results still won't match a model running on $30,000+ dedicated servers. Meanwhile, you can rent access to LLMs running on those expensive machines for fractions of a cent, because you're sharing them with thousands of other users.
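The economics can be sketched as a break-even calculation. The numbers below are hypothetical, not from the thread: a $3,000 local machine versus a hosted API charging $0.50 per million tokens.

```python
def breakeven_tokens(machine_cost_usd: float, api_cost_per_mtok: float) -> float:
    """Tokens you'd need to buy via the API before the local machine pays for itself.

    Ignores electricity, depreciation, and quality differences, all of
    which favor the hosted option even further.
    """
    return machine_cost_usd / api_cost_per_mtok * 1e6

print(breakeven_tokens(3000, 0.50))  # 6e9 -> six billion tokens before break-even
```

Even under these generous assumptions, you'd need to consume billions of tokens before the hardware purchase wins on cost alone.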

I don't think cost will be a reason to use local models for a very long time, if ever.