
GPT-5.2

(openai.com)
1019 points | atgctg | 5 comments
1. cc62cf4a4f20 ◴[] No.46235887[source]
In other news, I've been using Devstral 2 (via Ollama) with OpenCode, and while it's not as good as Claude Code, my initial sense is that it's nonetheless good enough, and it doesn't require me to send my data off my laptop.

I kind of wonder how close we are to alternative models (not from a major AI lab) being good enough for a lot of productive work, with data sovereignty becoming the deciding factor.

replies(2): >>46236477 #>>46236996 #
2. Nesco ◴[] No.46236477[source]
Wait, isn't Devstral 2 (the full model, not Small) 123B parameters? What kind of laptop do you have? MacBooks don't go over 128GiB.
replies(1): >>46236695 #
3. cc62cf4a4f20 ◴[] No.46236695[source]
I'm using Small - it works well for its size.
4. yberreby ◴[] No.46236996[source]
Would you share some additional details? CPU, amount of unified memory / VRAM? Tok/s with those?
replies(1): >>46241061 #
5. cc62cf4a4f20 ◴[] No.46241061[source]
MBP M4 Max, 64GB - I haven't measured the tokens/sec; it feels slower than Claude, but not unbearably so.

It's not perfect yet; my sense is just that we're near the tipping point where models are efficient enough that running one locally is truly viable.
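For anyone wanting to put a number on the tok/s question above: a minimal sketch that times a generation against a local Ollama server. It assumes the default endpoint (`http://localhost:11434/api/generate`) and the `eval_count` / `eval_duration` (nanoseconds) fields Ollama returns in non-streaming responses; the model name is just an example.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def tokens_per_sec(resp: dict) -> float:
    # eval_count: tokens generated; eval_duration: generation time in nanoseconds
    return resp["eval_count"] / (resp["eval_duration"] / 1e9)


def benchmark(model: str, prompt: str) -> float:
    # Send a single non-streaming request and compute generation speed
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as r:
        return tokens_per_sec(json.load(r))


if __name__ == "__main__":
    # Example model name - substitute whatever `ollama list` shows on your machine
    print(f"{benchmark('devstral-small', 'Write a haiku about laptops.'):.1f} tok/s")
```

(`ollama run <model> --verbose` prints similar eval-rate stats if you'd rather not script it.)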