
326 points | threeturn | 2 comments

Dear Hackers, I’m interested in your real-world workflows for using open-source LLMs and open-source coding assistants on your laptop (not just cloud/enterprise SaaS). Specifically:

Which model(s) are you running (e.g., via Ollama, LM Studio, or others), and which open-source coding assistant/integration (for example, a VS Code plugin) are you using?

What laptop hardware do you have (CPU, GPU/NPU, memory, whether discrete GPU or integrated, OS), and how does it perform for your workflow?

What kinds of tasks do you use it for (code completion, refactoring, debugging, code review), and how reliable is it (what works well / where it falls short)?

I'm conducting my own investigation, which I'll be happy to share as well once it's done.

Thanks! Andrea.

firefax ◴[] No.45773255[source]
I've been using Ollama; Gemma3:12b is about all my little Air can handle.

If anyone has suggestions on other models, I'm all ears -- as an experiment I tried asking it to design me a new LaTeX resumé, and it struggled for two hours with the request to put my name prominently at the top in a grey box, with my email and phone number beside it.

replies(1): >>45773619 #
james2doyle ◴[] No.45773619[source]
I was playing with the new IBM Granite models. They are quick/small and they do seem accurate. You can even try them online in the browser because they are small enough to be loaded via the filesystem: https://huggingface.co/spaces/ibm-granite/Granite-4.0-Nano-W...

Not only are they a lot more recent than Gemma, they also seem really good at tool calling, so they're probably a good fit for coding tools. I haven’t tried them myself for that, though.

The actual page is here: https://huggingface.co/ibm-granite/granite-4.0-h-1b

replies(2): >>45773773 #>>45775299 #
firefax ◴[] No.45775299[source]
Interesting. Is there a way to load this into Ollama? Doing things in the browser is a cool flex, but my interest is specifically in privacy respecting LLMs -- my goal is to run the most powerful one I can on my personal machine, with the end goal being that the little queries I used to send to "the cloud" can be done offline, privately.
replies(1): >>45775719 #
1. fultonn ◴[] No.45775719[source]
> Is there a way to load this into Ollama?

Yes, the granite 4 models are on ollama:

https://ollama.com/library/granite4
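
If you'd rather call it from a script than the CLI (an "ollama pull" of whichever granite4 tag you want, then chat against the local server), here's a rough sketch using the official ollama JS client -- the granite4:micro tag is my guess, so check the library page for the exact tag names:

    // npm install ollama -- talks to the Ollama server already running on localhost
    import ollama from 'ollama';

    const response = await ollama.chat({
      model: 'granite4:micro', // assumed tag; use whichever granite4 tag you actually pulled
      messages: [{ role: 'user', content: 'Write a shell one-liner to find files larger than 100MB.' }],
    });

    console.log(response.message.content);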

> but my interest is specifically in privacy respecting LLMs -- my goal is to run the most powerful one I can on my personal machine

The HF Spaces demo for granite 4 nano does run on your local machine, using Transformers.js and ONNX. After downloading the model weights you can disconnect from the internet and things should still work. It's all happening in your browser, locally.

Of course ollama is preferable for your own dev environment. But ONNX and transformers.js are amazingly useful for edge deployment and for easily sharing things with non-technical users. When I want to bundle up a little demo for something, I typically just do that instead of the old way (bundling it all up on a server and eating the inference cost).
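
For what it's worth, the in-browser path is only a few lines with transformers.js. A minimal sketch -- the model id here is an assumption on my part, so check which ONNX build the Space actually loads:

    // npm install @huggingface/transformers (the same code runs in the browser via WebGPU/WASM)
    import { pipeline } from '@huggingface/transformers';

    // Assumed model id -- substitute the ONNX build the Space really uses.
    const generator = await pipeline('text-generation', 'onnx-community/granite-4.0-h-1b-ONNX');

    const output = await generator(
      [{ role: 'user', content: 'Explain ONNX in one sentence.' }],
      { max_new_tokens: 64 },
    );

    // With chat-style input, generated_text is the whole conversation; the last entry is the model's reply.
    console.log(output[0].generated_text.at(-1).content);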

replies(1): >>45781509 #
2. firefax ◴[] No.45781509[source]
Thanks for this pointer and explanation, I appreciate it.

Also my "dev enviornment" is vi -- I come from infosec (so basically a glorified sysadmin) so I'm mostly making little bash and python scripts, so I'm learning a lot of new things about software engineering as I explore this space :-)

Edit: Hey, which of the models on that page were you referring to? I'm grabbing one now that's apparently double-digit GB. Or were you saying they're not CPU/RAM intensive but still a bit big?