
326 points by threeturn | 1 comment

Dear Hackers, I’m interested in your real-world workflows for using open-source LLMs and open-source coding assistants on your laptop (not just cloud/enterprise SaaS). Specifically:

Which model(s) are you running, and with which runtime (e.g., Ollama, LM Studio, or others)? And which open-source coding assistant/integration (for example, a VS Code plugin) are you using?

What laptop hardware do you have (CPU, GPU/NPU, memory, discrete or integrated GPU, OS), and how does it perform for your workflow?

What kinds of tasks do you use it for (code completion, refactoring, debugging, code review), and how reliable is it (what works well / where it falls short)?

I'm conducting my own investigation, which I'll be happy to share as well once it's done.

Thanks! Andrea.

1. dnel
I recently picked up a Threadripper 3960X, 256GB of DDR4, and an RTX 2080 Ti (11GB), running Debian 13 with Open WebUI and Ollama.

It runs well; there's not much difference from Claude etc., though I'm still learning the ropes and how to get the best out of it and local LLMs in general. Having tonnes of memory is nice for switching models in Ollama quickly, since everything stays in the filesystem cache.
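For anyone curious what driving this setup looks like, here's a minimal sketch of hitting a local Ollama server's REST API from Python. It assumes the default port (11434); the model name is just an illustrative example, not necessarily what's running on this box, and "keep_alive" is the real API parameter that keeps a model resident between calls.

    import json
    import urllib.request

    # Ollama's REST API listens on localhost:11434 by default.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask(model: str, prompt: str) -> str:
        """One-shot, non-streaming completion against a local Ollama server."""
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,      # single JSON response instead of a token stream
            "keep_alive": "30m",  # keep the model resident for quick follow-ups
        }).encode("utf-8")
        req = urllib.request.Request(
            OLLAMA_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    # Model name is illustrative; use whatever `ollama list` shows locally.
    print(ask("qwen2.5-coder:14b", "Explain Python's GIL in one sentence."))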

The GPU memory is the weak point, though, so I'm mostly using models of up to ~18B parameters that can fit in the VRAM.
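As a rough back-of-envelope check (ballpark numbers, not exact): a 4-bit-quantized model takes about half a byte per weight, plus some headroom for the KV cache and runtime buffers, which is why ~18B is about the ceiling for an 11GB card.

    def approx_vram_gb(params_billion: float,
                       bits_per_weight: float = 4.0,
                       overhead_gb: float = 1.5) -> float:
        """Rough VRAM estimate: quantized weights plus a flat allowance
        for KV cache and runtime buffers. Ballpark only."""
        weights_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1024**3
        return weights_gb + overhead_gb

    # 18B parameters at 4-bit: ~8.4GB of weights + ~1.5GB overhead,
    # which just squeezes into the 2080 Ti's 11GB.
    print(f"{approx_vram_gb(18.0):.1f} GB")  # -> 9.9 GB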