
326 points by threeturn | 1 comment

Dear Hackers, I’m interested in your real-world workflows for using open-source LLMs and open-source coding assistants on your laptop (not just cloud/enterprise SaaS). Specifically:

Which model(s) are you running, with which runtime (e.g., Ollama, LM Studio, or others), and which open-source coding assistant/integration (for example, a VS Code plugin) are you using?

What laptop hardware do you have (CPU, GPU/NPU, memory, discrete or integrated GPU, OS), and how does it perform for your workflow?

What kinds of tasks do you use it for (code completion, refactoring, debugging, code review), and how reliable is it (what works well / where it falls short)?

I'm conducting my own investigation, and I'll be happy to share the results when it's done.

Thanks! Andrea.

1. timenotwasted No.45774749
I have an old 2080 Ti that I use to run Ollama and Qdrant. It's been OK; I haven't found it good enough to replace Claude or Codex, but there are times when having RAG available locally is a nice setup for more specific queries. I also just enjoy tinkering with random models, which this makes super easy.
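The local-RAG setup mentioned above (Ollama for models, Qdrant for vectors) can be sketched in miniature. This is a hedged, self-contained toy, not the commenter's actual pipeline: the real embedding model served by Ollama and the Qdrant collection are both stood in for by a trivial bag-of-words vector and an in-memory list, so the retrieval step runs stand-alone.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model (e.g. one served by Ollama):
    # a bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for a Qdrant collection: (document, vector) pairs kept in memory.
docs = [
    "ollama serves local models over an http api",
    "qdrant stores vectors and answers similarity queries",
    "vs code plugins can call a local completion endpoint",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(retrieve("where are vectors stored"))
```

In a real setup, `embed` would call the embedding endpoint of a local Ollama server, `index` would live in a Qdrant collection, and the retrieved passages would be prepended to the prompt sent to the chat model.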

My daily drivers, though, are still Codex and GPT-5. Claude Code used to be one of them, but it just doesn't deliver the same results it did previously.