
326 points by threeturn | 1 comment

Dear Hackers, I’m interested in your real-world workflows for using open-source LLMs and open-source coding assistants on your laptop (not just cloud/enterprise SaaS). Specifically:

Which model(s) are you running (e.g., via Ollama, LM Studio, or another runtime), and which open-source coding assistant/integration (for example, a VS Code plugin) are you using?

What laptop hardware do you have (CPU, GPU/NPU, memory, discrete or integrated GPU, OS), and how does it perform for your workflow?

What kinds of tasks do you use it for (code completion, refactoring, debugging, code review), and how reliable is it (what works well / where it falls short)?

I'm conducting my own investigation as well, which I'll be happy to share here once it's done.

Thanks! Andrea.

1. sprior | No.45778766
I wanted to dip my toe in the AI waters, so I bought a cheap Dell Precision 3620 Tower (i7-7700), upgraded the RAM (and sold what it came with on eBay), and ended up upgrading the power supply (that part wasn't planned) so I could install an RTX 3060 GPU. I installed Ubuntu Server and added the machine as a node in my home Kubernetes (k3s) cluster; that node is tainted so only approved workloads get scheduled onto it. I'm running Ollama on that node and OpenWebUI elsewhere in the cluster. The most useful thing I use it for is AI tagging and summaries for Karakeep, but I've also used it for a bunch of other applications, including Python code I've written to analyze driveway camera footage for delivery vehicles.
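
Not the commenter's actual code, but a minimal sketch of how a Python script like that camera-footage analyzer can talk to Ollama's HTTP API from elsewhere on the network. The host name, model name, frame path, and prompt are all assumptions for illustration; any vision-capable model pulled into Ollama (e.g. llava) would slot in here.

```python
# Minimal sketch, assuming an Ollama server reachable on the GPU node and a
# vision-capable model already pulled (e.g. `ollama pull llava`).
import base64

import requests

OLLAMA_URL = "http://gpu-node.local:11434/api/generate"  # assumed node address
MODEL = "llava"  # assumed multimodal model


def describe_frame(frame_path: str) -> str:
    """Ask the model whether a single camera frame shows a delivery vehicle."""
    with open(frame_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "model": MODEL,
        "prompt": (
            "Does this image show a delivery vehicle? "
            "Answer yes or no, then give a one-line description."
        ),
        "images": [image_b64],  # Ollama accepts base64-encoded images for multimodal models
        "stream": False,        # return one JSON object instead of a token stream
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    # Hypothetical frame path; in practice this would come from the camera pipeline.
    print(describe_frame("frames/driveway_latest.jpg"))
```

The same request shape (with the `images` field dropped) works for text-only jobs such as generating tags or summaries, which is essentially what the Karakeep integration delegates to the Ollama endpoint.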