
677 points meetpateltech | 10 comments
ppeetteerr ◴[] No.45118464[source]
I love Zed and I'm glad you now have native support for Claude. I previously ran it using the instructions in this post: https://benswift.me/blog/2025/07/23/running-claude-code-with...

One thing that still suffers is AI autocomplete. While I've tried Zed's own solution and Supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).

I am glad to hear that Zed got a round of funding. https://zed.dev/blog/sequoia-backs-zed This will go a long way toward creating real competition for Cursor in the form of a quality IDE not built on VSCode.

replies(9): >>45118738 #>>45118799 #>>45119067 #>>45119080 #>>45120139 #>>45120380 #>>45121210 #>>45121710 #>>45126045 #
1. hajile ◴[] No.45119080[source]
I was somewhat surprised to find that Zed still doesn't have a way to add your own local autocomplete AI using something like Ollama. Something like Qwen2.5-Coder at a tiny 1.5B parameters would work just fine for the stuff I want. It runs fast and works when I'm between internet connections too.
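Roughly what I have in mind, as a sketch against Ollama's HTTP API (assuming its default port and a model with fill-in-the-middle support, which qwen2.5-coder has):

    # Sketch: local fill-in-the-middle autocomplete against Ollama.
    # Assumes `ollama pull qwen2.5-coder:1.5b` has been run and the
    # server is listening on its default port (11434).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5-coder:1.5b",
            "prompt": "def fibonacci(n):\n    ",   # code before the cursor
            "suffix": "\n\nprint(fibonacci(10))",  # code after the cursor
            "stream": False,
        },
        timeout=30,
    )
    print(resp.json()["response"])  # the model's suggested completion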

I'd also like to see a company like Zed let me buy a license for their autocomplete AI model to run locally, rather than renting it while it runs on their servers.

I'd also pay for something in the 10-15B parameter range trained on more limited data focused almost entirely on programming documentation and books, along with professional business writing. Something with the coding knowledge of Qwen Coder combined with the professionalism and predictability of IBM Granite 3. I'd pay quite a lot for such an agent (especially if it got updates every couple of months that worked in new documentation, bugfixes, GitHub threads, etc. to keep the answers up to date).

replies(7): >>45119664 #>>45119723 #>>45120441 #>>45120982 #>>45121009 #>>45122417 #>>45123178 #
2. slekker ◴[] No.45119664[source]
Ditto, that was one of the dealbreakers for me with Zed: the Copilot integration is miles behind Cursor's.
3. rolisz ◴[] No.45119723[source]
> I'd also pay for something in the 10-15b parameter range that used more limited training data focused almost entirely on programming documentation and books along with professional business writing.

Unfortunately, pretraining on a lot of data (~everything they can get their hands on) is what gives current LLMs their "intelligence" (for whatever definition of intelligence). Using less training data doesn't work as well, at least for now. There's definitely not enough programming and business writing to train a good model on that alone.

replies(1): >>45128298 #
4. eli ◴[] No.45120441[source]
You don't have to buy a license; the autocomplete model is open source: https://huggingface.co/zed-industries/zeta

It is indeed a fine-tuned Qwen2.5-Coder-7B.
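If you want it locally, something like this should fetch the weights (a sketch, assuming the huggingface_hub package is installed):

    # Sketch: download the open-source Zeta weights for local use.
    from huggingface_hub import snapshot_download

    path = snapshot_download(repo_id="zed-industries/zeta")
    print(path)  # local directory containing the model files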

5. kilohotel ◴[] No.45120982[source]
You can use a local model! It's in the settings within a Thread, where you can select Ollama.
replies(1): >>45121503 #
6. woodson ◴[] No.45121009[source]
There's an active PR providing inline edit completions via Ollama: https://github.com/zed-industries/zed/pull/33616
7. woodson ◴[] No.45121503[source]
But that doesn't work for inline edit predictions, right?
8. dcreater ◴[] No.45122417[source]
> Ollama

You mean a locally run, OpenAI-API-compatible server?
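Which, concretely, means the stock openai client pointed at localhost already works (a sketch, assuming qwen2.5-coder:1.5b has been pulled):

    # Sketch: talking to Ollama through its OpenAI-compatible /v1 endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
        api_key="ollama",  # required by the client, ignored by Ollama
    )
    resp = client.chat.completions.create(
        model="qwen2.5-coder:1.5b",
        messages=[{"role": "user", "content": "Complete this: def add(a, b):"}],
    )
    print(resp.choices[0].message.content)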

9. SquidJack ◴[] No.45123178[source]
That's why I created nanocoder myself, a 0.5B fine-tune for autocomplete, in a couple of days. I'm going to release a v2 that's much better.

https://huggingface.co/srisree/nano_coder
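
You can try it with plain transformers (a sketch, assuming the repo follows the standard causal-LM layout):

    # Sketch: running the linked 0.5B model locally with transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("srisree/nano_coder")
    model = AutoModelForCausalLM.from_pretrained("srisree/nano_coder")

    inputs = tok("def quicksort(arr):", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))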

10. hajile ◴[] No.45128298[source]
If the LLM isn’t getting its data about coding projects from those projects and their surrounding documentation and tutorials, what is it going to train with?

Maybe it also needs some amount of other training data for basic speech patterns, but I'd again point to IBM Granite as evidence that professional, to-the-point LLMs are possible.