
310 points by skarat | 29 comments

Things are changing so fast with these VSCode forks I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete etc. compare between them?
1. danpalmer ◴[] No.43959899[source]
Zed. They've upped their game in the AI integration and so far it's the best one I've seen (external from work). Cursor and VSCode+Copilot always felt slow and janky; Zed is much less janky, feels like pretty mature software, and I can just plug in my Gemini API key and use that for free/cheap instead of paying for the editor's own integration.
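For reference, plugging in your own Gemini key comes down to a small settings change (the key itself is entered in the assistant panel or via an environment variable). A sketch of the settings.json fragment involved — the key names and model name here are from memory and may differ across Zed versions, so check the docs for yours:

```json
{
  // Zed's settings.json accepts JSONC-style comments.
  // Assumed shape; verify against your Zed version's documentation.
  "assistant": {
    "default_model": {
      "provider": "google",
      "model": "gemini-2.0-flash"
    }
  }
}
```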
replies(9): >>43960069 #>>43960506 #>>43960546 #>>43961423 #>>43961614 #>>43962057 #>>43962974 #>>43967753 #>>44019166 #
2. wellthisisgreat ◴[] No.43960069[source]
Does it have Cursor’s “tab” feature?
replies(2): >>43960155 #>>43961890 #
3. dvtfl ◴[] No.43960155[source]
Yep: https://zed.dev/blog/edit-prediction
replies(1): >>43960299 #
4. eadz ◴[] No.43960299{3}[source]
It would be great if there was an easy way to run their open model (https://huggingface.co/zed-industries/zeta) locally ( for latency reasons ).

I don't think Zeta is quite up to Windsurf's completion quality/speed.

I get that this would go against their business model, but maybe people would pay for this - it could in theory be the fastest completion since it would run locally.
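For anyone who wants to experiment anyway, the published Zeta weights load like any Hugging Face checkpoint. A rough sketch — note the prompt helper below is a placeholder of my own, not Zeta's real edit-prediction format (which Zed defines in its own tooling), and the download is ~7B parameters:

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    """Toy fill-in-the-middle style prompt.

    Placeholder format for illustration only -- Zeta's actual
    edit-prediction prompt format is defined by Zed, not here.
    """
    return f"<prefix>{prefix}</prefix><suffix>{suffix}</suffix>"


def main() -> None:
    # Heavy: downloads the full model. Requires `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "zed-industries/zeta"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = fim_prompt("def add(a, b):\n    ", "\n")
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    # Decode only the newly generated tokens, not the prompt.
    print(tok.decode(out[0][inputs["input_ids"].shape[1]:]))


if __name__ == "__main__":
    main()
```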

replies(2): >>43961605 #>>43962492 #
5. submeta ◴[] No.43960506[source]
Consumes lots of resources on an M4 MacBook. Would love to test it though, if it didn't freeze my MacBook.

Edit:

With the latest update to 0.185.15 it works perfectly smooth. Excellent addition to my setup.

replies(3): >>43960728 #>>43960874 #>>43997298 #
6. vimota ◴[] No.43960546[source]
I gave Zed an in-depth trial this week and wrote about it here: https://x.com/vimota/status/1921270079054049476

Overall Zed is super nice and the opposite of janky, but I still found a few defaults were off, and Python support was still missing in a few key ways for my daily workflow.

replies(1): >>43974857 #
7. _bin_ ◴[] No.43960728[source]
I'll second the Zed recommendation, sent from my M4 MacBook. I don't know why exactly it's doing this for you, but mine is idling with ~500MB RAM (about as little as you can get with a reasonably-sized Rust codebase and a language server) and 0% CPU.

I have also really appreciated something that felt much less janky, had better vim bindings, and wasn't slow to start even on a very fast computer. You can completely botch Cursor if you type really fast. On an older mid-range laptop, I ran into problems with a bunch of its auto-pair stuff of all things.

replies(1): >>43960803 #
8. drcongo ◴[] No.43960803{3}[source]
Yeah, same. Zed is incredibly efficient on my M1 Pro. It's my daily driver these days, and my Python setup in it is almost perfect.
replies(1): >>43961860 #
9. aquariusDue ◴[] No.43960874[source]
In my case this was the culprit: https://github.com/zed-industries/zed/issues/13190 otherwise it worked great mostly.
10. frainfreeze ◴[] No.43961423[source]
Zed doesn't even run on my system and the relevant github issue is only updated by people who come to complain about the same issue.
replies(2): >>43961882 #>>43967334 #
11. xmorse ◴[] No.43961605{4}[source]
Running models locally is very expensive in terms of memory and scheduling requirements. Maybe instead they should host their model on the Cloudflare AI network, which is distributed all around the world and can have lower latency.
12. xmorse ◴[] No.43961614[source]
I am using Zed too, it still has some issues but it is comparable to Cursor. In my opinion they iterate even faster than the VSCode forks.
replies(1): >>43962002 #
13. greymalik ◴[] No.43961860{4}[source]
What’s your Python setup?
14. Aeolun ◴[] No.43961882[source]
Don’t use Windows? I don’t feel like that’s a terribly uncommon proposition for a dev.
replies(1): >>43971585 #
15. Aeolun ◴[] No.43961890[source]
Sort of. The quality is night and day different (Cursor feels like magic, Zed feels like a chore).
replies(2): >>43962686 #>>43972126 #
16. DrBenCarson ◴[] No.43962002[source]
Yep, not having to build off a major fork will certainly help you move fast.
replies(1): >>43982294 #
17. allie1 ◴[] No.43962057[source]
I just wish they'd release a debugger already. Once it's done, I'll be moving to them completely.
18. rfoo ◴[] No.43962492{4}[source]
> the fastest completion since it would run locally

We are living in a strange age where local is slower than the cloud, due to the sheer amount of compute we need to do. Compute takes hundreds of milliseconds (if not seconds) on local hardware, making 100ms of network latency irrelevant.

Even for a 7B model, your expensive Mac or 4090 can't beat, for example, a box with 8x A100s running a FOSS serving stack (sglang) with TP=8, in latency.
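The arithmetic behind this is easy to sketch in a few lines (the per-token decode times below are rough illustrative assumptions, not benchmarks):

```python
def total_latency_ms(per_token_ms: float, tokens: int, network_ms: float = 0.0) -> float:
    """Time to generate `tokens` tokens plus round-trip network overhead."""
    return network_ms + per_token_ms * tokens

# Local consumer GPU: assume ~30 ms/token decode for a 7B model.
local = total_latency_ms(per_token_ms=30.0, tokens=32)

# 8x A100 box with tensor parallelism: assume ~5 ms/token, plus 100 ms of network.
cloud = total_latency_ms(per_token_ms=5.0, tokens=32, network_ms=100.0)

print(f"local: {local:.0f} ms, cloud: {cloud:.0f} ms")
# Under these assumptions the cloud box wins despite the network hop.
```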

19. atonse ◴[] No.43962686{3}[source]
I can second this. I really do want to move to Zed full time, but the code completion is nowhere near as useful or "smart" as Cursor's yet.
20. brianzelip ◴[] No.43962974[source]
Here's a recent Changelog podcast episode about the latest with Zed and its new agentic feature, https://changelog.com/podcast/640.
21. KomoD ◴[] No.43967334[source]
Windows? If so, you can run it, you just have to build it.
replies(1): >>43971587 #
22. frainfreeze ◴[] No.43971585{3}[source]
Debian latest stable.
23. frainfreeze ◴[] No.43971587{3}[source]
Debian latest stable.
replies(1): >>43975819 #
24. vendiddy ◴[] No.43972126{3}[source]
Yep, I want Zed to win, but it hasn't yet become my daily driver.
25. sivartnotrab ◴[] No.43974857[source]
Out of curiosity, what Python support was missing for you? I'm debating Zed.
26. KomoD ◴[] No.43975819{4}[source]
Oh, then what's the issue? I'm using Zed on Mint and so far I've only had one issue: the window being invisible (which I fixed by updating GPU drivers).
27. DANmode ◴[] No.43982294{3}[source]
But can they surpass Cursor?
28. enceladus06 ◴[] No.43997298[source]
Are you running a local Ollama model or one of the Zed LLMs?
29. charlie0 ◴[] No.44019166[source]
Why are the Zed guys so hung up on UI rendering times? I don't care that the UI can render at 120 FPS if it takes 3 seconds to get input from an LLM. I do like the clean UI though.