
310 points skarat | 5 comments

Things are changing so fast with these VSCode forks that I'm barely able to keep up. Which one are you using currently? How does the autocomplete, etc., compare between the two?
1. eisfresser ◴[] No.43959991[source]
Windsurf at the moment. It can now run multiple "flows" in parallel, so I can set one cascade off to look into a bug somewhere while another cascade implements a feature elsewhere in the code base. The LLMs spit out their tokens in the background, and I drop in eventually to review and accept or ask for further changes.
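For anyone wondering what that looks like conceptually: here's a minimal sketch in plain Python asyncio, not Windsurf's actual cascade API (`run_llm_task` is a made-up stand-in for whatever calls the model), showing two independent flows running in the background while you review the results once they finish.

```python
import asyncio

# Hypothetical stand-in for an LLM-driven task; not a real Windsurf API.
async def run_llm_task(name: str, prompt: str) -> str:
    await asyncio.sleep(1)  # pretend the model is generating tokens
    return f"[{name}] draft changes for: {prompt}"

async def main() -> None:
    # Two independent "flows" running in parallel over the same code base.
    results = await asyncio.gather(
        run_llm_task("bugfix", "investigate the crash in the parser"),
        run_llm_task("feature", "add pagination to the results page"),
    )
    for draft in results:
        print(draft)  # the human reviews, then accepts or asks for more changes

asyncio.run(main())
```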
replies(3): >>43960103 #>>43960167 #>>43960258 #
2. Alifatisk ◴[] No.43960103[source]
We are truly living in the future
3. ximeng ◴[] No.43960167[source]
Cursor offers this too - open different tabs in chat and ask for different changes; they’ll run in parallel.
replies(1): >>43961467 #
4. mirekrusin ◴[] No.43960258[source]
Zed has this background flow as well; you can see it in the video [0] from their latest blog post.

[0] https://zed.dev/blog/fastest-ai-code-editor

5. frainfreeze ◴[] No.43961467[source]
Until you change the model in one of the tabs and all the other tabs (and editor instances!) switch models too, stop what they're doing, lose context, etc. There is also a bug where, if you have two editors working on two codebases, they get lost and start working on the same thing; I suppose there is some kind of background workspace that gets mixed up.