To walk through the features, here is what we engineered:
- A proactive agent
The agent iterates on linter errors (powered by the Language Server) and pulls in relevant context by doing go-to-definition, go-to-references, etc., then proposes fixes or asks for files that might be missing from the context (see the sketch after this list).
- Developer control
We encourage you to make edits on top of the agent's coding sessions. To enable this, we built a VSCode-native rollback feature which removes all the edits made by the agent in a single click if it made mistakes, without touching the changes you made before.
- A combined chat+edit flow which you can use to brainstorm and edit
You can brainstorm a problem in chat by @'ing the relevant files and then jump into edits (which can span multiple files), or start from a smaller set of edits and discuss their side effects.
- Inline editing widget
We took inspiration from the macOS Spotlight widget and created a similar one inside the editor: highlight part of the code, hit Cmd+K, and give your instructions freely.
- Locally running AI brain
We ship a binary called sidecar which takes care of talking to the LLM providers, preparing the prompts, and driving the editor on behalf of the LLM. All of this is local-first: you get full control over the prompts and responses, and nothing leaks to our end (unless you choose to use your subscription and share the data with us).
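For the curious, here is roughly what the proactive-agent loop from the first bullet looks like when expressed against the standard VS Code extension API. This is a simplified sketch under our own assumptions (the function name and structure are illustrative, not Aide's actual code); it only shows the "read diagnostics, follow go-to-definition, collect context files" part, not the LLM call itself.

```typescript
import * as vscode from 'vscode';

// Sketch: gather extra context for a file by following the Language Server's
// diagnostics to the definitions of the symbols they point at.
async function gatherContextForDiagnostics(uri: vscode.Uri): Promise<vscode.Uri[]> {
  const contextFiles = new Set<string>();

  // Linter/Language Server errors and warnings currently reported for this file.
  const diagnostics = vscode.languages.getDiagnostics(uri);

  for (const diagnostic of diagnostics) {
    // Ask the language server where the symbol at the error location is defined.
    const definitions = await vscode.commands.executeCommand<
      (vscode.Location | vscode.LocationLink)[]
    >('vscode.executeDefinitionProvider', uri, diagnostic.range.start);

    for (const def of definitions ?? []) {
      const target = 'targetUri' in def ? def.targetUri : def.uri;
      contextFiles.add(target.toString());
    }
  }

  // These files (plus the diagnostics themselves) would then be handed to the
  // LLM so it can propose fixes or ask for anything still missing.
  return [...contextFiles].map(s => vscode.Uri.parse(s));
}
```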
We spent the last 15 months learning the internals of VSCode (it's a non-trivial codebase) and leveling up our AI game; the framework is currently at the top of swebench-lite with a 43% score. On top of this, since the whole AI side of the logic runs locally on your machine, you have complete control over the data, from the prompts to the responses, and you can use your own API keys (for any LLM provider) and talk to them directly.
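To make the "bring your own API key and talk to the provider directly" point concrete, this is all it boils down to on the wire. The endpoint, model name, and environment variable below assume an OpenAI-compatible provider and are purely illustrative; this is not the sidecar's actual interface.

```typescript
// Illustrative only: a direct chat-completion call with your own key.
// The key is read from your environment and never has to pass through us.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY!;

async function complete(prompt: string): Promise<string> {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o', // any model your provider exposes
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const data: any = await response.json();
  return data.choices[0].message.content;
}
```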
There’s still a whole lot to build and we are at 1% of the journey. Right now the editor feels robust and does not break on any of the flows we set out to solve for.
Let us know if there’s anything else you would like to see us build. We also want to empower extensibility and work with the community to build the next set of features and set a new milestone for AI-native editors.
I want to get some broader feedback before completely switching my workflow to Aide or Cursor.
I’m still using Copilot in VS Code every day. I recently switched from OpenAI to Claude for the browser-based chat stuff and I really like it. The UI for coding assistance in Claude is excellent. Very well thought out.
Claude also has a nice feature called Projects where you can upload a bunch of material to build context, which is great: if you are doing an API integration, for instance, you can dump all the API docs into the project and then every chat you have has that context available.
As with all the AI tools you have to be quite careful. I do find that errors slip into my code more easily when I am not writing it all myself. Reading (or worse, skimming) source code is just different than writing it. However, between type safety and unit testing, I find I get rid of the bugs pretty quickly and overall my productivity is multiples of what it was before.
I can't tell if it is a UX thing or if it also doesn't suit my mental model.
I religiously use Copilot, and then paste stuff into Claude or ChatGPT (both pro) when needed.