162 points skp1995 | 36 comments

Hey HN, we are Sandeep and Naresh, the creators of Aide. We are happy to open-source Aide and invite the community to try it out. Aide is a VSCode fork with LLMs integrated.

To walk through the features, here is what we engineered:

- A proactive agent

An agent that iterates on the linter errors (powered by the Language Server) and pulls in relevant context by doing go-to-definition, go-to-references, etc., then proposes fixes or asks for more files which might be missing from the context.
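Conceptually, the loop looks something like this (a rough sketch with illustrative names, not Aide's actual API):

```python
from dataclasses import dataclass

@dataclass
class Diagnostic:
    file: str
    line: int
    message: str

def agent_loop(lint, get_context, ask_llm, max_rounds=3):
    """Iterate until the linter is clean or we run out of rounds.

    lint()         -> list[Diagnostic] for the current state of the code
    get_context(d) -> str, e.g. go-to-definition / go-to-references output
    ask_llm(d, c)  -> a proposed fix (applying it is up to the caller)
    """
    fixes = []
    for _ in range(max_rounds):
        diagnostics = lint()
        if not diagnostics:
            break  # linter is clean, stop iterating
        for diag in diagnostics:
            # Pull in LSP context around the error before asking the LLM.
            fixes.append(ask_llm(diag, get_context(diag)))
    return fixes
```

The key point is that the linter re-runs after each round, so the agent keeps working until the diagnostics settle.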

- Developer control

We encourage you to make edits on top of your coding sessions. To enable this, we built a VSCode-native rollback feature that removes all the edits made by the agent in a single click if there were mistakes, without touching your changes from before.
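The rollback idea can be sketched as a checkpoint taken when the agent session starts (purely illustrative, not the actual VSCode implementation):

```python
class EditSession:
    """Track agent edits so they can be rolled back in one step,
    without touching changes the user made before the session began."""

    def __init__(self, files):
        self.files = files               # live {path: contents} view
        self.checkpoint = dict(files)    # snapshot at session start

    def agent_edit(self, path, new_contents):
        # Every agent edit goes through here; user edits before the
        # session are already captured in the checkpoint.
        self.files[path] = new_contents

    def rollback(self):
        # One click: restore exactly the pre-session state.
        self.files.clear()
        self.files.update(self.checkpoint)
```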

- A combined chat+edit flow which you can use to brainstorm and edit

You can brainstorm a problem in chat by @'ing the files and then jump into edits (which can happen across multiple files), or go from a smaller set of edits and discuss their side effects.

- Inline editing widget

We took inspiration from the macOS Spotlight widget and created a similar one inside the editor: you can highlight part of the code, hit Cmd+K, and just give your instructions freely.

- Local running AI brain

We ship a binary called sidecar which takes care of talking to the LLM providers, preparing the prompts, and driving the editor for the LLM. All of this is local-first and you get full control over the prompts/responses, without anything leaking to our end (unless you choose to use your subscription and share the data with us).
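Since the prompt is prepared locally and sent straight to the provider with your own key, the request shape is roughly the OpenAI-style chat payload that most providers accept. A minimal sketch of the idea (not sidecar's actual code; endpoint and model are placeholders):

```python
import json

def build_chat_request(provider_url, api_key, model, prompt):
    """Prepare an LLM request entirely locally.

    Nothing here routes through a third party: the URL, key, and
    payload all stay on your machine until you fire the request
    at the provider you chose.
    """
    return {
        "url": provider_url,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```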

We spent the last 15 months learning the internals of VSCode (it's a non-trivial codebase) and powering up our AI game; the framework is also at the top of swebench-lite with a 43% score. On top of this, since the whole AI side of the logic runs locally on your machine, you have complete control over the data, from the prompts to the responses, and you can use your own API keys as well (for any LLM provider) and talk to them directly.

There’s still a whole lot to build and we are at 1% of the journey. Right now the editor feels robust and does not break on any of the flows we aimed to solve for.

Let us know if there’s anything else you would like to see us build. We also want to empower extensibility and work together with the community to build the next set of features and set a new milestone for AI-native editors.

1. hubraumhugo ◴[] No.42065127[source]
I'm curious - what does the AI coding setup of the HN community look like, and how has your experience been so far?

I want to get some broader feedback before completely switching my workflow to Aide or Cursor.

replies(13): >>42065372 #>>42065388 #>>42065902 #>>42065939 #>>42066378 #>>42067621 #>>42071404 #>>42071444 #>>42071578 #>>42071828 #>>42072457 #>>42072591 #>>42072894 #
2. arjunaaqa ◴[] No.42065372[source]
Using cursor and it’s been great !

Founders care about development experience a lot and it shows.

Yet to try others, but already satisfied so not required.

3. skp1995 ◴[] No.42065388[source]
I can give my broader feedback:

- Codegen tools today are still not great: the lack of context and not using the LSP really burns down the quality of the generated code.

- Autocomplete is pretty nice: IMHO it helps finish your thoughts and code faster; it's like IntelliSense but better.

If you are working on a greenfield project, AI codegen really shines today and there are many tools in the market for that.

With Aide, we wanted it to work for engineers who spend >= 6 months on the same project and there are deep dependencies between classes/files and the project overall.

For quick answers, I have a renewed habit of going to o1-preview or sonnet3.5 and then fact-checking that with Google (haven't been to Stack Overflow in a long while now).

Do give AI coding a chance; I think you will be excited, to say the least, for the coming future, and you'll develop habits on how to best use the tool.

replies(1): >>42065615 #
4. SparkyMcUnicorn ◴[] No.42065615[source]
> Codegen tools today are still not great: The lack of context and not using LSP really burns down the quality of the generated code

Have you tried Aider?

They've done some discovery on this subject, and it's currently using tree-sitter.

replies(1): >>42065960 #
5. tomr75 ◴[] No.42065902[source]
cursor works well - uses RAG on your code to give context, can directly reference latest docs of whatever you're using

not perfect but good to incrementally build things/find bugs

6. nprateem ◴[] No.42065939[source]
I tried GH copilot again recently with Claude. It was complete shit. Dog slow and gave incomplete responses. Back to aider.
replies(1): >>42065974 #
7. skp1995 ◴[] No.42065960{3}[source]
Yup, I have.

We also use tree-sitter for the smartness of understanding symbols https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... and also the editor for talking to the Language Server.

What we found was that it's not just about having access to these tools but about smartly performing `go-to-definition`, `go-to-reference`, etc., to grab the right context as and when required.

Every LLM call in between slows down the response time so there are a fair bit of heuristics which we use today to sidestep that process.

8. skp1995 ◴[] No.42065974[source]
what was so bad about it? genuinely curious cause they did make quite a bit of noise about the integration.
replies(2): >>42066057 #>>42071774 #
9. nprateem ◴[] No.42066057{3}[source]
It kept truncating files that were only about 600 lines long. It also seems to rewrite the entire file each time instead of just sending diffs like aider, making it super slow.
replies(1): >>42066121 #
10. skp1995 ◴[] No.42066121{4}[source]
oh, I see your point now. It's weird that they are not doing the search-and-replace style editing. Although now that OpenAI also has Predicted Outputs, I think this will improve and it won't make mistakes while rewriting longer files.

The 600-line limit might be due to the output token limit on the LLM (not sure what they are using for the code rewriting)
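For reference, search-and-replace style editing (the approach aider takes) boils down to applying small model-emitted hunks instead of rewriting the whole file. A minimal sketch:

```python
def apply_search_replace(source: str, search: str, replace: str) -> str:
    """Apply one search/replace edit block to a file's contents.

    Fails loudly if the search text is absent or ambiguous, so a
    bad model edit never silently corrupts the file.
    """
    count = source.count(search)
    if count == 0:
        raise ValueError("search block not found in file")
    if count > 1:
        raise ValueError("search block is ambiguous (matches %d times)" % count)
    return source.replace(search, replace)
```

Because only the changed hunk crosses the wire, the output token limit stops mattering for file length, which is exactly why the whole-file-rewrite approach hits a wall around a few hundred lines.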

replies(1): >>42067646 #
11. xpasky ◴[] No.42066378[source]
Besides Claude.vim for "AI pair programming"? :) (tbh it works well only for small things)

I'm using Codeium and it's pretty decent at picking up the right context automatically, usually it autocompletes within ~100kLoC project quite flawlessly. (So far I haven't been using the chat much, just autocomplete.)

replies(1): >>42066961 #
12. skp1995 ◴[] No.42066961[source]
any reason you don't use the chat often, or maybe it's not your use case?
replies(1): >>42070915 #
13. viraptor ◴[] No.42067621[source]
Cursor works amazing day to day. Copilot is not even comparable there. I like but rarely use aider and plandex. I'd use them more if the interface didn't take me completely away from the ide. Currently they're closer to "work on this while I'm taking a break".
14. nprateem ◴[] No.42067646{5}[source]
Yeah I guess it's a response limit. It makes it a deal breaker though.
15. Ancapistani ◴[] No.42070915{3}[source]
I'm not the parent poster, but in my case I very rarely use it because it's not in the Neovim UI; it opens in a browser.

I've also had some issues where it doesn't seem to work reliably, but that could be related to my setup.

replies(1): >>42071742 #
16. KronisLV ◴[] No.42071404[source]
GitHub Copilot in either VS Code or JetBrains IDEs. Having more or less the same experience across multiple tools is lovely and meets me where I am, instead of making me get a new tool.

The chat is okay, the autocomplete is also really pleasant for snippets and anything boilerplate heavy. The context awareness also helps. No advanced features like creating entirely new structures of files, though.

Of course, I’ll probably explore additional tools in the future, but for now LLMs are useful in my coding and also sometimes help me figure out what I should Google, because nowadays seemingly accurate search terms return trash.

replies(1): >>42072017 #
17. adriand ◴[] No.42071444[source]
I tried Cursor and found it annoying. I don’t really like talking to AI in IDE chat windows. For whatever reason, I really prefer a web browser. I also didn’t like the overall experience.

I’m still using Copilot in VS Code every day. I recently switched from OpenAI to Claude for the browser-based chat stuff and I really like it. The UI for coding assistance in Claude is excellent. Very well thought out.

Claude also has a nice feature called Projects where you can upload a bunch of stuff to build context which is great - so for instance if you are doing an API integration you can dump all the API docs into the project and then every chat you have has that context available.

As with all the AI tools you have to be quite careful. I do find that errors slip into my code more easily when I am not writing it all myself. Reading (or worse, skimming) source code is just different than writing it. However, between type safety and unit testing, I find I get rid of the bugs pretty quickly and overall my productivity is multiples of what it was before.

replies(1): >>42071501 #
18. thomasfromcdnjs ◴[] No.42071501[source]
This is me also, I don't like the UX/DX of Cursor and such just yet.

I can't tell if it is a UX thing or if it also doesn't suit my mental model.

I religiously use Copilot, and then paste stuff into Claude or ChatGPT (both pro) when needed.

19. vbezhenar ◴[] No.42071578[source]
I'm using Copilot in VScode every day, it works fine, but I mostly use it as glorified one-line autocomplete. I almost never accept multi-line suggestions, don't even look at them.

I tried to use AI deeper, like using aider, but so far I just don't like it. I'm very sensitive to the tiny details of code and AI almost never got it right. I guess actually the main reason that I don't like AI is that I love to write code, simple as that. I don't want to automate that part of my work. I'm fine with trivial autocompletes, but I'm not fine with releasing control over the entire code.

What I would love is to automate interaction with other humans. I don't want to talk to colleagues, boss or other people. I want AI to do so and present me some short extracts.

20. skp1995 ◴[] No.42071742{4}[source]
yeah I am learning that on neovim you can own a buffer region and instead use that for ai back and forth.. it's a very interesting space
21. HyprMusic ◴[] No.42071774{3}[source]
It's not nearly as helpful as Claude.ai - it seems to only want to do the minimum required. On top of that it will quite regularly ignore what you've asked, give you back the exact code you gave it, or even generate syntactically invalid code.

It's amazing how much difference the prompt must make because using it is like going back to gpt3.5 yet it's the same model.

22. yen223 ◴[] No.42071828[source]
I am on day 8 of Cursor's 14-day trial. If things continue to go well, I will be switching from Webstorm to Cursor for my Typescript projects.

The AI integrations are a huge productivity boost. There is a substantial difference in the quality of the AI suggestions between using Claude on the side, and having Claude be deeply integrated in the codebase.

I think I accepted about 60-70% of the suggestions Cursor provided.

Some highlights of Cursor:

- Wrote about 80% of a Vite plugin for consolidating articles in my blog (built on remix.run)

- Wrote a Github Action for automated deployments. Using Cursor to write automation scripts is a tangible productivity boost.

- Made meaningful alterations to a libpg_query fork that allowed it to be cross-compiled to iOS. I have very little experience with C compilation, it would have taken me a substantially long time to figure this out.

There are some downsides to using Cursor though:

- Cursor can get too eager with its suggestions, and I'm not seeing any easy way to temporarily or conditionally turn them off. This was especially bad when I was writing blog posts.

- Cursor does really well with Bash and Typescript, but does not work very well with Kotlin or Swift.

- This is a personal thing, but I'm still not used to some of the shortcuts that Cursor uses (Cursor is built on top of VSCode).

replies(2): >>42072001 #>>42072386 #
23. skp1995 ◴[] No.42072001[source]
It's great that Cursor is working for you. I do think LLMs in general are far, far better at TypeScript and Python compared to other languages (it reflects the training data).

What features of cursor were the most compelling to you? I know their autocomplete experience is elite but wondering if there are other features which you use often!

replies(1): >>42072134 #
24. skp1995 ◴[] No.42072017[source]
yeah I am also getting the sense that people want tooling which meets them in their preferred environment.

Do you use any of the AI features which go for editing multiple files or doing a lot more in the same instruction?

25. yen223 ◴[] No.42072134{3}[source]
Their autocomplete experience is decent, but I've gotten the most value out of Cursor's "chat + codebase context" (no idea what it's called). The feature where you feed it the entire codebase as part of the context, and let Cursor suggest changes to any parts of the codebase.
replies(1): >>42073073 #
26. BoorishBears ◴[] No.42072386[source]
I would not be able to leave a Jetbrains product for Kotlin, or XCode for Swift

Overall it's so unfortunate that Jetbrains doesn't have a Cursor-level AI plugin* because Jetbrains IDEs by themselves are so much more powerful than base level VS Code it actually erases some small portion of the gains from AI...

(* people will link many Jetbrains AI plugins, but none are polished enough)

replies(1): >>42072480 #
27. jaylane ◴[] No.42072457[source]
VSCode + Cline + OpenRouter using the Claude Sonnet 3.5 (20241022) model; it's unreal the shit it can do.
28. yen223 ◴[] No.42072480{3}[source]
I probably would switch to Cursor for Swift projects too if it weren't for the fact that I will still need Xcode to compile the app.

I also agree with the non-AI parts of JetBrains stuff being much better than the non-AI parts of Cursor. JetBrains' refactoring tools are still unmatched.

That said, I think the AI part is compelling enough to warrant the switch. There are code rewrite tasks that JetBrains would struggle with, that LLMs can do fairly easily.

replies(1): >>42073084 #
29. jeswin ◴[] No.42072591[source]
I've been building and using these tools for well more than a year now, so here's my journey on building and using them (ORDER BY DESC datetime).

(1) My view now (Nov 2024) is that code building is very conversational and iterative. You need to be able to tweak aspects of generated code by talking to the LLM. For example: "Can you use a params object instead of individual parameters in addToCart?". You also need the ability to sync generated code into your project, run it, and pipe any errors back into the model for refinement. So basically, a very incremental approach to writing it.

For this I made a Chrome plugin, which allowed ChatGPT and Claude to edit source code (using Chrome's File System APIs). You can see a video here: https://www.youtube.com/watch?v=HHzqlI6LLp8

The code is here, but it's WIP and for very early users, so please don't give negative reviews yet: https://github.com/codespin-ai/codespin-chrome-extension

(2) Earlier this year, I thought I should build a VS Code plugin. It actually works quite well, allows you to edit code without leaving VSCode. It does stuff like adding dependencies, model selection, prompt histories, sharing git diffs etc. Towards the end, I was convinced that edits need to be conversations, and hence I don't use it as much these days.

Link: https://github.com/codespin-ai/codespin-vscode-extension

(3) Prior to that (2023), I built this same thing in CLI. The idea was that you'd include prompt files in your project, and say something like `my-magical-tool gen prompt.md`. Code would be mostly written as markdown prompt files, and almost never edited directly. In the end, I felt that some form of IDE integration is required - which led to the VSCode extension above.

Link: https://github.com/codespin-ai/codespin

All of these tools were primarily built with AI. So these are not hypotheticals. In addition, I've built half a dozen projects with it; some of it code running in production and hobby stuff like webjsx.org.

Basically, my takeaway is this: code editing is conversational. You need to design a project to be AI-friendly, which means smaller, modular code which can be easily understood by LLMs. Also, my way of using AI is not auto-complete based; I prefer generating from higher level inputs spanning multiple files.
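The sync-run-refine loop described above can be sketched like this (`generate` stands in for whatever LLM call you use; this is an illustration of the workflow, not the actual tool code):

```python
import subprocess
import sys

def refine(generate, source_path, max_attempts=3):
    """Sync generated code to disk, run it, pipe errors back to the model.

    generate(feedback) -> source text; feedback is None on the first
    pass, and the captured stderr of the failed run on later passes.
    """
    feedback = None
    for _ in range(max_attempts):
        code = generate(feedback)
        with open(source_path, "w") as f:
            f.write(code)                      # sync into the project
        result = subprocess.run([sys.executable, source_path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code                        # it runs: accept this version
        feedback = result.stderr               # feed the error into the next prompt
    raise RuntimeError("still failing after %d attempts" % max_attempts)
```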

replies(1): >>42072979 #
30. PittleyDunkin ◴[] No.42072894[source]
It really depends on what you're doing. AI is great for generating a ton of text at once but only a small subset of programming tasks clearly benefit from this.

Outside of this it's an autocomplete that's generally 2/3rds incorrect. If you keep coding as you normally do and accept correct solutions as they appear you'll see a few percentage productivity increase.

For highly regular patterns you'll see a drastically improved productivity increase. Sadly this is also a small subset of the programming space.

One exception might be translating user stories into unit tests, but I'm waiting for positive feedback to declare this.

31. skp1995 ◴[] No.42072979[source]
That's a great way to build a tool which solves your need.

In Aide as well, we realised that the major missing loop was the self-correction one; it needs to iteratively expand and do more.

Our proactive agent is our first stab at that, and we also realised that the flow from chat -> edit needs to be very free-form, with the edits a bit more high-level.

I do think you will find value in Aide; do let me know if you get a chance to try it out.

replies(1): >>42073369 #
32. skp1995 ◴[] No.42073073{4}[source]
ohh interesting.. I tried it on a couple of big repos and it was a bit of a miss for me. How large are the codebases you work on? I want to get a sense check on where the behavior deteriorates with embedding + GPT-3.5-based reranker search (not sure if they are doing more now!)
replies(1): >>42073158 #
33. skp1995 ◴[] No.42073084{4}[source]
JetBrains is very interesting, what are the best-performing extensions out there for it?

I do wonder what API-level access we get over there as well. For sidecar to run, we need the LSP + a web/panel for the UX part (deeper editor-layer access like the undo and redo stack will also be cool but not totally necessary).

34. yen223 ◴[] No.42073158{5}[source]
Largest repo I used with Cursor was about 600,000 lines long
replies(1): >>42073222 #
35. skp1995 ◴[] No.42073222{6}[source]
that's a good metric to aim for... creating a full local index for 600k lines is pretty expensive, but there are a bunch of heuristics which can take us pretty far:

- looking at git commits

- making use of recently accessed files

- keyword search

If I set these constraints and allow for maybe around 2 LLM round trips, we can get pretty far in terms of performance.
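A rough sketch of how those heuristics could combine into a cheap pre-LLM ranking (purely illustrative weights and names, not our actual code):

```python
def rank_candidate_files(query_terms, files, recent, co_committed):
    """Score candidate files with cheap heuristics before any LLM round trip.

    files        : {path: contents} for the candidate set
    recent       : set of paths recently opened in the editor
    co_committed : set of paths that historically change together with
                   the open file (from git log)
    """
    scores = {}
    for path, text in files.items():
        score = sum(text.count(term) for term in query_terms)  # keyword hits
        if path in recent:
            score += 5   # recently accessed files are likely relevant
        if path in co_committed:
            score += 3   # files that co-change in git history
        scores[path] = score
    return sorted(scores, key=scores.get, reverse=True)
```

Only the top few results would then be handed to the LLM, keeping it to a couple of round trips even on a 600k-line repo.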

36. jeswin ◴[] No.42073369{3}[source]
> I do think you will find value in Aide, do let me know if you got a chance to try it out

Absolutely, will do it over the weekend. Best of luck with the launch.