
Hey HN, we are Sandeep and Naresh, the creators of Aide. We are happy to open-source it and invite the community to try it out: Aide is a VSCode fork with LLMs built in.

To walk through the features, here is what we engineered:

- A proactive agent

An agent that iterates on linter errors (powered by the Language Server), pulls in relevant context by doing go-to-definition, go-to-references, etc., and proposes fixes or asks for files that might be missing from the context.

- Developer control

We encourage you to make edits on top of your coding sessions. To enable this, we built a VSCode-native rollback feature which, in a single click, gets rid of all the edits made by the agent if there were mistakes, without touching the changes you made before.

- A combined chat+edit flow which you can use to brainstorm and edit

You can brainstorm a problem in chat by @'ing the files, then jump into edits (which can span multiple files), or start from a smaller set of edits and discuss their side effects.

- Inline editing widget

We took inspiration from the macOS Spotlight widget and created a similar one inside the editor: highlight part of the code, hit Cmd+K, and give your instructions freely.

- Local running AI brain

We ship a binary called sidecar which takes care of talking to the LLM providers, preparing the prompts, and driving the editor on the LLM's behalf. All of this is local-first: you get full control over the prompts and responses, and nothing leaks to our end (unless you choose to use your subscription and share the data with us).

We spent the last 15 months learning the internals of VSCode (it's a non-trivial codebase) and also powering up our AI game; the framework is at the top of SWE-bench Lite with a 43% score. On top of that, since the whole AI side of the logic runs locally on your machine, you have complete control over the data, from the prompt to the responses, and you can use your own API keys (for any LLM provider) and talk to them directly.
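To make that concrete, "use your own API keys and talk to them directly" boils down to something like the sketch below. This is a minimal illustration rather than Aide's actual code; the endpoint, the model name and the crates (reqwest with the blocking and json features, plus serde_json) are assumptions for an OpenAI-compatible provider:

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Your own key, read locally and sent straight to the provider, never through a proxy.
        let api_key = std::env::var("OPENAI_API_KEY")?;
        let body = json!({
            "model": "gpt-4o-mini",
            "messages": [{ "role": "user", "content": "Explain this linter error: ..." }]
        });
        let resp: serde_json::Value = reqwest::blocking::Client::new()
            .post("https://api.openai.com/v1/chat/completions")
            .bearer_auth(api_key)
            .json(&body)
            .send()?
            .error_for_status()?
            .json()?;
        // The assistant's reply sits under choices[0].message.content.
        println!("{}", resp["choices"][0]["message"]["content"]);
        Ok(())
    }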

There's still a whole lot to build and we are at 1% of the journey. Right now the editor feels robust and does not break on any of the flows we aimed to solve for.

Let us know if there's anything else you would like to see us build. We also want to empower extensibility and work with the community to build the next set of features and set a new milestone for AI-native editors.

xpasky (No.42064002):
Any short-term plans for Claude via AWS Bedrock? (For me personally, that's a blocker for trying it on our main codebase.)

skp1995 (No.42064038):
Thanks for your interest in Aide!

If I understood correctly, that would mean supporting Claude via the AWS Bedrock endpoint; we will make that happen.

If the underlying LLM does not change, adding more connectors is pretty easy. I will ping the thread with updates on this.

xpasky (No.42064272):
Yep! And AWS Bedrock also gives you plenty of other models on the back end, plus better control over rate limits. (But for us the important thing is data residency: the code isn't uploaded anywhere.)

Is it ~just about adding another file to https://github.com/codestoryai/sidecar/blob/main/llm_client/... ?

I could take a look too - another way for me to test Aide by working with it to implement this. :-)

(https://github.com/pasky/claude.vim/blob/main/plugin/claude_... is sample code with basic wrapper emulating Claude streaming API with AWS Bedrock backend.)

skp1995 (No.42064340):
Yup! Feel free to add the client support; you are on the right track with the changes.

To test the whole flow out, here are a few things you will want to do:

- https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186... (you need to create the LLMProperties object over here)
- Add support for it in the broker over here: https://github.com/codestoryai/sidecar/blob/ba20fb3596c71186...
- After this you should at the very least be able to test out Cmd+K (highlight a section and ask it to edit it).
- In Aide, go to User Settings: "aide self run"; tick this and then run your local sidecar so you are hitting the right binary (kill the binary running on port 42424, that's the webserver binary that ships along with the editor).
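For a flavor of what such a client might involve, here is a rough, untested sketch. It is not the sidecar's actual trait or types (the real LLMProperties and broker wiring will differ); it assumes the aws-config, aws-sdk-bedrockruntime, serde_json and tokio crates, and the struct name and model id are illustrative only:

    use aws_sdk_bedrockruntime::primitives::Blob;
    use serde_json::json;

    struct BedrockClaudeClient {
        client: aws_sdk_bedrockruntime::Client,
        model_id: String, // an Anthropic Claude model id enabled in your AWS account
    }

    impl BedrockClaudeClient {
        async fn complete(&self, prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
            // Bedrock's Anthropic models take the usual "messages" body plus a fixed anthropic_version.
            let body = json!({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1024,
                "messages": [{ "role": "user", "content": prompt }]
            });
            let out = self.client
                .invoke_model()
                .model_id(self.model_id.clone())
                .content_type("application/json")
                .body(Blob::new(serde_json::to_vec(&body)?))
                .send()
                .await?;
            // The response body is Anthropic-style JSON; the generated text lives under content[0].text.
            let parsed: serde_json::Value = serde_json::from_slice(out.body().as_ref())?;
            Ok(parsed["content"][0]["text"].as_str().unwrap_or_default().to_string())
        }
    }

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Region and credentials come from the standard AWS environment/config chain.
        let conf = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
        let client = BedrockClaudeClient {
            client: aws_sdk_bedrockruntime::Client::new(&conf),
            model_id: "anthropic.claude-3-5-sonnet-20240620-v1:0".to_string(),
        };
        println!("{}", client.complete("Say hi").await?);
        Ok(())
    }

The real client would plug into the sidecar's broker and stream tokens (InvokeModelWithResponseStream) rather than block on a single response, but the payload shape and AWS plumbing look roughly like this.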

If all of this sounds like a lot, you can just add the client and I can also take care of the plumbing!

xpasky (No.42066264):
Hmm, looks like this is still a pretty early project for me. :)

My experience:

1. I didn't have a working installation window after opening it for the first time. Maybe what fixed it was downloading and opening some random JavaScript repo, but maybe it was rather switching to "Trusted mode" (which makes me a bit nervous, but ok).

2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)

3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. After a few tens of seconds I got back this response: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)

I gave it one more go by creating an account. However, after logging in through the browser popup, "Signing in to CodeStory..." spins for a long time, then disappears, but AIDE still isn't logged in. (Even after trying again after a restart.)

One more thought: maybe you got DDoS'd by HN?

skp1995 (No.42066382):
> 2. Once the assistant window input became active, I wrote something short like "hi", but nothing happened after pressing Ctrl-Enter. I rage-clicked around a bit, so it's possible I queued multiple requests. About 30 seconds later, I suddenly got a reply (something like "hi, what do you want me to do"). That's... not great latency. :)

Yup, that's because of the traffic and the LLM rate limits :( We are getting more TPM right now, so the latency spikes should go away; I had half a mind to spin up multiple accounts to get higher TPM, but oh well... If you do end up using your own API key, then there is no latency at all. Right now the requests get pulled into a global queue, so that's probably what's happening.

> 3. Once I got it working, I opened the sidecar project and sent my second assistant prompt. After a few tens of seconds I got back this response: "You have used up your 5 free requests. Please log in for unlimited requests." (Idk what these 5 requests were...)

The auth flow being wonky is on us; we did fuzzy test it a bit, but as with any software, it slipped through the cracks. We were even wondering whether to skip auth completely if you are using your own API keys; that way there is zero-touch interaction with our LLM proxy infra.

Thanks for the feedback though, I appreciate it, and we will do better.