
310 points | skarat | 6 comments

Things are changing so fast with these vscode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc., compare between the two?
welder ◴[] No.43960527[source]
Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
replies(27): >>43960550 #>>43960616 #>>43960839 #>>43960844 #>>43960845 #>>43960859 #>>43960860 #>>43960985 #>>43961007 #>>43961090 #>>43961128 #>>43961133 #>>43961220 #>>43961271 #>>43961282 #>>43961374 #>>43961436 #>>43961559 #>>43961887 #>>43962085 #>>43962163 #>>43962520 #>>43962714 #>>43962945 #>>43963070 #>>43963102 #>>43963459 #
1. alentred ◴[] No.43962520[source]
To be fair, I think the most value is added by agent modes, not autocomplete. And I agree that AI autocomplete is really quite annoying; personally, I disable it too.

Coding agents, though, can indeed save some time writing well-defined code and be a great help when debugging. But then again, when they don't work on the first prompt, I'd likely just write the thing in Vim myself instead of trying to convince the agent.

My point being: I find agent coding quite helpful, really, as long as you don't go overboard with it.

replies(2): >>43962611 #>>43963200 #
2. Draiken ◴[] No.43962611[source]
Are you using these in your day job to complete real-world tasks, or in greenfield projects?

I simply cannot see how I could tell an agent to implement anything I have to do in a real day job, unless it's a feature so simple I could do it myself in a few minutes. Even then, the AI will likely screw it up, since it struggles with existing code, best practices, library versions, etc.

replies(3): >>43962713 #>>43962883 #>>43963987 #
3. klinquist ◴[] No.43962713[source]
I am. I've spent some time developing Cursor rules where I describe best practices, etc.
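
For illustration, a minimal sketch of what such a rules file might contain; the filename shown is the legacy project-root location, and every rule below is a hypothetical example (the exact file location and capabilities depend on your Cursor version):

    # .cursorrules (project root; hypothetical contents)
    - Use TypeScript strict mode; avoid "any" unless justified in a comment.
    - Prefer existing helpers in src/lib over adding new dependencies.
    - Match the repo's ESLint/Prettier config; don't reformat unrelated code.
    - Stick to the library versions already pinned in package.json.
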
4. ativzzz ◴[] No.43962883[source]
I've found it useful for doing simple things in parallel. For instance, I'm working on a large TypeScript project and one file doesn't have types yet, so I tell the AI to add typing to it, with a description, while I go work on other things. I check back 5-10 minutes later and either commit the changes or correct them.

Or if I'm working on a full-stack feature and I need some boilerplate to process a new endpoint or new resource type on the frontend, I have the AI build the API call that's similar to the other calls and process the data while I work on the business logic in the backend. Then when I'm done, the frontend API call is mostly set up already (see the sketch at the end of this comment).

I've found this works rather well, because it's a list of things in my head that are "todo, in progress" but parallelizable, so I can easily verify what it's doing.
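
A minimal sketch of the kind of frontend boilerplate being described, in TypeScript, with a hypothetical /api/widgets endpoint and response shape (all names invented for illustration):

    // api/widgets.ts (hypothetical endpoint and types)
    export interface Widget {
      id: string;
      name: string;
      createdAt: string; // ISO timestamp from the backend
    }

    export async function fetchWidgets(signal?: AbortSignal): Promise<Widget[]> {
      const res = await fetch("/api/widgets", { signal });
      if (!res.ok) throw new Error(`GET /api/widgets failed: ${res.status}`);
      const data: unknown = await res.json();
      // Cheap runtime check before trusting the payload shape
      if (!Array.isArray(data)) throw new Error("Unexpected /api/widgets payload");
      return data as Widget[];
    }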

5. ActionHank ◴[] No.43963200[source]
The few times I've tried to use an agent for anything slightly complex, or on a moderately large code base, it just proceeds to smear poop all over the floor, eventually backing itself into a corner.
6. int_19h ◴[] No.43963987[source]
SOTA LLMs are broadly much better at autonomous coding than they were even a few months ago. But it also really depends on what exactly you're working on and what tech is involved. Things are great if you're writing Python or TypeScript, less so with C++, and even less so with Rust and other emerging technologies.