
310 points skarat | 4 comments

Things are changing so fast with these VS Code forks that I'm barely able to keep up. Which one are you using currently? How does the autocomplete, etc., compare between the two?
welder ◴[] No.43960527[source]
Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
replies(27): >>43960550 #>>43960616 #>>43960839 #>>43960844 #>>43960845 #>>43960859 #>>43960860 #>>43960985 #>>43961007 #>>43961090 #>>43961128 #>>43961133 #>>43961220 #>>43961271 #>>43961282 #>>43961374 #>>43961436 #>>43961559 #>>43961887 #>>43962085 #>>43962163 #>>43962520 #>>43962714 #>>43962945 #>>43963070 #>>43963102 #>>43963459 #
elAhmo ◴[] No.43961128[source]
Cursor/Windsurf and similar IDEs and plugins are more than autocomplete on steroids.

Sure, you might not like it and might think that you, as a human, should write all your code, but the frequent experience across the industry in recent months is that teams using tools like this have seen productivity increase significantly.

It is not unreasonable to think that someone who decides not to use tools like this will not be competitive in the market in the near future.

replies(7): >>43961373 #>>43961386 #>>43961460 #>>43961538 #>>43961746 #>>43962265 #>>43962566 #
1. hn_throw2025 ◴[] No.43961460[source]
I think you’re right, and perhaps it’s time for the “autocomplete on steroids” tag to be retired, even if something approximating that is happening behind the scenes.

I was converting a bash script to Bun/TypeScript the other day. I was doing it the way I am used to… working on one file at a time, only bringing in the AI when helpful, reviewing every diff, and staying in overall control.

Out of curiosity, I threw the whole task over to Gemini 2.5 Pro in agentic mode, and it was able to refine its way to a working solution. The point I'm trying to make is that it used MCP to interact with the TS compiler and linters, iterating automatically until it had eliminated every error and warning. The MCP integrations go further: I can use tools like Console Ninja to give the model visibility into the contents of any data structure at any line of code at runtime, too. The combination of these makes me think that TypeScript and its available tooling are particularly well suited to agentic LLM-assisted development.
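The check-fix-recheck loop described above can be sketched as follows. This is a toy illustration, not the actual setup: `checkSource` and `proposeFix` are hypothetical stand-ins for what would really be a shell-out to `tsc --noEmit`/ESLint (exposed as an MCP tool) and an LLM call, respectively.

```typescript
// Sketch of an "iterate until clean" agent loop (assumed structure).
type Diagnostic = { line: number; message: string };

function checkSource(src: string): Diagnostic[] {
  // Toy checker standing in for tsc/eslint: flag any use of `var`.
  return src
    .split("\n")
    .flatMap((text, i) =>
      text.includes("var ") ? [{ line: i + 1, message: "use let/const" }] : []
    );
}

function proposeFix(src: string, _diags: Diagnostic[]): string {
  // Toy "model" standing in for the LLM: mechanically rewrite `var` to `let`.
  return src.replace(/\bvar /g, "let ");
}

function iterateUntilClean(src: string, maxRounds = 5): string {
  for (let round = 0; round < maxRounds; round++) {
    const diags = checkSource(src);
    if (diags.length === 0) return src; // compiler and linter are happy
    src = proposeFix(src, diags);       // feed diagnostics back, try again
  }
  throw new Error("did not converge");
}
```

The essential property is the termination condition: the agent keeps editing only while the external checkers still report problems, which is why a language with a strict compiler gives the loop a strong signal to converge on.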

Quite unsettling times, and I suppose it's natural to feel disconcerted about how our roles will change and how we will participate in the development process. The only thing I'm absolutely sure about is that these things won't be uninvented; the genie is not going back into the bottle.

replies(1): >>43962009 #
2. kaycey2022 ◴[] No.43962009[source]
How much did that cost you? How long did you spend reading and testing the results?
replies(1): >>43962291 #
3. hn_throw2025 ◴[] No.43962291[source]
That wasn’t really the point I was getting at, but as you asked… The reading doesn’t involve much more than a cursory (no pun intended) glance, and I didn’t test more than I would have tested something I had written manually.
replies(1): >>43962761 #
4. kaycey2022 ◴[] No.43962761{3}[source]
Maybe it wasn't your point. But the cost of development is a very important factor, considering that some of the thinking models burn tokens like there's no tomorrow. Accuracy is another. Maybe your script is trivial or inconsequential enough that it doesn't matter if the output has some bugs as long as it seems to work. There are a lot of throwaway scripts we write, and LLMs are an excellent tool for those.