
310 points skarat | 8 comments

Things are changing so fast with these vscode forks I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete etc. compare between the two?
welder ◴[] No.43960527[source]
Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
replies(27): >>43960550 #>>43960616 #>>43960839 #>>43960844 #>>43960845 #>>43960859 #>>43960860 #>>43960985 #>>43961007 #>>43961090 #>>43961128 #>>43961133 #>>43961220 #>>43961271 #>>43961282 #>>43961374 #>>43961436 #>>43961559 #>>43961887 #>>43962085 #>>43962163 #>>43962520 #>>43962714 #>>43962945 #>>43963070 #>>43963102 #>>43963459 #
1. nsteel ◴[] No.43961090[source]
I can't even get simple code generation to work for VHDL. It just gives me garbage that doesn't compile. I have to assume this is not the case for the majority of people using more popular languages? Is this because the training data for VHDL is far more limited? Are these "AIs" not able to consume the VHDL language spec and give me actual legal syntax, at least?! Or is this because I'm being cheap and lazy by only trying free ChatGPT, and I should be using something else?
replies(4): >>43962127 #>>43962162 #>>43962183 #>>43962500 #
2. kaycey2022 ◴[] No.43962127[source]
It's all of that, to one extent or another. LLMs don't update overnight, so they lag behind innovations in major frameworks, even in web development. No matter what is said about augmenting their capabilities, their performance with techniques like RAG seems to be lacking. They don't work well with new frameworks either.

Any library that breaks backwards compatibility in major version releases will likely befuddle these models. That's why I have seen them pin dependencies to older versions and, more egregiously, default to using the same stack to generate any basic frontend code. This ignores innovations and improvements made in other frameworks.

For example, in TypeScript there is now a new(ish) validation library called arktype. Gemini 2.5 Pro straight up produces garbage code for this. The type generation function accepts an object/value, but Gemini keeps insisting that it consumes a type.

So Gemini defines an optional property as `a?: string`, which is what you'd see in plain TypeScript. But this will fail in arktype, because it needs its input as `'a?': 'string'`. Asking Gemini to check again is a waste of time, and you will need enough familiarity with JS/TS to understand the error and move ahead.

Forcing development into an AI-friendly paradigm seems to me a regressive move that will curb innovation in return for boosts to junior/1x engineer productivity.

replies(2): >>43962157 #>>43962187 #
3. cube00 ◴[] No.43962157[source]
It's fun watching the AI bros try to spin justifications for building (sorry, vibing) new apps using Ruby for no reason other than that the model has content going back to 2004 to train off.
4. drob518 ◴[] No.43962162[source]
The amount of training data available certainly is a big factor. If you’re programming in Python or JavaScript, I think the AIs do a lot better. I write in Clojure, so I have the same problem as you do. There is a lot less HDL code publicly available, so it doesn’t surprise me that it would struggle with VHDL. That said, from everything I’ve read, free ChatGPT doesn’t do as well on coding. OpenAI’s paid models are better. I’ve been using Anthropic’s Claude Sonnet 3.7. It’s paid but it’s very cost effective. I’m also playing around with the Gemini Pro preview.
5. TingPing ◴[] No.43962183[source]
It completely fails to be helpful for C/C++. I don't understand the positivity around it, but it must be trained on a lot of web frameworks.
replies(1): >>43962388 #
6. drob518 ◴[] No.43962187[source]
Yep, management dreams of being able to make every programmer a 10x programmer by handing them an LLM, but the 10x programmers are laughing because they know how far off the rails the LLM will go. Debugging skills are the next frontier.
7. y-curious ◴[] No.43962388[source]
It's very helpful for low-level chores. The bane of my existence is frontend, and generating UI elements on the fly for testing backend work rocks. I like the analogy of it being a junior dev, perhaps even an intern: you should check their work constantly and give them extremely pedantic instructions.
8. WD-42 ◴[] No.43962500[source]
They are probably really good at React. And because that ecosystem has been in a constant cycle of reinventing the wheel, they can easily pump out boilerplate code because there is just so much of it to train from.