About 1.5-2 years ago I was using GitHub Copilot to write code, mostly as a boilerplate completer, really, because I eventually realized I spent too much time reading the suggestions and/or fixing the end result when I should've just written it myself. I did try it out with a pretty wide scope, i.e. letting it do more or less of the work and seeing what happened. All in all it was pretty cool; I definitely felt like there were some magic moments where it seemed to put everything together and sort of read my mind.
Anyway, that period ended, and I went without touching anything like this until a few months ago, when I kept hearing all these amazing things about using Cursor with Claude 3.5 Sonnet. So I decided to try it out with a few use cases:
1. Have it write a tokenizer and parser from scratch for a made-up Clojure-like language (a rough sketch of the scale of the tokenizer part is below)
2. Have it write the parser for the language, given the tokenizer I had already written
3. Have it write only single parsing functions for very specific constructs, with both the tokenizer and the existing parsing code available so it could see how everything fits together
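For a sense of scale on #1: a bare-bones tokenizer for a Clojure-like surface syntax really is a small amount of code. The sketch below is purely illustrative (Python and this particular token set are my choices for the example, not the actual host language or code from my experiment), but it's roughly the amount of machinery involved:

    # Illustrative sketch only: the host language (Python) and the token set
    # are assumptions for this example, not the experiment's actual code.
    import re

    TOKEN_SPEC = [
        ("WS",       r"[\s,]+"),              # whitespace and commas are skipped
        ("COMMENT",  r";[^\n]*"),             # line comments
        ("STRING",   r'"(?:\\.|[^"\\])*"'),   # double-quoted strings with escapes
        ("NUMBER",   r"-?\d+(?:\.\d+)?"),     # integers and floats
        ("KEYWORD",  r":[A-Za-z_*+!?<>=/.-][\w*+!?<>=/.-]*"),
        ("SYMBOL",   r"[A-Za-z_*+!?<>=/.-][\w*+!?<>=/.-]*"),
        ("LPAREN",   r"\("), ("RPAREN",   r"\)"),
        ("LBRACKET", r"\["), ("RBRACKET", r"\]"),
        ("LBRACE",   r"\{"), ("RBRACE",   r"\}"),
    ]
    TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

    def tokenize(source):
        """Yield (kind, text) pairs, skipping whitespace and comments."""
        pos = 0
        while pos < len(source):
            match = TOKEN_RE.match(source, pos)
            if match is None:
                raise SyntaxError(f"unexpected character {source[pos]!r} at offset {pos}")
            pos = match.end()
            if match.lastgroup not in ("WS", "COMMENT"):
                yield (match.lastgroup, match.group())

    print(list(tokenize('(defn add [a b] (+ a b)) ; trivial example')))

A real tokenizer would also track source positions for error messages and handle a few more literal forms, but not much beyond that.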
#1 was a complete and utter failure: it couldn't put together even a simple tokenizer, even when shown all of the relevant parts of the host language it would need to produce a reasonable one.
#2 was only slightly better, but the output was nowhere near usable, and even after several rounds of iteration it couldn't produce a runnable result.
#3 is the first of the three that my previous experience with Copilot suggested should be doable. It started out pretty badly: the model misunderstood one of the tokenizer functions it had examples for and used it in a way that didn't make sense given those examples. After that it also wanted to re-add functions it had already added, for some reason. I ran into myriad issues just getting it to either correct itself, move on, or do something productive, until I called it quits.
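To make the kind of request in #3 concrete, I was asking for single, narrowly scoped parsing functions. Something like the following is the rough shape and size, though again this is an illustrative sketch against a hypothetical (kind, text) token list like the one above, not my actual code:

    def parse_vector(tokens, pos):
        """Parse a vector literal like [1 2 3], starting at an LBRACKET token.

        Takes a list of (kind, text) pairs and a start index; returns (node, next_pos).
        """
        kind, _ = tokens[pos]
        if kind != "LBRACKET":
            raise SyntaxError(f"expected '[' but got {kind} at token {pos}")
        pos += 1
        elements = []
        while pos < len(tokens) and tokens[pos][0] != "RBRACKET":
            kind, text = tokens[pos]
            # Shortcut for the sketch: treat every element as an atom; a real
            # parser would recurse into nested forms here.
            elements.append({"type": kind.lower(), "value": text})
            pos += 1
        if pos == len(tokens):
            raise SyntaxError("unterminated vector literal")
        return {"type": "vector", "elements": elements}, pos + 1

    print(parse_vector([("LBRACKET", "["), ("NUMBER", "1"), ("NUMBER", "2"), ("RBRACKET", "]")], 0))

Functions of roughly that size are what it kept getting wrong, even with the tokenizer and the surrounding parser code in its context.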
My personal conclusion from all of this is that yes, the progress is incredibly incremental: any kind of "coding companion" or agent has basically the same failure modes it had years ago, and much of that hasn't improved all that much.
The odds that I could do my regular work on 3D engines with the coding companions out there are slim to none when they can't even put together something as simple as a tokenizer, or use an already existing one to write some simple parsing functions. For reference, I know it took my colleague, who has never written either of those things, 30 minutes to productively and correctly use exactly the same libraries the LLM was given.