Those lamenting the loss of manual programming: we are free to hone our skills on personal projects, but for corporate/consulting work, you cannot ignore a 5x speed advantage. It's over. AI-assisted coding won.
Otherwise, it can be 0.2x in some cases. And you should not use LLMs for anything security-related unless you are a security expert; without that expertise, you are screwed.
(this is SOTA as of April 2025; I expect things to improve in the near future)
If you know the programming language really well, that usually means you know which libraries are useful, have memorized common patterns, and have some sample projects lying around. The actual speed improvement would be in typing the code, but that is usually the activity that takes the least time on any successful project. And unless you're a slow typist, I can't see 5x there.
If you're lacking in fundamentals, then it's just a skill issue, and I'd be suspicious of the result.
Everything boring can be automated, and it takes five seconds instead of half an hour.
> Given this code, extract all entities and create the database schema from these
Sometimes, the best representation for storing and loading data is not the best one for manipulating it, and vice versa. Directly mapping code entities to database relations (assuming it's SQL) is a sure way to land yourself in trouble later.
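To make that concrete, here is a minimal sketch; the `Order`/`LineItem` names and both schemas are hypothetical, just to show how the in-memory shape and the relational shape diverge:

```python
# In-memory entity: optimized for manipulation, not storage.
from dataclasses import dataclass, field

@dataclass
class LineItem:
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    order_id: int
    items: list[LineItem] = field(default_factory=list)

    @property
    def total(self) -> float:  # derived value, convenient in code
        return sum(i.quantity * i.unit_price for i in self.items)

# The 1:1 mapping a model tends to produce: one table per class,
# the list squeezed into a text column, the derived total persisted.
NAIVE_SCHEMA = """
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    items    TEXT,  -- serialized list: can't index, join, or constrain
    total    REAL   -- denormalized: drifts out of sync with items
);
"""

# A relational design: normalize the list, derive the total in queries.
NORMALIZED_SCHEMA = """
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY
);
CREATE TABLE order_items (
    order_id   INTEGER REFERENCES orders(order_id),
    sku        TEXT NOT NULL,
    quantity   INTEGER NOT NULL CHECK (quantity > 0),
    unit_price REAL NOT NULL
);
"""
```

The entity and the naive table look alike, which is exactly why the mapping feels "obviously right" and only hurts later.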
> write documentation for these methods
The intent of documentation is to explain how to use something and the multiple whys behind an implementation. What is there can be discovered with a symbol explorer. Repeating what is obvious from the name of the function is not helpful, and hallucinating something that is not there is harmful.
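A hedged illustration of the difference, with a hypothetical `send_with_retry` helper (the reasoning in the second docstring is invented for the example):

```python
# Docstring that restates the signature: true, but adds nothing.
def send_with_retry(request, attempts=3):
    """Sends the request with retry. Retries up to `attempts` times."""
    ...

# Docstring that records usage constraints and the non-obvious "why".
def send_with_retry(request, attempts=3):
    """Send `request`, retrying on transient network errors only.

    Why 3 attempts: the upstream gateway drops a small fraction of
    connections under load; more retries only amplify the load spike.
    Do NOT use for non-idempotent requests: duplicates are possible.
    """
    ...
```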
> write test examples
Again, the type of tests matters more than the amount. So unless you're sure that the tests are correct and that the test suite really ensures the code is viable, it's all for naught.
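For example, here is a sketch with a hypothetical `parse_price` function: both tests pass against this implementation, but only the second one pins down the behavior that a sloppier implementation (e.g. truncating instead of rounding) would get wrong.

```python
def parse_price(text: str) -> int:
    """Parse a price like '$12.34' into cents."""
    return int(round(float(text.strip().lstrip("$")) * 100))

# Generated-looking test: mirrors the happy path of the implementation.
def test_parse_price_happy():
    assert parse_price("$12.34") == 1234

# Test that actually pins the contract: the cases where float handling
# or missing symbols would silently produce wrong amounts.
def test_parse_price_edge_cases():
    assert parse_price("0.29") == 29      # no currency symbol
    assert parse_price(" $0.07 ") == 7    # surrounding whitespace
    assert parse_price("$19.99") == 1999  # 19.99 * 100 is not exactly 1999 in floats
```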
...
Your use cases assume that the output is correct. And since the hallucination risk of LLMs is non-zero, that assumption is harmful.
As for the documentation part — I infer that you haven't used state-of-the-art models, have you? They do not write symbol docs mechanistically. They understand what the code is _doing_, up to their context limits, which are now 128k tokens for most models. Feed them 128k tokens of code and more often than not they will understand what it is about. In seconds (compared to hours for humans).
What the code is doing matters only when you intend to modify it. Normally, what matters is how to use it. That's the whole point of design: presenting an API that hides how things happen in favor of making it easy (natural) to do something. The documentation should focus on that abstract design and its relation to the API. The concrete implementation rarely matters if you're on the other side of the API.
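A small sketch of what I mean, with a hypothetical `Cache` class: the docstring states the contract a caller relies on and deliberately says nothing about the eviction mechanics behind it.

```python
class Cache:
    """Bounded key/value cache.

    Contract for callers:
    - get() returns None on a miss; it never raises for unknown keys.
    - Any entry may be evicted at any time; treat this as best-effort
      storage, never as the source of truth.
    - Keys must be hashable; values are stored by reference, not copied.

    (Whether eviction is LRU, LFU, or random is an implementation detail
    and intentionally not part of this documentation.)
    """

    def __init__(self, max_entries: int = 1024):
        self._data: dict = {}
        self._max = max_entries

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        if len(self._data) >= self._max:
            # evict the oldest-inserted entry (an implementation choice)
            self._data.pop(next(iter(self._data)))
        self._data[key] = value
```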