I work on a large product with two decades of accumulated legacy; maybe that's the problem. I can see, though, how generating and editing a simple greenfield web frontend project could work much better, as long as the actual complexity is low.
public static double ScoreItem(Span<byte> candidate, Span<byte> target)
{
    // TODO: Return the normalized Levenshtein distance between the two byte sequences.
    // ... any additional edge cases here ...
}
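For reference, here is a minimal sketch of the kind of body an LLM might produce for that stub: a standard two-row dynamic-programming Levenshtein distance, normalized by the length of the longer sequence so the score lands in [0, 1]. The normalization choice, the edge-case handling, and the wrapper class name Scoring are my assumptions, not part of the original prompt.

using System;

static class Scoring
{
    // Normalized Levenshtein distance: 0.0 = identical, 1.0 = maximally different.
    public static double ScoreItem(Span<byte> candidate, Span<byte> target)
    {
        if (candidate.Length == 0 && target.Length == 0)
            return 0.0; // two empty sequences are identical

        // Two-row dynamic programming: previous = row i-1, current = row i.
        var previous = new int[target.Length + 1];
        var current = new int[target.Length + 1];

        for (int j = 0; j <= target.Length; j++)
            previous[j] = j; // cost of building target[0..j) from an empty candidate

        for (int i = 1; i <= candidate.Length; i++)
        {
            current[0] = i; // cost of deleting the first i candidate bytes
            for (int j = 1; j <= target.Length; j++)
            {
                int substitution = candidate[i - 1] == target[j - 1] ? 0 : 1;
                current[j] = Math.Min(
                    Math.Min(current[j - 1] + 1,       // insertion
                             previous[j] + 1),         // deletion
                    previous[j - 1] + substitution);   // substitution (or match)
            }
            (previous, current) = (current, previous); // reuse the two buffers
        }

        // After the final swap, 'previous' holds the last computed row.
        return (double)previous[target.Length]
             / Math.Max(candidate.Length, target.Length);
    }
}

Under this definition, ScoreItem over the bytes of "kitten" and "sitting" gives 3/7 ≈ 0.43, since the plain edit distance is 3 and the longer string has 7 characters.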
I think generating more than one method at a time is playing with fire. Individual methods can be generated by the LLM and tested in isolation; by going a little slower, you incrementally build up, and can trust, your understanding of the problem space. If the LLM is operating over a whole set of methods at once, every iteration feels like starting over. Using an agentic system that can at least read the other bits of code is more efficient than copy-pasting snippets into a web page.
Most code is about patterns, specific code styles, and reusing existing libraries. Without context, none of that can be applied to the solution.
If you put a programmer in a room, handed them a piece of paper with a single function on it, and said "OPTIMISE THAT!", would it be their best work?