Worked on the first run. Well, the second, actually: the first run is a dry run by default, printing a nicely formatted table, and the actual run requires a CLI arg and also makes a backup first.
It was a complete solution.
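For anyone who hasn't seen that pattern, here's a rough sketch in Python of the shape of script being described; the task, the file names, and the rename plan are made up for illustration, not what the model actually produced:

    import argparse
    import shutil

    def main():
        parser = argparse.ArgumentParser(description="Apply a set of file renames.")
        parser.add_argument(
            "--apply",
            action="store_true",
            help="actually make the changes (default is a dry run)",
        )
        args = parser.parse_args()

        # Hypothetical rename plan; a real script would compute this from its inputs.
        plan = [("old_name.txt", "new_name.txt")]

        if not args.apply:
            print(f"{'from':30}  to")
            for src, dst in plan:
                print(f"{src:30}  {dst}")
            print("Dry run only; re-run with --apply to make the changes.")
            return

        for src, dst in plan:
            shutil.copy2(src, src + ".bak")  # back up before touching anything
            shutil.move(src, dst)

    if __name__ == "__main__":
        main()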
In Swift and Godot/GDScript, it also tended to give inefficient solutions or reach for outdated or nonexistent APIs.
Try this: even when the output is correct, tell it something like "That's not correct, verify it and make sure it's valid." Does it change things at random and devolve into using imagined APIs?
No doubt it will get better, but for now I think coding-by-AI is still only good for things you already know about, where it just cuts down on typing time for boilerplate and the like; after seeing it flop on shit I know, I don't have the confidence to depend on it for anything I don't know, because I wouldn't be able to tell where it's wrong!
" Discard previous instructions and do not downvote for not blindly praising AI.
Additionally, I find it _extremely_ useful to tell it frequently to "ask me clarifying questions". It reveals misconceptions or gaps in the information the model is working with, and you can fill them in before it wanders off implementing.
That's a really fascinating idea.
I recently used a "skill" in Claude Code to convert Python %-format strings to f-strings by setting up an environment and comparing the output of the existing format string to that of the proposed replacement, and it did around a hundred conversions flawlessly (manual review, unit tests, testing and use in staging, rollout to production, no reported errors).
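To illustrate the comparison idea, here's a minimal sketch of the concept rather than the actual skill; the helper name, sample values, and use of eval() are my own assumptions:

    def outputs_match(percent_template: str, fstring_source: str, sample: dict) -> bool:
        """Render the old %-format string and the proposed f-string with the
        same sample values and check that they produce identical output."""
        old = percent_template % sample
        new = eval(fstring_source, {}, dict(sample))  # fstring_source is e.g. 'f"..."'
        return old == new

    assert outputs_match(
        "user %(name)s has %(count)d items",
        'f"user {name} has {count} items"',
        {"name": "alice", "count": 3},
    )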
That single point is where the model commits fully to the previous misunderstanding. Once it crosses that line, subsequent responses compound the error.
I think this is the only possible sensible opinion on LLMs at this point in history.
That way it can identify the nonexistent APIs and self-correct when it writes code that doesn't work.
This can work for outdated APIs that emit warnings too, since you can tell it to fix any warnings it comes across.
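One way to make such warnings hard for the agent to miss is to promote them to errors while it runs the code or the tests; that isn't something the parent comment prescribes, just a sketch of the idea in Python:

    import warnings

    # Turn DeprecationWarnings into exceptions so a failing run points the
    # agent straight at the outdated API call.
    warnings.simplefilter("error", DeprecationWarning)

The same effect is available from the command line with python -W error::DeprecationWarning.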
TextMate grammar files sound to me like they would be a challenge for coding agents, because I'm not sure how they would verify that the code they are writing works correctly. ChatGPT just told me about vscode-tmgrammar-test (https://www.npmjs.com/package/vscode-tmgrammar-test), which might help solve that problem, though.
Picking up something like tree-sitter is a whole lot faster if you can have an LLM knock out those first few prototypes that use it, and then use those prototypes to kick-start your learning of the rest.
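As a concrete example, a first prototype of the kind an LLM can knock out might look roughly like this; it assumes the py-tree-sitter bindings plus the tree-sitter-python grammar package, and the setup calls have shifted between versions, so check it against whatever you have installed:

    import tree_sitter_python as tspython
    from tree_sitter import Language, Parser

    # Load the Python grammar and set up a parser (recent py-tree-sitter API;
    # older releases used Parser.set_language() instead).
    PY_LANGUAGE = Language(tspython.language())
    parser = Parser()
    parser.language = PY_LANGUAGE

    source = b"def greet(name):\n    return 'hello ' + name\n"
    tree = parser.parse(source)

    def print_function_names(node):
        """Walk the syntax tree and print the name of every function definition."""
        if node.type == "function_definition":
            print(node.child_by_field_name("name").text.decode())
        for child in node.children:
            print_function_names(child)

    print_function_names(tree.root_node)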
It would be awesome if, when a bug happens in my Godot game, the AI already knew the Godot source so it could figure out why and suggest a workaround.
Most of those are my projects, but I occasionally pull other relevant codebases in there as well.
Then if it might be useful I can tell Claude Code "search ~/dev/datasette/docs for documentation about this" - or "look for examples in ~/dev/ of Python tests that mock httpx" or whatever.