I asked it to rename a global variable. It broke the application and failed to understand scoping rules.
Perhaps it is bad luck, or perhaps my Go code is weird, but I don't understand how y'all wanna trust this.
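To illustrate the kind of scoping rule it tripped over (a simplified sketch, not the actual code): Go's short variable declaration silently shadows a package-level variable, so a botched rename can leave the global untouched while a new local takes its place:

    package main

    import "fmt"

    // appEnv is a package-level ("global") variable.
    // (Hypothetical name, for illustration only.)
    var appEnv = "prod"

    func setup() {
        // A rename that turns an assignment (=) into a short
        // declaration (:=), or misses this line entirely, creates
        // a new local that shadows the global instead of setting it:
        appEnv := "dev"
        fmt.Println(appEnv) // prints "dev"
    }

    func main() {
        setup()
        fmt.Println(appEnv) // still prints "prod" -- the global was never touched
    }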
Nah, these things are all stupid as hell. Any back and forth between a human and an LLM on problem-solving coding tasks is an absolute disaster.
People here, and certainly in the mainstream population, see some knowledge and just naturally expect intelligence to go with it. But it doesn't follow. Wikipedia has knowledge. Books have knowledge. LLMs are just the latest iteration of how humans store knowledge. That's about it; everything else is a hyped-up bubble. There's nothing in physics that stops us from creating an artificial, generally intelligent being, but it's NEVER going to be with auto-regressive next-token prediction.
In particular, this is one of the most important tips [0]: large changes are best performed as a sequence of thoughtful, bite-sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
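For example, a rename like the one in the top comment might be broken into steps like this (a hypothetical session with paraphrased prompts, not output from a real run; the variable and file names are made up):

    $ aider main.go config.go
    > Where is the global variable appEnv read or written? Don't change anything yet.
    > Refactor to prepare: extract those reads and writes into small helper functions, with behavior unchanged.
    > Now rename appEnv to environment, updating the helpers.
    > Any code quality or structure improvements you'd suggest for config.go?

Each step produces a diff small enough to review before moving on, which is where most of the safety comes from.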
Not sure if this was a factor in your attempts? I'd be happy to help if you'd like to open a GitHub issue [1] or jump into our Discord [2].
[0] https://github.com/paul-gauthier/aider#tips
[1] https://github.com/paul-gauthier/aider/issues/new/choose
I actually agree in the general case, but for specific applications these tools can be seriously awesome. Case in point: this repo of mine, which I think it's fair to say was 80% written by GPT-4 via Aider.
https://github.com/epiccoleman/scrapio
Now of course this is a very simple project, which is obviously going to get better results. And if you read through the commit history [1], you can see that I had to have a pretty good idea of what needed to be done to get useful output from the LLM. There are places where I had to figure out something the LLM was never going to get on its own, places where I made manual changes because directing the AI to do it would have been more trouble than it was worth, etc.
But to me, the cool thing about this project was that I just wouldn't have bothered to do it if I had to do all the work myself. Realistically I just wanted to download and process a list of like 15 URLs, and writing a scraper by hand wouldn't have made sense for the amount of time it would have saved me. But because I knew specifically what needed to happen, and was able to provide detailed requirements, I saved a ton of time and labor and wound up with something useful.
I've tried to use these sorts of tools for tasks in bigger and more complicated repos, and I agree that in those cases they really tend to swing and miss more often than not. But if you're smart enough to use it as the tool it is and recognize the limitations, LLM-aided dev can be seriously great.
[1]: https://github.com/epiccoleman/scrapio/commits/master/?befor...
Usually you do this with a human as an investment in their future performance, with the understanding that this is the least efficient way to get the job done in the short term.
Having to take a product that is already supposed to "grok code" and make a similar investment doesn't make any sense to me.
LLM-fu is very much a real thing. These tools are surprisingly difficult to use well, but they can be very powerful.