> when exploring new solutions and "unknown territory"
If it’s something I have no idea how to do, I might describe the problem and just look at the code it spits out; not even copy-pasting, just reading for a basic idea.
> how do you compare it with "regular search" via Google/Bing
Much worse when there’s a blog post or documentation example that’s exactly what I’m looking for; much better when it’s something novel.
An example:
Recently asked how I could convert pressure and temperature data to “skew T” coordinates for a meteorological plot. Not something easy to Google, and the answers the AI gave were slightly wrong, but it gave me a foot in the door.
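For the curious, the transform in question can be sketched roughly like this. This is just one common parameterization, not the AI's answer; the reference pressure and skew factor here are assumptions, and real tooling (e.g. MetPy's SkewT plot class) does this properly via a rotated matplotlib projection:

```python
import math

# One common skew-T log-p parameterization; p0 (reference pressure, hPa)
# and skew (temperature units per unit of log-pressure) are assumptions.
def skewt_coords(temp_c, pressure_hpa, p0=1000.0, skew=30.0):
    """Map a (temperature, pressure) sample to (x, y) plot coordinates."""
    y = -math.log(pressure_hpa / p0)   # log-pressure axis: surface at y = 0
    x = temp_c + skew * y              # isotherms lean rightward with height
    return x, y

# A surface sample (1000 hPa) stays at y = 0, so x is just the temperature;
# higher up, the same temperature shifts to the right.
x_sfc, y_sfc = skewt_coords(20.0, 1000.0)
x_500, y_500 = skewt_coords(20.0, 500.0)
```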
I do agree though, these basic examples do seem quite pointless, if you already know what you’re doing. It’s just as pointless as telling another developer to “add a name param to ‘greeting’ function, add all types”, which you’d then have to review.
I think it comes down to your level of experience though. If you have years and years of experience and have honed your search skills and are perfectly comfortable, then I suspect there isn’t a lot that an LLM is going to do when it comes to writing chunks of code. That’s how I’ve felt about all these “write a chunk of code” tools.
In my case, apart from automating the kind of repetitive, mindless work I mentioned, it’s just been a glorified autocomplete. It works -really- well for that, especially with comments. Oftentimes I find myself adding a little comment that explains what I’m about to do, and then boop, I’ve got the next few lines autocompleted with no surprises.
I had to work without an internet connection a few days ago and it really, really hit me how much I’ve come to use that autocomplete - I barely ever type anything to completion anymore, it was jarring, having to type everything by hand. I didn’t realise how lazy my typing had become.
Can you explain more how "checking everything is right" takes just a few seconds as well? A code review can't happen in "just a few seconds", so maybe I don't understand what the process you're describing really is.
We live in a world with everything from macro systems and code generation to higher-order functions and types... if you find yourself writing the same "boilerplate" enough times that you find it annoying, just automate it, the same way you can automate anything else we do using software. I have found myself writing very little "boilerplate" in my decades of software development, as I'd rather at the extreme (and it almost never comes to this) throw together a custom compiler than litter my code with a bunch of hopefully-the-same-every-time difficult-to-adjust-later "boilerplate".
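A tiny illustration of that "just automate it" point, with entirely hypothetical names: instead of copy-pasting the same try/except retry loop around every unreliable call, write the abstraction once.

```python
import functools

# Hypothetical example of folding repeated boilerplate into one abstraction:
# a retry decorator written once, instead of copy-pasted try/except loops.
def with_retries(times=3):
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # real code would catch narrower types
                    last_exc = exc
            raise last_exc
        return wrapper
    return deco

@with_retries(times=3)
def flaky_fetch():
    ...  # stand-in for an unreliable call
```

The same idea scales up: snippets, codegen, macros, or (at the extreme the commenter mentions) a custom compiler are all ways of writing the pattern once instead of n times.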
There's zero technical challenge, almost no logic, super tedious for a human to do, not quite automatable since there could be any kind of code in those views, and it's very very unlikely that the LLM gets it wrong. I give it a quick look over, it looks right, the tests pass, it's not really a big deal.
And one nice thing I did as well was ask it to "move all logic to the top of the file", which makes it -very- easy to clean up all the "quick fix" cruft that's built up over years that needs to be cleaned up or refactored out.
In those cases the file might indeed need more time dedicated to it, but it would've needed it either way.
The current wave of coding assistants targets junior programmers who don't know how to even start approaching a task. LLMs are quite good at spitting out code that will create a widget or instantiate a client for a given API, figuring out all the parameters and all the incantations that you'd otherwise need to copy-paste from the documentation. In a way, they are documentation "search and digest" tools.
While that's also useful for senior developers when they need to work outside of their particular focus area, it's not that useful to help you work on a mature codebase where you have your own abstractions and all sorts of custom things that have good reasons to be there but are project specific.
Sure, we could eventually have LLMs that can be fine tuned to your specific projects, company or personal style.
But there is also another area where we can use intelligent assistants: editors.
Right now editors offer powerful tools to move around and replace text, often in ways that respect the syntax of the language. But they're cumbersome to use and learn, relying on key bindings or complicated "refactoring" commands.
I wish there was a way for me to have a smarter editor. Something that understands the syntax and a bit of the semantics of the code, but also the general intent of the local change I'm working on and the wider context, so it can help me apply the right edits.
For example, right now I'm factoring out a part of a larger function into its own function so it can be called independently.
I know there are editor features that predate AI that can do this work, but for various reasons I can't use them. For example, you may have started to do it manually because it seemed simple, and then you realize you have to factor out 5 parameters and it becomes a boring exercise of copy-paste. Another example is that the function extraction refactoring tool of your IDE just can't handle the case. Say you have func A(a Foo) { b := a.GetBar(); Baz(b.X, b.Y, c, d) } and you'd want to extract a function func _A(b Bar) { Baz(b.X.... and have A call that. In some simple cases the IDE can do that. In others you need to do it manually.
I want an editor extension that can help me with the boring parts of shuffling parameters around, moving them into structures, etc., all while I'm in control of the shape of the code, but without having to remember the advanced editor commands; instead I'd augment my actions with some natural language comments (written or even spoken!).
I paste the table definition into a comment, and let the LLM generate the model (if the ORM doesn't automate it), the list of validation rules, custom type casts, whatever specifics your project has. None of it is new or technically challenging, it's just autocompleting stuff I was going to write anyway.
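As a sketch of what that looks like (hypothetical table, and an ORM-agnostic dataclass as a stand-in — in practice the model and validation syntax would match your project's framework):

```python
# -- pasted table definition the LLM works from:
# CREATE TABLE users (
#     id         BIGINT PRIMARY KEY,
#     email      VARCHAR(255) NOT NULL,
#     nickname   VARCHAR(50),
#     created_at TIMESTAMP NOT NULL
# );

# ...and the kind of model it autocompletes from that comment:
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class User:
    id: int
    email: str
    nickname: Optional[str]   # nullable column -> Optional field
    created_at: datetime

# Validation rules mirroring the column constraints (framework-specific
# in real code; plain data here).
VALIDATION_RULES = {
    "email": ["required", "max:255"],
    "nickname": ["nullable", "max:50"],
}
```

None of this is hard to write by hand; the point is that it's fully determined by the table definition, which is exactly the kind of transcription an LLM rarely gets wrong.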
It's not that you're writing "too much" boilerplate; this is a tiny part of my work as well. This is just the one part where I've actually found an LLM useful. Any time I feel like "yeah this doesn't require thought, just needs doing", I chuck it over to an LLM to do.
core programming hasn't really changed over the years, with good reason: you need. to. understand. what you do. this is the bottleneck, not writing it.
Telling it what I want to do in broader terms and asking for code examples is a lot better, especially for something I don't know how to do.
Otherwise, the autocomplete/suggestions in the editor are great for the minutiae, tedious crap, and utility functions. Probably saves me about 20% of my typing, which is great for hands that have been typing for 20-odd years.
It's also good for finding tools and libraries (when it doesn't hallucinate) since https://libs.garden disappeared inexplicably (dunno what to do on Friday nights now that I can't browse through that wonderful site till 2am)
I have trouble understanding the "boilerplate" thing because avoiding writing boilerplate is
1) already a solved "problem" long before AI
2) is it really a "problem"?
The first point: * If you find yourself writing the same piece of code over and over again in the same codebase, it's an indication that you should abstract it away as a function / class / library.
* IDEs have had snippets / code completion for a long time to save you from writing the same pieces of code.
* Large pieces of recycled functionality are generally abstracted away in libraries or frameworks.
* Things like "writing similar static websites a million times" are the reason why solutions like WordPress exist: to take away the boilerplate part of writing websites. This of course applies to solutions / technologies / services that make "avoid writing boilerplate code" their core business
* The only type of real boilerplate that comes to my mind is things like "start a new React application", but that is a thing you do once per project, and it's the reason why bootstrappers exist: you only really have to type "npx create-react-app my-app" once and the boilerplate part is taken care of.
The second point: Some mundane refactoring / translation of pieces of code from one technology to another can actually be automated by AI (I think that's what you're talking about here, but how often does one really do such tasks?), but... do you really want to? Automate it, I mean?
I mean, yes, "let AI do the boring stuff so that I can concentrate on the most interesting parts" makes sense, but it's not something I want to do. Maybe it's because I'm aging, but I don't have it in me to concentrate on demanding, difficult, tiring tasks 8 hours straight a day. It's not something that I can do, and it's also something that I don't want to.
I much prefer alternating hard stuff that requires 100% of my attention with lighter tasks that I can do while listening to a podcast, letting off steam in order to rest my brain before going back to a harder task. Honestly, I don't think anyone is supposed to be concentrated on demanding stuff all day long, all week long. That's a recipe for burnout.
- create migration files locally, run statements against a containerized local postgres instance
- use a custom data extractor script in IntelliJ's data tool to generate r2dbc DAO files with a commented-out CSV table containing column_name, data_type, kotlin_type, is_nullable as headers
- let the AI assistant handle the rest
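The mechanical middle step can be sketched like this (a tiny Python stand-in, not the actual IntelliJ extractor; the type table and names are assumptions): turn the commented-out CSV into Kotlin-style field declarations, which is the part the assistant then expands into full DAO code.

```python
import csv
import io

# Assumed fallback mapping for when the kotlin_type column is blank.
PG_TO_KOTLIN = {"bigint": "Long", "varchar": "String", "timestamp": "Instant"}

def kotlin_fields(csv_text):
    """Turn the CSV column table into Kotlin-style field declarations."""
    fields = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        kt = row["kotlin_type"] or PG_TO_KOTLIN[row["data_type"]]
        if row["is_nullable"].lower() == "yes":
            kt += "?"   # nullable column -> nullable Kotlin type
        fields.append(f'val {row["column_name"]}: {kt}')
    return fields

table = """column_name,data_type,kotlin_type,is_nullable
id,bigint,Long,no
name,varchar,String,yes"""
# yields: val id: Long / val name: String?
```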
I find it is faster in lots of cases where the solution is 'simple' but long and a bit fiddly. As a concrete example from earlier today, I needed a function that took a polygon and returned a list of its internal angles. Can I write it myself? Sure. Did Copilot generate the code (and unit tests) for me in a fraction of the time it would have taken me? Absolutely.
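Not the commenter's Copilot output, but a minimal sketch of that kind of function, assuming convex polygons (concave ones would additionally need an orientation check to pick the reflex angle):

```python
import math

def interior_angles(points):
    """Interior angle (degrees) at each vertex of a convex polygon.

    points: list of (x, y) vertices in order, clockwise or counter-clockwise.
    """
    n = len(points)
    angles = []
    for i in range(n):
        px, py = points[i - 1]          # previous vertex
        cx, cy = points[i]              # current vertex
        nx, ny = points[(i + 1) % n]    # next vertex
        v1 = (px - cx, py - cy)         # edge back to previous vertex
        v2 = (nx - cx, ny - cy)         # edge forward to next vertex
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        cos_a = max(-1.0, min(1.0, dot / norm))   # clamp rounding noise
        angles.append(math.degrees(math.acos(cos_a)))
    return angles

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# each angle is 90 degrees; an n-gon's angles sum to (n - 2) * 180
```

Exactly the kind of thing that's quicker to review than to type: short, fiddly indexing, and trivially checkable against the angle-sum formula.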
That demo GIF is just showing a toy example. To see what it's like to work with aider on more complex changes you can check out the examples page [0].
The demo GIF was just intended to convey the general workflow that aider provides: you ask for some changes and aider shares your existing code base with the LLM, collects back the suggested code edits, applies them to your code and git commits with a sensible commit message.
This workflow is generally a big improvement over manually cutting and pasting bits of code back and forth between the ChatGPT UI and your IDE.
Beyond just sending the code that needs to be edited, aider also sends GPT a "repository map" [1] that gives it the overall context of your codebase. This makes aider more effective when working in larger code bases.
Me: Can you give me code for a simple image viewer in python? It should be able to open images via a file open dialog as well as show the previous and next image in the folder
GPT: [code doing that with tkinter]
Me: That code has a bug because the path handling is wrong on windows
GPT: [tries to convince me that the code isn't broken, fixes it regardless]
Me: Can you add keyboard shortcuts for the previous and next buttons
GPT: [adds keyboard shortcuts]
After that I did all development the old fashioned way, but that alone saved me a good chunk of time. Since it was just internal tooling for myself code quality didn't matter, and I wasn't too upset about the questionable error handling choices
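Not the code GPT produced, but the folder-navigation part of such a viewer can be sketched with pathlib, which also sidesteps the kind of Windows path-separator bug mentioned above (the extension list and function names here are made up):

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp"}

def sibling_images(image_path):
    """All images in the same folder, sorted; pathlib keeps separators portable."""
    p = Path(image_path)
    return sorted(f for f in p.parent.iterdir()
                  if f.suffix.lower() in IMAGE_EXTS)

def step(image_path, offset):
    """Path of the next (+1) or previous (-1) image, wrapping around."""
    files = sibling_images(image_path)
    i = files.index(Path(image_path))
    return files[(i + offset) % len(files)]
```

The GUI layer (tkinter window, file-open dialog, key bindings for the prev/next buttons) would sit on top of these two functions.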