432 points tosh | 21 comments
vander_elst ◴[] No.39998806[source]
With all these AI tools requiring a prompt, does it really simplify/speed things up? From the example: I have to write "add a name param to the 'greeting' function, add all types", then wait for the result to be generated, read it carefully to be sure it does what I want, and probably iterate if the result doesn't match my expectation. This seems more time-consuming to me than just doing the work myself. Does anyone have examples where prompting and double-checking is faster than doing it on your own? Is it faster when exploring new solutions and "unknown territory", and in that case, are the answers accurate (from what I've tried so far they were far off)? In that case, how does it compare with "regular search" via Google/Bing/...? Sorry for the silly question but I'm genuinely trying to understand.
replies(13): >>39998888 #>>39998953 #>>39998965 #>>39999501 #>>39999580 #>>39999752 #>>40000023 #>>40000260 #>>40000635 #>>40001009 #>>40001669 #>>40001763 #>>40002076 #
1. pocketarc ◴[] No.39998965[source]
Personally the use for me has been in writing boilerplate. As an example, one of my ongoing goals has been to port all the view code of a project to another framework, following its idioms. Using an LLM, I can process a file in a couple of seconds, and checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually, and it’d be prone to human error. It’s not technically challenging stuff, just tedious and mind-numbing, which is perfect for an LLM.

I do agree though, these basic examples do seem quite pointless, if you already know what you’re doing. It’s just as pointless as telling another developer to “add a name param to ‘greeting’ function, add all types”, which you’d then have to review.

I think it comes down to your level of experience though. If you have years and years of experience and have honed your search skills and are perfectly comfortable, then I suspect there isn’t a lot that an LLM is going to do when it comes to writing chunks of code. That’s how I’ve felt about all these “write a chunk of code” tools.

In my case, apart from automating the kind of repetitive, mindless work I mentioned, it’s just been a glorified autocomplete. It works -really- well for that, especially with comments. Oftentimes I find myself adding a little comment that explains what I’m about to do, and then boop, I’ve got the next few lines autocompleted with no surprises.
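To give a made-up example of what that flow looks like (Kotlin here purely for illustration, and the names are hypothetical): the comment is what I type, the function is roughly what gets autocompleted.

    // Return the initials of a full name, e.g. "Ada Lovelace" -> "A.L."
    fun initials(fullName: String): String =
        fullName.split(" ")
            .filter { it.isNotBlank() }
            .joinToString(".", postfix = ".") { it.first().uppercase() }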

I had to work without an internet connection a few days ago and it really, really hit me how much I’ve come to rely on that autocomplete - I barely ever type anything to completion anymore, and it was jarring having to type everything by hand. I didn’t realise how lazy my typing had become.

replies(8): >>39999242 #>>39999317 #>>39999370 #>>39999411 #>>39999436 #>>40000278 #>>40000389 #>>40001531 #
2. nox101 ◴[] No.39999242[source]
> Using an LLM, I can process a file in a couple of seconds, and checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually

Can you explain more how "checking that everything is right takes just a few seconds as well"? A code review can't happen in "just a few seconds", so maybe I don't understand what the process you're describing really is.

replies(1): >>39999382 #
3. saurik ◴[] No.39999317[source]
> Personally the use for me has been in writing boilerplate.

We live in a world with everything from macro systems and code generation to higher-order functions and types... if you find yourself writing the same "boilerplate" enough times that you find it annoying, just automate it, the same way you can automate anything else we do using software. I have found myself writing very little "boilerplate" in my decades of software development, as I'd rather at the extreme (and it almost never comes to this) throw together a custom compiler than litter my code with a bunch of hopefully-the-same-every-time difficult-to-adjust-later "boilerplate".

replies(1): >>39999541 #
4. vander_elst ◴[] No.39999370[source]
Thanks for the reply! I'll try it for commenting.
5. pocketarc ◴[] No.39999382[source]
In the example I gave, it was just porting the same view code from one framework's way of writing view code to another. It's a one-off task involving hundreds of different views.

There's zero technical challenge, almost no logic, it's super tedious for a human to do, not quite automatable since there could be any kind of code in those views, and it's very, very unlikely that the LLM gets it wrong. I give it a quick look over, it looks right, the tests pass; it's not really a big deal.

And one nice thing I did as well was ask it to "move all logic to the top of the file", which makes it -very- easy to spot all the "quick fix" cruft that's built up over the years and needs to be cleaned up or refactored out.

In those cases the file might indeed need more time dedicated to it, but it would've needed it either way.
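To give a rough, made-up idea of what that "move all logic to the top" step produces (the real frameworks aren't named here, so this is only a sketch in Kotlin using a string-template "view", with hypothetical names):

    data class Item(val name: String, val price: Double)
    data class Order(val items: List<Item>)

    // Before: logic buried inside the template
    fun orderSummary(order: Order): String = """
        <p>${if (order.items.isEmpty()) "No items" else "${order.items.size} items"}</p>
        <p>Total: ${"%.2f".format(order.items.sumOf { it.price })}</p>
    """.trimIndent()

    // After: the same view with the logic hoisted to the top
    fun orderSummaryRefactored(order: Order): String {
        val itemLabel = if (order.items.isEmpty()) "No items" else "${order.items.size} items"
        val total = "%.2f".format(order.items.sumOf { it.price })
        return """
            <p>$itemLabel</p>
            <p>Total: $total</p>
        """.trimIndent()
    }

The cruft is much easier to see once every computation sits in a named val at the top.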

6. vander_elst ◴[] No.39999411[source]
I don't have the impression I'm writing too much boilerplate, but I am curious about this, as I've heard it multiple times: are there more examples of boilerplate that an LLM is better/faster at generating than a couple of copy/pastes? And if it's more than a couple of copy/pastes and it's time for a rewrite, do you leverage AI for that? How do you usually introduce the abstraction?
replies(1): >>39999502 #
7. mcluck ◴[] No.39999436[source]
Call me a caveman, but the lack of an option to use AI tools offline is a massive downside to me. I am connected to the internet most of the time, but I take comfort in knowing that, for most of my work, I could lose my connection and not even notice.
replies(2): >>39999843 #>>40001497 #
8. pocketarc ◴[] No.39999502[source]
One example of boilerplate that I've been automating is when you're creating model code for your ORM.

I paste the table definition into a comment, and let the LLM generate the model (if the ORM doesn't automate it), the list of validation rules, custom type casts, whatever specifics your project has. None of it is new or technically challenging, it's just autocompleting stuff I was going to write anyway.
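As a rough illustration (hypothetical table and field names, plain Kotlin rather than any particular ORM's API):

    import java.time.OffsetDateTime

    // Pasted table definition:
    // CREATE TABLE articles (
    //     id           BIGSERIAL PRIMARY KEY,
    //     title        VARCHAR(200) NOT NULL,
    //     body         TEXT NOT NULL,
    //     published_at TIMESTAMPTZ
    // );

    // Roughly what gets autocompleted from the comment above: a model whose
    // nullability mirrors the columns, plus the validation rules I'd otherwise
    // type out by hand.
    data class Article(
        val id: Long,
        val title: String,
        val body: String,
        val publishedAt: OffsetDateTime?, // nullable column
    )

    val articleValidationRules = mapOf(
        "title" to listOf("required", "max:200"),
        "body" to listOf("required"),
        "publishedAt" to listOf("nullable", "date"),
    )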

It's not that you're writing "too much" boilerplate; this is a tiny part of my work as well. This is just the one part where I've actually found an LLM useful. Any time I feel like "yeah this doesn't require thought, just needs doing", I chuck it over to an LLM to do.

replies(1): >>40001003 #
9. rolisz ◴[] No.39999541[source]
I'd say that using LLMs to write boilerplate falls under "automation"
replies(3): >>39999919 #>>39999975 #>>40000082 #
10. thomashop ◴[] No.39999843[source]
That's just not the reality anymore. You can run a decent open-source coding model on local hardware. It just needs a bit of work and isn't quite as seamless.
11. Liskni_si ◴[] No.39999919{3}[source]
Yes, except it's "bad automation", because, as opposed to the automation the GP referred to, boilerplate written by an LLM (or an intern, or whomever) is extra code that costs a lot of time to maintain.
replies(2): >>40000752 #>>40001205 #
12. rsynnott ◴[] No.39999975{3}[source]
But, perhaps uniquely amongst all the systems for avoiding boilerplate since Lisp macros were introduced in the 1950s, it will sometimes make stuff up. I don't buy that "a worse way to write boilerplate" is going to revolutionise programming.
13. ◴[] No.40000082{3}[source]
14. pistacchioso ◴[] No.40000278[source]
Most of the discussions about AI applied to coding end up having someone who states that it's just not worth it (at least for the moment) and someone else who then chimes in to say that they mostly use it for "boilerplate" code.

I have trouble understanding the "boilerplate" thing, because:

1) avoiding boilerplate was already a solved "problem" long before AI

2) is it really a "problem" in the first place?

The first point:

* If you find yourself writing the same piece of code over and over again in the same codebase, that's an indication that you should abstract it away into a function / class / library.

* IDEs have had snippets / code completion for a long time to save you from writing the same pieces of code.

* Large pieces of recycled functionality are generally abstracted away into libraries or frameworks.

* Things like "writing similar static websites a million times" are the reason why solutions like WordPress exist: to take away the boilerplate part of writing websites. This of course applies to any solution / technology / service that makes "avoiding boilerplate code" its core business.

* The only type of real boilerplate that comes to mind is things like "start a new React application", but that's something you do once per project, and it's the reason bootstrappers exist: you only really have to type "npx create-react-app my-app" once and the boilerplate part is taken care of.

The second point: some mundane refactoring / translation of pieces of code from one technology to another can actually be automated by AI (I think that's what you're talking about here, but how often does one really do such tasks?), but... do you really want to? Automate it, I mean?

I mean, yes "let AI do the boring staff so that I can concentrate on the most interesting parts" make sense, but it's not something I want to do. Maybe it's because I'm aging, but I don't have it in me to be concentrated on demanding, difficult, tiring tasks 8 hour straight a day. It's not something that I can and it's also something that I don't want to.

I much prefer alternating hard stuff that requires 100% of my attention with lighter tasks that I can do while listening to a podcast, letting off steam and resting my brain before going back to a harder task. Honestly, I don't think anyone is supposed to be concentrated on demanding stuff all day long, all week long. That's a recipe for burnout.

replies(1): >>40007110 #
15. dwighttk ◴[] No.40000389[source]
>checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually

How can you check it in a few seconds if it'd take you hours to change it manually?

16. kolme ◴[] No.40000752{4}[source]
And might be wrong.
17. ssousa666 ◴[] No.40001003{3}[source]
I've found this very useful as well. My typical workflow (server-side kotlin + spring) has been:

- create migration files locally, run the statements against a containerized local postgres instance
- use a custom data extractor script in IntelliJ's data tool to generate r2dbc DAO files with a commented-out CSV table containing column_name, data_type, kotlin_type, is_nullable as headers
- let the AI assistant handle the rest
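Roughly, a generated DAO ends up looking something like this (hypothetical table, and assuming Spring Data R2DBC's @Table/@Id/@Column annotations plus a coroutine repository; the CSV comment stands in for the extractor output):

    import org.springframework.data.annotation.Id
    import org.springframework.data.relational.core.mapping.Column
    import org.springframework.data.relational.core.mapping.Table
    import org.springframework.data.repository.kotlin.CoroutineCrudRepository
    import java.time.OffsetDateTime

    // column_name, data_type,   kotlin_type,    is_nullable
    // id,          bigint,      Long,           NO
    // email,       varchar,     String,         NO
    // full_name,   varchar,     String?,        YES
    // created_at,  timestamptz, OffsetDateTime, NO

    @Table("users")
    data class UserEntity(
        @Id val id: Long? = null, // null before insert; assigned by the DB
        @Column("email") val email: String,
        @Column("full_name") val fullName: String?,
        @Column("created_at") val createdAt: OffsetDateTime,
    )

    interface UserRepository : CoroutineCrudRepository<UserEntity, Long>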

18. navane ◴[] No.40001205{4}[source]
In many ways it's more akin to outsourcing than automating.
19. wsintra2022 ◴[] No.40001497[source]
Try Ollama. You can run models locally.
20. latexr ◴[] No.40001531[source]
> It’d take me hours to go through every file manually, and it’d be prone to human error.

The LLM output is also extremely prone to error, so it’s not like the second part of your sentence is a valid argument.

21. fragmede ◴[] No.40007110[source]
Because we're all working at different companies, on different codebases, in different languages, doing different things, we're all talking abstractly about something we think is the same when really it isn't. It's obvious to every programmer to make a library if you're copying and pasting code multiple times, but the overhead of that means you don't bother if you only do it once or twice. The problem, in humans as with LLMs, is the context window. After running create-react-app or whatever, there are a number of steps to do that are repetitive, but because they don't happen often enough to be worth fully automating, you just do them manually. LLMs let you handle that level of boilerplate without all the overhead of manually configuring snippets in the IDE.