
432 points tosh | 2 comments
vander_elst
With all these AI tools requiring a prompt, do they really simplify/speed things up? From the example: I have to write "add a name param to the 'greeting' function, add all types", then wait for the result to be generated, read it carefully to be sure that it does what I want, and probably iterate if the result doesn't match my expectation. That seems more time-consuming to me than just doing the work myself. Does anyone have examples where prompting and double-checking is faster than doing it on your own? Is it faster when exploring new solutions and "unknown territory", and if so, are the answers accurate (from what I've tried so far they were far off)? And how does that compare with "regular search" via Google/Bing/...? Sorry for the silly question, but I'm genuinely trying to understand.
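
(For concreteness, the prompt in that example is asking for a change roughly like this; the TypeScript below is a made-up illustration, not code from the article.)

    // Made-up illustration of what that example prompt asks for
    // (names and bodies are invented).

    // Before the prompt: no parameter, no type annotations
    function greetingBefore() {
      return "Hello!";
    }

    // After "add a name param to the 'greeting' function, add all types"
    function greetingAfter(name: string): string {
      return `Hello, ${name}!`;
    }
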
pocketarc
Personally, the main use for me has been writing boilerplate. As an example, one of my ongoing goals has been to port all the view code of a project to another framework, following its idioms. Using an LLM, I can process a file in a couple of seconds, and checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually, and it’d be prone to human error. It’s not technically challenging stuff, just tedious and mind-numbing, which is perfect for an LLM.
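
(A rough sketch of the per-file loop being described, in TypeScript. callLLM, the prompt text, and the test command are stand-ins for whatever is actually used, not a real API.)

    import { readFileSync, writeFileSync } from "node:fs";
    import { execSync } from "node:child_process";

    // Placeholder, not a real API; wire this to whatever model/tool you actually use.
    async function callLLM(prompt: string): Promise<string> {
      throw new Error("replace with a real model call");
    }

    async function portViewFile(path: string): Promise<void> {
      const source = readFileSync(path, "utf8");
      const ported = await callLLM(
        "Port this view to the new framework, following its idioms:\n\n" + source,
      );
      writeFileSync(path, ported);                 // a couple of seconds per file
      execSync("npm test", { stdio: "inherit" });  // quick check that nothing obvious broke
    }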

I do agree though, these basic examples do seem quite pointless, if you already know what you’re doing. It’s just as pointless as telling another developer to “add a name param to ‘greeting’ function, add all types”, which you’d then have to review.

I think it comes down to your level of experience though. If you have years and years of experience and have honed your search skills and are perfectly comfortable, then I suspect there isn’t a lot that an LLM is going to do when it comes to writing chunks of code. That’s how I’ve felt about all these “write a chunk of code” tools.

In my case, apart from automating the kind of repetitive, mindless work I mentioned, it’s just been a glorified autocomplete. It works -really- well for that, especially with comments. Oftentimes I find myself adding a little comment that explains what I’m about to do, and then boop, I’ve got the next few lines autocompleted with no surprises.
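
(Invented example of that comment-then-autocomplete pattern: the human types the one-line comment, the tool fills in the lines below it.)

    // Invented example; the data and the comment are placeholders.
    interface User {
      name: string;
      signupDate: Date;
    }

    const users: User[] = [
      { name: "Ada", signupDate: new Date("2021-03-01") },
      { name: "Grace", signupDate: new Date("2023-07-15") },
    ];

    // sort users by signup date, newest first   <-- the human-typed hint
    const sortedUsers = [...users].sort(
      (a, b) => b.signupDate.getTime() - a.signupDate.getTime(),
    );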

I had to work without an internet connection a few days ago and it really, really hit me how much I’ve come to rely on that autocomplete - I barely ever type anything to completion anymore, and it was jarring having to type everything by hand. I didn’t realise how lazy my typing had become.

1. nox101
> Using an LLM, I can process a file in a couple of seconds, and checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually

Can you explain more how "checking that everything is right takes just a few seconds as well"? A code review can't happen in "just a few seconds", so maybe I don't understand what the process you're describing really is.

2. pocketarc
In the example I gave, it was just porting view code from one framework's way of writing views to another. It's a one-off task involving hundreds of different views.

There's zero technical challenge, almost no logic, super tedious for a human to do, not quite automatable since there could be any kind of code in those views, and it's very very unlikely that the LLM gets it wrong. I give it a quick look over, it looks right, the tests pass, it's not really a big deal.

And one nice thing I did as well was ask it to "move all logic to the top of the file", which makes it -very- easy to see all the "quick fix" cruft that’s built up over the years and needs to be cleaned up or refactored out.
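
(A made-up before/after of that "move all logic to the top of the file" step; the view and field names are invented.)

    // Made-up illustration: the same view with its logic inline vs. hoisted to the top.
    interface Order {
      items: { price: number }[];
      vip: boolean;
    }

    // Before: logic buried inside the markup
    function orderSummaryBefore(order: Order): string {
      return `<p>Total: ${(order.items.reduce((sum, i) => sum + i.price, 0) * (order.vip ? 0.9 : 1)).toFixed(2)}</p>`;
    }

    // After: the logic sits at the top, where the accumulated quick fixes are easy to spot and refactor
    function orderSummaryAfter(order: Order): string {
      const subtotal = order.items.reduce((sum, i) => sum + i.price, 0);
      const total = order.vip ? subtotal * 0.9 : subtotal;
      return `<p>Total: ${total.toFixed(2)}</p>`;
    }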

In those cases the file might indeed need more time dedicated to it, but it would've needed it either way.