
432 points tosh | 3 comments
vander_elst No.39998806
With all these AI tools requiring a prompt, do they really simplify/speed things up? From the example: I have to write "add a name param to the 'greeting' function, add all types", then wait for the result to be generated, read it carefully to be sure it does what I want, and probably iterate if the result doesn't match my expectation. That seems more time-consuming to me than actually doing the work myself. Does anyone have examples where prompting and double-checking is faster than doing it on your own? Is it faster when exploring new solutions and "unknown territory", and in that case, are the answers accurate (from what I've tried so far they were far off)? In that case, how do you compare it with "regular search" via Google/Bing/...? Sorry for the silly question, but I'm genuinely trying to understand.
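For concreteness, the edit that example prompt describes would amount to something like this (a hypothetical sketch in TypeScript; the original function body isn't shown in the thread, so the greeting text is invented):

```typescript
// Before: function greeting() { return "Hello!"; }
// After the prompt "add a name param to the 'greeting' function, add all types":
function greeting(name: string): string {
  return `Hello, ${name}!`;
}
```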
pocketarc No.39998965
Personally, the main use for me has been writing boilerplate. As an example, one of my ongoing goals has been to port all the view code of a project to another framework, following its idioms. Using an LLM, I can process a file in a couple of seconds, and checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually, and it’d be prone to human error. It’s not technically challenging stuff, just tedious and mind-numbing, which is perfect for an LLM.

I do agree though, these basic examples do seem quite pointless, if you already know what you’re doing. It’s just as pointless as telling another developer to “add a name param to ‘greeting’ function, add all types”, which you’d then have to review.

I think it comes down to your level of experience though. If you have years and years of experience and have honed your search skills and are perfectly comfortable, then I suspect there isn’t a lot that an LLM is going to do when it comes to writing chunks of code. That’s how I’ve felt about all these “write a chunk of code” tools.

In my case, apart from automating the kind of repetitive, mindless work I mentioned, it’s just been a glorified autocomplete. It works -really- well for that, especially with comments. Oftentimes I find myself adding a little comment that explains what I’m about to do, and then boop, I’ve got the next few lines autocompleted with no surprises.
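The comment-then-autocomplete flow described here looks roughly like this (a hypothetical TypeScript example; the function and its behaviour are invented for illustration — you type the comment, the completion fills in the body):

```typescript
// Parse simple KEY=VALUE config text, skipping blank lines and "#" comments.
// (A leading comment like the one above is usually enough context for the
// autocomplete to produce the next few lines.)
function parseConfig(text: string): Map<string, string> {
  const result = new Map<string, string>();
  for (const rawLine of text.split("\n")) {
    const line = rawLine.trim();
    if (line === "" || line.startsWith("#")) continue;
    const idx = line.indexOf("=");
    if (idx < 0) continue;
    result.set(line.slice(0, idx).trim(), line.slice(idx + 1).trim());
  }
  return result;
}
```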

I had to work without an internet connection a few days ago and it really, really hit me how much I’ve come to rely on that autocomplete. I barely ever type anything to completion anymore; it was jarring having to type everything by hand. I didn’t realise how lazy my typing had become.

1. vander_elst No.39999411
I don't have the impression I write much boilerplate, but I'm curious about this, as I've heard it multiple times: are there more examples of boilerplate that an LLM generates better/faster than a couple of copy/pastes? If it's more than a couple of copy/pastes and it's time for a rewrite, do you leverage AI for that? How do you usually introduce the abstraction?
2. pocketarc No.39999502
One example of boilerplate that I've been automating is when you're creating model code for your ORM.

I paste the table definition into a comment and let the LLM generate the model (if the ORM doesn't automate it), the list of validation rules, custom type casts, and whatever other specifics your project has. None of it is new or technically challenging; it's just autocompleting stuff I was going to write anyway.
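A sketch of that workflow (the table, model, and validation rules here are invented for illustration; a real project would use its ORM's own model classes and annotations):

```typescript
// Pasted table definition, left as a comment for the LLM to work from:
// CREATE TABLE users (
//     id         BIGINT PRIMARY KEY,
//     email      VARCHAR(255) NOT NULL,
//     nickname   VARCHAR(50),
//     created_at TIMESTAMP NOT NULL
// );

// Generated model: one field per column, nullability mirroring NOT NULL.
interface User {
  id: number;
  email: string;
  nickname: string | null; // no NOT NULL constraint
  createdAt: Date;
}

// Generated validation rules, matching the column limits above.
function validateUser(u: User): string[] {
  const errors: string[] = [];
  if (u.email.trim() === "") errors.push("email must not be blank");
  if (u.email.length > 255) errors.push("email exceeds 255 characters");
  if (u.nickname !== null && u.nickname.length > 50) {
    errors.push("nickname exceeds 50 characters");
  }
  return errors;
}
```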

It's not that you're writing "too much" boilerplate; this is a tiny part of my work as well. This is just the one part where I've actually found an LLM useful. Any time I feel like "yeah this doesn't require thought, just needs doing", I chuck it over to an LLM to do.

3. ssousa666 No.40001003
I've found this very useful as well. My typical workflow (server-side Kotlin + Spring) has been:

- create migration files locally and run the statements against a containerized local Postgres instance
- use a custom data extractor script in IntelliJ's data tool to generate r2dbc DAO files with a commented-out CSV table containing column_name, data_type, kotlin_type, is_nullable as headers
- let the AI assistant handle the rest
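As a sketch of the kind of file that workflow produces (rendered here in TypeScript for illustration; the actual workflow generates Kotlin r2dbc DAO code, and every column and type name below is invented):

```typescript
// Extractor output, kept as a commented-out CSV for the assistant to expand:
// column_name,  data_type,  kotlin_type,  is_nullable
// id,           bigint,     Long,         false
// title,        varchar,    String,       false
// published_at, timestamp,  Instant?,     true

// Entity shape the assistant derives from the CSV above.
interface Article {
  id: number;
  title: string;
  publishedAt: Date | null; // is_nullable = true
}

// Row-mapping boilerplate of the sort the assistant fills in for the DAO.
function rowToArticle(row: { id: number; title: string; published_at: string | null }): Article {
  return {
    id: row.id,
    title: row.title,
    publishedAt: row.published_at === null ? null : new Date(row.published_at),
  };
}
```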