
432 points by tosh | 1 comment
With all these AI tools requiring a prompt, does it really simplify/speed up things? From the example: I have to write "add a name param to the 'greeting' function, add all types", then wait for the result to be generated, read it carefully to be sure that it does what I want, probably reiterate if the result does not match the expectation. This seems to me more time consuming than actually do the work myself. Does anyone has examples where promoting and double checking is faster than doing it on your own? Is it faster when exploring new solutions and "unknown territory" and in this case, are the answers accurate (from what I tried so far they were far off)? In that case how do you compare it with "regular search" via Google/Bing/...? Sorry for the silly question but I'm genuinely trying to understand
replies(13): >>39998888 #>>39998953 #>>39998965 #>>39999501 #>>39999580 #>>39999752 #>>40000023 #>>40000260 #>>40000635 #>>40001009 #>>40001669 #>>40001763 #>>40002076 #
1. wongarsu ◴[] No.40001763[source]
One example where I successfully used an AI tool (plain ChatGPT) went a bit like this:

Me: Can you give me code for a simple image viewer in python? It should be able to open images via a file open dialog as well as show the previous and next image in the folder

GPT: [code doing that with tkinter]

Me: That code has a bug because the path handling is wrong on windows

GPT: [tries to convince me that the code isn't broken, fixes it regardless]

Me: Can you add keyboard shortcuts for the previous and next buttons

GPT: [adds keyboard shortcuts]

After that I did all development the old-fashioned way, but that alone saved me a good chunk of time. Since it was just internal tooling for myself, code quality didn't matter, and I wasn't too upset about the questionable error-handling choices.
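For reference, a viewer like the one described above is only a few dozen lines of tkinter. This is a hypothetical sketch, not the code ChatGPT actually produced — all names (`Viewer`, `sibling_images`, `step`) are my own, and it uses `pathlib` for the path handling that the second prompt complained was broken on Windows:

```python
import tkinter as tk
from tkinter import filedialog
from pathlib import Path

# Formats tk.PhotoImage can load without the Pillow dependency.
IMAGE_EXTS = {".png", ".gif", ".ppm", ".pgm"}

def sibling_images(path):
    """Sorted list of image files in the same folder as `path`."""
    return sorted(p for p in Path(path).parent.iterdir()
                  if p.suffix.lower() in IMAGE_EXTS)

def step(path, delta):
    """Previous (delta=-1) or next (delta=+1) image in the folder, wrapping."""
    images = sibling_images(path)
    i = images.index(Path(path))
    return images[(i + delta) % len(images)]

class Viewer(tk.Tk):
    def __init__(self):
        super().__init__()
        self.current = None
        self.label = tk.Label(self)
        self.label.pack()
        tk.Button(self, text="Open", command=self.open_dialog).pack(side="left")
        tk.Button(self, text="Prev", command=lambda: self.move(-1)).pack(side="left")
        tk.Button(self, text="Next", command=lambda: self.move(+1)).pack(side="left")
        # Keyboard shortcuts, as asked for in the third prompt.
        self.bind("<Left>", lambda e: self.move(-1))
        self.bind("<Right>", lambda e: self.move(+1))

    def open_dialog(self):
        path = filedialog.askopenfilename(filetypes=[("Images", "*.png *.gif")])
        if path:
            # pathlib sidesteps the separator mix-ups that bite naive
            # string handling on Windows.
            self.show(Path(path))

    def move(self, delta):
        if self.current:
            self.show(step(self.current, delta))

    def show(self, path):
        self.current = path
        # Keep a reference on self, or the image is garbage-collected.
        self.photo = tk.PhotoImage(file=str(path))
        self.label.configure(image=self.photo)
        self.title(path.name)

# To launch the GUI:  Viewer().mainloop()
```

The navigation logic is kept in plain functions (`sibling_images`, `step`) rather than methods, so it can be exercised without opening a window.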