
432 points by tosh | 1 comment
vander_elst:
With all these AI tools requiring a prompt, do they really simplify or speed things up? From the example: I have to write "add a name param to the 'greeting' function, add all types", then wait for the result to be generated, read it carefully to be sure it does what I want, and probably iterate if the result doesn't match my expectation. That seems more time-consuming to me than just doing the work myself. Does anyone have examples where prompting and double-checking is faster than doing it on your own? Is it faster when exploring new solutions and "unknown territory", and in that case, are the answers accurate? (From what I've tried so far they were far off.) And how does that compare with a regular search via Google/Bing/...? Sorry for the silly question, but I'm genuinely trying to understand.
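(For concreteness, the edit that prompt describes is tiny. A minimal before/after sketch, assuming a TypeScript codebase; the body of `greeting` is invented for illustration, since the demo code isn't shown here:)

```typescript
// Before: a parameterless function with no explicit types.
function greeting() {
  return "Hello";
}

// After "add a name param to the 'greeting' function, add all types":
function greeting(name: string): string {
  return `Hello, ${name}`;
}
```

Typing that by hand takes a few seconds, which is the crux of the question above.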
swah:
I feel only a bit bad about deploying a billion-dollar model to ask "how to rename a git branch" every other week. It's the easiest way (https://github.com/tbckr/sgpt) compared to reading the manual, but reading the manual is the right way.
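(For reference, the manual's answer to that particular question is short. A minimal sketch, assuming the branch may also exist on a remote named `origin`:)

```sh
# Rename the branch you are currently on
git branch -m new-name

# Rename a branch while checked out on a different one
git branch -m old-name new-name

# If the old branch was pushed: publish the new name, then delete the old one
git push origin -u new-name
git push origin --delete old-name
```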
nomoreipg:
Not sure if you're talking about ChatGPT or Google.