
323 points by timbilt | 1 comment
RobinL No.42129191
I think this is pretty good advice.

AI sceptics often go too far in assuming users blindly use the AI to do everything (write all the code, write the whole essay). The advice in this article largely mirrors, by analogy, how I use AI for coding: to rubber-duck, to generate ideas, and to ask for feedback, alternatives, and criticism.

Usually it cannot write the whole thing (essay, program) in one go, but by iterating between the AI and myself, I definitely end up with better results.

low_tech_love No.42133352
I teach basic statistics to computer scientists (in the context of quantitative research methods), and this year every single one of my 30+ students used ChatGPT to generate their final report. Besides the telltale wording style, the visualizations all shared the same visual language, so it was obvious. There were glaring, laughable errors in the analyses, graphs, conclusions, etc.
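
To make the kind of error concrete, here is a hypothetical sketch in Python (invented data, not taken from any of the actual reports): a classic blunder is applying an independent-samples t-test to paired before/after measurements, which throws away the pairing, loses power, and flips the conclusion.

    # Hypothetical illustration with invented data (not from any student report).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    before = rng.normal(loc=100, scale=15, size=30)        # baselines vary a lot per student
    after = before + rng.normal(loc=3, scale=2, size=30)   # small but consistent improvement

    # Wrong: treats the samples as independent; between-student variance swamps the effect.
    _, p_ind = stats.ttest_ind(before, after)

    # Right: a paired test compares each student against themselves.
    _, p_rel = stats.ttest_rel(before, after)

    print(f"independent t-test: p = {p_ind:.3f}")  # ~0.4, looks like no effect
    print(f"paired t-test:      p = {p_rel:.2g}")  # tiny, the effect is clear

A student who never pauses to ask whether the test matches the data will happily report the first p-value and conclude nothing happened.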

I remember when I was a student, my teachers would complain that we did “compilation-based programming”: hitting “compile” before thinking about the code we had written, and letting the compiler find the faults. ChatGPT is the new compiler: it produces results so fast that it’s more worthwhile to just turn them in and wait for feedback than to bother thinking about them. I’m sure a large share of these students pass their courses through simple statistics, i.e., teachers being unable to catch every problematic submission.