
323 points timbilt | 7 comments
RobinL ◴[] No.42129191[source]
I think this is pretty good advice.

I think often AI sceptics go too far in assuming users blindly use the AI to do everything (write all the code, write the whole essay). The advice in this article largely mirrors - by analogy - how I use AI for coding. To rubber duck, to generate ideas, to ask for feedback, to ask for alternatives and for criticism.

Usually it cannot write the whole thing (essay, program) in one go, but by iterating between the AI and myself, I definitely end up with better results.

replies(6): >>42129299 #>>42129921 #>>42130127 #>>42132063 #>>42133352 #>>42133641 #
1. aaplok ◴[] No.42129921[source]
> I think often AI sceptics go too far in assuming users blindly use the AI to do everything

Users are not a monolithic group. Some users/students absolutely use AI blindly.

There are also many, many ways to use AI counterproductively. One of the most pernicious I have noticed is users who turn to AI for the initial idea without reflecting about the problem first. This removes a critical step from the creative process, and prevents practice of critical and analytical thinking. Struggling to come up with a solution first before seeing one (either from AI or another human) is essential for learning a skill.

The effect is that people end up lacking self confidence in their ability to solve problems on their own. They give up much too easily if they don't have a tool doing it for them.

replies(4): >>42130158 #>>42133072 #>>42133253 #>>42134700 #
2. RobinL ◴[] No.42130158[source]
Yeah, absolutely agree with that. Blind trust definitely has the potential to be particularly harmful in educational settings.

I guess it's just like many tools: they can be used well or badly, and people need to learn how to use them well to get value from them.

replies(1): >>42131071 #
3. dmafreezone ◴[] No.42131071[source]
Fools need chatGPT most, but wise men only are the better for it. - Ben Franklin
4. fhd2 ◴[] No.42133072[source]
I'm terrified when I see people get a whiff of a problem, and immediately turn to ChatGPT. If you don't even think about the problem, you have a roundabout zero chance of understanding it - and a similar chance of solving it. I run into folks like that very rarely, but when I do, it gives me the creeps.

Then again, I bet some of these people were doing the same with Google in the past, landing on some low quality SEO article that sounds close enough.

Even earlier, I suppose they were asking somebody working for them to figure it out - likely somebody unqualified who babbled together something plausible sounding.

Technology changes, but I'm not sure people do.

replies(1): >>42137537 #
5. signaru ◴[] No.42133253[source]
It gets worse when these users/students run to others when the AI-generated code doesn't work. Or with colleagues who think they already "wrote" the initial essay and then pass it to others to edit and contribute to. In such cases it is usually better to rewrite from scratch and tell them their initial work is not useful at all and not worth spending time improving upon.
6. Miraltar ◴[] No.42134700[source]
Using LLMs blindly will lead to poor results on complex tasks, so I'm not sure how much of a problem it might be. I feel like students using them blindly won't get far, but I might be wrong.

> One of the most pernicious I have noticed is users who turn to AI for the initial idea without reflecting about the problem first

I've been doing that and it usually doesn't work. How can you ask an AI to solve a problem you don't understand at all? More often than not, when you do that the AI throws out a dumb response and you get back to thinking about how to present the problem in a clear way, which makes you understand it better.

So you still end up learning to analyze a problem and solve it. But I can't tell whether the solution comes faster or not, nor whether it helps learning or not.

7. visarga ◴[] No.42137537[source]
> I'm terrified when I see people get a whiff of a problem, and immediately turn to ChatGPT.

Not a problem for me. I work on prompt development, and I can't ask GPT how to fix its mistakes because it has no clue. Prompting will probably be the last defense of reasoning, the only place where you can't get AI help.