118 points soraminazuki | 1 comments | | HN request time: 0s | source
VariousPrograms ◴[] No.45080897[source]
Among many small examples at my job, an incident report summary used to be hand written with a current status and pending actions. Then it was heavily encouraged to start with LLM output and edit by hand. Now it’s automatically generated by an LLM. No one bothers to read the summary anymore because they’re verbose, unfocused, and can be inaccurate. But we’re all hitting our AI metrics now.
replies(4): >>45081002 #>>45081034 #>>45081059 #>>45081071 #
mcv ◴[] No.45081034[source]
The idea that there are even AI metrics to hit...

AI should not be a goal in itself, unless you make and sell AI. But for anyone else, you need to stick to your original quality and productivity metrics. If AI can help you improve those, that's great. But don't make AI use itself a goal.

I've got a coworker who complains she's being pressured by management to use AI to write her documents. She already uses AI to review them, and that works great, according to her. But they want her to use AI to write the whole thing, and she refuses, because the writing process is also how she organizes her own thinking about the content. If she delegates that, she's not building her own mental model of the processes she's describing, and soon she'd have no idea what's going on anymore.

People ignore the importance of such mental models a lot. I recall a story of air traffic control that was automated, leading air traffic controllers to lose track in their heads of which plane was where. So they changed the system so they still had to manually move planes from one zone to another in an otherwise automated system, just to keep their mental models intact.

replies(4): >>45081072 #>>45081214 #>>45081313 #>>45089536 #
Towaway69 ◴[] No.45081313[source]
Really well said - it has put something I've been sensing/feeling into words.

It's also how I utilize AIs - to summarize or rewrite text to make it sound better, but never to create code or understand code. Nothing that requires deep understanding of the problem space.

It's the mental models in my head that don't gel with AI that prevent AI adoption for me.