VariousPrograms
Among many small examples at my job, an incident report summary used to be hand-written with the current status and pending actions. Then we were heavily encouraged to start from LLM output and edit by hand. Now it’s automatically generated by an LLM. No one bothers to read the summary anymore because it’s verbose, unfocused, and can be inaccurate. But we’re all hitting our AI metrics now.
1. mcv
The idea that there are even AI metrics to hit...

AI should not be a goal in itself, unless you make and sell AI. But for anyone else, you need to stick to your original quality and productivity metrics. If AI can help you improve those, that's great. But don't make AI use itself a goal.

I've got a coworker who complains she's getting pressured by management to use AI to write the documents she writes. She already uses AI to review them, and that works great, according to her. But they want her to use AI to write the whole thing, and she refuses, because the writing process is also how she organizes her own thinking around the content she's writing. If she does that, she's not building her own mental model of the processes she's describing, and soon she'd have no idea of what's going on anymore.

People often overlook the importance of such mental models. I recall a story about air traffic control being automated, which led controllers to lose track in their heads of which plane was where. So the system was changed to require them to manually hand planes off from one zone to another within an otherwise automated workflow, just to keep their mental models intact.

2. BrenBarn
> AI should not be a goal in itself

This is true of all technology, and it's weird to me to see all this happening with AI, because it makes me wonder what other nonsense bosses have been insisting people use for no reason other than cargo culting. It seems so wild to imagine someone saying "other people are using this so we should use it too" without that recommendation being based in any substantive way on the tool's functionality.

3. freehorse
Stories like this don’t surprise me. In my experience, a lot of managers don’t have a good understanding of what their employees actually do. Which is not that terrible in itself, unless they also try to micromanage how that work should be done.
4. Towaway69
Really well said - it has put something I've been sensing/feeling into words.

It's also how I use AI: to summarize or rewrite text to make it sound better, but never to create or understand code. Nothing that requires a deep understanding of the problem space.

It's the mental models in my head, which don't gel with AI, that prevent AI adoption for me.

5. overfeed
> If she does that, she's not building her own mental model of the processes she's describing, and soon she'd have no idea of what's going on anymore.

Which is fine by management, because the intent is to fire her and have AI generate the reports. The point of the top-down diktats for AI maximization is to quickly figure out how much can be automated, so companies can massively scale back payroll before their competition does.