    118 points by soraminazuki | 15 comments
    1. VariousPrograms ◴[] No.45080897[source]
    Among many small examples at my job: an incident report summary used to be handwritten with the current status and pending actions. Then we were heavily encouraged to start with LLM output and edit by hand. Now it’s automatically generated by an LLM. No one bothers to read the summaries anymore because they’re verbose, unfocused, and can be inaccurate. But we’re all hitting our AI metrics now.
    replies(4): >>45081002 #>>45081034 #>>45081059 #>>45081071 #
    2. incompatible ◴[] No.45081002[source]
    If a report can be generated by an LLM and nobody cares about inaccuracies, why was it ever produced in the first place?
    replies(6): >>45081052 #>>45081054 #>>45081063 #>>45081282 #>>45082745 #>>45090261 #
    3. mcv ◴[] No.45081034[source]
    The idea that there are even AI metrics to hit...

    AI should not be a goal in itself, unless you make and sell AI. But for anyone else, you need to stick to your original quality and productivity metrics. If AI can help you improve those, that's great. But don't make AI use itself a goal.

    I've got a coworker who complains she's getting pressured by management to use AI to write the documents she writes. She already uses AI to review them, and that works great, according to her. But they want her to use AI to write the whole thing, and she refuses, because the writing process is also how she organizes her own thinking around the content she's writing. If she does that, she's not building her own mental model of the processes she's describing, and soon she'd have no idea of what's going on anymore.

    People often ignore the importance of such mental models. I recall a story about air traffic control being automated, which led controllers to lose track in their heads of which plane was where. So the system was changed so that controllers still had to manually move planes from one zone to another in an otherwise automated system, just to keep their mental models intact.

    replies(4): >>45081072 #>>45081214 #>>45081313 #>>45089536 #
    4. aniforprez ◴[] No.45081052[source]
    Cargo culting
    5. VariousPrograms ◴[] No.45081054[source]
    People read the summary to see the actual action items instead of reading the whole case. Now the action plan has constant random bullet points like “John Smith will add Mohammad to the email thread. Target date: Tuesday, July 20 2025. This will ensure all critical resources are engaged on the outage and reduce business impact.” or whatever, because it’s literally summarizing every email rather than understanding the core of the work that needs doing.
    6. clickety_clack ◴[] No.45081059[source]
    I think a general, informal rule of thumb should be that you put in as much effort to write a thing as you expect from someone to read the thing. If you think I’m going to spend an hour figuring out what happened to you, you’d better have spent at least an hour actually trying to figure it out yourself.
    7. zdragnar ◴[] No.45081063[source]
    It's not that they weren't useful, it's that someone higher up has to justify the expensive enterprise contract that they've foisted upon everyone else with the vague promise of saving money by using it.

    The consumers of the incident report aren’t the ones who had any say in using LLMs, so they’re stuck with less certainty.

    8. ludicrousdispla ◴[] No.45081071[source]
    Can’t you just have the AI generate its own AI metrics?
    9. BrenBarn ◴[] No.45081072[source]
    > AI should not be a goal in itself

    This is true of all technology, and it's weird to me to see all this happening with AI because it just makes me wonder what other nonsense bosses were insisting people use for no reason other than cargo culting. It just seems so wild to imagine someone saying "other people are using this so we should use it too" without that recommendation actually being based in any substantive way on the tool's functionality.

    10. freehorse ◴[] No.45081214[source]
    Stories like this don’t surprise me. In my experience, a lot of managers don’t have a good understanding of what their employees actually do. Which is not that terrible in itself, unless they also try to micromanage how the work should be done.
    11. ironmagma ◴[] No.45081282[source]
    Perverse incentives.
    12. Towaway69 ◴[] No.45081313[source]
    Really well said - it has put something I’ve been sensing/feeling into words.

    It’s also how I use AI: to summarize or rewrite text to make it sound better, but never to create or understand code. Nothing that requires deep understanding of the problem space.

    It’s the mental models in my head, which don’t gel with AI, that prevent AI adoption for me.

    13. dkiebd ◴[] No.45082745[source]
    It will be funny when one of those reports says that certain steps will be taken to make sure the same incident doesn’t occur again, nobody reads the report so nobody notices, and then, when the same incident does occur again, one of the clients sues.
    14. overfeed ◴[] No.45089536[source]
    > If she does that, she's not building her own mental model of the processes she's describing, and soon she'd have no idea of what's going on anymore.

    Which is fine by management, because the intent is to fire her and have AI generate the reports. The top-down diktats for AI maximization are about quickly figuring out how much can be automated so companies can massively scale back on payroll before their competition does.

    15. rk06 ◴[] No.45090261[source]
    OP mentions it directly in the post: they were "heavily encouraged" and then "met their AI metrics".

    Now, this is wrong on so many levels, but that is a different discussion.