117 points by soraminazuki | 1 comment
VariousPrograms No.45080897
Among many small examples at my job: an incident report summary used to be written by hand, with the current status and pending actions. Then we were heavily encouraged to start from LLM output and edit by hand. Now it's generated automatically by an LLM. No one bothers to read the summaries anymore because they're verbose, unfocused, and can be inaccurate. But we're all hitting our AI metrics now.
replies(4): >>45081002, >>45081034, >>45081059, >>45081071
incompatible No.45081002
If a report can be generated by an LLM and nobody cares about inaccuracies, why was it ever produced in the first place?
replies(6): >>45081052, >>45081054, >>45081063, >>45081282, >>45082745, >>45090261
VariousPrograms No.45081054
People read the summary to see the actual action items instead of reading the whole case. Now the action plan is padded with random bullet points like "John Smith will add Mohammad to the email thread. Target date: Tuesday, July 20, 2025. This will ensure all critical resources are engaged on the outage and reduce business impact," because the LLM is literally summarizing every email rather than understanding the core of the work that needs doing.