
118 points by soraminazuki | 1 comment
VariousPrograms (No.45080897)
Among many small examples at my job: incident report summaries used to be handwritten, with a current status and pending actions. Then we were heavily encouraged to start from LLM output and edit by hand. Now they're generated by an LLM automatically. No one bothers to read the summaries anymore because they're verbose, unfocused, and often inaccurate. But we're all hitting our AI metrics now.
incompatible (No.45081002)
If a report can be generated by an LLM and nobody cares about inaccuracies, why was it ever produced in the first place?
dkiebd (No.45082745)
It will be funny when one of those reports promises that certain steps will be taken to make sure the same incident doesn't occur again, nobody notices because nobody reads the report, and then, when the same incident does recur, a client sues.