
117 points soraminazuki | 1 comment | source
VariousPrograms No.45080897
Among many small examples at my job: an incident report summary used to be handwritten, with a current status and pending actions. Then we were heavily encouraged to start with LLM output and edit it by hand. Now it's generated entirely by an LLM. No one bothers to read the summaries anymore because they're verbose, unfocused, and can be inaccurate. But we're all hitting our AI metrics now.
replies(4): >>45081002 #>>45081034 #>>45081059 #>>45081071 #
incompatible No.45081002
If a report can be generated by an LLM and nobody cares about inaccuracies, why was it ever being produced in the first place?
replies(6): >>45081052 #>>45081054 #>>45081063 #>>45081282 #>>45082745 #>>45090261 #
aniforprez No.45081052
Cargo culting