
117 points by soraminazuki | 1 comment
VariousPrograms:
Among many small examples at my job: incident report summaries used to be written by hand, with a current status and pending actions. Then we were strongly encouraged to start from LLM output and edit it by hand. Now the summaries are generated automatically by an LLM. No one bothers to read them anymore because they're verbose, unfocused, and can be inaccurate. But we're all hitting our AI metrics now.
incompatible:
If a report can be generated by an LLM and nobody cares about inaccuracies, why was it ever produced in the first place?
zdragnar:
It's not that the reports weren't useful; it's that someone higher up has to justify the expensive enterprise contract they've foisted on everyone else, with the vague promise that it will save money.

The consumers of the incident reports never had any say in adopting LLMs, so they're stuck with less certainty.