Can we stop conflating LLMs with the companies that created them? It's "…Gemini made up…". Do we not value accuracy? It'd be a whole different story if a human defamed you, rather than a token predictor.
replies(4):
I mean, no, I don’t think some Google employee tuned the LLM to produce output like this, but it doesn’t matter. They are still responsible.
We do not blame computer programs when they have bugs or make mistakes; we blame the humans who made them.
This has always been the case for as long as we have made anything, going back tens of thousands of years. You absolutely cannot just unilaterally decide to change that now on a whim.
Companies are responsible for the bad things they make; the things themselves are, by definition, blameless.