
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points | steveklabnik | 1 comment
an_ko No.46178532
I would have expected at least some consideration of public perception, given the extremely negative opinions many people hold about LLMs being trained on stolen data. Whether it's an ethical issue or a brand hazard depends on your opinions about that, but it's definitely at least one of those currently.
replies(2): >>46178539 >>46178574
1. john01dav No.46178539
He speaks of trust and of LLMs breaking that trust. Is this not what you mean, just by another name?

> First, to those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).

> Specifically, we must be careful to not use LLMs in such a way as to undermine the trust that we have in one another

> our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice