
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points by steveklabnik | 1 comment
peheje No.46180755
I know I'm walking into a den of wolves here and will probably get buried in downvotes, but I have to disagree with the idea that using LLMs for writing breaks some social contract.

If you hand me a financial report, I expect you used Excel or a calculator. I don't feel cheated that you didn't do long division by hand to prove your understanding. Writing is no different. The value isn't in how much you sweated while producing it. The value is in how clear the final output is.

Human communication is lossy. I think X, I write X' (because I'm imperfect), you understand Y. This is where so many misunderstandings and workplace conflicts come from. People overestimate how clear they are. LLMs help reduce that gap. They remove ambiguity, clean up grammar, and strip away the accidental noise that gets in the way of the actual point.

Ultimately, outside of fiction and poetry, writing is data transmission. I don't need to know that the writer struggled with the text. I need to understand the point clearly, quickly, and without friction. Using a tool that delivers that is the highest form of respect for the reader.

replies (7): >>46180767, >>46180771, >>46180927, >>46181086, >>46181406, >>46182032, >>46183095
MobiusHorizons No.46183095
The point made in the article was about the social contract, not about efficacy. Basically, if you use an LLM in such a way that the reader detects the style, you lose the reader's trust that you, as the author, rigorously understand what has been written, and the reader loses the incentive to pay close attention.

I would extend the argument further and say it applies to a lot of human-generated content as well, especially sales and marketing material, which similarly elicits very low trust.