
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points by steveklabnik | 1 comment
rgoulter No.46178575
> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well.

I think this identifies something important, but I'm not sure of the right way to articulate it.

A human-written comment may be worth something, but an LLM-generated one is cheap/worthless.

The best phrase I've seen capturing this thought: "I'd rather read the prompt".

Publishing something written by an LLM is probably no better than letting the reader regenerate it from the prompt themselves.

replies(5): >>46178739 #>>46179142 #>>46179749 #>>46181070 #>>46184241 #
1. teaearlgraycold No.46179749
One thing I’ve noticed is that when I write something I consider insightful or creative while using an LLM for autocompletion, the machine can’t successfully predict any words in the sentence except maybe the last one.

They seem to be good at spitting out either something very average or something completely insane. But something genuinely indicative of a spark of intelligence is rare. I’m happy to know that while my thoughts are likely not original, they are at least not statistically likely.
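The "statistically likely" intuition has a standard formalization in language modeling: surprisal, the negative log-probability a model assigns to a word given its context. Autocomplete favors low-surprisal continuations, so genuinely unexpected word choices are exactly the ones it fails to predict. A minimal sketch with a toy bigram model (the corpus and all names here are hypothetical; real models are vastly larger, but the scoring idea is the same):

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy corpus; a real language model trains on far more text.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count bigram transitions: how often each word follows each previous word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

vocab = set(corpus)

def surprisal(prev, word):
    """Bits of surprise for `word` given `prev`, with add-one smoothing
    so unseen continuations get small but nonzero probability."""
    count = bigrams[prev][word] + 1
    total = sum(bigrams[prev].values()) + len(vocab)
    return -math.log2(count / total)

# A continuation the model has seen costs few bits of surprise...
common = surprisal("the", "cat")
# ...while an out-of-pattern word costs more (hypothetical example word).
rare = surprisal("the", "volcano")
print(common < rare)  # → True
```

Under this framing, "not statistically likely" just means high surprisal: the words carry more information per token than the model's training distribution predicts, which is why the autocomplete misses them.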