
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points | steveklabnik | 1 comment
kace91 ◴[] No.46178637[source]
The guide is generally very well thought, but I see an issue in this part:

It sets the rule that things must actually be read when there's a social expectation (code interviews, for example), but otherwise remarks that using LLMs to assist comprehension has little downside.

I find two problems with this:

- there is an incoherence there: if LLMs are flawless at reading and summarization, there is no difference from reading the original. And if they aren't flawless, then that flaw also extends to the non-social cases.

- in practice, I haven't found LLMs that good as reading assistants. I've sent them to check a linked doc and they've just read the index and inferred the rest from context, for example. Just yesterday I asked for a comparison of three technical books on a similar topic, and it wrongly guessed at the third one rather than following the three links.

There is a significant risk in placing a translation layer between content and reader.

replies(2): >>46179069 #>>46179875 #
fastball ◴[] No.46179875[source]
> It sets the rule that things must actually be read when there's a social expectation (code interviews, for example), but otherwise remarks that using LLMs to assist comprehension has little downside.

I think you got this backwards, because I don't think the RFD said that at all. The point was about a social expectation for writing, not for reading.

replies(1): >>46180236 #
1. kace91 ◴[] No.46180236[source]
This is what I’m referencing:

>using LLMs to assist comprehension should not substitute for actually reading a document where such reading is socially expected.