
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points steveklabnik | 4 comments
1. forrestthewoods ◴[] No.46178758[source]
> When debugging a vexing problem one has little to lose by using an LLM — but perhaps also little to gain.

This probably doesn't give them enough credit. If you can feed an LLM a list of crash dumps, it can do a remarkable job producing both analyses and fixes. And I don't mean just for super obvious crashes. I was most impressed with a deadlock where numerous engineers had tried and failed to understand exactly how to fix it.
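For concreteness, here is a minimal sketch of what "feed an LLM a list of crash dumps" can look like in practice. It assumes the OpenAI Python client; the directory name, model, and prompt are placeholders for illustration, not anything the commenter or Oxide describes.

```python
# Hypothetical sketch: batch a directory of crash-dump summaries and ask an
# LLM for a combined analysis. Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; paths, model name, and prompt are
# made up for illustration.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Concatenate the dumps, labeling each one so the model can refer back to it.
dumps = "\n\n".join(
    f"--- {p.name} ---\n{p.read_text()}"
    for p in sorted(Path("crash_dumps").glob("*.txt"))
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a systems debugger. Look for a shared root cause "
                "across these crash dumps (e.g. a deadlock or race) and "
                "propose a fix."
            ),
        },
        {"role": "user", "content": dumps},
    ],
)
print(resp.choices[0].message.content)
```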

replies(2): >>46179194 #>>46181931 #
2. nrhrjrjrjtntbt ◴[] No.46179194[source]
LLMs are good where there is a lot of detail but the answer to be found is simple.

This is sort of the opposite of vibe coding, but LLMs are OK at that too.

replies(1): >>46179362 #
3. forrestthewoods ◴[] No.46179362[source]
> LLMs are good where there is a lot of detail but the answer to be found is simple.

Oooo I like that. Will try and remember that one.

Amusingly, my experience is that the longer an issue takes me to debug the simpler and dumber the fix is. It's tragic really.

4. throwdbaaway ◴[] No.46181931[source]
After the latest production issue, I have a feeling that opus-4.5 and gpt-5.1-codex-max are perhaps better than me at debugging. Indeed, my role was relegated to combing through the logs, finding the abnormal or suspicious ones, and feeding those to the models.
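A minimal sketch of that "comb through the logs" step: pull out lines that look abnormal (errors, warnings, stalls) plus a little surrounding context, so the model only sees the suspicious subset rather than the full log. The patterns and file name here are illustrative assumptions, not from the comment above.

```python
# Hypothetical log-triage helper: keep only suspicious-looking lines and a few
# lines of context around each, suitable for pasting into a model prompt.
import re

SUSPECT = re.compile(r"(ERROR|WARN|panic|timeout|deadlock|retrying)", re.IGNORECASE)

def suspicious_lines(path: str, context: int = 2) -> list[str]:
    """Return lines matching SUSPECT, with `context` lines before and after."""
    with open(path, errors="replace") as f:
        lines = f.readlines()
    keep: set[int] = set()
    for i, line in enumerate(lines):
        if SUSPECT.search(line):
            keep.update(range(max(0, i - context), min(len(lines), i + context + 1)))
    return [lines[i].rstrip("\n") for i in sorted(keep)]

if __name__ == "__main__":
    # "prod.log" is a placeholder path.
    for line in suspicious_lines("prod.log"):
        print(line)
```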