
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points by steveklabnik | 2 comments
thundergolfer ◴[] No.46178458[source]
A measured, comprehensive, and sensible take. Not surprising from Bryan. This was a nice line:

> it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open.

I think Oxide didn't include this in the RFD because they exclusively hire senior engineers, but in an organization that includes junior engineers I'd add something specific to help them understand how to approach LLM use.

Bryan has 30+ years of challenging software (and now hardware) engineering experience. He memorably said that he's worked on and completed a "hard program" (an OS), which he defines as a program you doubt you can actually get working.

The way Bryan approaches an LLM is very different from how a 2025 junior engineer does. That junior engineer may never have programmed without the tantalizing, even desperately tempting, option of being assisted by an LLM.

replies(9): >>46178592 #>>46178622 #>>46178776 #>>46179419 #>>46180863 #>>46180957 #>>46180987 #>>46181685 #>>46184735 #
1. govping ◴[] No.46180987[source]
The craft-vs-practicality tension with LLMs is interesting. We've found LLMs excel when there's a clear validation mechanism: for security research, the POC either works or it doesn't. The LLM can iterate rapidly because success is unambiguous.
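
A minimal sketch of that loop in Python (the propose/validate callables are hypothetical stand-ins for the LLM call and the POC harness, not any real API):

    from typing import Callable, Optional, Tuple

    def iterate_until_valid(
        task: str,
        propose: Callable[[str, str], str],           # LLM call: (task, feedback) -> candidate
        validate: Callable[[str], Tuple[bool, str]],  # POC/test harness: candidate -> (passed, feedback)
        max_attempts: int = 5,
    ) -> Optional[str]:
        # Keep asking the model for candidates until the validator says yes.
        feedback = ""
        for _ in range(max_attempts):
            candidate = propose(task, feedback)
            passed, feedback = validate(candidate)
            if passed:
                return candidate  # unambiguous success: the POC works
        return None               # nothing passed; hand it back to a human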

Where it struggles: problems requiring taste or judgment without clear right answers. The LLM wants to satisfy you, which works great for 'make this exploit work' but less great for 'is this the right architectural approach?'

The craftsman answer might be: use LLMs for the systematic/tedious parts (code generation, pattern matching, boilerplate) while keeping human judgment for the parts that matter. Let the tool handle what it's good at; you handle what requires actual thinking.

replies(1): >>46192390 #
2. jstrebel ◴[] No.46192390[source]
I am certain that LLMs can help you with judgment calls as well. I spent the last month tinkering with spec-driven development of a new Web app and I must say, the LLM was very helpful in identifying design issues in my requirements document and actively suggested sensible improvements. I did not agree to all of them, but the conversation around high-level technical design decisions was very interesting and fruitful (e.g. cache use, architectural patterns, trade-offs between speed and higher level of abstraction).