
Using LLMs at Oxide

(rfd.shared.oxide.computer)
694 points | steveklabnik | source
monkaiju ◴[] No.46178440[source]
Hmmm, I'm a bit confused by their conclusion (encouraging use) given some of the really damning caveats they point out. A tool that they themselves determine needs such careful oversight probably just shouldn't be used near prod at all.
replies(7): >>46178488 #>>46178538 #>>46178540 #>>46178545 #>>46178605 #>>46178840 #>>46178971 #
gghffguhvc ◴[] No.46178488[source]
For the same quality and quantity of output, if the cost of using LLMs plus the cost of careful oversight is less than the cost of not using LLMs, then the rational choice is to use them.
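
To make that concrete, here is a minimal sketch of the break-even condition in Rust. The function name and the cost figures are made up for illustration; they aren't from the Oxide RFD.

    // Hypothetical break-even check: adopt LLMs only if the assisted total cost
    // undercuts the unassisted baseline, holding output quality and quantity fixed.
    fn llm_worth_it(llm_usage_cost: f64, oversight_cost: f64, unassisted_cost: f64) -> bool {
        llm_usage_cost + oversight_cost < unassisted_cost
    }

    fn main() {
        // Made-up per-feature numbers: $5 in tokens + 2h of review at $100/h,
        // versus 6h of unassisted work at $100/h.
        println!("{}", llm_worth_it(5.0, 200.0, 600.0)); // prints "true"
    }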

Naturally this doesn’t factor in things like human obsolescence, motivation and self-worth.

replies(2): >>46178560 #>>46178582 #
ahepp ◴[] No.46178582[source]
It seems like this would be a really interesting field to research. Does AI-assisted coding result in fewer bugs, or more bugs, than an unassisted human produces?

I've been thinking about this as I do AoC with Copilot enabled. It's been nice for those "hmm, how do I do that in $LANGUAGE again?" moments, but it has also written some nice-looking snippets that don't quite do what I want them to. And there have been many cases of "hmmm... that would work, but it reads the entire file twice for no reason".
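
A made-up Rust example of that last pattern (not actual Copilot output; "input.txt" stands in for an AoC puzzle input):

    use std::fs;

    // Works, but hits the filesystem twice for the same data.
    fn solve_wasteful() -> (usize, usize) {
        let part1 = fs::read_to_string("input.txt").unwrap().lines().count();
        let part2 = fs::read_to_string("input.txt") // second, redundant read
            .unwrap()
            .lines()
            .filter(|line| !line.is_empty())
            .count();
        (part1, part2)
    }

    // Same result: read once, reuse the buffer for both parts.
    fn solve_once() -> (usize, usize) {
        let input = fs::read_to_string("input.txt").unwrap();
        let part1 = input.lines().count();
        let part2 = input.lines().filter(|line| !line.is_empty()).count();
        (part1, part2)
    }

    fn main() {
        assert_eq!(solve_wasteful(), solve_once());
    }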

My guess, however, is that it's a net gain for quality and productivity. Humans introduce bugs too, and there need to be processes in place to discover and remediate those regardless.

replies(2): >>46178946 #>>46181426 #
Yeask ◴[] No.46181426[source]
These companies have trillions, and they are not doing that research. Why?
replies(1): >>46186666 #
ahepp ◴[] No.46186666[source]
I don't know. I guess the flip side applies too? Lots of people are arguing either side, when it feels like it shouldn't be that difficult to produce some objective data.