
321 points laserduck | 2 comments
klabb3 No.42157457
I don’t mind LLMs in the ideation and learning phases, which aren’t reproducible anyway. But I still find it hard to believe engineers of all people are eager to put a slow, expensive, non-deterministic black box right at the core of extremely complex systems that need to be reliable, inspectable, understandable…
brookst No.42157652
You find it hard to believe that non-deterministic black boxes at the core of complex systems are eager to put non-deterministic black boxes at the core of complex systems?
beepbooptheory No.42157709
Can you actually follow through with this line of thought? I know there are literally tens of thousands of comments just like this at this point, but if you have a chance, could you explain what you think it means? What should we take from it? Just unpack it a little for us.
brookst No.42158219
Sure. I mean, humans are very good at building businesses and technologies that are resilient to human fallibility. So when we think of applications where LLMs might replace or augment humans, it’s unsurprising that their fallible nature isn’t a showstopper.

Sure, EDA tools are deterministic, but the humans who apply them are not. Introducing LLMs to these processes is not some radical and scary departure, it’s an iterative evolution.

beepbooptheory No.42159122
Ok yeah. I think the thing that trips me up with this argument is just: yes, when you regard humans in a certain neuroscientific frame and consider things like consciousness or language or will, they are fundamentally nondeterministic. But that isn't the frame of mind of the human engineer who does the work, or who validates it. When the engineer is working, they aren't seeing themselves as some black box into which they must feed input and get output; they are thinking about the things themselves, justifying their work to themselves and others. Just because you can place yourself in some hypothetical third person here, one that oversees the model and the human and says "huh, yeah, they are pretty much the same, huh?", doesn't actually tell us anything about what's happening on the ground in either case, if you will. At the very least, this same logic would imply fallibility is one-dimensional and always statistical: "the patient may be dead, but at least they got a new heart." Like, isn't it important to be in love, not just be married? To borrow some Kant, shouldn't we still value what we can do when we think as if we aren't just organic black-box machines? Is there even a question there? How could it be otherwise?

It's really just that the "in principle" part of the overall implication of your comment, and so many like it, doesn't make sense. It's very much cutting off your nose to spite your face. How could science itself be possible, much less engineering, if this is how we decided things? If we regarded ourselves always from the outside? How could we even be motivated to debate whether we should get computers to design their own chips? When would something actually happen? At some point, people do have ideas, in a full, if false, transparency to themselves, that they can write down and share and explain. This is not only the thing that has gotten us this far; it is the very essence of why these models are so impressive in the particular ways that they are. It doesn't make sense to argue for the fundamental cheapness of the very thing you are ultimately trying to defend. And it imposes this strange perspective where we are not even living inside our own (phenomenal) minds anymore, where it fundamentally never matters what we think, no matter our justification. It's weird!

I'm sure you have a lot of good points and stuff; I'm just pointing out that this particular argument is maybe not the strongest.

brookst No.42165288
We start from similar places but get to very different conclusions.

I accept that I'm fallible, both in my areas of expertise and in all the meta stuff around them. I code bugs. I omit requirements. Not often, and there are mental and technical means to minimize them, but my work, my org's structure, and my company's processes are all designed to mitigate human fallibility.

I’m not interested in “defending” AI models. I’m just saying that their weaknesses are qualitatively similar to human weaknesses, and as such, we are already prepared to deal with those weaknesses, as long as we are aware of them, and as long as we don’t make the mistake of thinking that, because they use transistors, they should be treated like a mostly deterministic piece of software where one passing unit test means it is good.
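To make that last point concrete: for a stochastic component, a single green unit test proves almost nothing, so acceptance has to be statistical. Here is a minimal Python sketch; `flaky_component` and its 90% success rate are hypothetical, standing in for an LLM-backed step:

```python
import random

def flaky_component(x):
    """Hypothetical stand-in for a non-deterministic component (e.g. an
    LLM call): returns the right answer most of the time, but not always."""
    return x * 2 if random.random() < 0.9 else x * 2 + 1

def pass_rate(fn, arg, expected, trials=1000):
    """Estimate how often fn(arg) == expected over many independent trials."""
    return sum(fn(arg) == expected for _ in range(trials)) / trials

# One call could pass by luck; a sampled pass rate checked against an
# explicit threshold is a more honest acceptance test.
rate = pass_rate(flaky_component, 21, 42)
assert rate > 0.8, f"component too unreliable: {rate:.1%}"
```

The threshold (here 80%) is a policy decision for the surrounding system, the same kind of tolerance we already build around human reviewers.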

I think you’re reading some kind of value judgement on consciousness into what is really just a pragmatic approach to slotting powerful but imperfect agents into complex systems. It seems obvious to me, and without any implications as to human agency.