317 points laserduck | 42 comments
1. klabb3 ◴[] No.42157457[source]
I don’t mind LLMs in the ideation and learning phases, which aren’t reproducible anyway. But I still find it hard to believe engineers of all people are eager to put a slow, expensive, non-deterministic black box right at the core of extremely complex systems that need to be reliable, inspectable, understandable…
replies(6): >>42157615 #>>42157652 #>>42158074 #>>42162081 #>>42166294 #>>42167109 #
2. childintime ◴[] No.42157615[source]
You mean, like humans have been for many decades now.

Edit: I believe that LLMs are eminently useful to replace experts (of all people) 90% of the time.

replies(5): >>42157661 #>>42157674 #>>42157685 #>>42157904 #>>42158502 #
3. brookst ◴[] No.42157652[source]
You find it hard to believe that non-deterministic black boxes at the core of complex systems are eager to put non-deterministic black boxes at the core of complex systems?
replies(7): >>42157709 #>>42157955 #>>42158073 #>>42159585 #>>42159656 #>>42171900 #>>42172228 #
4. datameta ◴[] No.42157661[source]
Change "replace" to "supplement" and I agree. The level of non-determinism is just too great at this stage, imo.
5. beepbooptheory ◴[] No.42157674[source]
I don't know if they "eminently" anything at the moment; that's why you feel the need to make the comment, right?
6. layer8 ◴[] No.42157685[source]
People believed that about expert systems in the 1980s as well.
7. beepbooptheory ◴[] No.42157709[source]
Can you actually like follow through with this line? I know there are literally tens of thousands of comments just like this at this point, but if you have a chance, could you explain what you think this means? What should we take from it? Just unpack it a little bit for us.
replies(5): >>42157743 #>>42157792 #>>42157794 #>>42158219 #>>42158270 #
8. therealcamino ◴[] No.42157792{3}[source]
I took it to be a joke that the description "slow, expensive, non-deterministic black boxes" can apply to the engineers themselves. The engineers would be the ones who would have to place LLMs at the core of the system. To anyone outside, the work of the engineers is as opaque as the operation of LLMs.
9. croshan ◴[] No.42157794{3}[source]
An interpretation that makes sense to me: humans are non-deterministic black boxes already at the core of complex systems. So in that sense, replacing a human with AI is not unreasonable.

I’d disagree, though: humans are still easier to predict and understand (and trust) than AI, typically.

replies(2): >>42158005 #>>42158767 #
10. majormajor ◴[] No.42157904[source]
> Edit: I believe that LLMs are eminently useful to replace experts (of all people) 90% of the time.

What do you mean by "expert"?

Do you mean the pundit who goes on TV and says "this policy will be bad for the economy"?

Or do you mean the seasoned developer who you hire to fix your memory leaks? To make your service fast? Or cut your cloud bill from 10M a year to 1M a year?

11. klabb3 ◴[] No.42157955[source]
Yes I do! Is that some sort of gotcha? If I can choose between having a script that queries the db and generates a report and “Dave in marketing” who “has done it for years”, I’m going to pick the script. Who wouldn’t? Until machines can reliably understand, operate and self-correct independently, I’d rather not give up debuggability and understandability.
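
For concreteness, a minimal sketch of the kind of script I have in mind (the schema, table and file names are invented for illustration):

    # Minimal sketch of a deterministic "query the db, generate a report" script.
    # Schema and file names are invented for illustration.
    import sqlite3

    def weekly_report(db_path="sales.db"):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT region, SUM(amount) FROM orders "
            "WHERE created_at >= date('now', '-7 days') "
            "GROUP BY region ORDER BY region"
        ).fetchall()
        conn.close()
        # Same data in, same report out -- easy to inspect, test and debug.
        return "\n".join(f"{region}: {total:.2f}" for region, total in rows)
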
replies(2): >>42158254 #>>42158579 #
12. sdesol ◴[] No.42158005{4}[source]
With humans we have a decent understanding of what they are capable of. I trust a medical professional to provide me with medical advice and an engineer to provide me with engineering advice. LLMs can be unpredictable at times, and they can make errors in ways that you would not imagine. Take the following examples from my tool, which show how GPT-4o and Claude 3.5 Sonnet can screw up.

In this example, GPT-4o cannot tell that GitHub is spelled correctly:

https://app.gitsense.com/?doc=6c9bada92&model=GPT-4o&samples...

In this example, Claude cannot tell that GitHub is spelled correctly:

https://app.gitsense.com/?doc=905f4a9af74c25f&model=Claude+3...

I still believe LLMs are a game changer, and I'm currently working on what I call a "Yes/No" tool which I believe will make trusting LLMs a lot easier (for certain things, of course). The basic idea is that the "Yes/No" tool will let you combine models, samples and prompts to come to a Yes or No answer.

Based on what I've seen so far, a model can easily screw up, but it is unlikely that all will screw up at the same time.
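
To sketch the idea (the ask() helper below is a stand-in for whatever model API is used, so treat this as an assumption about the design rather than the actual tool):

    # Rough sketch of the "Yes/No" idea: ask several models the same yes/no
    # question and only trust the answer when enough of them agree.
    # ask(model, prompt) is a stand-in for a real model API call.

    def yes_no(ask, prompt, models, threshold=1.0):
        votes = [ask(m, prompt).strip().lower().startswith("yes") for m in models]
        agreement = sum(votes) / len(votes)
        if agreement >= threshold:
            return "Yes"
        if agreement <= 1 - threshold:
            return "No"
        return "Unsure"  # models disagree, so escalate to a human

    # e.g. yes_no(ask, 'Is "GitHub" spelled correctly in this document?',
    #             ["gpt-4o", "claude-3.5-sonnet"])
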

replies(1): >>42158221 #
13. talldayo ◴[] No.42158073[source]
In a reductive sense, this passage might as well read "You find it hard to believe that entropy is the source of other entropic reactions?"

No, I'm just disappointed in the decision of Black Box A and am bound to be even more disappointed by Black Box B. If we continue removing thoughtful design from our systems because thoughtlessness is the default, nobody's life will improve.

14. wslh ◴[] No.42158074[source]
100% agree. While I can’t find all the sources right now, [1] and its references could be a good starting point for further exploration. I recall there being a proof or conjecture suggesting that it’s impossible to build an "LLM firewall" capable of protecting against all possible prompts, though my memory might be failing me.

[1] https://arxiv.org/abs/2410.07283

15. brookst ◴[] No.42158219{3}[source]
Sure. I mean, humans are very good at building businesses and technologies that are resilient to human fallibility. So when we think of applications where LLMs might replace or augment humans, it’s unsurprising that their fallible nature isn’t a showstopper.

Sure, EDA tools are deterministic, but the humans who apply them are not. Introducing LLMs to these processes is not some radical and scary departure, it’s an iterative evolution.

replies(1): >>42159122 #
16. visarga ◴[] No.42158221{5}[source]
It's actually a great topic - both humans and LLMs are black boxes. And both rely on patterns and abstractions that are leaky. And in the end it's a matter of trust, like going to the doctor.

But we have had extensive experience with humans, so it is normal to have better-defined trust; LLMs will be better understood as well. There is no central understander or truth, and that is the interesting part: it's a "Blind men and the elephant" situation.

replies(1): >>42159435 #
17. og_kalu ◴[] No.42158254{3}[source]
>If I can choose between having a script that queries the db and generates a report and “Dave in marketing” who “has done it for years”

If you could that would be nice wouldn't it? And if you couldn't?

If people were saying, "let's replace Casio Calculators with interfaces to GPT", then that would be crazy and I would wholly agree with you. But by and large, the processes people are scrambling to place LLMs in are ones that typical machines struggle with or fail at, and that humans excel at or do decently (and that LLMs are making some headway in).

You're making the wrong distinction here. It's not Dave vs your nifty script. It's Dave or nothing at all.

There's no point comparing LLM performance to some hypothetical perfect understanding machine that doesn't exist.

You compare it to the things it's meant to replace - humans. How well can the LLM do this compared to Dave?

replies(1): >>42158614 #
18. og_kalu ◴[] No.42158270{3}[source]
Because people are not saying "let's replace Casio Calculators with interfaces to GPT!"

By and large, the processes people are scrambling to place LLMs in are ones that typical machines struggle with or fail at, and that humans excel at or do decently (and that LLMs are making some headway in).

There's no point comparing LLM performance to some hypothetical perfect understanding machine that doesn't exist. It's nonsensical actually. You compare it to the performance of the beings it's meant to replace or augment - humans.

Replacing non-deterministic black boxes with potentially better performing non-deterministic black boxes is not some crazy idea.

19. lxgr ◴[] No.42158502[source]
Experts of the kind that will be able to talk for hours about the academic consensus on the status quo without once considering how the question at hand might challenge it? Quite likely.

Experts capable of critical thinking and reflecting on evidence that contradicts their world model (and thereby retraining it on the fly)? Most likely not, at least not in their current architecture with all its limitations.

20. OkGoDoIt ◴[] No.42158579{3}[source]
I think this comment and the parent comment are talking about two different things. One of you is talking about using nondeterministic ML to implement the actual core logic (an automated script or asking Dave to do it manually), and one of you is talking about using it to design the logic (the equivalent of which is writing that automated script).

LLMs are not good at actually doing the processing; they are not good at math or even text processing at a character level. They often get logic wrong. But they are pretty good at looking at patterns and finding creative solutions to new inputs (or at least what can appear creative, even if philosophically it’s more pattern matching than creativity). So an LLM would potentially be good at writing a first draft of that script, which Dave could then proofread/edit, and which a standard deterministic computer could just run verbatim to actually do the processing. Eventually maybe even Dave’s proofreading would be superfluous.

Tying this back to the original article, I don’t think anyone is proposing having an LLM inside a chip that processes incoming data in a non-deterministic way. The article is about using AI to design the chips in the first place. But the chips would still be deterministic, the equivalent of the script in this analogy. There are plenty of arguments to make about LLMs not being good enough for that, not being able to follow the logic or optimize it, or come up with novel architectures. But chip design/Verilog feels like something that, with enough effort, an AI could likely be built to be pretty good at. All of the knowledge that those smart, knowledgeable engineers who are good at writing Verilog have built up can almost certainly be represented in some AI form, and I wouldn’t bet against AI getting to a point where it can be helpful similarly to how Copilot currently is with code completion. Maybe not perfect anytime soon, but good enough that we could eventually see a path to 100%. It doesn’t feel like there’s a fundamental reason this is impossible on a long enough time scale.

replies(2): >>42159049 #>>42166300 #
21. kuhewa ◴[] No.42158614{4}[source]
> by and large, the processes people are scrambling to place LLMs in are ones that typical machines struggle or fail

I'm pretty sure they are scrambling to put them absolutely anywhere it might save or make a buck (or convince an investor that it could)

replies(2): >>42165625 #>>42170639 #
22. ◴[] No.42158767{4}[source]
23. klabb3 ◴[] No.42159049{4}[source]
> So an LLM would potentially be good at writing a first draft of that script, which Dave could then proofread/edit

Right, and there’s nothing fundamentally wrong with this, nor is it a novel method. We’ve been joking about copying code from stack overflow for ages, but at least we didn’t pretend that it’s the peak of human achievement. Ask a teacher the difference between writing an essay and proofreading it.

Look, my entire claim from the beginning is that understanding is important (epistemologically, it may be what separates engineering from alchemy, but I digress). Practically speaking, if we see larger and larger pieces of LLM written code, it will be similar to Dave and his incomprehensible VBA script. It works, but nobody knows why. Don’t get me wrong, this isn’t new at all. It’s an ever-present wet blanket that slowly suffocates engineering ventures who don’t pay attention and actively resist. In that context, uncritically inviting a second wave of monkeys to the nuclear control panels, that’s what baffles me.

replies(1): >>42159391 #
24. beepbooptheory ◴[] No.42159122{4}[source]
Ok yeah. I think the thing that trips me up with this argument then is just, yes, when you regard humans in a certain neuroscientific frame and consider things like consciousness or language or will, they are fundamentally nondeterministic. But that isn't the frame of mind of the human engineer who does the work or even validates it. When the engineer is working, they aren't seeing themselves as some black box which they must feed input and get output; they are thinking about the things in themselves, justifying their work to themselves and others. Just because you can place yourself in some hypothetical third person here, one that oversees the model and the human and says "huh yeah they are pretty much the same, huh?", doesn't actually tell us anything about what's happening on the ground in either case, if you will. At the very least, this same logic would imply fallibility is one dimensional and always statistical; "the patient may be dead, but at least they got a new heart." Like isn't it important to be in love, not just be married? To borrow some Kant, shouldn't we still value what we can do when we think as if we aren't just some organic black box machines? Is there even a question there? How could it be otherwise?

It's really just that the "in principle" part of the overall implication with your comment and so many others just doesn't make sense. It's very much cutting off your nose to spite your face. How could science itself be possible, much less engineering, if this is how we decided things? If we regarded ourselves always from the outside? How could we even be motivated to debate whether we get the computers to design their own chips? When would something actually happen? At some point, people do have ideas, in a full, if false, transparency to themselves, that they can write down and share and explain. This is not only the thing that has gotten us this far, it is the very essence of why these models are so impressive in the certain ways that they are. It doesn't make sense to argue for the fundamental cheapness of the very thing you are ultimately trying to defend. And it imposes this strange perspective where we are not even living inside our own (phenomenal) minds anymore, where it fundamentally never matters what we think, no matter our justification. It's weird!

I'm sure you have a lot of good points and stuff, I just am simply pointing out that this particular argument is maybe not the strongest.

replies(1): >>42165288 #
25. crabmusket ◴[] No.42159391{5}[source]
> We’ve been joking about copying code from stack overflow for ages

Tangent for a slight pet peeve of mine:

"We" did joke about this, but probably because most of our jobs are not in chip design. "We" also know the limits of this approach.

The fact that Stack Overflow is the most SEO optimised result for "how to center div" (which we always forget how to do) doesn't have any bearing on the times when we have an actual problem requiring our attention and intellect. Say diagnosing a performance issue, negotiating requirements and how they subtly differ in an edge case from the current system behaviour, discovering a shared abstraction in 4 pieces of code that are nearly but not quite the same.

I agree with your posts here, the Stack Overflow thing in general is just a small hobby horse I have.

replies(1): >>42176887 #
26. sdesol ◴[] No.42159435{6}[source]
We are entering the nondeterministic programming era, in my opinion. LLM applications will be designed with the idea that we can't be 100% sure, and whatever solution can provide the most safeguards will probably be the winner.
27. ithkuil ◴[] No.42159585[source]
I'm a non-deterministic black box who teaches complex deterministic machines to do stuff and leverages other deterministic machines as tools to do the job.

I like my job.

My job also involves cooperating with other non-deterministic black boxes (colleagues).

I can totally see how artificial non-deterministic black boxes (artificial colleagues) may be useful to replace/augment the biological ones.

For one, artificial colleagues don't get tired and I don't accidentally hurt their feelings or whatnot.

In any case, I'm not looking forward to replacing my deterministic tools with the fuzzy AI stuff.

Intuitively at least it seems to me that these non-deterministic black boxes could really benefit from using the deterministic tools for pretty much the same reasons we do as well.

28. xg15 ◴[] No.42159656[source]
Yes. One does not have to do with the other.
29. numpad0 ◴[] No.42162081[source]
I think I've come to terms with it: engineering and making money from engineering are two completely unrelated things; the latter doesn't even need technology (but scamming is unethical).
replies(1): >>42170867 #
30. brookst ◴[] No.42165288{5}[source]
We start from similar places but get to very different conclusions.

I accept that I’m fallible, both in my areas of expertise and in all the meta stuff around it. I code bugs. I omit requirements. Not often, and there are mental and technical means to minimize, but my work, my org’s structure, my company’s processes are all designed to mitigate human fallibility.

I’m not interested in “defending” AI models. I’m just saying that their weaknesses are qualitatively similar to human weaknesses, and as such, we are already prepared to deal with those weaknesses as long as we are aware of them, and as long as we don’t make the mistake of thinking that because they use transistors they should be treated like a mostly deterministic piece of software where one unit test pass means it is good.

I think you’re reading some kind of value judgement on consciousness into what is really just a pragmatic approach to slotting powerful but imperfect agents into complex systems. It seems obvious to me, and without any implications as to human agency.

31. blincoln ◴[] No.42165625{5}[source]
100%, and a lot of them are truly terrible use cases for LLMs.

For example, using an LLM to transform structured data into JSON, and doing it with two LLMs in parallel to try to catch the inevitable failures, instead of just writing code that outputs JSON.
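
To make the contrast concrete, a minimal sketch of the deterministic version (assuming the structured data is something like a CSV file; the details are invented):

    # The deterministic version of "structured data in, JSON out" is a few
    # lines of ordinary code. The CSV input is an invented example.
    import csv
    import json

    def rows_to_json(path):
        with open(path, newline="") as f:
            return json.dumps(list(csv.DictReader(f)), indent=2)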

replies(1): >>42170642 #
32. zh2408 ◴[] No.42166294[source]
Humans themselves are like slow, expensive, non-deterministic black boxes...
33. hulitu ◴[] No.42166300{4}[source]
> So an LLM would potentially be good at writing a first draft of that script, which Dave could then proofread/edit, and which a standard deterministic computer could just run verbatim to actually do the processing

Or Dave could write a first draft of that script, saving him the time needed to translate what the LLM composed.

34. ein0p ◴[] No.42167109[source]
LLMs can be fully deterministic BTW, depending on the sampling method used. Some methods do not have a random component. As to the rest, yeah - they aren't inspectable or understandable yet.
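
For example, greedy decoding just picks the highest-scoring token at every step, so there is no random draw anywhere. A minimal sketch (not any particular library's API):

    def greedy_decode(next_token_logits, prompt_tokens, steps):
        # Greedy sampling: always take the single most likely next token,
        # so the same prompt yields the same output every run.
        # next_token_logits(tokens) is a stand-in for a model forward pass
        # returning one score per vocabulary id.
        tokens = list(prompt_tokens)
        for _ in range(steps):
            logits = next_token_logits(tokens)
            tokens.append(max(range(len(logits)), key=logits.__getitem__))
        return tokens

Temperature sampling, by contrast, draws from the softmax distribution, which is where the nondeterminism comes from.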
35. og_kalu ◴[] No.42170639{5}[source]
If your task was being solved well by a deterministic script/algorithm, you are not going to save money porting to LLMs even if you use Open Source models.
replies(1): >>42170696 #
36. og_kalu ◴[] No.42170642{6}[source]
Your example does not make much sense (in response to OP). That's not saving anybody any money.
37. kuhewa ◴[] No.42170696{6}[source]
'could' is doing a whole lot of work in that sentence; I'm being charitable. The reality is LLMs are being crammed into places where it isn't very sensible, under thin justifications, just like the last few big ideas were (cf. blockchain).
replies(1): >>42172376 #
38. blitzar ◴[] No.42170867[source]
Zero dollars isn't cool. You know what is? Hundreds of billions of dollars. (quote rescaled for engineering wealth and LLM wealth)
39. snowwrestler ◴[] No.42171900[source]
One great thing about humans is that we have developed ways to be deterministic when we want to. That’s what math is for.

Does an LLM know math? Not like we do. There’s no deductive logic in there; it’s all statistical inferences from language. An LLM doesn’t “work through” a circuit diagram systematically the way a physics student would. It observes the entire diagram at once, and then guesses the most likely next token.

40. staticman2 ◴[] No.42172228[source]
>You find it hard to believe that non-deterministic black boxes at the core of complex systems are eager to put non-deterministic black boxes at the core of complex systems?

Hello, fellow tech enthusiasts, just stopping by to announce I performatively can't tell the difference between "Latest big tech product (TM)" and Homo Sapiens Sapiens!!!

I'll be seeing you in the next LLM related message thread with the same exact comment!!! As you were!!!

41. og_kalu ◴[] No.42172376{7}[source]
If it can't be solved by a script, then what's the problem with seeing if you can use LLMs?

I guess I just don't see your point. So a few purported applications are not very sensible. So what? This is every breakthrough ever.

42. mrguyorama ◴[] No.42176887{6}[source]
Also, the Stack Overflow thing has more to do with all of us being generalists than with being incompetent.

I look up "how do I sort a list in language X" because I know from school that there IS a defined good way to do it, probably built into the language, and it will be extremely idiomatic, but I haven't used language X in five years and the specifics might have changed and I don't remember the specific punctuation.