
317 points by laserduck | 6 comments
spamizbad ◴[] No.42157629[source]
LLMs have a long way to go in the world of EDA.

A few months ago I saw a post on LinkedIn where someone fed the leading LLMs a counter-intuitively drawn circuit with 3 capacitors in parallel and asked for the total capacitance. Not a single one got it right: they claimed the caps were in series (they weren't), and then botched the series capacitance calculation on top of that. I couldn't believe they whiffed it, so I checked myself. Sure enough, I got the same results as the author, and I tried all kinds of prompt magic to coax out the right answer… no dice.
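
For what it's worth, the math being tested here is about as basic as EE gets: parallel capacitances just add, series ones combine reciprocally. A quick sketch (the post didn't give the actual values, so the 10/22/47 µF figures below are made up):

    # Parallel capacitances add directly; series capacitances combine
    # reciprocally. Values are hypothetical, not from the LinkedIn post.
    def parallel_capacitance(caps):
        # C_total = C1 + C2 + C3 + ...
        return sum(caps)

    def series_capacitance(caps):
        # 1 / C_total = 1/C1 + 1/C2 + 1/C3 + ...
        return 1.0 / sum(1.0 / c for c in caps)

    caps = [10e-6, 22e-6, 47e-6]  # farads (10 uF, 22 uF, 47 uF)

    print(f"parallel: {parallel_capacitance(caps) * 1e6:.1f} uF")  # 79.0 uF
    print(f"series:   {series_capacitance(caps) * 1e6:.1f} uF")    # ~6.0 uF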

I also saw an ad for an AI tool designed to help you understand schematics. The pitch shows what looks like a fairly generic guitar distortion pedal circuit; the tool does correctly identify a capacitor as blocking DC, but fails to mention that it also forms part of an RC high-pass filter. I chuckled when the voiceover proudly claimed "they didn't even teach me this in 4 years of Electrical Engineering!" (Really? They don't teach how capacitors block DC and how RC filters work????)
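
And the part the tool missed is a one-liner: a series (coupling) capacitor into a resistance to ground forms an RC high-pass with cutoff fc = 1/(2πRC). A rough sketch with placeholder values, since I don't know the actual components in the ad's schematic:

    import math

    # Hypothetical input-stage values for a generic distortion pedal;
    # the ad's actual component values are unknown.
    R = 10e3     # ohms, resistance to ground after the coupling cap
    C = 100e-9   # farads, the DC-blocking (coupling) capacitor

    # -3 dB cutoff of the resulting RC high-pass filter
    f_c = 1.0 / (2 * math.pi * R * C)
    print(f"high-pass cutoff: {f_c:.0f} Hz")  # ~159 Hz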

If you’re in this space you probably need to compile your own carefully curated corpus and train something more specialized. The general-purpose models struggle too much.

replies(5): >>42158126 #>>42158298 #>>42158327 #>>42160968 #>>42168494 #
1. nerdponx ◴[] No.42160968[source]
> I couldn’t believe they whiffed it

Why should we expect a general-purpose instruction-tuned LLM to get this right in the first place? I am not at all surprised it didn't work, and I would be more than a little surprised if it did.

replies(1): >>42161341 #
2. sangnoir ◴[] No.42161341[source]
> Why should we expect a general-purpose instruction-tuned LLM to get this right in the first place?

The argument goes: language encodes knowledge, so from the vast reams of training data the model will have absorbed the fundamentals of electromagnetism. This rests on the belief that LLMs, being adept at manipulating language, are therefore inchoate general intelligences, and that attaining AGI is merely a matter of scaling parameters and/or training data on the existing LLM foundations.

replies(3): >>42165653 #>>42165974 #>>42170311 #
3. TheOtherHobbes ◴[] No.42165653[source]
Which is like saying that if you read enough textbooks you'll become an engineer/physicist/ballerina/whatever.
replies(1): >>42172022 #
4. garyfirestorm ◴[] No.42165974[source]
This could be up for debate -

https://www.scientificamerican.com/article/you-dont-need-wor...

5. Vampiero ◴[] No.42170311[source]
Yeah but language sucks at encoding the locality relations that represent a 2D picture such as a circuit diagram. Language is a fundamentally 1D concept.

And I'm baffled that HN is not picking up on that and ACTUALLY BELIEVES that you can achieve AGI with a simple language model scaled to billions of parameters.

It's as futile as trying to explain vision to a blind man using "only" a few billion words. There's simply no string of words that can create a meaningful representation of it in his mind.

6. sph ◴[] No.42172022{3}[source]
A huge number of people in academia believe so. The entire self-help literary genre is based upon this concept.

In reality, and with my biases as a self-taught person, experience is crucial. Learning in the field. 10,000 hours of practice. Something LLMs are not very good at: you train them a priori, and then they're a relatively static product compared to how human brains operate and continually self-adjust.