aubanel No.42158417
I know nothing about chip design. But saying "applying AI to field X won't work, because X is complex and LLMs currently have subhuman performance at it" always sounds dubious.

VCs are not investing in current LLM-based systems to improve X; they're investing in a future where LLM-based systems will be 100x more performant.

Writing is complex, LLMs once had subhuman performance at it, and yet here we are. The same goes for digital art, and for music (see suno.AI). There is a pattern here.

zachbee No.42158545
I didn't get into this in the article, but one of the major challenges in achieving superhuman performance on Verilog is the lack of high-quality training data. Most professional-quality Verilog is closed source, so LLMs are generally much worse at writing Verilog than, say, Python. And even then, LLMs are still pretty bad at Python!
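
As a toy illustration (a hedged sketch, not something from the article) of why Verilog is unforgiving for a model trained mostly on software: even a trivial pipeline register hinges on hardware-specific rules like blocking vs. non-blocking assignments, which show up far less in open training corpora than the equivalent Python idioms do.

    // Toy example: a 2-stage pipeline register.
    // Writing "stage1 = d; q = stage1;" (blocking assignments) here would
    // collapse it into a single-stage delay in simulation, a classic subtle
    // RTL bug that plausible-looking generated Verilog can easily contain.
    module pipe2 (
        input  wire       clk,
        input  wire [7:0] d,
        output reg  [7:0] q
    );
        reg [7:0] stage1;

        always @(posedge clk) begin
            stage1 <= d;       // non-blocking: both registers update from their
            q      <= stage1;  // pre-edge values, giving a true 2-cycle delay
        end
    endmodule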
e_y_ No.42159143
That's probably where there's a big advantage to being a company like Nvidia, which has the proprietary chip-design knowledge and data, the resources and money, and the AI/LLM expertise to work on something specialized like this.
DannyBee No.42159803
I strongly doubt this. They don't have enough training data either; you are confusing (I think) the scale of their success with the amount of Verilog they possess.

I.e., I think you are wildly underestimating the scale of training data needed, and wildly overestimating the amount of Verilog code possessed by Nvidia.

GPUs work by having moderate-complexity cores (in the scheme of things) that are replicated 8000 times or whatever. That does not require having 8000 times as much useful Verilog, of course (see the sketch below).
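
A rough sketch of that point (a toy example, not anyone's real design): one core described once gets stamped out N times with a generate loop, so the amount of source Verilog barely grows with the core count.

    // One small "core" module, replicated N_CORES times. The trainable source
    // is the single core description plus a few lines of generate scaffolding,
    // no matter how many copies end up on the die.
    module core (
        input  wire        clk,
        input  wire        rst_n,
        input  wire [31:0] in_data,
        output reg  [31:0] out_data
    );
        // Placeholder datapath; a real core would be far larger.
        always @(posedge clk or negedge rst_n) begin
            if (!rst_n) out_data <= 32'd0;
            else        out_data <= in_data + 32'd1;
        end
    endmodule

    module core_array #(
        parameter N_CORES = 8192
    ) (
        input  wire                  clk,
        input  wire                  rst_n,
        input  wire [N_CORES*32-1:0] in_bus,
        output wire [N_CORES*32-1:0] out_bus
    );
        genvar i;
        generate
            for (i = 0; i < N_CORES; i = i + 1) begin : g_cores
                core u_core (
                    .clk      (clk),
                    .rst_n    (rst_n),
                    .in_data  (in_bus[i*32 +: 32]),
                    .out_data (out_bus[i*32 +: 32])
                );
            end
        endgenerate
    endmodule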

The folks who have 8000 different chips, or 100 chips that each do 1000 things, would probably have orders of magnitude more Verilog to use for training.