
317 points | laserduck | 1 comment
aubanel ◴[] No.42158417[source]
I know nothing about chip design. But saying "Applying AI to field X won't work, because X is complex, and LLMs currently have subhuman performance at this" always sounds dubious.

VCs are not investing in current LLM-based systems to improve X; they're investing in a future where LLM-based systems will be 100x more performant.

Writing is complex, LLMs once had subhuman performance, and yet here we are. Digital art. Music (see Suno AI). There is a pattern here.

kuhewa ◴[] No.42158576[source]
> Writing is complex, LLMs once had subhuman performance,

And now they can easily replace mediocre human performance, and since they are tuned to produce answers that appeal to humans, that holds especially for these subjective-value use cases. Chip design doesn't seem very similar; it looks like a case where specifically trained tools would be the real assistance. For some tasks, as much as generalist LLMs have surprised us with skill, it is very hard to see how training on a broad corpus of text could outperform specialized tools. To your first paragraph: do you really think it is not dubious to expect a model trained on text to outperform Stockfish at chess?
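
To make the chess comparison concrete, here is a minimal sketch (assuming the python-chess library and a Stockfish binary on PATH; neither is named above) of how little it takes to get superhuman play out of the specialized tool, whereas a generalist text model has no comparable search behind its answers:

    # Minimal sketch: ask a specialized engine (Stockfish) for a move.
    # Assumes `pip install chess` and a Stockfish binary on PATH.
    import chess
    import chess.engine

    board = chess.Board()  # standard starting position
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        # Let the engine search for 100 ms and return its preferred move.
        result = engine.play(board, chess.engine.Limit(time=0.1))
        print(board.san(result.move))  # e.g. "e4"

Even that 100 ms of search is backed by an evaluation purpose-built for the game, which is the kind of domain-specific machinery the comment is pointing at.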

1. tim333 ◴[] No.42163499[source]
When people say LLM, I think they are often thinking of neural-network approaches in general rather than just text-based models, even if the letters do stand for "language model". And there's overlap, e.g. Gemini does language but is multimodal. If you drop that restriction you get things like AlphaZero, which did beat Stockfish: https://en.wikipedia.org/wiki/AlphaZero