
321 points | laserduck | 1 comment
seizethecheese (No.42160807)
> (quoting YC) We know there is a clear engineering trade-off: it is possible to optimize especially specialized algorithms or calculations such as cryptocurrency mining, data compression, or special-purpose encryption tasks such that the same computation would happen faster (5x to 100x), and using less energy (10x to 100x).

> If Garry Tan and YC believe that LLMs will be able to design chips 100x better than humans currently can, they’re significantly underestimating the difficulty of chip design, and the expertise of chip designers.

I may be confused, but isn’t the author fundamentally misunderstanding YC’s point? I read YC as simply pointing out the benefit of specialized compute, like GPUs — not as making any claim about the magnitude of improvement LLMs could achieve over human designers.

replies(1): >>42161087 #
1. alephnerd (No.42161087)
I think the issue is that Garry Tan's video RFS merged "LLMs for EDA" with "Purpose Built Compute" for specialized use cases. The title "LLMs for Chip Design" doesn't help either.

From my reading of the RFS (not the video) it appears they are essentially asking for the next Groq or SambaNova.

Personally, this kind of communication issue would give me long pause if I were considering YC for this segment. This is a fairly basic thesis to communicate, and if a basic thesis can be muddled, can the advice provided be strong — especially compared to peer early-stage funders in this space?