
317 points by laserduck | 1 comment
1. zhyder (No.42158634)
As a former chip designer (it's been 16 years, but it looks like the tools and our arguments about them haven't changed much), I'm both more and less optimistic than OP:

1. More, because fine-tuning on enough good Verilog should let LLMs get better at avoiding mediocre Verilog (though existing chip companies already have more of this data). Plus, non-LLM tools will remain, so you can chain them after the LLM to verify that it hasn't produced Verilog that synthesizes to a large area, etc.
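The tool-chaining guardrail in point 1 could be sketched as a post-synthesis check. Assuming the LLM's Verilog has already been run through an open-source synthesis tool such as Yosys (e.g. `yosys -p "read_verilog top.v; synth; stat"`), a small script can parse the `stat` report and reject designs that exceed an area budget. The report format, function names, and cell-count threshold below are illustrative assumptions, not a fixed interface:

```python
import re

def cell_count(stat_report: str) -> int:
    """Extract the total cell count from a Yosys-style `stat` report.

    Assumes a line of the form "Number of cells: <n>", as printed by
    Yosys's `stat` command; returns 0 if no such line is found.
    """
    m = re.search(r"Number of cells:\s+(\d+)", stat_report)
    return int(m.group(1)) if m else 0

def within_area_budget(stat_report: str, max_cells: int) -> bool:
    """Gate an LLM-generated design: accept it only if the synthesized
    cell count (a rough proxy for die area) stays under the budget."""
    return cell_count(stat_report) <= max_cells

# Example report snippet in the assumed format:
report = """
=== top ===

   Number of wires:                 42
   Number of cells:                120
"""

print(within_area_budget(report, max_cells=500))  # → True (under budget)
print(within_area_budget(report, max_cells=100))  # → False (rejected)
```

In practice you'd chain further checks in the same style (lint, equivalence checking, timing), feeding failures back to the LLM as regeneration prompts.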

2. Less, because when creating more chips for more markets (if that's the interpretation of YC's RFS), the limiting factor will become the cost of using a fab (mask sets cost millions of dollars), and then of integrating the chip onto a board/system the customer will actually use. A half-solution would be FPGAs embedded in the CPUs/GPUs/SiPs on our existing devices taking off.