This heavily overlaps with my current research focus for my Ph.D., so I wanted to provide some additional perspective on the article. I have worked with Vitis HLS and other HLS tools in the past to build deep learning hardware accelerators. Currently, I am exploring deep learning for design automation and using large language models (LLMs) for hardware design, including leveraging LLMs to write HLS code. I can also offer some insight from the academic side.
First, I agree that the bar for HLS tools is relatively low, and they are not as good as they could be. To be fair, there has been significant progress in the academic community on open-source HLS tools and on integrations with existing tools like Vitis HLS to improve the HLS development workflow. Unfortunately, substantial changes are largely in the hands of companies like Xilinx, Intel, Siemens, Microchip, MathWorks (yes, even MATLAB has an HLS tool), and others that produce the "big-name" HLS tools. That said, academia has not given up, and there is considerable ongoing HLS tooling research, including collaborations between academia and industry. I hope that one day some lab will say "enough is enough" and create an open-source, modular HLS compiler in Rust that is easy to extend and contribute to, but that is my personal pipe dream. However, projects like Bambu HLS, Dynamatic, MLIR+CIRCT, and XLS (if Google would release more of their hardware design research and tooling) give me some hope.
When it comes to actually using HLS to build hardware designs, I usually suggest it as a first-pass solution for quickly prototyping accelerators for domain-specific applications. It yields a prototype that is often much faster or more power-efficient than a CPU or GPU solution, which you can implement on an FPGA as proof that a new architectural idea has an advantage in a given domain (genomics, high-energy physics, etc.). In this context, it is a great tool for academic researchers. I agree that companies producing cutting-edge chips are probably not using HLS for the majority of their designs. Still, HLS has its niche in FPGA and ASIC design (with Siemens's Catapult being a popular option for ASIC flows). However, the gap between an initial, naive HLS implementation and one refined by someone with expert HLS knowledge is enormous. This gap is why many of us in academia view the claim that "HLS allows software developers to do hardware development" as somewhat moot (albeit still debatable; there is ongoing work on new DSLs and abstractions for HLS tooling that are quite slick and promising). Because of this gap, unless you have team members or grad students familiar with optimizing and rewriting designs to fully exploit HLS while avoiding the tools' quirks and bugs, you won't see substantial performance gains. All that to say, I don't think it is fair to completely write off HLS as a lost cause or a failure.
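To make that gap concrete, here is a toy sketch (untested, using Vitis HLS-style pragmas; the exact directives and achieved initiation interval depend on the tool version and target) of what the naive-to-tuned rewrite looks like for a simple dot product:

```cpp
// Naive version: the loop-carried dependency on "acc" plus single-ported
// array interfaces mean the tool schedules this mostly sequentially.
int dot_naive(const int a[1024], const int b[1024]) {
    int acc = 0;
    for (int i = 0; i < 1024; i++) {
        acc += a[i] * b[i];
    }
    return acc;
}

// Tuned version: partition the arrays for parallel reads, unroll the
// inner loop, and pipeline the outer loop so that several
// multiply-accumulates can issue per cycle. This is the kind of
// restructuring an experienced HLS user applies by hand.
int dot_tuned(const int a[1024], const int b[1024]) {
#pragma HLS ARRAY_PARTITION variable=a cyclic factor=8
#pragma HLS ARRAY_PARTITION variable=b cyclic factor=8
    int partial[8] = {0};
#pragma HLS ARRAY_PARTITION variable=partial complete
    for (int i = 0; i < 1024; i += 8) {
#pragma HLS PIPELINE II=1
        for (int j = 0; j < 8; j++) {
#pragma HLS UNROLL
            partial[j] += a[i + j] * b[i + j];
        }
    }
    int acc = 0;
    for (int j = 0; j < 8; j++) {
#pragma HLS UNROLL
        acc += partial[j];
    }
    return acc;
}
```

None of that restructuring is obvious from a software background, which is the crux of the "HLS for software developers" debate.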
Regarding LLMs for Verilog generation and verification, there's an important point missing from the article that I've been considering since around 2020, when the LLM-for-chip-design trend began. A significant divide exists between commercial companies and academia/individuals in their capability to leverage LLMs for hardware design. For example, Nvidia released ChipNeMo, an LLM trained on their internal data, including HDL, tool scripts, and issue/project/QA tracking. This gives Nvidia a considerable advantage over the smaller models trained in academia, which rely on far more limited data in terms of quantity, quality, and diversity. It's frustrating to see companies like Nvidia presenting their LLM research at academic conferences without contributing meaningful technology or data back to the community. While I understand they can't share customer data and must protect their business interests, these closed research efforts, and the closed collaborations they run with academic groups, hinder broader progress and open research. This trend isn't unique to Nvidia; other companies follow similar practices.
On a more optimistic note, there are now strong efforts within the academic community to tackle these problems independently. These efforts include creating high-quality, diverse hardware design datasets for various LLM tasks and training models to perform better on a wider range of HLS-related tasks. As mentioned in the article, there is also exciting work connecting LLMs with the tools themselves, such as using tool feedback to correct design errors and moving towards even more complex and innovative workflows. These include in-the-loop verification, hierarchical generation, and ML-based performance estimation to enable rapid iteration on designs and debugging with a human in the loop. This is one area I'm actively working on, both at the HDL and HLS levels, so I admit my bias toward this direction.
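For a flavor of what "tool feedback in the loop" means, here is a deliberately simplified sketch. The names query_llm, compile_and_lint, and run_testbench are hypothetical stand-ins for an LLM API call, a vendor compiler/linter invocation, and a simulator run; real systems are considerably more involved:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical placeholders: in a real workflow these would wrap an LLM
// API, an HDL/HLS compiler invocation, and a simulation run, respectively.
std::string query_llm(const std::string& prompt) { return "// generated RTL"; }
std::vector<std::string> compile_and_lint(const std::string& design) { return {}; }
bool run_testbench(const std::string& design) { return true; }

// Sketch of a tool-feedback repair loop: generate a design, feed compiler
// diagnostics and failing tests back into the prompt, and iterate until the
// design passes or the budget runs out (at which point a human takes over).
int main() {
    std::string prompt = "Write a Verilog FIFO with depth 16...";
    for (int attempt = 0; attempt < 5; ++attempt) {
        std::string design = query_llm(prompt);
        auto errors = compile_and_lint(design);
        if (!errors.empty()) {
            prompt += "\nThe tool reported:\n";
            for (const auto& e : errors) prompt += e + "\n";
            prompt += "Fix these errors and regenerate the design.";
            continue;  // retry with diagnostics appended to the prompt
        }
        if (run_testbench(design)) {
            std::cout << "Design passed after " << attempt + 1 << " attempt(s)\n";
            return 0;
        }
        prompt += "\nThe testbench failed; revise the design.";
    }
    std::cout << "Budget exhausted; hand off to a human engineer\n";
    return 1;
}
```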
For more references on the latest research in this area, check out the proceedings from the LLM-Aided Design Workshop (now evolving into a conference, ICLAD: https://iclad.ai/), as well as the MLCAD conference (https://mlcad.org/symposium/2024/). Established EDA conferences like DAC and ICCAD have also included sessions and tracks on these topics in recent years. All of this falls within the broader scope of generative AI, which remains a smaller subset of the larger ML4EDA and deep-learning-for-chip-design community. However, LLM-aided design research is beginning to break out into its own distinct field, covering a wider range of topics such as LLM-aided design for manufacturing, quantum computing, and biology, areas that the ICLAD conference aims to expand into in future years.