I built this because I was tired of shipping boards with avoidable mistakes — hopefully it saves you from a re-spin too!
> Of course, Jack. I can understand the schematic from the provided JSON file. It describes an RS485 to TTL Converter Module.
> Here is a detailed breakdown of the circuit's design and functionality
...followed by an absolutely reasonable description of the whole board. It was imprecise, but with some guidance (and by putting together my basic skills with Gemini's vast but unreliable knowledge) I was able to figure out a few things I needed to know about the board. Quite impressive.
If doing industrial work, then consumer-grade workmanship / LLM slop is usually unacceptable. Start with the FTDI firmware tool and an isolation chip app note...
https://www.analog.com/en/products/adm2895e-1.html
Best of luck =3
Also good call on processing EasyEDA schematics. I hadn’t considered that initially, but I’m definitely going to add support for it.
I know a brilliant PCB engineer whose first major multimillion-dollar R&D corporate design (decades ago) resulted in production of a modular product which couldn't physically plug in with the rest of the system (because of the above issues)... I'll send him this link to see if he'll give you feedback, but that's how he'd initially test your AI system (he considers it a humbling lifetime blunder).
Without any PCB design experience, my presumption is that OP's "AI product" is more of a "fundamentals of circuit board design" check[0] and not an all-encompassing "how did no human ever catch such a simple multi-dimensional clash" check[1]
[0] isolated voltage areas; trace attenuation avoidance; signal protection
[1] the darn thing won't even plug in, because the plug is pinned out backwards
Anybody can send a PCB description/schematic into an LLM, with a prompt suggesting it generate an analysis and it will diligently produce a document that perceptually resembles an analysis of that PCB. It will do that approximately 100% of the time.
But making an LLM actually deliver a sound, useful, accurate analysis would be quite an accomplishment! Is that really what you've done? How did you know you got it right? How right did you get it?
To sell an analysis tool, I'd expect to see some kind of comparison against other tooling and techniques. General success rate? False negative rate? False positive rate? How does it do against simple schematics vs large ones? What ICs and components will it recognize and which will it fail to recognize? Does it throw an error if it encounters something it doesn't recognize? When? Do you have testimonials? Examples?
But it sounds like in this case the root cause was more of a footprint/layout issue rather than a schematic one. I’m hoping to add footprint-level checks later on, once I can ingest full board files and mechanical data.
I see this idea as a sort of AI ERC/DRC checker that offers some incredible opportunities. Even if it only catches one small mistake, it could save thousands of dollars down the line.
It's another tool in the toolbox for hardware designers.
Or it could send a design team chasing thousands of dollars' worth of false positives/false negatives. With zero benchmarks provided, it is very fair to question a product that could have material negative impacts on a hardware team.
Pinouts... there is a reason we try to get all pinouts tested as early as possible, preferably on the first non-form-factor prototype spin if we can. In no event should key pinouts be first assigned or major changes made without a planned spin in the schedule following them....
Datasheets themselves are inconsistent and incomplete, so I'm wondering how you evaluated the accuracy of the import and what your acceptance criteria are.
I’m not against more automated checkers, I’m very much for automated checkers, but I’m curious how you plan to not repeat the mistakes of the past.
Benchmarking is tricky right now because there aren't many true "LLM ERC" systems to compare against. You could compare against traditional ERC, but this tool is meant to complement that workflow, not replace it. For this initial MVP, most of the accuracy work has come from collecting real shipped-board schematics (mine and friends') with known issues and iterating until the tool consistently detected them. A practical way to evaluate it yourself is to upload designs you already know have issues, along with the relevant datasheets, and see how well it picks them up. Additionally, if you have a schematic with known mistakes and are open to sharing it, feel free to reach out through the "contact us" page. Contributions like that are incredibly helpful, and I'd be happy to provide additional free usage in return.
I’ll also be publishing case studies soon with concrete examples: the original schematics, the tool’s output, what it caught (and what it missed), and comparisons against general-purpose chat LLM responses.
The goal isn’t to replace a designer’s judgment, but to surface potential issues that are easy to miss. Similar to how AI coding tools flag things you still have to evaluate yourself. Ultimately the designer decides what’s valid and what isn’t.
I really appreciate the push for rigor, and I’ll follow up once the case studies are live.
I’d really recommend trying it with one of your designs: upload the netlist + a component’s datasheet and ask a specific question about the part in the design. It’s the easiest way to see how well the ingestion works in practice. Would love to hear your feedback after you try it!
I'm always looking for workflow and automation improvements and the new wave of tooling has been useful for datasheet extraction/OCR, rubber-ducking calculations, or custom one-off scripts which interact with KiCAD's S-Expression file formats. However I've seen minimal improvements across my private suite of electronics reasoning/design tests since GPT4 so I'm very skeptical of review tooling actually achieving anything useful.
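To make the "scripts against KiCAD's S-Expression files" point concrete, here is a minimal sketch that pulls net connectivity out of a netlist export. The tokenizer is a toy and the netlist fragment is made up, not from any real board; a real tool would use a proper S-expression library or KiCAD's own Python API. Note that the netlist carries connectivity only, which is exactly the missing-context complaint below.

    import re

    # Hypothetical KiCad-style netlist fragment: nets only, no design
    # intent. Real exports also carry components, libparts, etc.
    NETLIST = """
    (export (version "E")
      (nets
        (net (code "1") (name "/VIN")
          (node (ref "R1") (pin "1"))
          (node (ref "U1") (pin "8")))
        (net (code "2") (name "GND")
          (node (ref "R1") (pin "2"))
          (node (ref "C1") (pin "2")))))
    """

    def parse_sexpr(text):
        """Toy S-expression reader: nested lists of plain strings."""
        tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()]+', text)
        def read(i):
            items = []
            while i < len(tokens):
                if tokens[i] == '(':
                    sub, i = read(i + 1)
                    items.append(sub)
                elif tokens[i] == ')':
                    return items, i + 1
                else:
                    items.append(tokens[i].strip('"'))
                    i += 1
            return items, i
        return read(0)[0]

    export = parse_sexpr(NETLIST)[0]   # the (export ...) form
    nets = next(x for x in export if isinstance(x, list) and x[0] == 'nets')
    for net in nets[1:]:
        name = next(f[1] for f in net[1:] if f[0] == 'name')
        nodes = [(f[1][1], f[2][1]) for f in net[1:] if f[0] == 'node']
        print(name, nodes)             # e.g. /VIN [('R1', '1'), ('U1', '8')]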
Testing with a prior version of a power board that had a few simple issues that were found and fixed during bringup. Uploaded the KiCAD netlist, PDFs for the main ICs, and also included my internal design validation datasheet which _includes the answers to the problems I'm testing against_. There were three areas where I'd expect easy identification and modelling (rough numbers for each are sketched after the list):
- Resistor values for a non-inverting amplifier's gain were swapped leading to incorrect gain.
- A voltage divider supplying a status/enable pin was drawing somewhat more current than it needed to.
- The power rating of a current-sense shunt is marginal for some design conditions.
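For reference, the arithmetic behind each of those three checks is trivial. A back-of-envelope sketch with made-up values (not the actual board's numbers):

    # 1) Non-inverting amp: G = 1 + Rf/Rg, so swapping the two
    #    resistors changes the gain rather than leaving it alone.
    Rf, Rg = 100e3, 10e3
    print(1 + Rf / Rg)              # intended: 11.0
    print(1 + Rg / Rf)              # swapped:  1.1

    # 2) Divider feeding a status/enable pin: standing current is
    #    V / (Rtop + Rbot); scaling both up keeps the ratio, cuts draw.
    V, Rtop, Rbot = 12.0, 10e3, 10e3
    print(V / (Rtop + Rbot))        # 600 uA
    print(V / (100e3 + 100e3))      # 60 uA at the same 0.5 ratio

    # 3) Shunt dissipation vs rating: P = I^2 * R.
    I, Rs, rating = 3.0, 0.05, 0.5
    print(I**2 * Rs / rating)       # 0.9 -> 90% of rating: marginal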
For the first test, the prompt was an intentionally naive "Please validate enable turn on voltage conditions across the power input paths". The reasoning steps appeared to search datasheets, but at what I'd have considered the 'design review' step something got stuck/hung, with no results after 10 minutes. A second user input to get it to continue did get an output. My comments:
- Just this single test consumed 100% of the chat's 330k token limit and 85% of free-tier capacity, so I can't even re-evaluate the capability with a more reasonable/detailed prompt, or even by giving it the solution.
- A mid-step section calculates the UV/OV behaviour of an input protection device correctly, but mis-states the range in the summary.
- There were several structural errors in the analysis, including assuming that the external power supply and lithium battery share the same input path, even though the netlist and components obviously have the battery 'inside' the power management circuit. As a result most downstream analysis is completely invalid.
- The inline footnotes for datasheets output `4 [blocked]`, which is a bare-minimum UI bug that you must have known about?
- The problem and solution were in the context and weren't found/used.
- Summary was sycophantic and incorrect.
You're leaving a huge amount of useful context on the table by relying on netlist upload. The hierarchy in the schematic, comments/tables, and inlined images are lost. A large chunk of the useful information in datasheets is graphs/diagrams/equations which aren't ingested as text. Netlists don't include the comments describing the expected input voltage range on a net, an output load's behaviour, or why a particular switching frequency was chosen, for example.

In contrast, GPT5.1 over the API with a single relevant screenshot of the schematic, zero developer prompt, and the same starting user message:
- Worked through each leg of the design and compared its output to my annotated comments (and was correct).
- Added commentary about possible leakage through a TVS diode, calculated time-constants, part tolerance, and pin loadings which are the kinds of details that can get missed outside of exhaustive review.
- Hallucinated a capacitor that doesn't exist in the design, likely due to OCR error. Including the raw netlist and an unrelated in-context learning example in the dev-message resolved that issue.
So from my perspective, the following would need to happen before I'd consider a tool like this:
- Walk back your data collection terms; I don't feel they're viable for any commercial use in this space without changes.
- An explicit listing of the downstream model provider(s) and any relevant terms that flow to my data.
- I understand the technical side of "Some metadata or backup copies may persist for a limited period for security, audit, and operational continuity" but I want a specific timeline and what that metadata is. Do better and provide examples.
- I'm not going to get into the strategy side of 'paying for tokens', but your usage limits are too vague to know what I'm getting. If I'm paying for your value-add, let me bring an API key (esp. if you're not using frontier models).
- My netlist includes PDF datasheet links for every part. You should be able to fetch datasheets as needed without upload (a rough sketch of this follows the list).
- Literally 5 minutes of thinking about how this tool is useful for fault-finding or review would have led you to a bare-minimum set of checklist items that I could choose to run on a design automatically.
- Going further, a chat UX is horrible for this review use-case. Condensing it into a high-level review of requirements and goals, with a list of review tasks per page/sub-circuit, would make more sense. From there, calculations and notes for each item can be grouped instead of spread randomly through the output summary. Output should be more like an annotated PDF.

The real question is whether this has enough value to justify the pricing model [1] - I think so for a company, but it would be difficult to justify for a hobby. One thing that should be defined is what "usage limit" actually is.
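On the datasheet-links point above, a rough sketch of what "fetch as needed" could look like, assuming the usual KiCad netlist export where each (comp ...) entry carries a (datasheet ...) field. The crude text scan is illustrative only; a real tool would parse the S-expression properly.

    import re
    import urllib.request
    from pathlib import Path

    def datasheet_links(netlist_text):
        """Map component ref -> datasheet URL from (comp ...) entries."""
        links = {}
        for block in netlist_text.split("(comp ")[1:]:
            ref = re.search(r'\(ref\s+"?([^")\s]+)', block)
            url = re.search(r'\(datasheet\s+"(https?://[^"]+)"', block)
            if ref and url:
                links[ref.group(1)] = url.group(1)
        return links

    def fetch_all(netlist_path, out_dir="datasheets"):
        Path(out_dir).mkdir(exist_ok=True)
        for ref, url in datasheet_links(Path(netlist_path).read_text()).items():
            dest = Path(out_dir) / f"{ref}.pdf"
            if not dest.exists():          # cache across review runs
                urllib.request.urlretrieve(url, dest)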
Comments in Show HN threads are generally curious and supportive. Yes, there are notable exceptions.
We detached this comment from https://news.ycombinator.com/item?id=46081918 and marked it off topic.
As a reference for the OP I did a public professional-informal-mini-design-review over here a while ago: https://news.ycombinator.com/item?id=44651770 . I didn't pull any of those datasheets because I didn't need to. It would be interesting to see what your tool says about that design, and compare it to the types of things I thought needed attention.
It burnt a bunch of tokens and filled the context reading all datasheet files, whereas documentation should be queried to answer specific details connected to relevant netlist/sch nodes.
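A minimal version of "query, don't bulk-ingest" is just ranking datasheet pages against a net-specific question and putting only the top hits into context. Naive keyword overlap is shown here purely for illustration (the variable names are hypothetical); embeddings or a proper index would do better.

    def top_pages(pages, question, k=3):
        """pages: list of (page_no, text). Rank by naive term overlap."""
        terms = set(question.lower().split())
        return sorted(
            pages,
            key=lambda p: -sum(t in p[1].lower() for t in terms))[:k]

    # e.g. only pages touching the enable threshold enter the context:
    # context = top_pages(adm2895e_pages, "EN pin turn-on threshold")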