I kinda suspect that things which are expressed better with symbols and connections than with text will always be a poor fit for large LANGUAGE models. Turning what is basically a graph into a linear stream of text descriptions to tokenize and jam into an LLM has to be an incredibly inefficient and poorly performing way of letting “AI” do magic on your circuits.
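To make that concrete, here's a toy sketch of what the flattening looks like. The netlist format, component names, and wording are all invented for illustration, not any real tool's output:

```python
# Hypothetical illustration: a trivial RC filter as a graph,
# flattened into the kind of linear text an LLM has to consume.
# Component and node names are made up for the example.

circuit = {
    "R1": {"type": "resistor", "value": "10k", "nodes": ("in", "out")},
    "C1": {"type": "capacitor", "value": "100n", "nodes": ("out", "gnd")},
}

def flatten(circuit):
    """Serialize the graph as one sentence per component.

    Every structural fact (which pins touch which nets) becomes prose
    the model must reconstruct token by token -- the topology is only
    implicit in repeated node names.
    """
    for name, part in circuit.items():
        a, b = part["nodes"]
        yield (f"{name} is a {part['value']} {part['type']} "
               f"connected between node {a} and node {b}.")

print("\n".join(flatten(circuit)))
# -> R1 is a 10k resistor connected between node in and node out.
#    C1 is a 100n capacitor connected between node out and node gnd.
```

Two components already cost two full sentences, and the model still has to infer the shared node "out" to recover the connection a schematic shows at a glance.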
Ever try to get ChatGPT to play Scrabble? Ever try to describe the board to it, and then all the letters available to you? Even the fancy-pants o1 preview performs absolutely horribly. Either my prompting completely sucks or an LLM is just the wrong tool for the job.
It’s fine at scoring a word you just played, provided you tell it which bonuses apply to which words and letters. But it has absolutely no concept of the board at all. You cannot use it to optimize your next move based on the board and your letters.
… I mean, you might if you were extremely verbose about every letter on the board and every available place to put your tiles, perhaps avoiding coordinates and instead describing each word, its neighbors, and its relationship to bonus squares. But that just highlights what a bad tool an LLM is for Scrabble.
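Just to show how verbose that gets, here's a toy sketch of even the simpler coordinate style, before you get anywhere near describing neighbors and relationships. The placement and bonus squares are invented for the example:

```python
# Toy illustration: spelling out one placed word the way an LLM would
# need it described. Board position and bonuses are hypothetical.

word = "QUIZ"
start_col, row = 7, 7          # invented placement, running across
bonus = {(7, 9): "double letter", (7, 10): "triple word"}

sentences = []
for i, letter in enumerate(word):
    col = start_col + i
    desc = f"The letter {letter} sits at column {col} of row {row}"
    if (row, col) in bonus:
        desc += f", on a {bonus[(row, col)]} square"
    sentences.append(desc + ".")

print(" ".join(sentences))
# Four tiles already take four sentences -- and this says nothing yet
# about neighboring words, open squares, or the rest of your rack.
```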
Anyway, I’m sure schematics are very similar. Maybe someday we will invent good machine learning models for such things, but an LLM isn’t it.