If this is how you're modelling the problem, then I don't think you learned the right lesson from the PB&J "parable."
Here's a timeless bit of wisdom, several decades old at this point:
Managers think that if you can just replace code with something that isn't text with a formal syntax, then all of a sudden "regular people" (like them, maybe?) will be able to "program" a system. But it never works. And the reason it never works is fundamental to how humans relate to computers.
Hucksters continually reinvent the concept of "business rules engines" to sell to naive CTOs. As a manager, you might think it's a great idea to encode logic/constraints into some kind of database — maybe one you even "program" visually like UML or something! — and to then have some tool run through and interpret those. You can update business rules "live and on the fly", without calling a programmer!
They think it's a great idea... until the first time they try to actually use such a system in anger to encode a real business process. Then they hit the PB&J problem. And, in the end, they must get programmers to interface with the business rules engine for them.
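To make that concrete, here's a minimal sketch of what these products tend to boil down to under the hood: rules stored as data, interpreted by a generic engine. (The rule format, field names, and helpers here are all invented for illustration; real engines are fancier, but the shape is the same.)

    # Minimal sketch of a "business rules engine": rules live as data,
    # and a generic interpreter applies them at runtime. In theory, a
    # manager updates the `rules` list "live" without calling a programmer.

    OPS = {
        "==": lambda a, b: a == b,
        ">":  lambda a, b: a > b,
        "<":  lambda a, b: a < b,
    }

    # Rules as a manager might enter them through some visual editor:
    # "if <field> <op> <value>, then set <field> to <value>".
    rules = [
        {"if": ("order_total", ">", 100),       "then": ("discount_pct", 10)},
        {"if": ("customer_tier", "==", "gold"), "then": ("discount_pct", 15)},
    ]

    def run_rules(record, rules):
        """Apply every matching rule, in order, to a dict of facts."""
        for rule in rules:
            field, op, value = rule["if"]
            if field in record and OPS[op](record[field], value):
                target, result = rule["then"]
                record[target] = result
        return record

    print(run_rules({"order_total": 250, "customer_tier": "gold"}, rules))
    # -> {'order_total': 250, 'customer_tier': 'gold', 'discount_pct': 15}

Both rules match here, and the engine silently lets the last write win; whether the discounts should stack instead is a question it will never ask. That gap is the PB&J problem in miniature.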
What's going on there? What's missing in the interaction between a manager and a business rules engine, that gets fixed by inserting a programmer?
There are actually two things:
1. Mechanical sympathy. The programmer knows the solution domain — and so can act as an advocate for it (much as a compiler does, but in a far more human-friendly, far-sighted, predictive, 10,000-ft-architectural-view way). The programmer knows enough about the machine, and about how programs should be built, to know what just won't work — and so will push back on a half-assed design, rather than carrying the manager along in a shared delusion that what they're trying to do is going to work out.
2. Iterative formalization. The programmer knows what information is needed across the whole space of plausible solution architectures — not only to design any particular solution, but also to "work backward," comparing and contrasting which architectures might be a better fit given the design's parameters. And when the manager hasn't provided that information, the programmer knows to ask questions.
Asking the right questions to get the information needed to determine the right architecture and design a solution — that's called requirements analysis.
And no matter what fancy automatic "do what I mean" system you put in place between a manager and a machine — no matter how "smart" it might be — if it isn't playing the role of a programmer, both in guiding the manager through the requirements analysis process, and in pushing back through knowledge of mechanical sympathy... then you get PB&J.
That being said: LLMs aren't fundamentally incapable of "doing what programmers do," I don't think. The current generation of LLMs just seems to be
1. highly sycophantic and constitutionally scared of speaking as an authority / pushing back / telling the user they're wrong; and
2. trained to always try to solve the problem as stated, rather than asking questions "until satisfied."