LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...
If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.
So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?
If you run an open-source model with the same seed on the same hardware, it is completely deterministic: it will spit out the same answer every time. So it's not an issue with the technology, and there's nothing stopping you from writing repeatable prompts and promoting those techniques.
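As a minimal sketch of that determinism claim, assuming a locally run open-weights model via Hugging Face transformers (the model name and prompt here are just placeholders), greedy decoding with a pinned seed gives the same tokens on every run:

```python
# Sketch only: deterministic generation with an open-weights model.
# Model name and prompt are illustrative assumptions, not from this thread.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(0)  # pin the seed for any code path that samples

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("Explain gradient descent in one sentence:", return_tensors="pt")
# do_sample=False means pure greedy decoding: same weights, same hardware,
# same input -> same output tokens every run.
out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```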
Whenever people talk about "prompt engineering", they're referring to randomly changing these kinds of things, in hopes of getting a query pattern where you get meaningful results 90% of the time.
The reason changing one word in a prompt to a close synonym changes the reply is that the specific words, used in a specific series, are how information is embedded and recovered by LLMs. The "in a series" aspect is subtle and important. The same topic appears in the LLM multiple times, with different levels of treatment, from casual to academic. Each treatment uses different words: similar words, but different, and that difference is very meaningful, because it reflects how seriously the information is being handled. Using one term rather than another causes a prompt to index into one treatment of the subject rather than another. The more formal the terms, meaning the synonyms used by experts in that area of knowledge, the more accurate the replies. Close synonyms instead pull replies from outsiders to that knowledge, people who do not use the same phrases as those with the most expertise, perhaps people trying to understand the subject but not there yet.
It is not randomly changing things in one's prompts at all. It's understanding the knowledge space one is prompting within, so that the prompts generate accurate replies. That requires knowing the field well enough to use the correct formal terms that unlock those replies. And knowing the area also puts one in a better position to identify hallucination.
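To make that concrete, here is a toy comparison; the model, prompts, and subject matter are assumptions for illustration only. It asks the same question once in everyday phrasing and once in the field's own terminology, so you can compare which treatment of the topic each phrasing pulls the model toward:

```python
# Toy illustration only: compare how casual vs. domain-standard phrasing
# steers a model. Model choice and prompts are assumptions, not the thread's.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

casual = "Why does my water pill make me pee so much?"
formal = "Describe the mechanism by which loop diuretics increase diuresis."

for prompt in (casual, formal):
    reply = generator(prompt, max_new_tokens=60, do_sample=False)[0]["generated_text"]
    print(reply, "\n---")
```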
Higher-level programming languages may make choices for coders about lower-level functionality, but they have syntactic and semantic rules that produce logically consistent results. Claiming that such rules exist for LLMs but are so subtle that only the ultra-enlightened such as yourself can understand them raises the question: if hardly anyone can grasp such subtlety, then who exactly are all these massive models being built for?
> It's not reproducible according to any useful logical framework that could be generally applied to other cases.
It absolutely is; you are refusing to accept that natural language contains this type of logical structure. You keep projecting "magic incantation" allusions onto it, when it is simply that you do not understand. Plus, you're openly hostile to the idea that there is a subtle logic you are not seeing.
It is a simple mechanism: different people treat the same subjects differently, with different words. Those who are professional experts in an area tend to use the same words to describe their work. Use those words if you want the LLM to reply from their portion of the LLM's training. This is not any form of "magical incantation"; it is knowing what you are referencing by using the formal terminology.
This is not magic, nor is it some kind of elite knowledge. Drop your anger and just realize that it's subtle and difficult to see, that's all. Why this makes developers so angry is beyond me.
The syntax writers may say: "I do more than write syntax! I think in systems, logic, processes, limits, edge cases, etc."
The response to that is: you don't need syntax to do that, yet until now syntax was the barrier to technical expression.
So ironically, when they show anger it is a form of hypocrisy: they already know that knowing how to write specific words is power. They're just upset that the specific words that matter have changed.
The point of coding, and what developers are paid for, is taking a vision of a final product that receives input and returns output, and making it perfectly consistent, under all use cases, with the express desire of whoever is paying to build that system. Asking questions about what should happen in a hundred different edge cases, before they arise, is 99% of the job. Development is a job well suited to students of logic, poorly suited to memorizers and mathematicians, and obscenely ill suited to LLMs and to those who try to follow the supposed reasoning that arises from gradient descent through a language's structure. Even in the best case, edge-case analysis will never be possible for AIs built like LLMs, because they demonstrate a lack of abstract thought.
I'm not hostile to LLMs so much as toward the implication that they do anything remotely similar to what we do as developers. But you're welcome to live in a fantasy world where they "make apps". I suppose it's always obnoxious to hear someone tout a quick way to get rich or to cook a turkey in 25 minutes, no knowledge required. Just do be aware that your internet fame and fortune will be no reflection on whether your method actually works. Those of us in the industry are already acutely aware that it doesn't, and that some folks are just leading children down a lazy pied piper's path rather than teaching them how to think. That's where the assumption comes from that anyone promoting what you're promoting is selling snake oil.
But all of that is an aside from the essential problem with using them: far too many people use them to think for them, in place of their own thinking. That is another subtle aspect of LLMs, because using them to think for you damages your own ability to think critically. That's why understanding them is so important, so that one does not anthropomorphize them and trust them, which is dangerous behavior. They are idiot savants, and they get that much trust: nearly none.
I also do not believe that LLMs are even remotely capable of anything close to what software engineers do. That's why I am a strong advocate of not using them to write code. Use them to help one understand, but know that the "understanding" they can offer is of limited scope. That's their weakness: they can't encompass scope. They get detailed nuance, but give them two detailed nuances in a single phenomenon and they focus on one and drop the surrounding environment. They are idiots drawn to shiny complexity, with savant-like abilities. They are closer to a demonic toy for programmers than anything else we have.