So if you use AI, you either need lots of extra text to remove the ambiguity of natural language, or you need a special, precise subset to communicate with the AI, and that's just programming with extra steps.
The struggle is to provide a context that disambiguates the way you want it to.
LLMs solve this problem by avoiding it entirely: they stay ambiguous, and just give you the most familiar context, letting you change direction with more prompts. It's a cool approach, but it's often not worth the extra steps, and sometimes your context window can't fit enough steps anyway.
My big idea (the Story Empathizer) is to restructure this interaction such that the only work left to the user is to decide which context suits their purpose best. Given enough context instances (I call them backstories), this approach to natural language processing could recursively eliminate much of its own ambiguity, leaving very little work for us to do in the end.
Right now my biggest struggle is figuring out what the foundational backstories will be, and writing them.
const a = "abcd";
That is called semantics. Programming is mostly fitting the vagueness inherent to natural languages into the precise context of the programming language. The advantage of natural language is that we can write ambiguously defined expressions and infer their meaning arbitrarily from context. This means we can write with fewer unique expressions. It also means that context itself can be more directly involved in the content of what we write.
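A minimal sketch of that idea, assuming a toy setup where the same ambiguous word gets a precise meaning only once a context is chosen (all names here are illustrative, not any real API):

```javascript
// One ambiguous word, "run", resolved differently by each context.
const contexts = {
  shell:   input => `executing: ${input}`,
  fitness: input => `jogging for ${input} minutes`,
};

// The expression stays the same; the context supplies the semantics.
function interpret(input, context) {
  return contexts[context](input);
}

console.log(interpret("ls -la", "shell"));  // executing: ls -la
console.log(interpret("30", "fitness"));    // jogging for 30 minutes
```

The point of the sketch is that the "dictionary" of expressions stays small; the precision lives entirely in which context you pick.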
In a context-free grammar, we can only express "what" and "how", never "why". Instead, the "why" is encoded into every decision in the design and implementation of what we are writing.
If we could leverage ambiguous language, then we could factor out the "why", and implement it later using context.
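One way to picture "implementing the why later": keep the ambiguous request fixed and let a backstory (to borrow the term from above) supply the purpose that pins down its meaning. This is only a hedged toy sketch; `empathize` and the backstory names are hypothetical.

```javascript
// The request itself stays ambiguous.
const request = "summarize this contract";

// Each backstory encodes a different "why".
const backstories = {
  lawyer: text => `Key obligations in: ${text}`,
  child:  text => `A simple story about: ${text}`,
};

// The "why" is factored out of the request and applied at the end.
function empathize(req, backstory) {
  return backstories[backstory](req);
}

console.log(empathize(request, "lawyer"));
// Key obligations in: summarize this contract
```

Here the only decision left to the user is which backstory fits their purpose, which mirrors the interaction described above.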