So you either need lots of extra text to remove the ambiguity of natural language if you use AI, or you need a special precise subset to communicate with AI, and that's just programming with extra steps.
Joking aside, this is likely where we will end up, just at a slightly higher-level programming interface, making developers more productive.
Real projects don't require an infinitely detailed specification either, you usually stop where it no longer meaningfully moves you towards the goal.
The whole premise of AI developer automation, IMO, is that if a human can develop a thing, then AI should be able to as well, given the same input.
Having a feedback loop is the only viable way to do this. Sure, the client could give you a book on what they want, but often people do not know their edge cases, what issues may arise, etc.
All the same buzzwords, including "AI"! In 1981!
By the way, remind me why you need design meetings in that ideal world? :)
> Real projects don't require an infinitely detailed specification either, you usually stop where it no longer meaningfully moves you towards the goal.
The point was that specification is not detailed enough in practice. A precise enough specification IS code. And the point is literally that natural language is just not made to be precise enough. So you are back where you started.
So you waste time explaining in detail and rehashing requirements in this imprecise language until you see what code you want to see. Which was faster to just... idk.. type.
If you use Cline with any large-context model, the results can be pretty amazing. It's not close to self-guiding: you still need to break down and analyze the problem and provide clear and relevant instructions. I.e., you need to be a great architect. Once you are stable on the direction, it's awe-inspiring to watch it do the bulk of the implementation.
I do agree that there is space to improve over embedded chat windows in IDEs. Solutions will come in time.
If you know how to program, then I agree, and that's part of why I don't see the point. If you don't know how to program, then the prompt isn't much different than providing the specs/requirements to a programmer.
An LLM can do increasingly well as a fly on the wall, but it's common for people using an LLM to be less collaborative with it and to expect the LLM to structure the conversation. Hence the suggestion to be careful in your prompting.
Right. On one side you have programming language and on the other natural language.
They can intermingle, if that is what you are trying to say? You can see this even in traditional computer programming. One will often switch between deliberate expression and casual, natural expression (what often gets called comments in that context).
haha, I just imagined sending TypeScript to ChatGPT and having it spit my TypeScript back to me. "See guys, if you just use Turing-complete logically unambiguous input, you get perfect output!"
Which model are you talking about here? Because with ChatGPT, I struggle with getting it to ask any clarifying questions before just dumping code filled with placeholders I don't want, even when I explicitly prompt it to ask for clarification.
The struggle is to provide a context that disambiguates the way you want it to.
LLMs solve this problem by avoiding it entirely: they stay ambiguous, and just give you the most familiar context, letting you change direction with more prompts. It's a cool approach, but it's often not worth the extra steps, and sometimes your context window can't fit enough steps anyway.
My big idea (the Story Empathizer) is to restructure this interaction such that the only work left to the user is to decide which context suits their purpose best. Given enough context instances (I call them backstories), this approach to natural language processing could recursively eliminate much of its own ambiguity, leaving very little work for us to do in the end.
Right now my biggest struggle is figuring out what the foundational backstories will be, and writing them.
In other words, complex applications can still be fully specified in plain English, even if it might take more words.
const a = "abcd"
That is called semantics. Programming is mostly fitting the vagueness inherent to natural languages to the precise context of the programming language.

In plain English, of course, but not in natural English. When using language naturally, one will leave out details, relying on other inputs, such as shared assumptions, to fill in the gaps. Programming makes those explicit.
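A toy sketch of that point: the English phrase "keep a list of users" leans on shared assumptions, while code forces every leftover detail into the open. (The `User` shape below is invented purely for illustration.)

```typescript
// English never said what a "user" consists of; a type must say so.
interface User {
  id: number;   // English never mentioned users needing an identifier
  name: string;
}

// An array answers questions the English left open:
// ordered? yes. duplicates allowed? yes. mutable? yes.
const users: User[] = [{ id: 1, name: "Ada" }];
```

Every one of those answers is a decision the natural-language version silently deferred.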
They all usually boil down to a subset of English, because near-caveman speak is enough to define things with precision.
The advantage of natural language is that we can write ambiguously defined expressions, and infer their meaning arbitrarily with context. This means that we can write with fewer unique expressions. It also means that context itself can be more directly involved in the content of what we write.
In context-free grammar, we can only express "what" and "how"; never "why". Instead, the "why" is encoded into every decision of the design and implementation of what we are writing.
If we could leverage ambiguous language, then we could factor out the "why", and implement it later using context.
Likewise for English: one can use natural English to add as many details as necessary, depending on who you're talking to, e.g. "Make an outline around the input field, and color the outline #ff0000." You can then add, if necessary, "Make the corners of the outline rounded with a 5 pixel radius."
In this respect, complex applications can be fully specified in English; we usually call those documents "formal specifications." You can write them in terse, non-natural language with consistent, defined terminology to save room (as most specs are), or in colloquial (natural) language if you really want. I wouldn't recommend the latter, but it's definitely useful when presenting specs to a less technically informed audience.
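For what it's worth, the two English sentences in the outline example above pin down most, but not all, of a style spec. A sketch of the equivalent as a CSS-in-TS style object makes the leftover ambiguity visible (the outline width below is an assumption the English never stated):

```typescript
// "Make an outline around the input field, and color the outline #ff0000.
//  Make the corners of the outline rounded with a 5 pixel radius."
const outlineSpec: Record<string, string> = {
  outline: "1px solid #ff0000", // width "1px" is assumed; the English never specified it
  borderRadius: "5px",          // this detail the English did pin down exactly
};
```

Each sentence of added detail closes one such gap, which is the "add as many details as necessary" process in miniature.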
Of course. We established that at the beginning. The entire discussion is about exactly that. It was confirmed again in the previous comment. However, that is not natural. I expect most native English speakers would be entirely incapable of fully specifying a complex application or anything else of similar complexity. That is not natural use.
While the words, basic syntax, etc. may mirror that found in natural language, a specification is really a language of its own. It is nothing like the language you will find people speaking at the bar or when writing pointless comments on Reddit. And that's because it is a programming language.
Your original postulation was that it simply wasn't possible, implying nobody could do it. The fact that most native English speakers wouldn't be able to do it doesn't mean nobody can do it.
I agree that most native English speakers wouldn't be able to write a reasonably complete spec in any type of language, not just because they lack the language skill, but because they simply wouldn't have the imagination and knowledge of what to create to begin with, let alone how to express it.
Then you must have mistakenly replied to the wrong comment, I guess? My original comment, and every one that followed, postulated that so-called "careful use of English", as in what you are talking about and what we have always been talking about, is a programming language. Given that it is a programming language, how could it not be used in that way? That's what programming languages do best.
> The fact that most native English speakers wouldn't be able to do it doesn't mean nobody can do it.
Of course. But the fact that most native English speakers wouldn't be able to do it proves that it isn't natural language. This "technical English" language may resemble the natural language also known as English in many ways, but, as even you pointed out earlier, it is not the same language.