
858 points cryptophreak | 49 comments
1. croes ◴[] No.42934439[source]
Natural language isn’t made to be precise; that’s why we use a subset of it in programming languages.

So if you use AI, you either need lots of extra text to remove the ambiguity of natural language, or you need a special precise subset to communicate with AI, and that’s just programming with extra steps.

replies(10): >>42934517 #>>42934537 #>>42934619 #>>42934632 #>>42934651 #>>42934686 #>>42934747 #>>42934909 #>>42935464 #>>42936139 #
2. oxfordmale ◴[] No.42934517[source]
Yes, let's devise a more precise way to give AI instructions. Let's call it pAIthon. This will allow the powers that be, like Zuckerberg, to save face and claim that AI has replaced mid-level developers, and enable developers to rebrand themselves as pAIthon programmers.

Joking aside, this is likely where we will end up, just with a slightly higher-level programming interface, making developers more productive.

replies(1): >>42934935 #
3. empath75 ◴[] No.42934537[source]
AIs actually are very good at this. They wouldn't be able to write code at all otherwise. If you're careful in your prompting, they'll make fewer assumptions and ask clarifying questions before going ahead and writing code.
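For example (the exact wording is just one possibility), a preamble along these lines:

  Before writing any code, list the assumptions you are
  making and ask me any clarifying questions. Only produce
  code once I have answered them.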
replies(5): >>42934703 #>>42934744 #>>42935525 #>>42938037 #>>42943211 #
4. Klaster_1 ◴[] No.42934619[source]
A lot of extra text usually means prior requirements, meeting transcripts, screen share recordings, chat history, Jira tickets and so on - the same information developers use to produce a result that satisfies the stakeholders and does the job. This seems like a straightforward direction solvable with more compute and more efficient memory. I think this will be the way it pans out.

Real projects don't require an infinitely detailed specification either, you usually stop where it no longer meaningfully moves you towards the goal.

The whole premise of AI developer automation, IMO, is that if a human can develop a thing, then AI should be able to as well, given the same input.

replies(3): >>42934735 #>>42934760 #>>42936203 #
5. 65 ◴[] No.42934632[source]
We're going to create SQL all over again, aren't we?
replies(1): >>42935086 #
6. ◴[] No.42934651[source]
7. spacemanspiff01 ◴[] No.42934686[source]
Or a proposal/feedback process. A la: you are hired by a non-technical person to build something; you generate requirements and a proposed solution. You then propose that solution, and they give feedback.

Having a feedback loop is the only viable way for this. Sure, the client could give you a book on what they want, but often people do not know their edge cases, what issues may arise, etc.

8. 9rx ◴[] No.42934703[source]
> If you're careful in your prompting

In other words, if you replace natural language with a programming language then the computer will do a good job of interpreting your intent. But that's always been true, so...

replies(1): >>42934782 #
9. cube2222 ◴[] No.42934735[source]
We are kind of actually there already.

With a 200k token window like Claude has you can already dump a lot of design docs / transcripts / etc. at it.

replies(2): >>42934887 #>>42934908 #
10. oxfordmale ◴[] No.42934744[source]
AI is very good at this. Unfortunately, humans tend to be super bad at providing detailed verbal instructions.
replies(2): >>42934864 #>>42938638 #
11. pjc50 ◴[] No.42934747[source]
There was a wave of this previously in programming: https://en.wikipedia.org/wiki/The_Last_One_(software)

All the same buzzwords, including "AI"! In 1981!

12. throwaway290 ◴[] No.42934760[source]
idk if you think all those jira tickets and meetings are precise enough (IMO sometimes the opposite)

By the way, remind me why you need design meetings in that ideal world?:)

> Real projects don't require an infinitely detailed specification either, you usually stop where it no longer meaningfully moves you towards the goal.

The point was that specification is not detailed enough in practice. A precise enough specification IS code. And the point is literally that natural language is just not made to be precise enough. So you are back where you started.

So you waste time explaining in detail and rehashing requirements in this imprecise language until you see the code you want to see. Which would have been faster to just... idk... type.

replies(2): >>42934814 #>>42934892 #
13. benatkin ◴[] No.42934782{3}[source]
Being careful in your prompting doesn’t imply that. That can also be thought of as just using natural language well.
replies(1): >>42934796 #
14. 9rx ◴[] No.42934796{4}[source]
What separates natural language from programming language is that natural language doesn't have to be careful. Once you have to be careful, you are programming.
replies(2): >>42934824 #>>42937380 #
15. falcor84 ◴[] No.42934814{3}[source]
Even if you have superhuman AI designers, you still need buy-in.
replies(1): >>42934859 #
16. benatkin ◴[] No.42934824{5}[source]
It does have to be careful at times if you’re going to be effective with natural language.
replies(1): >>42934843 #
17. 9rx ◴[] No.42934843{6}[source]
Certainly there is a need for care outside of computers too, like in law, but legal documents are a prime example of programs. That's programming, written using a programming language, not natural language. It is decidedly not the same language you would use for casual conversation and generally requires technical expertise to understand.
replies(2): >>42935176 #>>42936643 #
18. uoaei ◴[] No.42934859{4}[source]
There's a nice thought, that anyone with that kind of power would share it.
19. indymike ◴[] No.42934864{3}[source]
Languages used for day-to-day communication between humans do not have the specificity needed for detailed instructions... even to other humans. We use out-of-band context (body language, social norms, tradition, knowledge of a person) quite a bit more than you would think.
replies(1): >>42938663 #
20. rightisleft ◴[] No.42934887{3}[source]
It's all about the context window. Even the new Mistral Codestral-2501 with its 256K context window does a great job.

If you use Cline with any large-context model the results can be pretty amazing. It's not close to self-guiding; you still need to break down and analyze the problem and provide clear and relevant instructions, i.e. you need to be a great architect. Once you are stable on the direction, it's awe-inspiring to watch it do the bulk of the implementation.

I do agree that there is space to improve over embedded chat windows in IDEs. Solutions will come in time.

replies(1): >>42935476 #
21. Klaster_1 ◴[] No.42934892{3}[source]
That's a fair point. I'd love to see Copilot come to the conclusion that it can't resolve a particular conundrum and communicate with other people so everyone makes a decision together.
22. mollyporph ◴[] No.42934908{3}[source]
And Gemini has a 2M token window, which is about 10 minutes of video, for example.
23. dylan604 ◴[] No.42934909[source]
> and that’s just programming with extra steps.

If you know how to program, then I agree, and that's part of why I don't see the point. If you don't know how to program, then the prompt isn't much different from providing the specs/requirements to a programmer.

24. dylan604 ◴[] No.42934935[source]
man, pAIthon was just sitting right there for the taking
replies(2): >>42935619 #>>42937458 #
25. lelanthran ◴[] No.42935086[source]
A more modern COBOL maybe.
replies(1): >>42935926 #
26. benatkin ◴[] No.42935176{7}[source]
People can often be observed to be deliberately making an effort in casual, social, natural language conversation. It flows for some people more than others. Try watching Big Bang Theory and see characters at times being deliberate with their words and at other times responding automatically.

An LLM can do increasingly well as a fly on the wall, but it’s common for people to be less collaborative with an LLM and to expect it to structure the conversation. Hence the suggestion to be careful in your prompting.

replies(1): >>42935298 #
27. 9rx ◴[] No.42935298{8}[source]
> at times being deliberate with their words and at other times responding automatically.

Right. On one side you have programming language and on the other natural language.

They can intermingle, if that is what you are trying to say? You can see this even in traditional computer programming. One will often switch between deliberate expression and casual, natural expression (what often gets called comments in that context).

28. kokanee ◴[] No.42935464[source]
> or you need a special precise subset to communicate with AI

haha, I just imagined sending TypeScript to ChatGPT and having it spit my TypeScript back to me. "See guys, if you just use Turing-complete logically unambiguous input, you get perfect output!"

replies(1): >>42947232 #
29. selectodude ◴[] No.42935476{4}[source]
One issue I have with Cline that I don't run into with, say, Aider, is that I find Cline to be like 10x more expensive. The number of tokens it blows through is incredible. Is that just me?
30. LordDragonfang ◴[] No.42935525[source]
> they'll make fewer assumptions and ask clarifying questions before going ahead and writing code.

Which model are you talking about here? Because with ChatGPT, I struggle with getting it to ask any clarifying questions before just dumping code filled with placeholders I don't want, even when I explicitly prompt it to ask for clarification.

31. ◴[] No.42935619{3}[source]
32. 9rx ◴[] No.42935926{3}[source]
So SQL?
33. thomastjeffery ◴[] No.42936139[source]
Natural language can be precise, but only in context.

The struggle is to provide a context that disambiguates the way you want it to.

LLMs solve this problem by avoiding it entirely: they stay ambiguous, and just give you the most familiar context, letting you change direction with more prompts. It's a cool approach, but it's often not worth the extra steps, and sometimes your context window can't fit enough steps anyway.

My big idea (the Story Empathizer) is to restructure this interaction such that the only work left to the user is to decide which context suits their purpose best. Given enough context instances (I call them backstories), this approach to natural language processing could recursively eliminate much of its own ambiguity, leaving very little work for us to do in the end.

Right now my biggest struggle is figuring out what the foundational backstories will be, and writing them.

replies(1): >>42938199 #
34. layer8 ◴[] No.42936203[source]
This premise in your last paragraph can only work with AGI, and we’re probably not close to that yet.
35. kmoser ◴[] No.42936643{7}[source]
Even lawyers agree that legalese is no more accurate than plain English if used properly: https://www.scientificamerican.com/article/even-lawyers-dont...

In other words, complex applications can still be fully specified in plain English, even if it might take more words.

replies(1): >>42938554 #
36. xboxnolifes ◴[] No.42937380{5}[source]
Does this mean that good communication skills are equivalent to programming?
37. oxfordmale ◴[] No.42937458{3}[source]
Thanks for pointing it out :-)
38. croes ◴[] No.42938037[source]
AI is a little bit like Occam's razor: when you say hoofbeats, you get horses. Bad if you need zebras.
39. skydhash ◴[] No.42938199[source]
That’s what programming languages are: you define a context, then you see that you can shorten the notation down to symbols. Like “The symbol a will refer to the value of type string and content ‘abcd’ and cannot refer to anything else for its lifetime” gets you:

  const a = "abcd"
That is called semantics. Programming is mostly fitting the vagueness inherent to natural languages to the precise context of the programming language.
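Spelling it out a bit further (a minimal TypeScript sketch, names made up): each line of code compresses a whole sentence of that defining context:

  // "The symbol a will refer to a string with content 'abcd'
  // and cannot refer to anything else for its lifetime."
  const a: string = "abcd";

  // "The symbol n will start as the number 1 and may be
  // rebound to other numbers later."
  let n: number = 1;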
replies(1): >>42939688 #
40. 9rx ◴[] No.42938554{8}[source]
> complex applications can still be fully specified in plain English

In plain English, of course, but not in natural English. When using language naturally one will leave out details, relying on other inputs, such as shared assumptions, to fill in the gaps. Programming makes those explicit.

replies(1): >>42943617 #
41. nomel ◴[] No.42938638{3}[source]
Then those same humans won't be able to reason about code, or the problem spaces they're working in, regardless, since it's all fundamentally about precise specifics.
42. nomel ◴[] No.42938663{4}[source]
Programming languages, which are human languages, are purpose-built for this. Anyone working in the domain of precise specifications uses them, or something very similar (for example, in engineering, writing contracts, etc.), often daily. ;)

They all usually boil down to a subset of English, because near-caveman speak is enough to define things with precision.

43. thomastjeffery ◴[] No.42939688{3}[source]
Yes, but programming languages are deliberately limited to unambiguous, context-free grammar. This means that every expression written in a programming language is explicitly defined to have precisely one meaning.

The advantage of natural language is that we can write ambiguously defined expressions, and infer their meaning arbitrarily with context. This means that we can write with fewer unique expressions. It also means that context itself can be more directly involved in the content of what we write.

In context-free grammar, we can only express "what" and "how"; never "why". Instead, the "why" is encoded into every decision of the design and implementation of what we are writing.

If we could leverage ambiguous language, then we could factor out the "why", and implement it later using context.
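A concrete illustration (TypeScript; the pricing scenario is made up): the grammar carries the "what" and "how", while the "why" survives only as a comment the compiler ignores:

  // why: marketing wants prices to always end in .99
  // (context the grammar itself has no way to hold)
  function displayPrice(cents: number): string {
    // what/how: round up to the next dollar, subtract one cent
    const rounded = Math.ceil(cents / 100) * 100 - 1;
    return `$${(rounded / 100).toFixed(2)}`;
  }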

44. foobiekr ◴[] No.42943211[source]
I don’t think I’ve ever seen an LLM in any context ask for clarification. Is that a real thing?
45. kmoser ◴[] No.42943617{9}[source]
Programming only needs to make things as explicit as necessary based on the developer's desires and the system's assumptions. Where more detail is necessary, the programmer can add more code. For example, there's no need to tell a browser explicit rules for how users should be able to interact with an input field, since that's the browser's default behavior; you only need to specify different behavior when you want it to differ from the default.

Likewise for English: one can use natural English to add as many details as necessary, depending on who you're talking to, e.g. "Make an outline around the input field, and color the outline #ff0000." You can then add, if necessary, "Make the corners of the outline rounded with a 5 pixel radius."
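For instance, that English spec maps almost word for word onto code; a minimal sketch (the selector and the outline width are my own assumptions):

  // "Make an outline around the input field, and color the outline #ff0000."
  const field = document.querySelector("input");
  if (field instanceof HTMLElement) {
    field.style.outline = "2px solid #ff0000";
    // "Make the corners of the outline rounded with a 5 pixel radius."
    // (modern browsers let the outline follow border-radius)
    field.style.borderRadius = "5px";
  }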

In this respect, complex applications can be fully specified in English; we usually call those documents "formal specifications." You can write them in terse, non-natural language with consistent, defined terminology to save room (as most specs are), or in colloquial (natural) language if you really want. I wouldn't recommend the latter, but it's definitely useful when presenting specs to a less technically informed audience.

replies(1): >>42944067 #
46. 9rx ◴[] No.42944067{10}[source]
> complex applications can be fully specified in English

Of course. We established that at the beginning. The entire discussion is about exactly that. It was confirmed again in the previous comment. However, that is not natural. I expect most native English speakers would be entirely incapable of fully specifying a complex application or anything else of similar complexity. That is not natural use.

While the words, basic syntax, etc. may mirror that found in natural language, a specification is really a language of its own. It is nothing like the language you will find people speaking at the bar or when writing pointless comments on Reddit. And that's because it is a programming language.

replies(1): >>42944941 #
47. kmoser ◴[] No.42944941{11}[source]
> I expect most native English speakers would be entirely incapable of fully specifying a complex application or anything else of similar complexity.

Your original postulation was that it simply wasn't possible, implying nobody could do it. The fact that most native English speakers wouldn't be able to do it doesn't mean nobody can do it.

I agree that most native English speakers wouldn't be able to write a reasonably complete spec in any type of language, not just because they lack the language skill, but because they simply wouldn't have the imagination and knowledge of what to create to begin with, let alone how to express it.

replies(1): >>42949076 #
48. charlieyu1 ◴[] No.42947232[source]
I guess we could have an LLM translate natural language to some precise subset, get it processed, then translate the output back to natural language.
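A sketch of that round trip (everything here is hypothetical: `llm` stands in for whatever model call you use, and SQL plays the role of the precise subset):

  type LLM = (prompt: string) => Promise<string>;

  // stand-in for a real database call (assumption)
  async function runQuery(sql: string): Promise<unknown[]> {
    return [];
  }

  async function roundTrip(llm: LLM, request: string): Promise<string> {
    // natural language -> precise subset
    const sql = await llm(`Translate this request into SQL: ${request}`);
    const rows = await runQuery(sql);
    // precise output -> natural language
    return llm(`Summarize these rows in plain English: ${JSON.stringify(rows)}`);
  }
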
49. 9rx ◴[] No.42949076{12}[source]
> Your original postulation was that it simply wasn't possible, implying nobody could do it.

Then you must have mistakenly replied to the wrong comment, I guess? My original comment, and every one that followed, postulated that so-called "careful use of English", as in what you are talking about and what we have always been talking about, is a programming language. Given that it is a programming language, how could it not be used in that way? That's what programming languages do best.

> The fact that most native English speakers wouldn't be able to do it doesn't mean nobody can do it.

Of course. But the fact that most native English speakers wouldn't be able to do it proves that it isn't natural language. This "technical English" language may resemble the natural language also known as English in many ways, but, as even you pointed out earlier, it is not the same language.