858 points cryptophreak | 10 comments
croes ◴[] No.42934439[source]
Natural language isn’t made to be precise; that’s why we use a precise subset of it in programming languages.

So if you use AI, you either need lots of extra text to remove the ambiguity of natural language, or you need a special precise subset to communicate with the AI, and that’s just programming with extra steps.
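To make that concrete (a toy example of my own, not from the thread): a requirement like "show the most recent users" sounds clear in English, but it leaves open what "recent" means, how many to show, and how to order them, while any code version has to pin all of that down.

    # "Show the most recent users" -- one possible reading of an ambiguous requirement.
    # The choices of "last_login", 30 days, and a limit of 10 are illustrative guesses
    # that the English sentence never specified.
    from datetime import datetime, timedelta, timezone

    def recent_users(users, days=30, limit=10):
        """Users whose last_login falls within the past `days` days, newest first."""
        cutoff = datetime.now(timezone.utc) - timedelta(days=days)
        active = [u for u in users if u["last_login"] >= cutoff]
        return sorted(active, key=lambda u: u["last_login"], reverse=True)[:limit]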

replies(10): >>42934517 #>>42934537 #>>42934619 #>>42934632 #>>42934651 #>>42934686 #>>42934747 #>>42934909 #>>42935464 #>>42936139 #
1. Klaster_1 ◴[] No.42934619[source]
A lot of extra text usually means prior requirements, meeting transcripts, screen-share recordings, chat history, Jira tickets and so on - the same information developers use to produce a result that satisfies the stakeholders and does the job. This seems like a straightforward direction solvable with more compute and more efficient memory. I think this will be the way it pans out.

Real projects don't require an infinitely detailed specification either; you usually stop where adding detail no longer meaningfully moves you towards the goal.

The whole premise of AI developer automation, IMO, is that if a human can develop a thing, then an AI should be able to as well, given the same input.

replies(3): >>42934735 #>>42934760 #>>42936203 #
2. cube2222 ◴[] No.42934735[source]
We are kind of actually there already.

With a 200k-token window like Claude has, you can already dump a lot of design docs / transcripts / etc. at it.
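For instance (a rough sketch, assuming the Anthropic Python SDK; the folder name, model alias and task wording are placeholders of mine): you can concatenate the docs into one big prompt and let the model work against that context.

    # Minimal sketch: dump project docs into a single large-context request.
    # "docs/*.md" and the task prompt are placeholders, not anything from this thread.
    import pathlib
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    context = "\n\n".join(p.read_text() for p in sorted(pathlib.Path("docs").glob("*.md")))

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # any large-context model
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": context + "\n\nImplement the feature described above and list any open questions.",
        }],
    )
    print(message.content[0].text)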

replies(2): >>42934887 #>>42934908 #
3. throwaway290 ◴[] No.42934760[source]
idk if you think all those Jira tickets and meetings are precise enough (IMO they're sometimes the opposite)

By the way, remind me why you need design meetings in that ideal world? :)

> Real projects don't require an infinitely detailed specification either, you usually stop where it no longer meaningfully moves you towards the goal.

The point was that specifications are not detailed enough in practice. A precise enough specification IS code. And the point is literally that natural language is just not made to be precise enough. So you are back where you started.

So you waste time explaining in detail and rehashing requirements in this imprecise language until you see the code you want to see. Which would have been faster to just... idk... type.

replies(2): >>42934814 #>>42934892 #
4. falcor84 ◴[] No.42934814[source]
Even if you have superhuman AI designers, you still need buy-in.
replies(1): >>42934859 #
5. uoaei ◴[] No.42934859{3}[source]
That's a nice thought, that anyone with that kind of power would share it.
6. rightisleft ◴[] No.42934887[source]
It's all about the context window. Even the new Mistral Codestral-2501 with its 256K context window does a great job.

If you use Cline with any large-context model, the results can be pretty amazing. It's not close to self-guiding: you still need to break down and analyze the problem and provide clear and relevant instructions, i.e. you need to be a great architect. Once you are stable on the direction, it's awe-inspiring to watch it do the bulk of the implementation.

I do agree that there is space to improve over embedded chat windows in IDEs. Solutions will come in time.

replies(1): >>42935476 #
7. Klaster_1 ◴[] No.42934892[source]
That's a fair point. I'd love to see Copilot come to the conclusion that it can't resolve a particular conundrum and communicate with other people so everyone can make a decision together.
8. mollyporph ◴[] No.42934908[source]
And Gemini has a 2M-token window, which is about 10 minutes of video, for example.
9. selectodude ◴[] No.42935476{3}[source]
The issue I have with Cline that I don't run into with, say, Aider, is that I find Cline to be like 10x more expensive. The number of tokens it blows through is incredible. Is that just me?
10. layer8 ◴[] No.42936203[source]
The premise in your last paragraph can only work with AGI, and we're probably not close to that yet.