858 points by cryptophreak | 1 comment
croes No.42934439
Natural language isn’t made to be precise; that’s why we use a subset of it in programming languages.

So either you need lots of extra text to remove the ambiguity of natural language when you use AI, or you need a special precise subset to communicate with the AI, and that’s just programming with extra steps.

Klaster_1 No.42934619
A lot of extra text usually means prior requirements, meeting transcripts, screen-share recordings, chat history, Jira tickets and so on — the same information developers use to produce a result that satisfies the stakeholders and does the job. This seems like a straightforward direction, solvable with more compute and more efficient memory. I think this is the way it pans out.

Real projects don't require an infinitely detailed specification either, you usually stop where it no longer meaningfully moves you towards the goal.

The whole premise of AI developer automation, IMO, is that if a human can develop a thing, then AI should be able to as well, given the same input.

cube2222 No.42934735
We are kind of actually there already.

With a 200k token window like Claude has you can already dump a lot of design docs / transcripts / etc. at it.
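Before dumping a pile of design docs and transcripts into a prompt, it helps to check whether they fit the model's context budget. A minimal sketch, assuming a 200k-token budget and the common rough heuristic of ~4 characters per token (not an actual tokenizer; the file names are illustrative):

```python
# Sketch: estimate whether a set of project documents fits a model's
# context window before sending them as prompt context.
# Assumptions: ~4 chars/token heuristic, a 200k-token budget.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English prose."""
    return len(text) // 4

def fits_in_context(docs: dict[str, str], budget: int = 200_000) -> bool:
    """Return True if the estimated total token count stays within the budget."""
    total = sum(estimate_tokens(body) for body in docs.values())
    return total <= budget

# Illustrative stand-ins for real documents (~200k chars -> ~50k tokens each).
docs = {
    "design_doc.md": "word " * 40_000,
    "meeting_transcript.txt": "word " * 40_000,
}
print(fits_in_context(docs))  # True: ~100k estimated tokens under a 200k budget
```

A real pipeline would use the provider's tokenizer for an exact count, but a cheap estimate like this is enough to decide whether to truncate or summarize first.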

mollyporph No.42934908
And Gemini has a 2M-token window, which is about 10 minutes of video, for example.