
728 points by freetonik | 1 comment
Waterluvian ◴[] No.44976790[source]
I’m not a big AI fan but I do see it as just another tool in your toolbox. I wouldn’t really care how someone got to the end result that is a PR.

But I also think that if a maintainer asks you to jump before submitting a PR, you politely ask, “how high?”

replies(16): >>44976860 #>>44976869 #>>44976945 #>>44977015 #>>44977025 #>>44977121 #>>44977142 #>>44977241 #>>44977503 #>>44978050 #>>44978116 #>>44978159 #>>44978240 #>>44978311 #>>44978533 #>>44979437 #
sheepscreek ◴[] No.44978159[source]
We keep talking about “AI replacing coders,” but the real shift might be that coding itself stops looking like coding. If prompts become the de facto way to build applications and systems in the future, maybe programming languages will just be baggage we’ll need to unlearn.

Programming languages were a nice abstraction to accommodate our inability to comprehend complexity; current-day LLMs do not have the same limitations we do.

The uncomfortable part will be what happens to PRs and other human-in-the-loop checks. It’s worth considering that, not too far into the future, we might not be debugging code anymore; we’ll be debugging the AI itself. That’s a whole different problem space that will need an entirely new class of solutions and tools.

replies(2): >>44978213 #>>44978727 #
ryoshu ◴[] No.44978213[source]
All we need to do is prompt an LLM with such specificity that it does exactly what we want the machine to do.
replies(1): >>44980878 #
kentm ◴[] No.44980878[source]
Good idea! We can have some sort of standard grammar that we use to prompt the LLM such that it deterministically gives us the result we ask for. We then constrain all prompts to match that grammar. Some sort of language describing programs.
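Something like this, perhaps. A purely hypothetical sketch in Python (the two-rule grammar and the run_prompt helper are invented for illustration): constrain every prompt to the grammar, evaluate it deterministically, and the “prompt” is just a program.

    # Hypothetical "prompt grammar", so constrained that every prompt
    # deterministically maps to one result:
    #
    #   prompt := stmt (";" stmt)*
    #   stmt   := "set" NAME "to" NUMBER | "print" NAME
    #
    def run_prompt(prompt: str) -> None:
        env: dict[str, int] = {}
        for stmt in prompt.split(";"):
            words = stmt.split()
            if words[:1] == ["set"] and words[2:3] == ["to"]:
                env[words[1]] = int(words[3])       # "set x to 42"
            elif words[:1] == ["print"]:
                print(env[words[1]])                # "print x"
            else:
                raise SyntaxError(f"prompt does not match grammar: {stmt!r}")

    run_prompt("set answer to 42; print answer")    # prints 42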