
1479 points sandslash | 3 comments
blixt ◴[] No.44319312[source]
If we extrapolate these points about building tools for AI and letting the AI turn prompts into code, I can’t help but reach the conclusion that future programming languages and their runtimes will be heavily influenced by the strengths and weaknesses of LLMs.

What would the code of an application look like if it was optimized to be efficiently used by LLMs and not humans?

* While LLMs heavily tend toward expecting the same inputs/outputs as humans because of their training data, I don’t think this would inhibit the co-evolution of novel representations of software.

replies(5): >>44319394 #>>44319410 #>>44319413 #>>44319434 #>>44320601 #
1. mythrwy ◴[] No.44319434[source]
It does seem a bit silly, long term, to have LLMs writing something like Python, a language that was developed to be human friendly.

If AI is going to write all the code going forward, we can probably dispense with the user-friendly part and just make everything as efficient as possible for machines.

replies(2): >>44319625 #>>44319644 #
2. doug_durham ◴[] No.44319625[source]
I don't agree. Important code will need to be audited. I think the language of the future will be easy for human reviewers to read, but deterministic. It won't be a human language; it will be a computer language, minus the horrible ergonomics. I think Python or straight-up Java would be a good start. Things like templates wouldn't be necessary, since you could express that deterministically in a higher-level syntax (e.g. a list of elements that can accept any single type). It would be an interesting exercise.
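As a rough sketch of that last point (class and variable names here are hypothetical, not from the comment): modern Python already lets you express "a list of elements that can accept any single type" as a declarative type parameter rather than a C++-style template, keeping the constraint readable and checkable by a human auditor.

```python
from typing import Generic, TypeVar

T = TypeVar("T")  # the element type, fixed per container instance

class TypedList(Generic[T]):
    """A list-like container whose elements all share one type T."""

    def __init__(self) -> None:
        self.items: list[T] = []

    def add(self, item: T) -> None:
        self.items.append(item)

# A reviewer (or a type checker) can see the constraint at the use site:
ints = TypedList[int]()
ints.add(1)
ints.add(2)
print(ints.items)  # [1, 2]
```

A static checker such as mypy would flag `ints.add("x")` as a type error, so the "any type, but consistently" rule is enforced deterministically without template machinery.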
3. mostlysimilar ◴[] No.44319644[source]
If humans can't understand it well enough to write the data the LLM is trained on, how will the LLM be able to learn it?