https://epoch.ai/blog/can-ai-scaling-continue-through-2030
https://epoch.ai/blog/what-will-ai-look-like-in-2030
There's a good chance that eventually reading code will become like inspecting assembly.
Over the past year, I've been doing a ton of consulting. In the last three months alone I've watched at least 8 companies embrace AI-generated pipelines for coding, testing, and code reviews. Honestly, the best suggestions I've seen still come from linters in CI and spell checkers. Is this what we've come to?
My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.
> There's a good chance that eventually reading code will become like inspecting assembly.
We don’t read assembly because we read the higher-level code, which is deterministically compiled to lower-level code.
The equivalent situation for LLMs would be if we were reviewing the prompts only, and if we had 100% confidence that the prompt resulted in code that does exactly what the prompt asks.
Otherwise we need to inspect the generated code. So the situation isn’t the same, at least not with current LLMs and current LLM workflows.
I think the reason "we" don't read, or write, assembly is that it takes a lot of effort and a detailed understanding of computer architecture that most programmers simply don't have, e.g. those used to working with JavaScript frameworks on web apps.
There are of course many "we" who work with assembly every day: people working on embedded systems, for instance, or games programmers.