
190 points baruchel | 1 comment
gwbas1c ◴[] No.44424165[source]
One thing that's surprised me throughout my career is just how inefficient most of the code that I've inherited is. I suspect we could do a lot more with the hardware we have if we simply became better at programming.

(Perhaps more AI coaching could help?)

replies(3): >>44424202 #>>44424776 #>>44428402 #
HideousKojima ◴[] No.44424202[source]
AI is trained on the same shitty code you're inheriting, so probably not.
replies(1): >>44424294 #
humanfromearth9 ◴[] No.44424294[source]
It's also trained on all the best practices and algorithms you don't know exist, so it's able to do better, provided you know what to ask and how to ask it.
replies(1): >>44424721 #
HideousKojima ◴[] No.44424721[source]
It's not simply a matter of knowing what or how to ask. LLMs are essentially statistical regressions on crack. That's a gross oversimplification, but the point is that what they generate is based on statistical likelihoods: if 90%+ of the code they were trained on was shit, you're not going to get the good stuff very often. And if you need an AI to help you write it, you won't even be able to recognize the good stuff when it does get generated.
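A toy sketch of the argument above (purely illustrative; real LLMs sample tokens from a learned distribution, not whole snippets, and the 90/10 split is the commenter's hypothetical, not measured data): a model that reproduces patterns in proportion to their frequency in training data will emit "good" output about as rarely as it saw it.

```python
import random

def sample_like_training(corpus, n_samples, seed=0):
    """Sample outputs in proportion to their frequency in the training corpus."""
    rng = random.Random(seed)
    return [rng.choice(corpus) for _ in range(n_samples)]

# Hypothetical training corpus: 90% "bad" snippets, 10% "good" ones.
corpus = ["bad_pattern"] * 90 + ["good_pattern"] * 10

samples = sample_like_training(corpus, 1000)
good_rate = samples.count("good_pattern") / len(samples)
print(f"fraction of 'good' outputs: {good_rate:.2f}")  # close to 0.10
```

Prompting can shift the distribution (the point of the parent comment), but absent that steering, the base rate of the training data dominates.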