
66 points zdw | 1 comment
siliconc0w ◴[] No.46187816[source]
I was working on a new project and wanted to try out a new frontend framework (data-star.dev). What you quickly find out is that LLMs are really tuned to React, and their frontend performance drops pretty considerably if you aren't using it. Even with the entire documentation pasted into context, and specific examples close to what I wanted, SOTA models still hallucinated attributes/APIs instead of using the correct ones. And it isn't even that you have to use Framework X; it's that you need to use X as of the training cutoff date.
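To make the failure mode concrete, here's a toy sketch of the kind of guardrail you end up writing: lint the generated markup against an allowlist of documented attributes. The attribute names below are my rough recollection of the Datastar docs (check data-star.dev for the real, current set), and `data-react-state` is a made-up example of the React-flavored hallucinations you get.

```python
import re

# Illustrative allowlist only -- a few attributes I remember from the
# Datastar docs, NOT an authoritative or complete list.
KNOWN_ATTRS = {"data-signals", "data-text", "data-on-click", "data-bind", "data-show"}

def unknown_datastar_attrs(html: str) -> set[str]:
    """Return data-* attributes in the markup that aren't on the allowlist."""
    found = set(re.findall(r"\bdata-[a-z-]+", html))
    return found - KNOWN_ATTRS

# A hallucinated, React-ish attribute slips in next to a real one:
snippet = '<button data-on-click="$count++" data-react-state="count">+1</button>'
print(unknown_datastar_attrs(snippet))  # → {'data-react-state'}
```

Crude, but this kind of post-hoc check catches hallucinated APIs that no amount of pasted documentation reliably prevents.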

I think this is one of the reasons we don't see huge productivity gains. Most F500 companies have pretty gnarly proprietary codebases that are going to be out-of-distribution. Context engineering helps, but you still don't get near in-distribution performance. It's probably not unsolvable, but it's a pretty big problem ATM.

replies(6): >>46188076 #>>46188172 #>>46188177 #>>46188540 #>>46188662 #>>46189279 #
ehnto ◴[] No.46188177[source]
That is the "big issue" I have found as well. Not only are enterprise codebases often proprietary, ground-up architectures, the actual hard part is the business logic: locating the required knowledge and taking into account a decade of changing business requirements. All of that information usually lives inside a bunch of different humans' heads, and by the time you get it all out and processed, the code is often a small part of the task.
replies(1): >>46189060 #
theshrike79 ◴[] No.46189060[source]
AI is an excellent reason/excuse to have resources allocated to documenting these things

“Hey boss we can use AI more if we would document these business requirements in a concise and clear way”

Worst case: humans get proper docs :)