
66 points by zdw | 1 comment
siliconc0w No.46187816
I was working on a new project and wanted to try out a new frontend framework (data-star.dev). What you quickly find out is that LLMs are heavily tuned toward React, and their frontend performance drops considerably if you aren't using it. Even with the entire documentation pasted into context, plus specific examples close to what I wanted, SOTA models still hallucinated attributes and APIs that don't exist in the framework. And it isn't even that you have to use framework X, it's that you need to use X as of the date of training.
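To make the failure mode concrete, here's a rough sketch of the mismatch I mean. The Datastar side assumes the v1 data-* attribute syntax from the data-star.dev docs (data-signals-*, data-on-click, data-text); those attribute names have themselves shifted across versions, which is exactly the problem:

    <!-- A counter, assuming Datastar's documented v1 data-* syntax -->
    <div data-signals-count="0">
      <button data-on-click="$count++">Increment</button>
      <span data-text="$count"></span>
    </div>

    <!-- What a React-tuned model tends to emit instead: JSX-flavored
         attributes that don't exist in Datastar -->
    <button onClick={() => setCount(count + 1)}>Increment</button>

Even with the real docs in context, models kept drifting back to the second form, or to blends of the two.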

I think this is one of the reasons we don't see huge productivity gains. Most F500 companies have pretty gnarly proprietary codebases which are going to be out-of-distribution. Context engineering helps, but you still don't get near in-distribution performance. It's probably not unsolvable, but it's a pretty big problem ATM.

Teknoman117 No.46188540
As someone who works at an F100 company with massive proprietary codebases, where users have to sign NDAs just to see the API docs and code examples: to say that the output of LLMs on work tasks is comically bad would be an understatement, even after feeding it our code and documentation as memory items for projects...