
66 points by zdw | 2 comments
siliconc0w
I was working on a new project and wanted to try out a new frontend framework (data-star.dev). What you quickly find out is that LLMs are really tuned for React, and their frontend performance drops considerably if you aren't using it. Even after pasting the entire documentation into context and giving specific examples close to what I wanted, SOTA models still hallucinated attributes/APIs rather than using the correct ones. And it isn't even that you have to use framework X, it's that you have to use X as of the date of training.

I think this is one of the reasons we don't see huge productivity gains. Most F500 companies have pretty gnarly proprietary codebases which are going to be out-of-distribution. Context engineering helps, but you still don't get anywhere near the performance you get in-distribution. It's probably not unsolvable, but it's a pretty big problem ATM.

1. NewsaHackO
I use it with Angular and Svelte and it works pretty well. I used to use Lit, which at least the older models did pretty badly at, but it's less well known, so that's expected.
2. JimDabell
Yes, Claude Opus 4.5 recently scored 100% on SvelteBench:

https://khromov.github.io/svelte-bench/benchmark-results-mer...

I found that LLMs sometimes get confused by Lit because they don't understand the limitations of the shadow DOM. So they'll do something like dispatch an event and try to catch it from a parent as if it propagated normally, not realising that the shadow DOM screws that all up, or they'll assume global/reset CSS applies globally when you actually need to reapply it to every single component.
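
A minimal sketch of that event pitfall in Lit (the component, event, and table of names below are made up for illustration, not from any real codebase): an event dispatched from inside a shadow root never reaches listeners on ancestors in the light DOM unless it's created with composed: true, and page-level reset CSS doesn't leak in either, so every component has to restate the styles it relies on.

    // Sketch only – names are illustrative.
    import { LitElement, html, css } from 'lit';

    export class ItemPicker extends LitElement {
      // Global / reset CSS from the page stops at the shadow boundary,
      // so anything you depend on (fonts, box-sizing, etc.) has to be
      // redeclared inside each component.
      static styles = css`
        button { font: inherit; cursor: pointer; }
      `;

      private select() {
        // bubbles alone is not enough: without composed: true the event
        // stops at this component's shadow root, and a listener attached
        // to a parent element in the light DOM never fires.
        this.dispatchEvent(
          new CustomEvent('item-selected', {
            detail: { id: 42 },
            bubbles: true,
            composed: true, // required to cross the shadow DOM boundary
          })
        );
      }

      render() {
        return html`<button @click=${this.select}>Select</button>`;
      }
    }

    customElements.define('item-picker', ItemPicker);

    // Usage in the page:
    //   <item-picker></item-picker>
    //   document.querySelector('item-picker')
    //     ?.addEventListener('item-selected', (e) => console.log((e as CustomEvent).detail));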

What I find interesting is that all the platforms like Lovable etc. seem to be choosing Supabase, and LLMs are pretty terrible with it – constantly getting RLS (row-level security) wrong, etc.
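
A hedged sketch of what that looks like with supabase-js (the table, column, and policy names are made up; the SQL in the comment is the assumed Postgres-side setup): with RLS enabled, the anon-key client silently returns only the rows the current user's policies allow, which is exactly the behaviour LLM-generated code tends to get wrong – forgetting to enable RLS, writing a policy that never matches, or testing with the service-role key, which bypasses RLS entirely.

    // Sketch only – table/column names and the policy are illustrative.
    //
    // Assumed setup in the Supabase SQL editor:
    //   alter table todos enable row level security;
    //   create policy "own rows" on todos
    //     for select using (auth.uid() = user_id);
    import { createClient } from '@supabase/supabase-js';

    const supabase = createClient(
      process.env.SUPABASE_URL!,      // e.g. the project URL
      process.env.SUPABASE_ANON_KEY!  // anon key: RLS is enforced
    );

    async function listTodos() {
      // Unauthenticated: the policy above matches nothing, so this comes
      // back as an empty array rather than an error – easy to miss.
      const anon = await supabase.from('todos').select('*');
      console.log('anon rows:', anon.data);

      // Authenticated: auth.uid() now resolves to this user, so only
      // their rows come back. The service-role key, by contrast,
      // bypasses RLS entirely.
      await supabase.auth.signInWithPassword({
        email: 'user@example.com',
        password: 'example-password',
      });
      const own = await supabase.from('todos').select('*');
      console.log('own rows:', own.data);
    }

    listTodos();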