
421 points | briankelly | 1 comment
only-one1701 ◴[] No.43575480[source]
Increasingly I’m realizing that in most cases there is a SIGNIFICANT difference between how useful AI is on greenfield projects vs how useful it is on brownfield projects. For the former: pretty good! For the latter, it’s often worse than useless.
replies(7): >>43575563 #>>43575575 #>>43575797 #>>43576773 #>>43577248 #>>43577399 #>>43578794 #
Aurornis ◴[] No.43576773[source]
It’s also interesting to see how quickly the greenfield progress rate slows down as the projects grow.

I skimmed the vibecoding subreddits for a while. It was common to see frustrations about how coding tools (Cursor, Copilot, etc) were great last month but terrible now. The pattern repeats every month, though. When you look closer it’s usually people who were thrilled when their projects were small but are now frustrated when they’re bigger.

replies(1): >>43587425 #
1. Workaccount2 ◴[] No.43587425[source]
The real issue is context size. You kinda need to know what you are doing in order to construct the project in pieces, and know what to tell the LLM when you spin up a new instance with fresh context to work on a single subsection. It's unwieldy and inefficient, and the model inevitably gets confused when it can't effectively look at the whole code base.
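The workflow described above can be sketched roughly as follows. This is a hypothetical illustration, not any tool's actual API: the token budget, the chars-per-token heuristic, and the helper names (`chunk_files`, `build_prompt`) are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the "fresh context per subsection" workflow:
# split a codebase into groups of files that fit a token budget, then
# build a standalone prompt for each new LLM session.
from pathlib import Path

TOKEN_BUDGET = 32_000   # assumed per-session context budget
CHARS_PER_TOKEN = 4     # rough heuristic; real tokenizers vary by model

def estimate_tokens(text: str) -> int:
    """Crude token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN

def chunk_files(paths, budget=TOKEN_BUDGET):
    """Greedily group files so each group stays under the token budget."""
    groups, current, used = [], [], 0
    for path in paths:
        tokens = estimate_tokens(Path(path).read_text())
        if current and used + tokens > budget:
            groups.append(current)
            current, used = [], 0
        current.append(path)
        used += tokens
    if current:
        groups.append(current)
    return groups

def build_prompt(project_summary: str, group):
    """A standalone prompt: shared project summary plus one subsection."""
    sources = "\n\n".join(
        f"# {p}\n{Path(p).read_text()}" for p in group
    )
    return (
        f"Project overview:\n{project_summary}\n\n"
        f"Files in this subsection:\n{sources}"
    )
```

The "project overview" blurb repeated in every prompt is the part that's hard to automate: someone who knows the architecture has to write it, which is exactly the "you kinda need to know what you are doing" problem.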

Gemini 2.5 is much better in this regard: it can produce decent output up to around 100k tokens, whereas Claude 3.7 starts to choke around 32k. Long term, it remains to be seen whether this will still be an issue. If models could get to a 5M-token context and perform the way current models do at 5k, it would be a total game changer.