
310 points skarat | 5 comments

Things are changing so fast with these VSCode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc., compare between the two?
1. pembrook ◴[] No.43960296[source]
For a time Windsurf was way ahead of Cursor in full agentic coding, but now I hear Cursor has caught up. I have yet to switch back and try Cursor again, but I'm starting to get frustrated with Windsurf being restricted to gathering context only 100-200 lines at a time.

So many of the bugs and poor results it introduces are simply due to improper context. When you forcibly give it the necessary context, you can clearly see it's not a model problem but a problem with the approach of gathering disparate 100-line snippets at a time.

Also, it struggles with files over 800-ish lines, which is extremely annoying.

We need some smart DeepSeek-like innovation in context gathering, since hardware and the cost of tokens are the real bottleneck here.
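The failure mode described above can be sketched roughly like this (hypothetical helper name; not Windsurf's actual implementation): when retrieval returns a fixed-size window around a match, anything outside that window, such as imports or a definition hundreds of lines earlier, never reaches the model.

```python
def gather_context(lines, match_line, window=100):
    """Return a fixed-size window of lines centered on a match,
    the way snippet-based context gathering typically works.
    Everything outside the window is invisible to the model."""
    start = max(0, match_line - window // 2)
    return lines[start:start + window]

# In a 1000-line file, a match at line 600 yields lines 550-649;
# a helper defined at line 50 falls outside the snippet, so the
# model reasons about code whose dependencies it has never seen.
```

This is why force-feeding the full file often fixes the bug: the model itself was fine, the retrieval window was not.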

replies(2): >>43965526 #>>43966209 #
2. evolve2k ◴[] No.43965526[source]
Wait, are these files 800+ lines of code? Am I the only one seeing that as a major code smell? Assuming these are code files, the issue is not AI processing power but rather bread-and-butter coding practices around file organisation and modularisation.
replies(2): >>43965878 #>>43982641 #
3. kypro ◴[] No.43965878[source]
I agree, but I've worked with many people now who seem to prefer one massive file. Specifically Python and React people seem to do this a lot.

It frustrates the hell out of me, as someone who thinks that at 300-400 lines you should generally start looking at breaking things up.

4. falleng0d ◴[] No.43966209[source]
You can use the filesystem MCP server and have it call the read-file tool to read files in full on demand.
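For reference, the reference filesystem server (`@modelcontextprotocol/server-filesystem`) exposes a `read_file` tool and is typically registered in the client's MCP config along these lines; exact key names and the config file location vary by client, and the path below is a placeholder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

Once registered, the agent can be prompted to use `read_file` instead of relying on its built-in snippet retrieval.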
5. pembrook ◴[] No.43982641[source]
I agree if the point is to write code for human consumption, but the point of vibe-coding tools like Windsurf is to let the LLMs handle everything with occasional direction. And the LLMs will create 2000+ line files when asked to generate anything from scratch.

To generate such files and then not be able to read them is pure stupidity.