
310 points by skarat | 1 comment | source

Things are changing so fast with these vscode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc., compare between the two?
pembrook ◴[] No.43960296[source]
For a time, Windsurf was way ahead of Cursor in full agentic coding, but now I hear Cursor has caught up. I have yet to switch back and try Cursor again, but I'm starting to get frustrated with Windsurf being restricted to gathering context only 100-200 lines at a time.

So many of the bugs and poor results it introduces are simply due to improper context. When you force-feed it the necessary context, you can clearly see it's not a model problem; it's a problem with the approach of gathering disparate 100-line snippets at a time.

Also, it struggles with files over 800-ish lines, which is extremely annoying.

We need some smart, DeepSeek-like innovation in context gathering, since hardware and the cost of tokens are the real bottleneck here.
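
A minimal sketch (Python) of the difference being described, purely for illustration: the function names, the 100-line constant, and the chunking strategy are assumptions about how such a tool might gather context, not Windsurf's actual implementation. The point is that fixed-size windows split functions and classes across snippet boundaries, which is the kind of improper context blamed above.

    # Illustrative sketch only; not Windsurf's real context-gathering code.
    from pathlib import Path

    CHUNK_LINES = 100  # the per-read limit the parent comment complains about

    def read_in_chunks(path: str, chunk_lines: int = CHUNK_LINES):
        """Yield disparate fixed-size snippets, the way a context-limited tool might."""
        lines = Path(path).read_text().splitlines()
        for start in range(0, len(lines), chunk_lines):
            yield "\n".join(lines[start:start + chunk_lines])

    def read_whole_file(path: str) -> str:
        """Hand the model the entire file at once, trading token cost for coherence."""
        return Path(path).read_text()

    # A 2000-line file becomes ~20 unrelated snippets; any function or class that
    # spans a chunk boundary is never seen in one piece.
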

replies(2): >>43965526 #>>43966209 #
evolve2k ◴[] No.43965526[source]
Wait, are these 800 lines of code? Am I the only one seeing that as a major code smell? Assuming these are code files, the issue isn't AI processing power but bread-and-butter coding practices around file organisation and modularisation.
replies(2): >>43965878 #>>43982641 #
pembrook ◴[] No.43982641[source]
I agree if the point is to write code for human consumption, but the point of vibe-coding tools like Windsurf is to let the LLMs handle everything with occasional direction. And the LLMs will create 2000+ line files when asked to generate anything from scratch.

To generate such files and then not be able to read them is pure stupidity.