I have tried aider/copilot/continue/etc., but they each fall short in one way or another.
Copilot used to be useless, but over the last few months has become quite excellent once edit mode was added.
You can choose files to include, and they don't appear to be truncated in any way. To be fair, I haven't inspected the network traffic, but that's how it appears to operate in day-to-day use.
Claude Projects, ChatGPT Projects, Sourcegraph Cody's context building, MCP file systems: all of these are black boxes of what I can only describe as lossy compression of context.
Each is incentivized to deliver roughly "pretty good" results at the highest possible token compression.
The best way around this I've found is to just own the web clients by pasting structured, concatenated files directly into chat contexts.
Self-plug, but super relevant: I built FileKitty specifically to aid this; it made the HN front page and I've continued to improve it:
https://news.ycombinator.com/item?id=40226976
If you can quickly prepare your file system context yourself using any workflow, and pair it with appropriate additional context such as run output and a problem description, you can get excellent results. Then you can pound away at an OpenAI or Anthropic subscription while refining the prompt or updating the file context.
I find myself spending more time assembling complex prompts for big, difficult problems that wouldn't make sense to solve in the IDE.
Same. I used to run a bash script that concatenated the files I was interested in and annotated each with its path/name in a comment at the top. I haven't needed that recently, as I think the number of attachments Claude allows has increased (or I haven't needed as many small, disparate files at once).
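For anyone who wants to replicate that workflow, a minimal sketch of such a script might look like this (the exact format of the original script isn't shown, so the header style here is just one reasonable choice):

```shell
#!/usr/bin/env bash
# concat_for_llm.sh - concatenate files for pasting into a chat context,
# prefixing each file's contents with a comment noting its path so the
# model can tell where each snippet came from.
set -euo pipefail

for f in "$@"; do
  echo "# ===== ${f} ====="
  cat "$f"
  echo    # blank line between files for readability
done
```

Usage: `./concat_for_llm.sh src/*.py | pbcopy` (or pipe to `xclip` on Linux) puts the annotated bundle straight onto the clipboard.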
I agree, though, that a lot of those agents are black boxes, and it's hard to even learn how best to combine .rules, llms.txt, PRDs, MCP, web search, function calls, and memory. Most IDEs don't provide output where you can inspect the final prompts to see how those are executed; maybe you have to use something like mitmproxy to inspect requests, but a dedicated tool would be useful for learning best practices.
I will be trying Roo Code and Cline more, since they're open source and you can at least see the system prompts.
I have encountered this issue of reincorporating LLM code recommendations back into a project, so I'm interested in exploring your take.
I told a colleague that excellent use of copy-paste and markdown are some of the chief skills of working with gen AI for code right now.
This and context management are as important as prompting.
It makes the details of the UI choices for copying web chat conversations, or segments of them, strangely important.