
514 points mfiguiere | 1 comment
asadm ◴[] No.43711158[source]
These days, I usually paste my entire repo (or part of it) into Gemini and then APPLY the changes back into my code using this handy script I wrote: https://github.com/asadm/vibemode

I have tried aider/copilot/continue/etc. But they lack in one way or the other.
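The first half of that workflow (getting a whole repo into a single pasteable blob) can be sketched roughly like this. This is not asadm's actual script, just a minimal illustration: walk the source tree, skip obvious non-source directories, and emit each file's contents labeled by path.

```python
# Minimal sketch of the "paste the whole repo" step (not the vibemode
# script itself): flatten a source tree into one prompt-ready string,
# with each file fenced off and labeled by its path.
import os

SKIP_DIRS = {".git", "node_modules", "__pycache__", "dist", "build"}

def repo_to_prompt(root="."):
    parts = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune skipped directories in place so os.walk never descends
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (UnicodeDecodeError, OSError):
                continue  # skip binaries and unreadable files
            parts.append(f"--- {path} ---\n{text}")
    return "\n\n".join(parts)
```

Piping the result to the clipboard (`pbcopy`, `xclip`, etc.) makes it pasteable straight into the Gemini chat UI.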

replies(4): >>43711176 #>>43711235 #>>43711331 #>>43716444 #
brandall10 ◴[] No.43711176[source]
Why not just select Gemini Pro 2.5 in Copilot with Edit mode? Virtually unlimited use without extra fees.

Copilot used to be useless, but over the last few months, since edit mode was added, it has become quite excellent.

replies(1): >>43711216 #
asadm ◴[] No.43711216[source]
Copilot (and others) try to be too smart and do context reduction (to save their own wallets). I want the ENTIRETY of the files I attached in context, not a RAG-ed version of them.
replies(7): >>43711284 #>>43711344 #>>43711358 #>>43711390 #>>43711512 #>>43714121 #>>43714629 #
nowittyusername ◴[] No.43711344[source]
I believe this is the root of the problem for all agentic coding solutions. They gimp the full context through fancy function calling and tool use to reduce what gets sent through the API. The problem with this is that you can never know in advance which context is actually needed to solve the problem in the best way. The funny thing is, this behavior actually leads many people to believe these models are LESS capable than they actually are, because people don't realize how restricted the models are behind the scenes by the developers. The good news is that we are entering the era of large context windows, and we will all see a huge performance increase in coding as a result of these advancements.
replies(3): >>43711466 #>>43711708 #>>43712977 #
pzo ◴[] No.43712977[source]
OpenAI shared a chart showing a performance drop with large contexts (e.g. 500k tokens), so you still want to limit context not only for cost but for performance as well. You also probably want to limit context to speed up inference and get a response faster.
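A back-of-the-envelope way to apply such a limit might look like the sketch below. The chars-per-token estimate and the 100k default budget are illustrative assumptions; a real tool would use the model's actual tokenizer.

```python
# Sketch of context trimming: keep files (ordered most-important first)
# until an approximate token budget is exhausted. Token cost is crudely
# estimated as len(text) // 4; swap in a real tokenizer for production.
def trim_context(files, budget_tokens=100_000):
    """files: list of (path, text) pairs, most important first."""
    kept, used = [], 0
    for path, text in files:
        cost = len(text) // 4  # crude chars-per-token heuristic
        if used + cost > budget_tokens:
            break  # budget exhausted; drop the remaining files
        kept.append((path, text))
        used += cost
    return kept, used
```

This keeps the cost/latency win pzo describes while staying transparent about exactly which files made it into the prompt.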

I agree, though, that a lot of these agents are black boxes, and it's hard to learn how best to combine .rules, llms.txt, PRDs, MCP, web search, function calls, and memory. Most IDEs don't provide output where you can inspect the final prompts to see how they are executed - maybe you have to use something like mitmproxy to inspect the requests, but a dedicated tool would be useful for learning best practices.
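A crude stand-in for the mitmproxy approach, assuming the IDE or agent lets you override its API base URL: run a tiny local HTTP server that dumps every request body, so you can read the final prompt the tool actually sends. This sketch does not forward anything upstream; it just logs and returns an empty JSON object.

```python
# Minimal prompt-inspection endpoint: point a tool's OpenAI-compatible
# base URL at this server and every POST body (the assembled prompt)
# gets recorded and printed. Purely for inspection; no forwarding.
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # request bodies seen so far

class PromptDumper(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        captured.append(body)
        print(f"--- request to {self.path} ---\n{body}\n")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b"{}")

    def log_message(self, *args):
        pass  # silence the default access log

def serve(port=8080):
    HTTPServer(("127.0.0.1", port), PromptDumper).serve_forever()
```

Since it returns a dummy response, this is only for seeing what gets sent, not for actually completing requests; a real mitmproxy setup would forward traffic transparently.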

I will be trying Roo Code and Cline more, since they are open source and you can at least see the system prompts.