
Gemini CLI

(blog.google)
1342 points | source
wohoef ◴[] No.44378022[source]
A few days ago I tested Claude Code by completely vibe coding a simple stock tracker web app in Streamlit (Python). It worked incredibly well, until it didn't. There seems to be a critical project size beyond which it just can't fix bugs anymore. I just tried this with Gemini CLI, and the critical project size it works well for seems to be quite a bit bigger. Where Claude Code started to get lost, I simply told Gemini CLI to "Analyze the codebase and fix all bugs". And after telling it to fix a few more bugs, the application simply works.

We really are living in the future

replies(8): >>44378198 #>>44378469 #>>44378677 #>>44378994 #>>44379068 #>>44379186 #>>44379685 #>>44384682 #
agotterer ◴[] No.44379685[source]
I wonder how much of this has to do with the context window size? Gemini's window is 5x larger than Claude's.

I've been using Claude for a side project for the past few weeks, and I find that we really get into a groove planning or debugging something, and then by the time we're ready to implement, we've run out of context window space. Despite my best efforts to write good /compact instructions, by the time it's ready to roll again some of the nuance is lost and the implementation suffers.

I’m looking forward to testing if that’s solved by the larger Gemini context window.

replies(3): >>44382389 #>>44383702 #>>44386731 #
seunosewa ◴[] No.44386731[source]
I've found that I can quickly get a new AI session up to speed by adding critical context that it's missing. In my largest codebase it's usually a couple of critical functions. Once they have the key context, they can do the rest. This of course doesn't work when you can't view their thinking process and interrupt it to supply the context they're missing. Opacity doesn't work unless the agent does the right thing every time.