
Gemini CLI

(blog.google)
1336 points by sync | 6 comments
ipsum2 ◴[] No.44379036[source]
If you use this, all of your code data will be sent to Google. From their terms:

https://developers.google.com/gemini-code-assist/resources/p...

When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies.

To help with quality and improve our products (such as generative machine-learning models), human reviewers may read, annotate, and process the data collected above. We take steps to protect your privacy as part of this process. This includes disconnecting the data from your Google Account before reviewers see or annotate it, and storing those disconnected copies for up to 18 months. Please don't submit confidential information or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

replies(20): >>44379046 #>>44379132 #>>44379301 #>>44379405 #>>44379410 #>>44379497 #>>44379544 #>>44379636 #>>44379643 #>>44380425 #>>44380586 #>>44380762 #>>44380864 #>>44381305 #>>44381716 #>>44382190 #>>44382418 #>>44382537 #>>44383744 #>>44384828 #
1. FL410 ◴[] No.44380586[source]
To be honest, this is by far the most frustrating part of the Gemini ecosystem for me. I think 2.5 Pro is probably the best model out there right now, and I'd love to use it for real work, but their privacy policies are so fucking confusing and disjointed that I just assume there is no privacy whatsoever. And that's with the expensive Pro Plus Ultra MegaMax Extreme Gold plan I'm on.

I hope this is something they're working on making clearer.

replies(3): >>44380953 #>>44380968 #>>44381248 #
2. dmbche ◴[] No.44380953[source]
If I'm being cynical: it would be easy to say either "we use it" or "we don't touch it," but they'd lose everyone who cares about this question if they said "we use it" outright, so the most beneficial position is to keep it as murky as possible.

If I were you I'd assume they're using all of it for everything forever and act accordingly.

3. ipsum2 ◴[] No.44380968[source]
In my own experience, 2.5 Pro 03-26 was by far the best LLM available at the time.

The newer models are quantized and distilled (I confirmed this with someone who works on the team), and are a significantly worse experience. I prefer OpenAI O3 and o4-mini models to Gemini 2.5 Pro for general knowledge tasks, and Sonnet 4 for coding.

replies(1): >>44382348 #
4. UncleOxidant ◴[] No.44381248[source]
For coding, in my experience, Claude Sonnet/Opus 4.0 is hands down better than Gemini 2.5 Pro. I just end up fighting with Claude a lot less than I do with Gemini. I had Gemini start a project that involved creating a recursive descent parser for a language in C, and it was full of segfaults. I'd ask Gemini to fix them, it would end up breaking something else, and we'd get into a loop. Finally I had Claude Sonnet 4.0 take a look at the code that Gemini had created. It fixed the segfaults in short order and was off adding new features, even anticipating features that I'd be asking for.
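(For illustration only, this isn't the actual project code: a minimal recursive descent parser for integer expressions in C, in the spirit of what was described. The usual segfault traps in generated parsers like this are reading past the end of the input and recursing forever on a bad token; peeking the NUL terminator and bailing out through an error flag avoids both. The grammar and names are made up for the sketch.)

    /* Grammar: expr := term ('+' term)* ; term := factor ('*' factor)* ;
       factor := NUMBER | '(' expr ')' */
    #include <ctype.h>
    #include <stdio.h>

    typedef struct { const char *src; size_t pos; int error; } Parser;

    /* Reading the terminating '\0' is always safe, so peek never runs off the end. */
    static char peek(Parser *p) { return p->src[p->pos]; }
    static void skip_ws(Parser *p) { while (isspace((unsigned char)peek(p))) p->pos++; }

    static long parse_expr(Parser *p);      /* forward declaration for the mutual recursion */

    static long parse_factor(Parser *p) {
        skip_ws(p);
        if (isdigit((unsigned char)peek(p))) {
            long v = 0;
            while (isdigit((unsigned char)peek(p))) { v = v * 10 + (peek(p) - '0'); p->pos++; }
            return v;
        }
        if (peek(p) == '(') {
            p->pos++;                       /* consume '(' */
            long v = parse_expr(p);
            skip_ws(p);
            if (peek(p) == ')') p->pos++; else p->error = 1;   /* missing ')' */
            return v;
        }
        p->error = 1;                       /* unexpected character or end of input */
        return 0;
    }

    static long parse_term(Parser *p) {
        long v = parse_factor(p);
        skip_ws(p);
        while (!p->error && peek(p) == '*') { p->pos++; v *= parse_factor(p); skip_ws(p); }
        return v;
    }

    static long parse_expr(Parser *p) {
        long v = parse_term(p);
        skip_ws(p);
        while (!p->error && peek(p) == '+') { p->pos++; v += parse_term(p); skip_ws(p); }
        return v;
    }

    int main(void) {
        Parser p = { "2 * (3 + 4) + 5", 0, 0 };
        long v = parse_expr(&p);
        if (p.error) { fprintf(stderr, "parse error at offset %zu\n", p.pos); return 1; }
        printf("%ld\n", v);                 /* prints 19 */
        return 0;
    }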
replies(1): >>44382423 #
5. happycube ◴[] No.44382348[source]
Gah, enforced enshittification with model deprecation is so annoying.
6. cma ◴[] No.44382423[source]
Did you try Gemini with a fresh prompt too when comparing against Claude? Sometimes you just get better results starting over with any leading model, even if it gets access to the old broken code to fix.

I haven't tried Gemini since the latest updates, but earlier ones seemed on par with Opus.