
Gemini CLI

(blog.google)
1364 points | 6 comments
ipsum2 ◴[] No.44379036[source]
If you use this, all of your code data will be sent to Google. From their terms:

https://developers.google.com/gemini-code-assist/resources/p...

When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies.

To help with quality and improve our products (such as generative machine-learning models), human reviewers may read, annotate, and process the data collected above. We take steps to protect your privacy as part of this process. This includes disconnecting the data from your Google Account before reviewers see or annotate it, and storing those disconnected copies for up to 18 months. Please don't submit confidential information or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

replies(20): >>44379046 #>>44379132 #>>44379301 #>>44379405 #>>44379410 #>>44379497 #>>44379544 #>>44379636 #>>44379643 #>>44380425 #>>44380586 #>>44380762 #>>44380864 #>>44381305 #>>44381716 #>>44382190 #>>44382418 #>>44382537 #>>44383744 #>>44384828 #
jart ◴[] No.44379046[source]
Mozilla and Google provide an alternative called gemmafile which gives you an airgapped version of Gemini (which Google calls Gemma) that runs locally in a single file without any dependencies. https://huggingface.co/jartine/gemma-2-27b-it-llamafile It's been deployed into production by 32% of organizations: https://www.wiz.io/reports/the-state-of-ai-in-the-cloud-2025
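For example, here's a minimal sketch of talking to it from Python once the llamafile is downloaded and running (this assumes it's serving its OpenAI-compatible API on the default port 8080; check the console output on startup, since your port and loaded model name may differ):

    # Query a locally running llamafile via its OpenAI-compatible endpoint.
    # Assumes the gemma-2-27b-it llamafile server is already running on
    # localhost:8080 (the default); nothing in this request leaves your machine.
    import json
    import urllib.request

    payload = {
        "model": "gemma-2-27b-it",  # informational; the server uses whatever model it loaded
        "messages": [
            {"role": "user", "content": "Write a C function that reverses a string in place."}
        ],
    }
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])

Since the request only ever goes to localhost, none of your code or prompts is sent to Google.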
replies(2): >>44379221 #>>44379814 #
1. nicce ◴[] No.44379221[source]
That is just the Gemma model. Most people want capabilities on par with Gemini 2.5 Pro if they want to do any kind of coding.
replies(1): >>44379286 #
2. jart ◴[] No.44379286[source]
Gemma 27b can write working code in dozens of programming languages, and it can even translate between them. It's obviously not as good as Gemini, which is the best LLM in the world, but Gemma is built from the same technology that powers Gemini, and it's impressively good for something that runs locally on your own CPU or GPU. That makes it a great choice for airgapped environments, especially if you use old OSes like RHEL5.
replies(3): >>44379899 #>>44379902 #>>44380997 #
3. seunosewa ◴[] No.44379899[source]
The technology that powers Gemini created duds until Gemini 2.5 Pro; 2.5 Pro is the prize.
4. nicce ◴[] No.44379902[source]
It may be sufficient for generating serialized data and for some level of autocomplete, but not for serious agentic coding where you don't want to waste your time. Some junior-level programmers may still find it impressive, but senior-level programmers end up fighting bad design choices, poor algorithms, and other verbose garbage most of the time. This happens even with the best models.
replies(1): >>44380006 #
5. diggan ◴[] No.44380006{3}[source]
> senior-level programmers end up fighting bad design choices, poor algorithms, and other verbose garbage most of the time. This happens even with the best models.

Even senior programmers can misuse tools; it happens to all of us. LLMs are bad at software design and at choosing algorithms, and they produce crap unless you tell them exactly what to do and what not to do. I leave the design to myself and just use OpenAI and local models for the implementation, and with proper system prompting you can get OK code.

But you need to build up a base prompt you can reuse, basically describing what good code means to you, since that differs quite a bit from person to person. This is what I've been using as a base for agent use: https://gist.github.com/victorb/1fe62fe7b80a64fc5b446f82d313..., though it needs adjustments depending on the specific use case.
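Roughly, the pattern looks like this (a sketch only: it assumes the official openai Python client and an OPENAI_API_KEY in the environment, and the base prompt text here is illustrative, not the contents of the gist):

    # Reuse one base system prompt ("what good code means to me") across
    # implementation tasks. The prompt below is illustrative; swap in your own.
    from openai import OpenAI

    BASE_PROMPT = """\
    You are implementing code to a design I already decided on. Rules:
    - Follow the given design; do not redesign or add extra abstractions.
    - Prefer small, boring functions over clever ones.
    - No speculative error handling, logging, or configuration options.
    - Output only the code that was asked for, with no commentary.
    """

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def implement(task: str, model: str = "gpt-4o") -> str:
        """Send one implementation task together with the reusable base prompt."""
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": BASE_PROMPT},
                {"role": "user", "content": task},
            ],
        )
        return resp.choices[0].message.content

    print(implement("Implement slugify(title: str) -> str without dependencies."))

The design decisions stay with you; the model only ever sees one narrowly scoped implementation task at a time.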

Although I've tried to steer Google's models in a similar way, most of them are still overly verbose and edit-happy; I'm not sure if that's some internal Google practice leaking through or something. Other models are far easier to stop from outputting superfluous code, and they're better at following system prompts overall.

6. ipsum2 ◴[] No.44380997[source]
I've spent a long time with these models; gemma-3-27b feels distilled from Gemini 1.5. I think the useful coding abilities really only started to emerge with 2.5.