
Gemini CLI

(blog.google)
1342 points | 7 comments
1. nprateem ◴[] No.44380943[source]
Please, for the love of God, stop your models always answering with essays or littering code with tutorial style comments. Almost every task devolves into "now get rid of the comments". It seems impossible to prevent this.

And thinking is stupid. "Show me how to generate a random number in python"... 15s later you get an answer.
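[A quick workaround for the "now get rid of the comments" step described above: a minimal sketch that strips `#` comments from generated Python using the stdlib `tokenize` module (the helper name is made up for illustration; string literals and docstrings are left untouched).]

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Remove all '#' comments from Python source, keeping code intact."""
    kept = []
    # generate_tokens yields full TokenInfo tuples, so untokenize can
    # preserve the original layout of the non-comment tokens.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type != tokenize.COMMENT:
            kept.append(tok)
    return tokenize.untokenize(kept)
```

[Comment-only lines come back as blank lines of spaces, which is harmless; run the result through a formatter if you want it tidy.]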

replies(2): >>44380960 #>>44381300 #
2. msgodel ◴[] No.44380960[source]
They have to do that; it's how they think. If they were trained not to, they'd produce lower-quality code.
replies(1): >>44384052 #
3. mpalmer ◴[] No.44381300[source]
Take some time to understand how the technology works, and how you can configure it yourself when it comes to thinking budget. None of these problems sound familiar to me as a frequent user of LLMs.
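[For reference, the thinking budget mentioned here is a request-level knob. A minimal sketch of a Gemini REST request body that caps it, assuming the `generationConfig.thinkingConfig.thinkingBudget` field names from the public Gemini API docs; the model name is an example, and actually sending this still needs an endpoint and API key.]

```python
import json

def build_request(prompt: str, thinking_budget: int = 0) -> str:
    """Build a generateContent request body with a capped thinking budget.

    thinking_budget=0 asks the model to skip extended thinking entirely
    (on models that support it), trading answer depth for latency.
    """
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }
    return json.dumps(body)
```

[So "Show me how to generate a random number in python" with a budget of 0 should come back without the 15-second thinking pause.]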
replies(1): >>44384058 #
4. nprateem ◴[] No.44384052[source]
So why don't Claude and OpenAI models do this?
replies(1): >>44384121 #
5. nprateem ◴[] No.44384058[source]
Take some time to compare the output of Gemini vs other models instead of patronising people.
6. 8n4vidtmkvmk ◴[] No.44384121{3}[source]
o3 does, no? 2.5 Pro is a thinking model. Try Flash if you want faster responses.
replies(1): >>44385118 #
7. nprateem ◴[] No.44385118{4}[source]
No. We're not talking about a few useful comments, but verbosity where the number of comments typically exceeds the actual code written. It must think we're all stupid, or that it's writing a tutorial. Telling it not to has no effect.