
Gemini CLI

(blog.google)
1336 points by sync | 4 comments
cperry
Hi - I work on this. Uptake is a steep curve right now; spare a thought for the TPUs today.

Appreciate all the takes so far; the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests, we'll all be reading.

nprateem
Please, for the love of God, stop your models from always answering with essays or littering code with tutorial-style comments. Almost every task devolves into "now get rid of the comments". It seems impossible to prevent this.

And thinking is stupid. "Show me how to generate a random number in Python"... 15s later you get an answer.
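For context, the entire answer being asked for is a couple of lines of standard-library Python:

    import random

    # The whole requested answer: a random integer from 1 to 10, inclusive.
    print(random.randint(1, 10))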

msgodel
They have to do that; it's how they think. If they were trained not to, they'd produce lower-quality code.
nprateem
So why don't Claude and OpenAI models do this?
8n4vidtmkvmk
o3 does, no? 2.5 Pro is a thinking model. Try Flash if you want faster responses.
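If you're scripting this rather than using the CLI, the model choice is just a parameter. A minimal sketch using the google-genai Python SDK (the model IDs here are the current public names and may change):

    from google import genai

    # Assumes GEMINI_API_KEY is set in the environment.
    client = genai.Client()

    # Flash trades some reasoning depth for latency; swap in
    # "gemini-2.5-pro" for the slower, thinking-heavy model.
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="Show me how to generate a random number in Python",
    )
    print(response.text)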
nprateem
No. We're not talking about a few useful comments, but verbosity where the number of comment lines typically exceeds the actual code written. It must think we're all stupid, or that it's documenting a tutorial. Telling it not to has no effect.
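To illustrate, a made-up example of the style in question, where the comments outnumber the code several to one:

    # First, we import the random module, which provides functions
    # for generating pseudo-random numbers.
    import random

    # Next, we call randint with two arguments: the lower bound (1)
    # and the upper bound (10). Both bounds are inclusive.
    # The result is stored in a variable named number.
    number = random.randint(1, 10)

    # Finally, we print the number so the user can see it.
    print(number)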