
Gemini CLI

(blog.google)
1428 points | 4 comments
cperry ◴[] No.44377336[source]
Hi - I work on this. Uptake is on a steep curve right now; spare a thought for the TPUs today.

Appreciate all the takes so far; the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests - we'll all be reading.

nprateem ◴[] No.44380943[source]
Please, for the love of God, stop your models always answering with essays or littering code with tutorial-style comments. Almost every task devolves into "now get rid of the comments". It seems impossible to prevent this.

And thinking is stupid. "Show me how to generate a random number in python"... 15s later you get an answer.
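
(For reference, the complete answer is two lines of standard-library Python - the kind of response where extended thinking adds nothing:)

    import random

    print(random.randint(1, 100))  # random integer between 1 and 100, inclusive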

msgodel ◴[] No.44380960[source]
They have to do that; it's how they think. If they were trained not to, they'd produce lower-quality code.
1. nprateem ◴[] No.44384052[source]
So why don't Claude and OpenAI models do this?
2. 8n4vidtmkvmk ◴[] No.44384121[source]
o3 does, no? 2.5 Pro is a thinking model. Try Flash if you want faster responses.
3. nprateem ◴[] No.44385118[source]
No. We're not talking about a few useful comments, but verbosity where the number of comment lines typically exceeds the actual code written. It must think we're all stupid, or that it's documenting a tutorial. Telling it not to has no effect.
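
(A minimal sketch of the usual workaround, using the google-generativeai Python SDK: ask for bare code up front via a system instruction instead of stripping comments afterwards. The model name and instruction wording are illustrative, and as this thread notes, compliance is hit-or-miss:)

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # assumes an API key from AI Studio

    model = genai.GenerativeModel(
        model_name="gemini-2.5-pro",  # illustrative; use whatever model you're on
        system_instruction=(
            "Return code only. Do not add explanatory or tutorial-style comments "
            "unless explicitly asked."
        ),
    )

    response = model.generate_content("Show me how to generate a random number in Python")
    print(response.text)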
4. krzyk ◴[] No.44410653{3}[source]
Maybe you hit a specific use case where the LLM reverts to its training-data roots?

I had a somewhat similar problem with Claude 3.7: I had a class named "Workflow" and it went nuts, producing code and comments I didn't ask for, all related to some "workflow" concept it tried to replicate rather than to my code. It was strange.