
Gemini CLI (blog.google)
1348 points | 2 comments
cperry
Hi - I work on this. Uptake is on a steep curve right now; spare a thought for the TPUs today.

Appreciate all the takes so far; the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests, we'll all be reading.

nprateem
Please, for the love of God, stop your models from always answering with essays or littering code with tutorial-style comments. Almost every task devolves into "now get rid of the comments". It seems impossible to prevent this.

And thinking is stupid. "Show me how to generate a random number in Python"... 15 seconds later you get an answer.

mpalmer
Take some time to understand how the technology works, including how you can configure the thinking budget yourself. None of these problems sound familiar to me as a frequent user of LLMs.
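
For what it's worth, the thinking budget is settable per request. A minimal sketch using the google-genai Python SDK (the model name and budget value here are illustrative assumptions; my understanding is that a budget of 0 disables thinking on the Flash models, while the Pro models enforce a minimum):

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="Show me how to generate a random number in Python",
        config=types.GenerateContentConfig(
            # Assumption: a budget of 0 turns thinking off entirely on Flash.
            thinking_config=types.ThinkingConfig(thinking_budget=0)
        ),
    )
    print(response.text)

As for the comment litter: Gemini CLI reads persistent instructions from a GEMINI.md context file, so a standing "no tutorial-style comments" rule there should stick across sessions.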
nprateem
Take some time to compare the output of Gemini with other models instead of patronising people.