
Gemini CLI

(blog.google)
1339 points by sync | 6 comments
ipsum2 ◴[] No.44379036[source]
If you use this, all of your code data will be sent to Google. From their terms:

https://developers.google.com/gemini-code-assist/resources/p...

> When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies.

> To help with quality and improve our products (such as generative machine-learning models), human reviewers may read, annotate, and process the data collected above. We take steps to protect your privacy as part of this process. This includes disconnecting the data from your Google Account before reviewers see or annotate it, and storing those disconnected copies for up to 18 months. Please don't submit confidential information or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

replies(20): >>44379046 #>>44379132 #>>44379301 #>>44379405 #>>44379410 #>>44379497 #>>44379544 #>>44379636 #>>44379643 #>>44380425 #>>44380586 #>>44380762 #>>44380864 #>>44381305 #>>44381716 #>>44382190 #>>44382418 #>>44382537 #>>44383744 #>>44384828 #
1. FiberBundle ◴[] No.44379497[source]
Do you honestly believe that the opt-out by Anthropic and Cursor means your code won't be used for training their models? It seems likely that they would rather risk a massive fine for a chance at solving software development than let some competitor try it instead.
replies(2): >>44379621 #>>44380382 #
2. rudedogg ◴[] No.44379621[source]
Yes.

The resulting class-action lawsuit, along with the reputational damage and fines, would bankrupt the company.

replies(1): >>44380738 #
3. olejorgenb ◴[] No.44380382[source]
> For API users, we automatically delete inputs and outputs on our backend within 30 days of receipt or generation, except when you and we have agreed otherwise (e.g. zero data retention agreement), if we need to retain them for longer to enforce our Usage Policy (UP), or comply with the law.

If that retention is required to comply with the law, I wonder how they can make the zero-data-retention agreements work... The companies I've seen with such agreements don't mention that they themselves retain the data...

4. pera ◴[] No.44380738[source]
> Anthropic cut up millions of used books to train Claude — and downloaded over 7 million pirated ones too, a judge said

https://www.businessinsider.com/anthropic-cut-pirated-millio...

It doesn't look like they care at all about the law, though.

replies(2): >>44382063 #>>44382332 #
5. pbhjpbhj ◴[] No.44382063{3}[source]
>Anthropic spent "many millions of dollars" buying used print books, then stripped off the bindings, cut the pages, and scanned them into digital files.

The judge, Alsup J, ruled that this was lawful.

So they cared at least a bit, enough to spend a lot of money buying books. But they didn't care enough to avoid acquiring online libraries that apparently lacked proper licensing.

>Alsup wrote that Anthropic preferred to "steal" books to "avoid 'legal/practice/business slog,' as cofounder and CEO Dario Amodei put it."

Aside: using the term steal for copyright infringement is a particularly egregious misuse for a judge, who should know that stealing requires depriving others of the use of the stolen articles; something that copyright infringement via an online text repository simply could not do.

6. dghlsakjg ◴[] No.44382332{3}[source]
Using torrented books in a way that possibly (well, almost certainly) violates copyright law is a world apart from going after your own customers (and revenue) in a way that directly violates the contract that you wrote and had them agree to.