
390 points by meetpateltech | 5 comments
1. bionhoward No.44008015
What about privacy and a training opt-out?

What about using it for AI / developing models that compete with our new overlords?

Seems like using this is just asking to get rug-pulled the moment they release something that competes with your thing. Am I just an old who’s crowing about nothing? Is it ok for them to tell us we own outputs that we can’t use to compete with them?

replies(1): >>44008133 #
2. piskov No.44008133
Watch the video: there is an explicit switch at one of the steps for (not) allowing them to train on your repo.
replies(1): >>44008606 #
3. lurking_swe No.44008606
That’s nice. And we trust that the switch does what it says because…? The AI company (OpenAI, Anthropic, etc.) pinky promised? Have we seen their source code? How do you know they don’t train on it anyway?

As just one example, Facebook was caught in recent DOJ hearings breaking the law in how they run their business. They had previously claimed under oath not to be doing X, and years later there was proof they had done exactly that.

https://youtu.be/7ZzxxLqWKOE?si=_FD2gikJkSH1V96r

A company’s “word” means nothing imo. None of this makes sense, if I’m being honest. Unless you personally have a negotiated contract with the provider, can somehow verify they are doing what they claim, and can later sue for damages, all of this is just crossing your fingers and hoping for the best.

replies(2): >>44008654 #>>44010167 #
4. tough No.44008654{3}
On the other hand, you can explicitly enable sharing of your data and get a few million free tokens daily.
5. wilg No.44010167{3}
If you don't trust the company, your opt-out strategy is much simpler: just don't authorize them to access your code.