
237 points meetpateltech | 1 comment
rpdillon ◴[] No.45900911[source]
I wouldn't want to make it out like I think OpenAI is the good guy here. I don't.

But conversations people thought they were having with OpenAI in private are now going to be scoured by the New York Times' lawyers. I'm aware of the third-party doctrine, and that nothing you put online is ever truly private. But I think this runs counter to people's expectations when they're using the product.

In copyright cases, typically you need to show some kind of harm. This case is unusual because the New York Times can't point to any harm, so they have to trawl through private conversations OpenAI's customers have had with their service to see if they can find any.

It's quite literally a fishing expedition.

replies(10): >>45900955 #>>45901081 #>>45901082 #>>45901111 #>>45901248 #>>45901282 #>>45901672 #>>45901852 #>>45903876 #>>45906668 #
jcranmer ◴[] No.45901282[source]
> In copyright cases, typically you need to show some kind of harm.

NYT is suing for statutory copyright infringement. That means they only need to demonstrate the infringement itself, since the infringement alone is considered harm; actual harm only matters if you're suing for actual damages.

This case really comes down to the very unsolved question of whether or not AI training and regurgitation is copyright infringement, and if so, whether it's fair use. The actual ways the AI is being used are thus very relevant to the case, and totally within the bounds of discovery. Of course, OpenAI has also been engaging in this lawsuit with unclean hands from the start (see some of their earlier discovery dispute fuckery), and they're one of the companies with the strongest "the law doesn't apply to us because we're AI and big tech" swagger.

replies(1): >>45902239 #
Workaccount2 ◴[] No.45902239[source]
NYT doesn't care about regurgitation. Even when it was doable, it was spotty enough that no one would rely on it. And now the "trick" doesn't even work anymore (you would paste the start of an article and ChatGPT would continue it).

What they want is to kill training, and moreover, to avoid losing their role as the middleman between events and users.

replies(5): >>45903898 #>>45904149 #>>45904291 #>>45904764 #>>45905032 #
sfink ◴[] No.45904149[source]
> What they want is to kill training, and more over, prevent the loss of being the middle-man between events and users.

So... they want to continue reporting news, and they don't want their news reports presented to users in a place where those users are paying someone else and not them. How horrible of them?

If NYT is not reporting news, then NYT news reports will not be available for AIs to ingest. The AIs can perhaps still get some of that data from elsewhere, including from places that don't worry about the accuracy of the news (or that intentionally produce inaccurate news). But you have to get signal from somewhere; noise alone isn't enough, and killing off the existing sources of signal (the few remaining ones) is going to make that a lot harder.

The question is, does journalism have a place in a world with AIs, and should OpenAI be the one deciding the answer to that question?