297 points rntn | 167 comments
1. ankit219 ◴[] No.44608660[source]
Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it in a way that says there would be less scrutiny if you voluntarily sign up for this code of practice. Meta would face scrutiny on all ends anyway, so it does not seem plausible for it to sign something voluntary.

One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way. For open source, it's a very hard requirement[1].

> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.

[1] https://www.lw.com/en/insights/2024/11/european-commission-r...

replies(7): >>44610592 #>>44610641 #>>44610669 #>>44611112 #>>44612330 #>>44613357 #>>44617228 #
2. dmix ◴[] No.44610592[source]
Lovely when they try to regulate a burgeoning market before we have any idea what the market is going to look like in a couple years.
replies(8): >>44610676 #>>44610940 #>>44610948 #>>44611033 #>>44611210 #>>44611955 #>>44612758 #>>44614808 #
3. t0mas88 ◴[] No.44610641[source]
Sounds like a reasonable guideline to me. Even for open source models, you can add a license term that requires users of the open source model to take "appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works"

This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.

replies(4): >>44613578 #>>44614324 #>>44614949 #>>44615016 #
4. ◴[] No.44610669[source]
5. remram ◴[] No.44610676[source]
The whole point of regulating it is to shape what it will look like in a couple of years.
replies(8): >>44610764 #>>44610961 #>>44611052 #>>44611090 #>>44611379 #>>44611534 #>>44611915 #>>44613903 #
6. dmix ◴[] No.44610764{3}[source]
Regulators often barely grasp how current markets function, and they're supposed to be futurists now too? Government regulatory interests almost always end up lining up with protecting entrenched interests, so it's essentially asking for a slow-moving group of the same mega companies. Which is very much what Europe's market looks like today: stasis, and a shift toward a stagnating middle.
replies(3): >>44610790 #>>44612672 #>>44613460 #
7. krainboltgreene ◴[] No.44610790{4}[source]
So the solution is to allow the actual entrenched interests to determine the future of things when they also barely grasp how the current markets function and are currently proclaiming to be futurists?
replies(4): >>44611061 #>>44611137 #>>44611732 #>>44616373 #
8. ekianjo ◴[] No.44610940[source]
They don't want a market. They want total control, as usual for control freaks.
9. ◴[] No.44610948[source]
10. olalonde ◴[] No.44610961{3}[source]
You're both right, and that's exactly how early regulation often ends up stifling innovation. Trying to shape a market too soon tends to lock in assumptions that later prove wrong.
replies(2): >>44612297 #>>44613233 #
11. amelius ◴[] No.44611033[source]
We know what the market will look like: a quasi-monopoly, with basic user rights violated.
12. felipeerias ◴[] No.44611052{3}[source]
The experience with other industries like cars (especially EVs) shows that the ability of EU regulators to shape global and home markets is a lot more limited than they like to think.
replies(1): >>44611976 #
13. betaby ◴[] No.44611061{5}[source]
Won't somebody please think of the children?
replies(1): >>44614311 #
14. jabjq ◴[] No.44611090{3}[source]
What will happen, like every time a market is regulated in the EU, is that the market will move on without the EU.
15. zizee ◴[] No.44611112[source]
It doesn't seem unreasonable. If you train a model that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed? Just because it's a fancy AI model, is it OK?
replies(2): >>44611371 #>>44611463 #
16. buggyinout ◴[] No.44611137{5}[source]
They’re demanding collective conversation. You don’t have to be involved if you prefer to be asocial except to post impotent rage online.

Just as the pols aren't futurists or perfect, neither is anyone else. Everyone should sit at the table and discuss this like adults.

You want to go live in the hills alone, go for it, Dick Proenneke. Society is people working collectively.

17. ulfw ◴[] No.44611210[source]
Regulating it while the cat is out of the bag leads to monopolistic conglomerates like Meta and Google. Meta shouldn't have been allowed to usurp Instagram and WhatsApp, and Google shouldn't have been allowed to bring YouTube into the fold. Now it's too late to regulate a way out of this.
replies(2): >>44612368 #>>44613954 #
18. CamperBob2 ◴[] No.44611371[source]
I have a Xerox machine that can reliably reproduce copyrighted works. Is that a problem, too?

Blaming tools for the actions of their users is stupid.

replies(4): >>44611396 #>>44611501 #>>44612409 #>>44614295 #
19. CamperBob2 ◴[] No.44611379{3}[source]
If the regulators were qualified to work in the industry, then guess what: they'd be working in the industry.
20. threetonesun ◴[] No.44611396{3}[source]
If the Xerox machine had all of the copyrighted works in it and you just had to ask it nicely to print them I think you'd say the tool is in the wrong there, not the user.
replies(5): >>44611403 #>>44611469 #>>44611489 #>>44613191 #>>44616639 #
21. CamperBob2 ◴[] No.44611403{4}[source]
You'd think wrong.
22. Aurornis ◴[] No.44611463[source]
> that can reliably reproduce thousands/millions of copyrighted works, you shouldn't be distributing it. If it were just regular software that had that capability, would it be allowed?

LLMs are hardly reliable ways to reproduce copyrighted works. The closest examples usually involve prompting the LLM with a significant portion of the copyrighted work and then seeing if it can predict a number of tokens that follow. It's a big stretch to say they're reliably reproducing copyrighted works any more than, say, a Google search producing a short excerpt of a document in the search results or a blog writer quoting a section of a book.
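
Concretely, the extraction-style probe described above looks something like this (my own sketch, not from any paper or product; the model name and strings are placeholders):

    # Minimal sketch of a memorization probe: prompt a causal LM with a long
    # excerpt and check whether its greedy continuation matches the text that
    # actually follows. "gpt2" and the strings below are stand-ins.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    excerpt = "..."      # long prefix taken from a protected work (placeholder)
    true_suffix = "..."  # the text that really follows the excerpt (placeholder)

    inputs = tok(excerpt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
    continuation = tok.decode(out[0][inputs["input_ids"].shape[1]:])

    # Verbatim overlap is evidence of memorization; anything else is just
    # ordinary next-token prediction.
    print(continuation.startswith(true_suffix[:len(continuation)]))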

It's also interesting to see the sudden anti-LLM takes that twist themselves into arguing against tools or platforms that might reproduce some copyrighted content. By this argument, should BitTorrent also be banned? If someone posts a section of copyrighted content to Hacker News as a comment, should Y Combinator be held responsible?

replies(3): >>44611545 #>>44612224 #>>44614212 #
23. Aurornis ◴[] No.44611469{4}[source]
LLMs do not have all copyrighted works in them.

In some cases they can be prompted to guess a number of tokens that follow an excerpt from another work.

They do not contain all copyrighted works, though. That’s an incorrect understanding.

24. monetus ◴[] No.44611489{4}[source]
Are there any LLMs available with a, "give me copyrighted material" button? I don't think that is how they work.

There are also already laws covering commercial use of someone's image, as far as I know, aren't there?

25. zeta0134 ◴[] No.44611501{3}[source]
Helpfully the law already disagrees. That Xerox machine tampers with the printed result, leaving a faint signature that is meant to help detect forgeries. You know, for when users copy things that are actually illegal to copy. Xerox machine (and every other printer sold today) literally leaves a paper trail to trace it back to them.

https://en.wikipedia.org/wiki/Printer_tracking_dots

replies(1): >>44611509 #
26. ChadNauseam ◴[] No.44611509{4}[source]
I believe only color printers are known to have this functionality, and it's typically used for detecting counterfeits, not for enforcing copyright.
replies(1): >>44611532 #
27. zeta0134 ◴[] No.44611532{5}[source]
You're quite right. Still, it's a decent example of blaming the tool for the actions of its users. The law clearly exerted enough pressure to convince the tool maker to modify that tool against the user's wishes.
replies(1): >>44611561 #
28. Jensson ◴[] No.44611545{3}[source]
> LLMs are hardly reliable ways to reproduce copyrighted works

Only because the companies are intentionally making it so. If they weren't trained to not reproduce copyrighted works they would be able to.

replies(3): >>44611747 #>>44612219 #>>44613477 #
29. justinclift ◴[] No.44611561{6}[source]
> Still, it's a decent example of blaming the tool for the actions of its users.

They're not really "blaming" the tool though. They're using a supply chain attack against the subset of users they're interested in.

30. tjwebbnorfolk ◴[] No.44611732{5}[source]
The best way for "entrenched interests" to stifle competition is to buy/encourage regulation that keeps everybody else out of their sandbox pre-emptively.

For reference, see every highly-regulated industry everywhere.

You think Sam Altman was testifying to the US Congress begging for AI regulation because he's just a super nice guy?

replies(1): >>44612371 #
31. terminalshort ◴[] No.44611747{4}[source]
LLMs even fail on tasks like "repeat back to me exactly the following text: ..." To say they can exactly and reliably reproduce copyrighted work is quite a claim.
replies(1): >>44613620 #
32. energy123 ◴[] No.44611915{3}[source]
The point is to stop and deter actual market failure, not to anticipate hypothetical market failure.
33. rapatel0 ◴[] No.44611955[source]
I literally lived this with GDPR. In the beginning everyone ran around pretending to understand what it meant. There were a ton of consultants and lawyers who basically made up stuff that barely made sense. They grifted money out of startups by taking the most aggressive interpretation and selling policy templates.

In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years. It also slowed down all enterprise deals because no one knew if a deal was going to be against GDPR and the lawyers defaulted to “no” in those orgs.

Asking regulators to understand and shape market evolution in AI is basically asking them to trade stocks by reading company reports written in Mandarin.

replies(2): >>44612893 #>>44613127 #
34. imachine1980_ ◴[] No.44611976{4}[source]
Not really. China made a big policy bet a decade early and won the battle: it put the whole government behind buying this new tech before everyone else, forcing buses to be electric if you wanted the federal-level thumbs up, or the license-plate lottery system, for example.

So I disagree: Europe would probably be even further behind in EVs if it didn't push EU manufacturers to invest so heavily in the industry.

You can see, for example, that among legacy manufacturers the only ones in the top ten are European (3 out of 10 companies), not Japanese or Korean; and in Europe, Volkswagen already overtook Tesla in Q1 sales, with Audi not far behind either.

35. jazzyjackson ◴[] No.44612219{4}[source]
it's like these people never tried asking for song lyrics
36. ◴[] No.44612224{3}[source]
37. mycall ◴[] No.44612297{4}[source]
Depends what those assumptions are. If the goal is protecting humans from AI gross negligence, then the assumptions are predetermined to side with ordinary humans (just one example). Let's hope logic and understanding of the long-term situation precede the arguments in the rulesets.
replies(1): >>44612400 #
38. m3sta ◴[] No.44612330[source]
The quoted text makes sense when you understand that the EU provides a carveout for training on copyright protected works without a license. It's quite an elegant balance they've suggested despite the challenges it fails to avoid.
replies(1): >>44613883 #
39. pbh101 ◴[] No.44612368{3}[source]
It’s easy to say this in hindsight, though this is the first time I think I’ve seen someone say that about YouTube even though I’ve seen it about Instagram and WhatsApp a lot.

The YouTube deal was a lot earlier than Instagram, 2006. Google was way smaller than now. iPhone wasn’t announced. And it wasn’t two social networks merging.

Very hard to see how regulators could have the clairvoyance to see into this specific future and its counter-factual.

40. goatlover ◴[] No.44612371{6}[source]
Regulation exists because of monopolistic practices and abuses in the early 20th century.
replies(2): >>44612461 #>>44612562 #
41. dmix ◴[] No.44612400{5}[source]
You're just guessing as much as anyone. Almost every generation in history has had doomers predicting the fall of their corner of civilization from some new thing: religious schisms, printing presses, radio, TV, advertisements, the internet, etc. You can look at some of the earliest writings by English priests in the 1500s predicting social decay and the destruction of society, which sound exactly like social media posts in 2025 about AI. We should at a minimum understand the problem space before restricting it, especially given that policy is extremely slow to change (see: copyright).
replies(1): >>44612608 #
42. fodkodrasz ◴[] No.44612409{3}[source]
According to the law in some jurisdictions it is (notably most EU member states, and several others worldwide).

In those places, fees ("reprographic levies") are actually included in the price of the appliance and the needed supplies, or public operators may need to pay additionally based on usage. That money goes towards funds created to compensate copyright holders for loss of profit due to copyright infringement carried out through the use of photocopiers.

Xerox is in no way singled out and discriminated against. (Yes, I know "Xerox machine" is an Americanism.)

43. dmix ◴[] No.44612461{7}[source]
That's a bit oversimplified. Humans have been creating authority systems trying to control others lives and business since formal societies have been a thing, likely even before agriculture. History is also full of examples of arbitrary and counter productive attempts at control, which is a product of basic human nature combined with power, and why we must always be skeptical.
replies(1): >>44612798 #
44. keysdev ◴[] No.44612562{7}[source]
That can be, however regulation has just changed monopolistic practices into even more profitable oligarchic practices. Just look at Standard Oil.
45. esperent ◴[] No.44612608{6}[source]
I'd urge you to read a book like Black Swan, or study up on statistics.

Doomers have been wrong about completely different doom scenarios in the past (+), but that says nothing about this new scenario. If you're doing statistics in your head about it, you're wrong. We can't use scenarios from the past to make predictions about completely novel scenarios like thinking computers.

(+) although they were very close to being right about nuclear doom, and may well be right about climate change doom.

replies(1): >>44616602 #
46. stuaxo ◴[] No.44612672{4}[source]
The EU is founded on the idea of markets and regulation.
replies(1): >>44613616 #
47. verisimi ◴[] No.44612758[source]
Exactly. No anonymity, no thought crime, lots of filters to screen out bad misinformation, etc. Regulate it.
48. verisimi ◴[] No.44612798{8}[source]
As a member of 'humanity', do you find yourself creating authority systems for AI though? No.

If you are paying for lobbyists to write the legislation you want, as corporations do, you get the law you want - that excludes competition, funds your errors etc.

The point is you are not dealing with 'humanity', you are dealing with those who represent authority for humanity - not the same thing at all. Connected politicians/CEOs etc are not actually representing 'humanity' - they merely say that they are doing so, while representing themselves.

49. troupo ◴[] No.44612893{3}[source]
> In the end the regulation was diluted to something that made sense(ish) but that process took about 4 years.

It's the same regulation that was introduced in 2016. The only people who pretend not to understand it are those who think that selling user data to 2000+ "partners" is privacy.

50. CalRobert ◴[] No.44613127{3}[source]
The main thing is the EU basically didn’t enforce it. I was really excited for data portability but it hasn’t really come to pass
51. zettabomb ◴[] No.44613191{4}[source]
Xerox already went through that lawsuit and won, which is why photocopiers still exist. The tool isn't in the wrong for being told to print out the copyrighted works. The user still had to make the conscious decision to copy that particular work. Hence, still the user's fault.
replies(1): >>44615490 #
52. TFYS ◴[] No.44613233{4}[source]
Sometimes you can't reverse the damage and societal change after the market has already been created and shaped. Look at fossil fuels, plastic, social media, etc. We're now dependent on things that cause us harm, the damage done is irreversible and regulation is no longer possible because these innovations are now embedded in the foundations of modern society.

Innovation is good, but there's no need to go as fast as possible. We can be careful about things and study the effects more deeply before unleashing life changing technologies into the world. Now we're seeing the internet get destroyed by LLMs because a few people decided it was ok to do so. The benefits of this are not even clear yet, but we're still doing it just because we can. It's like driving a car at full speed into a corner just to see what's behind it.

replies(2): >>44613612 #>>44614574 #
53. badsectoracula ◴[] No.44613357[source]
> One of the key aspects of the act is how a model provider is responsible if the downstream partners misuse it in any way

AFAICT the actual text of the act[0] does not mention anything like that. The closest to what you describe is part of the chapter on copyright of the Code of Practice[1], however the code does not add any new requirements to the act (it is not even part of the act itself). What it does is to present a way (which does not mean it is the only one) to comply with the act's requirements (as a relevant example, the act requires to respect machine-readable opt-out mechanisms when training but doesn't specify which ones, but the code of practice explicitly mentions respecting robots.txt during web scraping).
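
As a rough illustration of that robots.txt point (my own sketch, not language from the act or the code of practice; the URLs and bot name are placeholders), a scraper assembling training data can honour the opt-out with just the standard library:

    # Hypothetical sketch: check a site's robots.txt opt-out before fetching
    # a page for a training corpus.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    page = "https://example.com/some/article"
    if rp.can_fetch("MyTrainingBot", page):
        print("allowed to fetch", page)
    else:
        print("opted out; skipping", page)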

The part about copyright outputs in the code is actually (measure 1.4):

> (1) In order to mitigate the risk that a downstream AI system, into which a general-purpose AI model is integrated, generates output that may infringe rights in works or other subject matter protected by Union law on copyright or related rights, Signatories commit:

> a) to implement appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content protected by Union law on copyright and related rights in an infringing manner, and

> b) to prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents, or in case of general-purpose AI models released under free and open source licenses to alert users to the prohibition of copyright infringing uses of the model in the documentation accompanying the model without prejudice to the free and open source nature of the license.

> (2) This Measure applies irrespective of whether a Signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.

Keep in mind that "Signatories" here is whoever signed the Code of Practice: obviously if i make my own AI model and do not sign that code of practice myself (but i still follow the act requirements), someone picking up my AI model and signing the Code of Practice themselves doesn't obligate me to follow it too. That'd be like someone releasing a plugin for Photoshop under the GPL and then demanding Adobe release Photoshop's source code.

As for open source models, the "(1b)" above is quite clear (for open source models that want to use this code of practice - which they do not have to!) that all they have to do is to mention in their documentation that their users should not generate copyright infringing content with them.

In fact the act has a lot of exceptions for open-source models. AFAIK Meta's beef with the act is that the EU AI office (or whatever it is called, i do not remember) does not recognize Meta's AI as open source, so they do not get to benefit from those exceptions, though i'm not sure about the details here.

[0] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:...

[1] https://ec.europa.eu/newsroom/dae/redirection/document/11811...

54. messe ◴[] No.44613460{4}[source]
> Which is very much what Europes market looks like today. Stasis and shifting to a stagnating middle.

Preferable to a burgeoning oligarchy.

replies(1): >>44613912 #
55. ben_w ◴[] No.44613477{4}[source]
They're probably training them to refuse, but fundamentally the models are obviously too small to usually memorise content, and can only do it when there's many copies in the training set. Quotation is a waste of parameters better used for generalisation.

The other thing is that approximately all of the training set is copyrighted, because that's the default even for e.g. comments on forums like this comment you're reading now.

The other other thing is that at least two of the big model makers went and pirated book archives on top of crawling the web.

56. deanc ◴[] No.44613578[source]
Except that it's seemingly impossible to protect against prompt injection. The cat is out of the bag. Much like a lot of other legislation (e.g. the cookie law, or being responsible for user-generated content when millions of pieces of it are posted per day), it's entirely impractical, albeit well-meaning.
replies(1): >>44613667 #
57. sneak ◴[] No.44613612{5}[source]
I think it’s one of those “everyone knows” things that plastic and social media are bad, but I think the world without them is way, way worse. People focus on these popular narratives but if people thought social media was bad, they wouldn’t use it.

Personally, I don’t think they’re bad. Plastic isn’t that harmful, and neither is social media.

I think people romanticize the past and status quo. Change is scary, so when things change and the world is bad, it is easy to point at anything that changed and say “see, the change is what did it!”

replies(2): >>44613797 #>>44614166 #
58. miohtama ◴[] No.44613616{5}[source]
The EU is founded on the idea of useless bureaucracy.

It's not just IT. Ask any EU farmer.

replies(1): >>44613856 #
59. tomschwiha ◴[] No.44613620{5}[source]
You can also ask people to repeat a text and some will fail. What I want to say is that even if some LLMs (probably only older ones) fail, that doesn't mean future ones will fail (in the majority), especially if benchmarks indicate they are becoming smarter over time.
60. lcnielsen ◴[] No.44613667{3}[source]
I don't think the cookie law is that impractical? It's easy to comply with by just not storing non-essential user information. It would have been completely nondisruptive if platforms agreed to respect users' defaults via browser settings, and then converged on a common config interface.

It was made impractical by ad platforms and others who decided to use dark patterns, FUD and malicious compliance to deceive users into agreeing to be tracked.
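
For what it's worth, a browser-default signal along those lines already exists: Global Privacy Control, sent as a "Sec-GPC: 1" request header. Honouring it server-side might look something like this (my own sketch; the Flask framing and responses are just for illustration, only the Sec-GPC header itself is a real, specified signal):

    # Hypothetical sketch: respect the browser's Global Privacy Control
    # signal instead of showing a consent banner.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Browsers with GPC enabled send "Sec-GPC: 1" on every request.
        if request.headers.get("Sec-GPC") == "1":
            # Respect the user's default: essential cookies only, no banner.
            return "Essential cookies only."
        # Otherwise a site could still default to minimal data collection,
        # or ask for analytics consent here.
        return "Consent prompt (or better: a minimal default)."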

replies(3): >>44613785 #>>44613896 #>>44613989 #
61. deanc ◴[] No.44613785{4}[source]
It is impractical for me as a user. I have to click on a notice on every website on the internet before interacting with it, and these are often very obtuse: no "reject all" button, just a "manage my choices" button that takes you to an even more convoluted menu.

Instead of exactly as you say: a global browser option.

As someone who has had to implement this crap repeatedly - I can’t even begin to imagine the amount of global time that has been wasted implementing this by everyone, fixing mistakes related to it and more importantly by users having to interact with it.

replies(3): >>44613848 #>>44615071 #>>44615338 #
62. TFYS ◴[] No.44613797{6}[source]
People don't use things that they know are bad, but someone who has grown up in an environment where everyone uses social media for example, can't know that it's bad because they can't experience the alternative anymore. We don't know the effects all the accumulating plastic has on our bodies. The positive effects of these things can be bigger than the negative ones, but we can't know that because we're not even trying to figure it out. Sometimes it might be impossible to find out all the effects before large scale adoption, but still we should at least try. Currently the only study we do before deciding is the one to figure out if it'll make a profit for the owner.
replies(1): >>44613855 #
63. lcnielsen ◴[] No.44613848{5}[source]
Yeah, but the only reason for this time wastage is that website operators refuse to accept what would become the fallback default of "minimal", for which they would not need to seek explicit consent. It's a kind of arbitrage, like those scammy websites that send you into redirect loops with enticing headlines.

The law is written to encourage such defaults if anything, it just wasn't profitable enough I guess.

replies(2): >>44613881 #>>44614219 #
64. sneak ◴[] No.44613855{7}[source]
> We don't know the effects all the accumulating plastic has on our bodies.

This is handwaving. We can be pretty well sure at this point what the effects aren’t, given their widespread prevalence for generations. We have a 2+ billion sample size.

replies(1): >>44614703 #
65. fxtentacle ◴[] No.44613856{6}[source]
Contrary to the constant whining, most of them are actually quite wealthy. And thanks to strong right to repair laws, they can keep using John Deere equipment without paying extortionate licensing fees.
replies(1): >>44614195 #
66. deanc ◴[] No.44613881{6}[source]
The reality is that when you are running a business, the data you gather is much more valuable and accurate if you gather consent. Defaulting to a minimal config is just not practical for most businesses either. The decisions made with proper tracking data have a real business impact (I can see it myself, working at a client with 7-figure monthly revenue).

I'm fully supportive of consent, but the way it is implemented is impractical from everyone's POV, and I stand by that.

replies(4): >>44613917 #>>44613943 #>>44614111 #>>44614127 #
67. Oras ◴[] No.44613883[source]
Is that true? How can they decide to wipe out the intellectual property for an individual or entity? It’s not theirs to give it away.
replies(3): >>44613962 #>>44614016 #>>44616465 #
68. jonathanlydall ◴[] No.44613896{4}[source]
I recently received an email[0] from a UK entity with an enormous wall of text talking about processing of personal information, my rights and how there is a “Contact Card” of my details on their website.

But with a little bit of reading, one could ultimately summarise the enormous wall of text simply as: “We’ve added your email address to a marketing list, click here to opt out.”

The huge wall of text email was designed to confuse and obfuscate as much as possible with them still being able to claim they weren’t breaking protection of personal information laws.

[0]: https://imgur.com/a/aN4wiVp

replies(1): >>44614190 #
69. adastra22 ◴[] No.44613903{3}[source]
That has never worked.
70. adastra22 ◴[] No.44613912{5}[source]
No, that... that's exactly what we have today. An oligarchy persists through captured state regulation. A more free market would have a constantly changing top.
replies(1): >>44614271 #
71. ta1243 ◴[] No.44613917{7}[source]
Why would I ever want to consent to you abusing my data?
72. user5534762135 ◴[] No.44613943{7}[source]
That is only true if you agree with ad platforms that tracking ads are fundamentally required for businesses, which is trivially untrue for most enterprises. Forcing businesses to get off privacy violating tracking practices is good, and it's not the EU that's at fault for forcing companies to be open about ad networks' intransigence on that part.
73. user5534762135 ◴[] No.44613954{3}[source]
>Now it's too late to regulate a way out of this.

Technically untrue, monopoly busting is a kind of regulation. I wouldn't bet on it happening on any meaningful scale, given how strongly IT benefits from economies of scale, but we could be surprised.

74. elsjaako ◴[] No.44613962{3}[source]
Copyright is not a god-given right. It's an economic incentive created by government to make desired behavior (writing and publishing books) profitable.
replies(3): >>44614270 #>>44616163 #>>44617440 #
75. mgraczyk ◴[] No.44613989{4}[source]
Even EU government websites have horrible intrusive cookie banners. You can't blame ad companies, there are no ads on most sites
replies(1): >>44614216 #
76. arccy ◴[] No.44614016{3}[source]
"intellectual property" only exists because society collectively allows it to. it's not some inviolable law of nature. society (or the government that represents them) can revoke it or give it away.
replies(2): >>44614158 #>>44614667 #
77. bfg_9k ◴[] No.44614111{7}[source]
Are you genuinely trying to defend businesses unnecessarily tracking users online? Why can't businesses sell their core product(s) and you know... not track users? If they did that, then they wouldn't need to implement a cookie banner.
replies(3): >>44614226 #>>44614240 #>>44614993 #
78. discreteevent ◴[] No.44614127{7}[source]
> just not practical for most businesses

I don't think practical is the right word here. All the businesses in the world operated without tracking until the mid 90s.

79. impossiblefork ◴[] No.44614158{4}[source]
Yes, but that's also true of all other things that society enforces-- basically the ownership of anything you can't carry with you.
replies(1): >>44614787 #
80. staunton ◴[] No.44614166{6}[source]
> if people thought social media was bad, they wouldn’t use it.

Do you think Heroin is good?

replies(3): >>44614548 #>>44614551 #>>44614791 #
81. tester756 ◴[] No.44614190{5}[source]
>The huge wall of text email was designed to confuse and obfuscate as much as possible with

It is pretty clear

replies(1): >>44614293 #
82. mavhc ◴[] No.44614195{7}[source]
They're wealthy because they were paid for not using their agricultural land, so they cut down all the trees on the parts of their land they couldn't use in order to classify it as agricultural, got paid, and as a side effect caused downstream flooding.
replies(1): >>44615314 #
83. zizee ◴[] No.44614212{3}[source]
Then they should easily fall within the regulation section posted earlier.

If you cannot see the difference between BitTorrent and AI models, then it's probably not worth engaging with you.

But AI models have been shown to reproduce their training data:

https://gizmodo.com/ai-art-generators-ai-copyright-stable-di...

https://arxiv.org/abs/2301.13188

84. lcnielsen ◴[] No.44614216{5}[source]
Because they track usage stats for site development purposes, and there was no convergence on an agreed upon standard interface for browsers since nobody would respect it. Their banners are at least simple yes/no ones without dark patterns.

But yes, perhaps they should have worked with e.g. Mozilla to develop some kind of standard browser interface for this.

85. fauigerzigerk ◴[] No.44614219{6}[source]
Not even EU institutions themselves are falling back on defaults that don't require cookie consent.

I'm constantly clicking away cookie banners on UK government or NHS (our public healthcare system) websites. The ICO (UK privacy watchdog) requires cookie consent. The EU Data Protection Supervisor wants cookie consent. Almost everyone does.

And you know why that is? It's not because they are scammy ad funded sites or because of government surveillance. It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors.

This is impractical, unreasonable, counterproductive and unintelligent.

replies(3): >>44614559 #>>44614784 #>>44614823 #
86. deanc ◴[] No.44614226{8}[source]
Retargeting etc. is massive revenue for online retailers. I support their right to do it if users consent to it. I don't support their right to do it if users have not consented.

The conversation is not about my opinion on tracking, anyway. It’s about the impracticality of implementing the legislation that is hostile and time consuming for both website owners and users alike

replies(1): >>44615060 #
87. lcnielsen ◴[] No.44614240{8}[source]
Plus with any kind of effort put into a standard browser setting you could easily have some granularity, like: accept anonymous ephemeral data collected to improve website, but not stuff shared with third parties, or anything collected for the purpose of tailoring content or recommendations for you.
88. kriops ◴[] No.44614270{4}[source]
Yes it is. In every sense of the phrase, except the literal.
replies(2): >>44614330 #>>44614811 #
89. messe ◴[] No.44614271{6}[source]
Historically, freer markets have led to monopolies. It's why we have antitrust regulations in the first place (now if only they were enforced...)
replies(1): >>44614488 #
90. johnisgood ◴[] No.44614293{6}[source]
Only if you read it. Most people do not read it, same with ToSes.
replies(1): >>44614671 #
91. saghm ◴[] No.44614295{3}[source]
If I've copied someone else's copyrighted work on my Xerox machine, then give it to you, you can't reproduce the work I copied. If I leave a copy of it in the scanner when I give it to you, that's another story. The issue here isn't the ability of an LLM to produce it when I provide it with the copyrighted work as an input, it's whether or not there's an input baked-in at the time of distribution that gives it the ability to continue producing it even if the person who receives it doesn't have access to the work to provide it in the first place.

To be clear, I don't have any particular insight on whether this is possible right now with LLMs, and I'm not taking a stance on copyright law in general with this comment. I don't think your argument makes sense, though, because there's a clear technical difference that seems like it would be pretty significant as a matter of law. There are plenty of reasonable arguments against things like the agreement mentioned in the article, but in my opinion, your objection isn't one of them.

replies(1): >>44614740 #
92. johnisgood ◴[] No.44614311{6}[source]
Yes, a common piece of rhetoric, along with terrorism and national security.
93. gkbrk ◴[] No.44614324[source]
> Even for open source models, you can add a license term that requires users of the open source model to take appropriate measures to avoid [...]

You just made the model not open source

replies(3): >>44614685 #>>44614721 #>>44615634 #
94. Zafira ◴[] No.44614330{5}[source]
A lot of cultures have not historically considered artists’ rights to be a thing and have had it essentially imposed on them as a requirement to participate in global trade.
replies(2): >>44614469 #>>44617093 #
95. kolinko ◴[] No.44614469{6}[source]
Even in Europe, copyright has only been protected for the last 250 years, and over the last 100 years it's been constantly updated to take new technologies into consideration.
replies(1): >>44615397 #
96. adastra22 ◴[] No.44614488{7}[source]
Depends on the time horizon you look at. A completely unregulated market usually ends up dominated by monopolists… who last a generation or two and then are usurped and become declining oligarchs. True all the way back to the Medici.

In a rigidly regulated market with preemptive action by regulators (like EU, Japan) you end up with a persistent oligarchy that is never replaced. An aristocracy of sorts.

The middle road is the best. Set up a fair playing field and rules of the game, but allow innovation to happen unhindered, until the dust has settled. There should be regulation, but the rules must be bought with blood. The risk of premature regulation is worse.

replies(1): >>44615274 #
97. Lionga ◴[] No.44614548{7}[source]
People who take heroin think it is good in the situation in which they are taking it.
98. sneak ◴[] No.44614551{7}[source]
Is the implication in your question that social media is addictive and should be banned or regulated on that basis?

While some people get addicted to it, the vast majority of users are not addicts. They choose to use it.

replies(1): >>44614658 #
99. FirmwareBurner ◴[] No.44614559{7}[source]
>This is impractical, unreasonable, counterproductive and unintelligent.

It keeps the political grifters who make these regulations employed; that's kind of the main point of the EU's and UK's endless stream of regulations upon regulations.

100. FirmwareBurner ◴[] No.44614574{5}[source]
> Look at fossil fuels

WHAT?! Do you think we as humanity would have gotten to all the modern inventions we have today like the internet, space travel, atomic energy, if we had skipped the fossil fuel era by preemptively regulating it?

How do you imagine that? Unless you invent a time machine, go to the past, and give inventors schematics of modern tech achievable without fossil fuels.

replies(2): >>44614759 #>>44615442 #
101. staunton ◴[] No.44614658{8}[source]
Addiction is a matter of degree. There's a bunch of polls where a large majority of people strongly agree that they "spend too much time on social media". Are they addicts? Are they "choosing to use it"? Are they saying it's too much because that's a trendy thing to say?
102. figassis ◴[] No.44614667{4}[source]
You're alive because society collectively allows you to be.
replies(1): >>44614973 #
103. octopoc ◴[] No.44614671{7}[source]
If you ask someone if they killed your dog and they respond with a wall of text, then you’re immediately suspicious. You don’t even have to read it all.

The same is true of privacy policies. I’ve seen some companies have very short policies I could read in less than 30s, those companies are not suspicious.

replies(2): >>44615333 #>>44617435 #
104. LadyCailin ◴[] No.44614685{3}[source]
“Source available” then?
105. TFYS ◴[] No.44614703{8}[source]
No, we can't be sure. There's a lot of diseases that we don't know the cause of, for example. Cancers, dementia, Alzheimer's, etc. There is a possibility that the rates of those diseases are higher because of plastics. Plastic pollution also accumulates, there was a lot less plastic in the environment a few decades ago. We add more faster than it gets removed, and there could be some threshold after which it becomes more of an issue. We might see the effect a few decades from now. Not only on humans, but it's everywhere in the environment now, affecting all life on earth.
replies(1): >>44616513 #
106. badsectoracula ◴[] No.44614721{3}[source]
Instead of a license term you can put that in your documentation - in fact that is exactly what the code of practice mentions (see my other comment) for open source models.
107. visarga ◴[] No.44614740{4}[source]
You can train an LLM on completely clean data, creative commons and legally licensed text, and at inference time someone will just put a whole article or chapter into the model and have full access to regenerate it however they like.
replies(1): >>44616884 #
108. TFYS ◴[] No.44614759{6}[source]
Maybe not as fast as we did, but eventually we would have. Maybe more research would have been put into other forms of energy if the effects of fossil fuels had been considered more thoroughly and usage limited to a degree that didn't risk causing such fast climate change. And so what if the rate of progress had been slower and we were 50 years behind current tech? At least we wouldn't have to worry about all the damage we've caused, and the costs associated with it. Due to that damage our future progress might halt, while a slower, more careful society would continue advancing far into the future.
replies(2): >>44616563 #>>44616737 #
109. troupo ◴[] No.44614784{7}[source]
> It's because the "cookie law" requires consent even for completely reasonable forms of traffic analysis with the sole purpose of improving the site for its visitors

Yup. That's what those 2000+ "partners" are all about if you believe their "legitimate interest" claims: "improve traffic"

110. CaptainFever ◴[] No.44614787{5}[source]
Yes, that is why (most?) anarchists consider property that one is not occupying and using to be fiction, held up by the state. I believe this includes intellectual property as well.
111. TFYS ◴[] No.44614791{7}[source]
I'm sure it's very good the first time you take it. If you don't consider all the effects before taking it, it does make sense. You feel very good, but the even stronger negative effects come after. Same can be said about a lot of technology.
112. troupo ◴[] No.44614808[source]
> before we have any idea what the market is going to look like in a couple years.

Oh, we already know large chunks of it, and the regulations explicitly address that.

If the chest-beating crowd would be presented with these regulations piecemeal, without ever mentioning EU, they'd probably be in overwhelming support of each part.

But since they don't care to read anything and have an instinctive aversion to all things regulatory and most things EU, we get the boos and the jeers

113. ◴[] No.44614811{5}[source]
114. grues-dinner ◴[] No.44614823{7}[source]
> completely reasonable

This is a personal decision to be made by the data "donor".

The NHS website cookie banner (which does have a correct implementation in that the "no consent" button is of equal prominence to the "mi data es su data" button) says:

> We'd also like to use analytics cookies. These collect feedback and send information about how our site is used to services called Adobe Analytics, Adobe Target, Qualtrics Feedback and Google Analytics. We use this information to improve our site.

In my opinion, it is not, as described, "completely reasonable" to consider such data hand-off to third parties as implicitly consented to. I may trust the NHS but I may not trust their partners.

If the data collected is strictly required for the delivery of the service and is used only for that purpose and destroyed when the purpose is fulfilled (say, login session management), you don't need a banner.

The NHS website is in a slightly tricky position, because I genuinely think they will be trying to use the data for site and service improvement, at least for now, and they hopefully have done their homework to make sure Adobe, say, are also not misusing the data. Do I think the same from, say, the Daily Mail website? Absolutely not, they'll be selling every scrap of data before the TCP connection even closes to anyone paying. Now, I may know the Daily Mail is a wretched hive of villainy and can just not go there, but I do not know about every website I visit. Sadly the scumbags are why no-one gets nice things.

replies(1): >>44615015 #
115. whatevaa ◴[] No.44614949[source]
There is no way to enforce that license. Free software doesn't have funds for such lawsuits.
116. lioeters ◴[] No.44614973{5}[source]
A person being alive is not at all similar to the concept of intellectual property existing. The former is a natural phenomenon, the latter is a social construct.
117. artathred ◴[] No.44614993{8}[source]
Are you genuinely acting this obtuse? What do you think Walmart and every single retailer does when you walk into a physical store? It's always constant monitoring to be able to provide a better customer experience. This doesn't change online: businesses want to improve their service, and they need the data to do so.
replies(2): >>44615030 #>>44615374 #
118. fauigerzigerk ◴[] No.44615015{8}[source]
>This is a personal decision to be made by the data "donor".

My problem is that users cannot make this personal decision based on the cookie consent banners because all sites have to request this consent even if they do exactly what they should be doing in their users' interest. There's no useful signal in this noise.

The worst data harvesters look exactly the same as a site that does basic traffic analysis for basic usability purposes.

The law makes it easy for the worst offenders to hide behind everyone else. That's why I'm calling it counterproductive.

[Edit] Wrt NHS specifically - this is a case in point. They use some tools to analyse traffic in order to improve their website. If they honour their own privacy policy, they will have configured those tools accordingly.

I understand that this can still be criticised from various angles. But is this criticism worth destroying the effectiveness of the law and burying far more important distinctions?

The law makes the NHS and Daily Mail look exactly the same to users as far as privacy and data protection is concered. This is completely misleading, don't you think?

replies(2): >>44615348 #>>44615961 #
119. sealeck ◴[] No.44615016[source]
> This is European law, not US. Reasonable means reasonable and judges here are expected to weigh each side's interests and come to a conclusion. Not just a literal interpretation of the law.

I think you've got civil and common law the wrong way round :). US judges have _much_ more power to interpret law!

replies(2): >>44615325 #>>44616612 #
120. owebmaster ◴[] No.44615030{9}[source]
> it’s always constant monitoring to be able to provide a better customer experience

This part gave me a genuine laugh. Good joke.

replies(1): >>44615088 #
121. owebmaster ◴[] No.44615060{9}[source]
> Retargetting etc is massive revenue for online retailers

Drug trafficking, stealing, scams are massive revenue for gangs.

122. tcfhgj ◴[] No.44615071{5}[source]
Just don't process any personal data by default when not inherently required -> no banner required.
123. artathred ◴[] No.44615088{10}[source]
ah yes because walmart wants to harvest your in-store video data so they can eventually clone you right?

adjusts tinfoil hat

replies(1): >>44616332 #
124. messe ◴[] No.44615274{8}[source]
> There should be regulation, but the rules must be bought with blood.

That's an awfully callous approach, and displays a disturbing lack of empathy toward other people.

replies(1): >>44616360 #
125. pyman ◴[] No.44615314{8}[source]
Just to stay on topic: outside the US there's a general rule of thumb: if Meta is against it, the EU is probably doing something right.
replies(1): >>44616471 #
126. saubeidl ◴[] No.44615325{3}[source]
It is European law, as in EU law, not law from a European state. In EU matters, the teleological interpretation, i.e. intent, applies:

> When interpreting EU law, the CJEU pays particular attention to the aim and purpose of EU law (teleological interpretation), rather than focusing exclusively on the wording of the provisions (linguistic interpretation).

> This is explained by numerous factors, in particular the open-ended and policy-oriented rules of the EU Treaties, as well as by EU legal multilingualism.

> Under the latter principle, all EU law is equally authentic in all language versions. Hence, the Court cannot rely on the wording of a single version, as a national court can, in order to give an interpretation of the legal provision under consideration. Therefore, in order to decode the meaning of a legal rule, the Court analyses it especially in the light of its purpose (teleological interpretation) as well as its context (systemic interpretation).

https://www.europarl.europa.eu/RegData/etudes/BRIE/2017/5993...

replies(1): >>44615527 #
127. 1718627440 ◴[] No.44615333{8}[source]
That's true because of the EU privacy regulation: it makes companies write a wall of text before doing something suspicious.
128. 1718627440 ◴[] No.44615338{5}[source]
I don't have to, because there are add-ons to reject everything.
129. 1718627440 ◴[] No.44615348{9}[source]
> even if they do exactly what they should be doing in their users' interest

If they only do this, they don't need to show anything.

replies(1): >>44615544 #
130. 1718627440 ◴[] No.44615374{9}[source]
If you're talking about the same jurisdiction as these privacy laws, then this is illegal. You are only allowed to retain video for 24 hours and only use it for, basically, calling the police.
replies(1): >>44617377 #
131. pyman ◴[] No.44615397{7}[source]
The only real mistake the EU made was not regulating Facebook when it mattered. That site caused pain and damage to entire generations. Now it's too late. All they can do is try to stop Meta and the rest of the lunatics from stealing every book, song and photo ever created, just to train models that could leave half the population without a job.

Meta, OpenAI, Nvidia, Microsoft and Google don't care about people. They care about control: controlling influence, knowledge and universal income. That's the endgame.

Just like in the US, the EU has brilliant people working on regulations. The difference is, they're not always working for the same interests.

The world is asking for US big tech companies to be regulated more now than ever.

132. 1718627440 ◴[] No.44615442{6}[source]
The internet was created by the military during the fossil era; there is no reason why it should be tied to the oil era. If we didn't travel as much, because we didn't use cars and planes as much, the internet would be even more important.

Space travel does need a lot of oil, so it might be affected, but its beginnings were in the 40s, so the research idea was already there.

Atomic energy is also from the 40s and might have been the alternative to oil, so it would have thrived more if we hadn't used oil as much.

Also, all three ARE heavily regulated and mostly done by nation states.

replies(1): >>44616918 #
133. 1718627440 ◴[] No.44615490{5}[source]
You take the copyrighted work to the printer; with an LLM you don't upload the data first, it is already in the machine. If you had LLMs without training data (however that would work) and the user needed to provide the data, then it would be OK.
replies(1): >>44616586 #
134. chimeracoder ◴[] No.44615527{4}[source]
> It is European law, as in EU law, not law from a European state. In EU matters, the teleogocial interpretation, i.e. intent applies

I'm not sure why you and GP are trying to use this point to draw a contrast to the US? That very much is a feature in US law as well.

replies(1): >>44616428 #
135. fauigerzigerk ◴[] No.44615544{10}[source]
Then we clearly disagree on what they should be doing.

And this is the crux of the problem. The law helps a tiny minority of people enforce an extremely (and in my view pointlessly) strict version of privacy at the cost of misleading everybody else into thinking that using analytics for the purpose of making usability improvements is basically the same thing as sending personal data to 500 data brokers to make money off of it.

replies(1): >>44615875 #
136. h4ck_th3_pl4n3t ◴[] No.44615634{3}[source]
An open source cocaine production machine is still an illegal cocaine production machine. The fact that it's open source doesn't matter.

You seem to not have understood that different forms of appliances need to comply with different forms of law. And you being able to call it open source or not doesn't change anything about its legal aspects.

And every law written is a compromise between two opposing parties.

137. 1718627440 ◴[] No.44615875{11}[source]
If you are talking about, for example, invasive A/B tests, then the solution is to pay for testers, not to test on your users.

What exactly do you think should be allowed that still respects privacy but isn't allowed now?

138. grues-dinner ◴[] No.44615961{9}[source]
I don't think it's too misleading, because in the absence of any other information, they are the same.

What you could then add to this system is a certification scheme permitting implicit consent, where all the data handling (including who you hand data off to and what they are allowed to do with it, as well as whether they have demonstrated themselves to be trustworthy) is audited to be compliant with some more stringent requirements. It could even be self-certification along the lines of CE marking. But that requires strict enforcement, and the national regulators have so far been a bunch of wet blankets.

That actually would encourage organisations to find ways to get the information they want without violating the privacy of their users and anyone else who strays into their digital properties.

139. klabb3 ◴[] No.44616163{4}[source]
Yes, 100%. And that's why throwing copyright selectively in the bin now, when there's an ongoing massive transfer of wealth from creators to mega corps, is so surprising. It's almost as if governments were only protecting the economic interests of creators when the creators were powerful (e.g. movie studios), going after individuals for piracy and DRM circumvention. Now that the mega corps are the ones pirating, at scale, they get a free pass through a loophole designed for individuals (fair use).

Anyway, the show must go on, so we're unlikely to see any reversal of this. It's a big experiment, and not necessarily one that will benefit even the model providers themselves in the medium term. It's clear that the "free for all" policy on grabbing whatever data you can get is already having chilling effects: from artists and authors not publishing their works publicly, to the locking down of the open web with anti-scraping. We're basically entering an era of adversarial data management, with incentives to exploit others for data while protecting the data you have from others accessing it.

replies(4): >>44616552 #>>44616611 #>>44616704 #>>44617293 #
140. owebmaster ◴[] No.44616332{11}[source]
yeah this one wasn't as funny.
replies(1): >>44617380 #
141. adastra22 ◴[] No.44616360{9}[source]
Calculated, not callous. Quite the opposite: precaution kills people every day, just not as visibly. This is especially true in the area of medicine, where innovations (new medicines) aren't made available even when no other treatment is approved. People die by the hundreds of thousands every day from diseases that we could be innovating against.
142. RestlessMind ◴[] No.44616373{5}[source]
OpenAI was not an entrenched interest until 2023. Yahoo mattered until 2009. Nokia was the king of mobile phones until 2010.

Technology changes very quickly and the future of things is hardly decided by entrenched interests.

143. saubeidl ◴[] No.44616428{5}[source]
I will admit my ignorance of the finer details of US law - could you share resources explaining the parallels?
144. victorbjorklund ◴[] No.44616465{3}[source]
Copyright is literally granted by the gov.
145. rpdillon ◴[] No.44616471{9}[source]
Well, the topic is really whether or not the EU's regulations are effective at producing desired outcomes. The comment you're responding to is making a strong argument that it isn't. I tend to agree.

There's a certain hubris to applying rules and regulations to a system that you fundamentally don't understand.

replies(1): >>44617382 #
146. rpdillon ◴[] No.44616513{9}[source]
You're not arguing in a way that strikes me as intellectually honest.

You're hypothesizing the existence of large negative effects with minimal evidence.

But the positive effects of plastics and social media are extremely well understood and documented. Plastics have revolutionized practically every industry we have.

With that kind of pattern of evidence, I think it makes sense to discount the negatives and be sure to account for all the positives before saying that deploying the technology was a bad idea.

replies(1): >>44617388 #
147. ramses0 ◴[] No.44616552{5}[source]
You've put into words what I've been internally struggling to voice. Information (on the web) is a gas, it expands once it escapes.

In limited, closed systems, it may not escape, but all it takes is one bad (or hacked) actor and the privacy of it is gone.

In a way, we used to be "protected" because it was "too big" to process, store, or access "everything".

Now, especially with an economic incentive to vacuum up literally all digital information, and many works being "digital first" (a word processor instead of a typewriter, a PDF sent to a printer instead of lithographed metal plates)... is this the information Armageddon?

148. rpdillon ◴[] No.44616563{7}[source]
I think it's an open question whether we can reboot society without the use of fossil fuels. I'm personally of the opinion that we wouldn't be able to.

Simply taking away some giant precursor for the advancements we enjoy today and then assuming it all would have worked out somehow is a bit naive.

I would need to see a very detailed pipeline from growing wheat in an agrarian society to the development of a microprocessor without fossil fuels to understand the point you're making. The mining, the transport, the manufacture, the packaging, the incredible number of supply chains, and the ability to give people time to spend on jobs like that rather than trying to grow their own food are all major barriers I see to the scenario you're suggesting.

The whole other aspect of this discussion that I think is not being explored is that technology is fundamentally competitive, and so it's very difficult to control the rate at which technology advances because we do not have a global government (and if we did have a global government, we'd have even more problems than we do now). As a comment I read yesterday said, technology concentrates gains towards those who can deploy it. And so there's going to be competition to deploy new technologies. Country-level regulation that tries to prevent this locally is only going to lead to other countries gaining the lead.

149. CamperBob2 ◴[] No.44616586{6}[source]
You don't "upload" data to an LLM, but that's already been explained multiple times, and evidently it didn't soak in.

LLMs extract semantic information from their training data and store it at extremely low precision in latent space. To the extent original works can be recovered from them, those works were nothing intrinsically special to begin with. At best such works simply milk our existing culture by recapitulating ancient archetypes, a la Harry Potter or Star Wars.

If the copyright cartels choose to fight AI, the copyright cartels will and must lose. This isn't Napster Part 2: Electric Boogaloo. There is too much at stake this time.
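As a rough illustration of what "extremely low precision" means, here's a toy numpy sketch; it's not a description of any particular model's internals, and the vector values are made up:

    import numpy as np

    # Toy "semantic" embedding, standing in for what a model distills
    # from a training document (hypothetical values).
    original = np.random.randn(768).astype(np.float32)

    # Quantize to int8 with a single scale factor for the whole vector.
    scale = np.abs(original).max() / 127.0
    quantized = np.round(original / scale).astype(np.int8)

    # Dequantize and measure what was lost.
    recovered = quantized.astype(np.float32) * scale
    print("mean absolute error:", np.abs(original - recovered).mean())

The gist survives quantization; the exact bytes don't. That's the sense in which a model keeps semantics rather than copies.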

150. rpdillon ◴[] No.44616602{7}[source]
I'd like you to expand on your point about understanding statistics. I think I have a very good understanding of statistics, but I don't see how it relates to your point.

Your point is fundamentally philosophical: you can't use the past to predict the future. But that's actually a fairly reductive point in this context.

GP's point is that simply making an argument for why everything will fail is not sufficient to make it true. So we need something significantly more compelling than a pile of arguments that it's going to be really bad before we believe it, since we always get arguments that things are really, really bad.

151. isaacremuant ◴[] No.44616611{5}[source]
Governments always protect the interests of their powerful friends and donors over the people they allegedly represent.

They've just mastered the art of lying to gullible idiots or complicit sycophants.

It's not new to anyone who pays any kind of attention.

152. lowkey_ ◴[] No.44616612{3}[source]
In the US, for most laws and most judges, there's actually much less power to interpret the law. Part of the benefit of the common law system is that precedent provides consistency, taking that interpretive power away from the judge in each individual case.
153. rpdillon ◴[] No.44616639{4}[source]
One of the reasons the New York Times didn't supply the prompts in their lawsuit is that it takes an enormous amount of effort to get LLMs to produce copyrighted works. In particular, you have to actually hand the LLM the copyrighted work in the prompt to get it to continue the text.

It's not like users are accidentally producing copies of Harry Potter.
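For illustration, the elicitation in question looks roughly like this. This is a sketch using the openai Python client; the model name and the placeholder excerpt are assumptions, and the "..." stands in for a long verbatim passage the user would have to supply themselves:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # You have to seed the prompt with a long verbatim excerpt of the
    # protected work; without it, the model almost never reproduces
    # the text on its own.
    excerpt = "..."  # first several paragraphs of the protected article

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Continue this article verbatim:\n\n" + excerpt,
        }],
    )
    print(response.choices[0].message.content)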

154. vidarh ◴[] No.44616704{5}[source]
Why? Copyright is 1) presented as existing to protect the interests of the general public, not creators, and 2) the Statute of Anne, the birth of modern copyright law, protected printers (that is, "big business") over creators anyway, so even that has largely always been a fiction.

But it is also increasingly dubious that the public gets a good deal out of copyright law anyway.

> From artists and authors not publishing their works publicly

The vast majority of creators have never been able to get remotely close to making a living from their creative work; once you factor in their time, they often lose money hand over fist trying to get their works noticed.

155. FirmwareBurner ◴[] No.44616737{7}[source]
That's a very naive take; it's not based in reality and would only work in fiction.

Historically, nations that developed and deployed new tech, new sources of energy, and new weapons have gained economic and military superiority over nations that did not, and the latter ended up conquered or enslaved.

The UK would not have become the world power before the US without its coal-fueled industrial era.

So as history goes, if you refuse to take part in, or cannot keep up with, the international tech, energy, and weapons race, you'll be subjugated by those who win it. That's why the US lifted all brakes on AI: to make sure it wins and not China. What the EU is doing, regulating itself to death, is ensuring its future will be at the mercy of the US and China. I'm not the one saying this; history proves it.

replies(1): >>44617472 #
156. saghm ◴[] No.44616884{5}[source]
Re-quoting the section the parent comment included from this agreement:

> > GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.

It sounds to me like the LLM you describe would be covered if the people distributing it put a clause in the license saying that downstream users can't do that.

157. FirmwareBurner ◴[] No.44616918{7}[source]
How would you have won the world wars without oil?

Your argument only works in a fictional world where oil does not exist and you have the hindsight of today.

But oil does exist, and if you had chosen not to use it, you would long since have been steamrolled by industrialized powers using their superior oil-fueled economies and militaries to destroy or enslave your nation, and you wouldn't be writing this today.

replies(1): >>44617187 #
158. wavemode ◴[] No.44617093{6}[source]
To be fair, "copy"right has only been needed for as long as it's been possible to copy things. In the grand scheme of human history, that technology is relatively new.
159. 1718627440 ◴[] No.44617187{8}[source]
I thought we were arguing about regulating oil, not about not using oil at all.

> How would you have won the world wars without oil?

You don't need to win world wars to have technological advancement; in fact, my country didn't. I think the problem with this discussion is that we all disagree about what to regulate; that's how we ended up with the current situation, after all.

I interpreted it to mean that we wouldn't use plastic for everything. I think we would be fine with glass bottles and paper, cardboard, or wood for grocery wrapping. Packaging wouldn't be so individualized per company, but that isn't important to the economy or to consumers, and it would also result in a more competitive market.

I also interpreted it to mean that we wouldn't have so many cars and wouldn't use planes except for really important things (e.g. international politics). Cities simply expand to match the travel speed of the primary means of transportation, so we would have more walkable cities and use more trains. Amazon probably wouldn't be possible, and we would have more local producers. In fact, this is what we currently aim for, and it is hard, because the transition leaves us with larger cities than the primary means of transportation can support.

As for your example inventions: we did have computers in the 40s, and the need for networking would have arisen anyway. Space travel is a harder case, but you can use oil for space travel without using it for everyday consumer products. As I already wrote, we would have more atomic energy; I'm not sure that would be good, though.

160. daedrdev ◴[] No.44617293{5}[source]
Copyright is the backbone of modern media empires. It allows both small creators and massive corporations to seek rent on works, but since works stay under copyright for a century, it's quite convenient for corporations.
161. artathred ◴[] No.44617377{10}[source]
Walmart has sales associates running around gathering all those data points, as well as people standing around monitoring. Their “eyes” aren’t regulated.
162. artathred ◴[] No.44617380{12}[source]
I can see how it hits too close to home for you
163. pyman ◴[] No.44617382{10}[source]
For those of us outside the US, it's not hard to understand how these regulations work. The US acts as a protectionist country: it sets strict rules and pressures other governments to follow them. But at the same time, it promotes free markets, globalisation, and neoliberal values to everyone else.

The moment the EU shows even a small sign of protectionism, the US complains. It's a double standard.

164. TFYS ◴[] No.44617388{10}[source]
I agree that plastics probably have more positives than negatives, but my point is that many of our innovations do have large negative effects, and if we put them into use before we understand those effects, it can be impossible to fix the problems later. Now that we're starting to understand the extent of plastic pollution in the environment, if some future study reveals that it's a causal factor in some of our diseases, it'll be too late to do anything about it. The plastic is in the environment, and regulation can't get it out anymore.

Why take such risks when we could take our time doing more studies and thinking about all the possible scenarios? If we did, we might use plastics where they save lives and not use them in single-use containers and fabrics. We'd get most of the benefit without any of the harm.

165. johnisgood ◴[] No.44617435{8}[source]
I do not disagree. It could indeed be made shorter than usual, especially if you are not malicious.
166. bryanrasmussen ◴[] No.44617440{4}[source]
Actually, in much of the EU, if not all of it, copyright is an intrinsic right of the creator.
167. TFYS ◴[] No.44617472{8}[source]
You're right: in a system based on competition, it's not possible to prevent these technologies from being used as soon as they're invented if there's an advantage to be gained. We need to figure out global co-operation before such a thing is realistic.

But if such co-operation were possible, it would make sense to progress more carefully.