
451 points croes | 3 comments
mattxxx ◴[] No.43962976[source]
Well, firing someone for this is super weird. It seems like an attempt to censor an interpretation of the law that:

1. Criticizes a highly useful technology

2. Matches a potentially outdated, strict interpretation of copyright law

My opinion: I think using copyrighted data to train models for sure seems classically illegal. Despite that, humans can read a book, get inspiration, and write a new book without being litigated against. When I look at the litany of derivative fantasy novels, it's obvious they're not all fully independent works.

Since AI is and will continue to be so useful and transformative, I think we just need to acknowledge that our laws did not accommodate this use case, and then we should change them.

ulbu ◴[] No.43963480[source]
These comparisons of LLMs with human artists copying are just ridiculous. It's like saying "well, humans are allowed to break twigs and damage the planet in various ways, so why not allow building a fucking DEATH STAR".

Abstracting LLMs from their operators and owners, their possible (and probable) ends, and the territories they trample upon is nothing short of eye-popping to me. How utterly negligent and disrespectful of fellow people must one be at heart to give any credence to such arguments?

1. Intralexical ◴[] No.43964159[source]
It's a very consistently Silicon Valley mindset. Almost every company that makes it big in tech, be it Facebook and Google monetizing our personal data or Uber and Amazon trampling workers' rights, makes money by reducing people to objects that can be bought and sold, more than almost any other industry does. No matter the company, all claimed prosocial intentions are just window dressing to convince us to be on board with our own commodification.

That's also why I'm really not worried about the "AI singularity" folks. The hype is IMO blatantly unsubstantiated by the actual capabilities, but gets pushed anyway only because it speaks to this deep-seated faith held across the industry. "AI" is the culmination of an innate belief that people should be replaceable, fungible, perfectly obedient objects, and such a psychosis blinds decision-makers to its actual limits. Only trouble is whether they have the political power to try to force it anyway.

2. palmotea ◴[] No.43967100[source]
> That's also why I'm really not worried about the "AI singularity" folks. The hype is IMO blatantly unsubstantiated by the actual capabilities, but gets pushed anyway only because it speaks to this deep-seated faith held across the industry. "AI" is the culmination of an innate belief that people should be replaceable, fungible, perfectly obedient objects, and such a psychosis blinds decision-makers to its actual limits. Only trouble is whether they have the political power to try to force it anyway.

I'm worried because decision-makers genuinely don't seem to be bothered very much by actual capabilities, and are perfectly happy to trade massive reductions in quality for cost savings. In other words, I don't think the limits of LLMs will actually constrain the decision-makers.

3. johnnyanmac ◴[] No.43969158[source]
It will when it inevitably hits their wallets, be it via public rejection of a lower-quality product or via court orders. But both of those move slowly, so we're in for a while.

Even with NFTs, it still took a full year or more of everyone trying to shill them before the sentiment turned. Machine learning, meanwhile, is actually useful, but is being shoved into every hole.