    45 points taubek | 12 comments
    1. gnabgib ◴[] No.44382592[source]
    Discussion (168 points, 1 day ago, 201 comments) https://news.ycombinator.com/item?id=44367850
    2. roywiggins ◴[] No.44382655[source]
    > Then, its service providers stripped the books from their bindings, cut their pages to size, and scanned the books into digital form — discarding the paper originals.

    This is basically the plot to Vinge's Rainbows End, AI and all.

    replies(1): >>44385970 #
    3. pseufaux ◴[] No.44385580[source]
    > But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable.

    This feels like an unwarranted anthropomorphization of what LLMs are doing.

    replies(3): >>44387177 #>>44387291 #>>44387683 #
    4. kristianp ◴[] No.44385970[source]
    Is that the one where the books were shredded and the shreds scanned, being reconstructed from the shreds as part of the process?
    replies(1): >>44386317 #
    5. jdboyd ◴[] No.44386317{3}[source]
    Yes
    6. Am4TIfIsER0ppos ◴[] No.44386895[source]
    Stealing? Which book store did they burgle? Was it a publisher's warehouse?
    7. reverendsteveii ◴[] No.44387177[source]
    Like corporations, the machines will be human for purposes of rights and abstract, ephemeral entities for purposes of responsibility.
    replies(1): >>44388784 #
    8. bgwalter ◴[] No.44387291[source]
    Indeed, we don't charge humans for breathing, but we do attempt to discourage CO2 emissions from machines. These are completely different things on a completely different scale.

    Misanthropic has convinced this particular judge, but there are many others, especially in other countries.

    9. gschizas ◴[] No.44387303[source]
    I wonder how this is going to affect the Disney+Universal vs OpenAI trial.
    10. hn_throwaway_99 ◴[] No.44387683[source]
    I feel like the fundamental issue, and the thing people really have a problem with, is that the speed and scale at which LLMs operate completely break the use cases for which fair use was originally envisioned. IMO, existing copyright law is just wholly unsuited to deal with the consequences of AI.

    That is, I don't think anyone (especially on this website) would have a problem if someone read a ton of books, and then opened a website where you could chat with them and ask them questions about the books. But if this person had "super abilities", where they could read every book that ever existed, respond almost instantly to questions about any book they had read, and answer millions of questions simultaneously, I think that "fair use" as it exists now would never have come to exist - it completely breaks the economic model that copyright was supposed to incentivize in the first place. I'm not arguing which position is right or wrong, but I am arguing that "if a human did it, it would be fair use" is a very bad analogy.

    As a similar example, in the US, courts have regularly held that people walking around outside don't have an expectation of privacy. But what if computers could record you, upload the footage to a website, and use facial recognition so that anyone else in the world could set an alert to be notified whenever you appeared on some certain camera? The original logic behind the "no expectation of privacy in public" rulings breaks down solely due to the speed and scale at which computers can operate.

    11. pseufaux ◴[] No.44388784{3}[source]
    I'm unsure if this is true. I'm far from an expert in the current legal framework, but so far the court cases regarding liability in autonomous vehicle crashes have held humans responsible. That may change as driverless vehicles reach higher levels of automation, but in my understanding the jury is still out.

    I don't see why it would be different for LLMs.