    443 points wg0 | 16 comments
    1. bschne ◴[] No.45899276[source]
    The same thing happened to German magazine Spiegel recently, see the correction remark at the end of this article

    https://www.spiegel.de/wirtschaft/unternehmen/deutsche-bahn-...

    replies(3): >>45899456 #>>45900256 #>>45900962 #
    2. kavith ◴[] No.45899456[source]
    Fair play to them for owning up to their mistake, and not just pretending like it didn't happen!
    replies(5): >>45899575 #>>45899645 #>>45899892 #>>45899966 #>>45901762 #
    3. bonesss ◴[] No.45899575[source]
    As programmers I think we can extend some professional empathy and understanding: copy-and-pasting all day is a lot harder than you’d think.
    replies(1): >>45899701 #
    4. CGamesPlay ◴[] No.45899645[source]
    Maybe, although I'm a bit doubtful that they were 100% honest.

    > Entgegen unseren Standards ["Contrary to our standards"]

    5. tonyhart7 ◴[] No.45899701{3}[source]
    Compared to writing it yourself? Absolutely not.
    replies(1): >>45900971 #
    6. yard2010 ◴[] No.45899892[source]
    You're absolutely right! But they can shove this euphemism. Just say that ChatGPT wrote the article and no one read it before publishing; no need for all the fluff.
    replies(1): >>45901063 #
    7. reaperducer ◴[] No.45899966[source]
    > Fair play to them for owning up to their mistake, and not just pretending like it didn't happen!

    That's what the legitimate media has done for the last couple of hundred years. Every issue of the New York Times has a Corrections section. I think the Washington Post's is called Corrections and Amplifications.

    Bloggers just change the article and hope it didn't get cached in the Wayback Machine.

    8. dredmorbius ◴[] No.45900256[source]
    Scientific paper as well: <https://fediscience.org/@GeorgKrammer/115536337398227063>

    Original <https://doi.org/10.1016/j.surfin.2024.104081> and retraction: <https://doi.org/10.1016/j.surfin.2024.104081>.

    replies(1): >>45900559 #
    9. roflmaostc ◴[] No.45900559[source]
    I still think someone should do this as a pun to get their paper trending everywhere.
    10. IAmBroom ◴[] No.45900962[source]
    "We regret to admit that our editors don't actually take the time to read these articles before hitting the PUBLISH button..."
    replies(2): >>45901164 #>>45904981 #
    11. IAmBroom ◴[] No.45900971{4}[source]
    It was sarcastic.
    12. phkahler ◴[] No.45901063{3}[source]
    >> Just say that chatgpt wrote the article and no one read it before publishing

    This is so interesting. I wonder if no human prompted for the article to be written either. I could see some kind of algorithm figuring out what to "write" about and prompting AI to create the articles automatically. Those are the jobs that are actually being replaced by AI - writing fluff crap to build an attention trap for ad revenue.

    replies(1): >>45901159 #
    13. Cthulhu_ ◴[] No.45901159{4}[source]
    Very likely this already happens on slop websites (...which I can't name because I don't go there) that, for example, simply republish press releases (which could be considered aggregation, I guess) or automatically scrape Reddit threads and turn them into listicles on the fly.
    14. Cthulhu_ ◴[] No.45901164[source]
    This is the real issue; I'm sure journalists already use loads of shortcuts to do their job efficiently, but the final responsibility lies with the editor(s).
    15. constantcrying ◴[] No.45901762[source]
    They do not deserve a shred of commendation. This is just damage control; pretending it did not happen was never an option. Instead they tried to claim it was just a one-off mistake. What it really shows is that nobody even bothers to read their articles before hitting publish, and that AI is widely used internally.
    16. SoftTalker ◴[] No.45904981[source]
    The editors were laid off and replaced by an LLM. Or more likely, the editorial staff was cut in half and the ones who were kept were told to use LLMs to handle the increased workload.