
358 points by tkgally | 2 comments

The use of the em dash (—) now raises suspicions that a text might have been AI-generated. Inspired by a suggestion from dang [1], I created a leaderboard of HN users according to how many of their posts before November 30, 2022—that is, before the release of ChatGPT—contained em dashes. Dang himself comes in number 2—by a very slim margin.

Credit to Claude Code for showing me how to search the HN database through Google BigQuery and for writing the HTML for the leaderboard.

[1] https://news.ycombinator.com/item?id=45053933
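
For anyone who wants to reproduce the leaderboard, below is a minimal sketch in Python against the public bigquery-public-data.hacker_news.full dataset. The exact SQL, the cutoff handling, and the HTML-entity matching are assumptions, not the query Claude Code actually generated.

    # Sketch: rank HN users by number of pre-ChatGPT posts containing an em dash.
    # Assumes the public Hacker News dataset on BigQuery and a GCP project with
    # the BigQuery API enabled; the matching rules below are guesses, not the
    # original query.
    from google.cloud import bigquery

    client = bigquery.Client()

    SQL = """
    SELECT
      `by` AS author,
      COUNT(*) AS em_dash_posts
    FROM `bigquery-public-data.hacker_news.full`
    WHERE type IN ('comment', 'story')
      AND timestamp < TIMESTAMP('2022-11-30')  -- before ChatGPT's release
      AND (text LIKE '%—%' OR title LIKE '%—%'
           OR text LIKE '%&#8212;%' OR text LIKE '%&mdash;%')  -- literal or HTML-encoded em dash
      AND `by` IS NOT NULL
    GROUP BY author
    ORDER BY em_dash_posts DESC
    LIMIT 50
    """

    for row in client.query(SQL).result():
        print(f"{row.author}\t{row.em_dash_posts}")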

maaaaattttt ◴[] No.45073453[source]
I think this whole em dash topic should lead to some deeper (though not very deep) conversations:

* If it was not widely used before, where/how did (chat)GPT pick it up?

    * If it was widely used, then it shouldn't be a topic at all. But there seems to be informal agreement that it wasn't widely used.
    
    * Or, could GPT have inferred that, even though it's not widely used, it's the better way to go (to use it)? Which then makes one wonder about the whole next-token-probability idea. Maybe this line of thinking falls too short of what might really be going on internally.

* If it had picked up something that is widely used but in the wrong way, it should make us pause (again) about the future feedback loops these LLMs, which aren't going away, are already creating. Not just in terms of grammar and spelling, but also in terms of ways of thinking and seeing the world.
(edit: formatting)
replies(3): >>45073476 #>>45073485 #>>45073747 #
1. msgodel ◴[] No.45073476[source]
It's used a lot in formal writing (academic papers, books, etc.), which probably makes up a large portion of ChatGPT's training data. If the RLHF was done by professional writers, then it was probably additionally biased toward using them.

People are more casual on the web. It's sort of like how people can often tell it's me in IM even without my name, because I use periods properly while that's unusual in that medium. ChatGPT is so correct it feels robotic.

replies(1): >>45073736 #
2. maaaaattttt ◴[] No.45073736[source]
It's the most likely explanation, I believe. I have no idea about the content distribution of the training data, but I would have assumed Twitter and Reddit content would completely dwarf the literary content. It's somewhat good news if that's indeed not the case!