427 points JumpCrisscross | 2 comments
weinzierl ◴[] No.41901523[source]
There was a time, when CGI took off, where everything was too polished and shiny and everyone found it uncanny. That kicked off a whole industry producing virtual wear, tear, dust, grit and dirt.

I wager we will soon see the same for text: automatically inserting just the right amount of believable mistakes will become a thing.

replies(2): >>41901654 #>>41901774 #
anshumankmr ◴[] No.41901774[source]
Without some form of watermarking, I do not believe there is any way to differentiate. What that watermarking would look like, I have no clue.

Pandora's box has been opened with regard to large language models.

replies(1): >>41902702 #
1. weinzierl ◴[] No.41902702[source]
I thought words that rose in popularity because of LLMs (like "delve", for example) might be an indicator of watermarking, but I am not sure.
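The idea above, detecting AI text by the rate of LLM-favored words, can be sketched in a few lines. This is a toy illustration: the marker list and any threshold you would put on the rate are my own assumptions, not a validated detector.

```python
# Toy sketch: measure how often known LLM-favored words appear in a text.
# The word list below is an illustrative assumption, not an official one.
import re

LLM_MARKERS = {"delve", "tapestry", "multifaceted", "nuanced", "showcasing"}

def marker_rate(text: str) -> float:
    """Return the fraction of words that are LLM-marker words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in LLM_MARKERS)
    return hits / len(words)

sample = "Let us delve into the nuanced tapestry of this topic."
print(marker_rate(sample))  # 3 marker words out of 10 -> 0.3
```

In practice a single text is far too short for this rate to be meaningful, which is part of why it makes a weak "watermark".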
replies(1): >>41905643 #
2. gs17 ◴[] No.41905643[source]
It's not a very good "watermark". Even ignoring that a slightly clever student can use something like https://github.com/sam-paech/antislop-sampler/tree/main to suppress those words, students who have been exposed to AI-written text will naturally use them more often themselves.
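The core trick behind samplers like the linked antislop-sampler can be sketched simply: mask the logits of unwanted tokens to negative infinity before sampling, so they can never be chosen. The vocabulary and logits below are toy stand-ins, not the real project's API.

```python
# Minimal sketch of banned-token sampling, assuming a toy vocabulary
# and hand-picked logits (stand-ins for a real language model's output).
import math

vocab = ["delve", "explore", "examine", "tapestry", "pattern"]
logits = [2.0, 1.5, 1.0, 1.8, 0.5]
banned = {"delve", "tapestry"}

def ban_tokens(vocab, logits, banned):
    """Set banned tokens' logits to -inf so sampling can never pick them."""
    return [-math.inf if tok in banned else lg
            for tok, lg in zip(vocab, logits)]

masked = ban_tokens(vocab, logits, banned)
best = vocab[max(range(len(vocab)), key=lambda i: masked[i])]
print(best)  # greedy pick skips "delve" despite its high logit
```

The real project works at the phrase level and backtracks mid-generation, but the logit-masking idea is the same.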