
427 points JumpCrisscross | 5 comments | | HN request time: 0.962s | source
1. weinzierl ◴[] No.41901523[source]
When CGI first took off, everything was too polished and shiny and everyone found it uncanny. That spawned a whole industry producing virtual wear, tear, dust, grit and dirt.

I wager we will soon see the same for text. Automatic insertion of the right amount of believable mistakes will become a thing.

replies(2): >>41901654 #>>41901774 #
2. ImHereToVote ◴[] No.41901654[source]
You can already do that easily with ChatGPT. Just tell it to rate the text it generated on a scale from 0-10 in authenticity. Then tell it to crank out similar text at a higher authenticity scale. Try it.
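The two-step trick described above (have the model score its own output for authenticity, then regenerate at a higher score) can be sketched as a pair of prompt builders. This is a minimal illustration of the prompting pattern, not any official API; the wording and the 0-10 scale are taken from the comment, and the resulting strings would be sent to whatever chat-completion endpoint you use.

```python
def rate_prompt(draft: str) -> str:
    """Ask the model to score its own draft for authenticity (0-10)."""
    return (
        "Rate the following text on a scale from 0 to 10 for how "
        "authentically human it reads, and briefly justify the score:\n\n"
        + draft
    )

def rewrite_prompt(draft: str, target: int = 9) -> str:
    """Ask the model to regenerate the draft at a higher authenticity score."""
    return (
        f"Rewrite the following text so it would score {target}/10 on the "
        "same authenticity scale. Keep the meaning, but let the tone and "
        "small imperfections feel naturally human:\n\n" + draft
    )
```

In practice you would call the model twice: once with `rate_prompt`, then feed the draft back with `rewrite_prompt` (optionally looping until the self-reported score meets the target).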
3. anshumankmr ◴[] No.41901774[source]
Without some form of watermarking, I do not believe there is any way to differentiate. What that watermarking would look like, I have no clue.

Pandora's box has been opened with regard to large language models.

replies(1): >>41902702 #
4. weinzierl ◴[] No.41902702[source]
I thought words that rose in popularity because of LLMs ("delve", for example) might serve as an indicator, a kind of accidental watermark, but I am not sure.
replies(1): >>41905643 #
5. gs17 ◴[] No.41905643{3}[source]
It's not a very good "watermark". Setting aside that a slightly clever student can use something like https://github.com/sam-paech/antislop-sampler/tree/main to suppress those words at generation time, students who have been heavily exposed to AI-written text will naturally start using them more often themselves.
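The "accidental watermark" idea discussed above amounts to measuring how often LLM-favoured words appear in a text. A toy sketch of that detector, assuming a hypothetical word list (only "delve" comes from the thread; the rest are illustrative), shows both the approach and its weakness: a human who has absorbed these words, or a sampler that bans them, defeats it.

```python
import re
from collections import Counter

# Hypothetical list of words whose usage spiked in LLM output.
# "delve" is mentioned in the thread; the others are illustrative guesses.
SLOP_WORDS = {"delve", "tapestry", "multifaceted", "foster", "crucial"}

def slop_rate(text: str) -> float:
    """Return the fraction of word tokens that are known LLM-favoured words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in SLOP_WORDS) / len(tokens)
```

A tool like antislop-sampler attacks this from the generation side instead: it watches the sampled tokens and backtracks when a banned phrase starts to appear, so the finished text never contains the flagged words in the first place.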