
242 points | alphabetting | 1 comment
wenbin | No.41872782
NotebookLM is contributing to fake podcasts across the internet, with over 1,300 and counting:

https://github.com/ListenNotes/ai-generated-fake-podcasts/bl...

Google is taking a different approach this time, moving quickly. While NotebookLM is indeed a remarkable tool for personal productivity and learning, it also opens the door for spammers to mass-produce content that isn't meant for human consumption.

Amidst all the praise for this project, I'd like to offer a different perspective. I hope the NotebookLM team sees this and recognizes the seriousness of the spam issue, which will only grow if left unaddressed. If you know someone on the team, please bring this to their attention: could the team provide a tool, or some plain-English guidelines, to help detect audio generated by NotebookLM? Is there a watermark or any other identifiable marker that can be used?

Just recently, a Hacker News post highlighted how nearly all Google image results for "baby peacock" are AI-generated: https://news.ycombinator.com/item?id=41767648

It won't be long before we see a similar trend with low-quality, AI-generated fake podcasts flooding the internet.

replies(14): >>41872802 #>>41872821 #>>41872878 #>>41872954 #>>41873067 #>>41873074 #>>41873152 #>>41873269 #>>41873297 #>>41873476 #>>41874055 #>>41874427 #>>41874680 #>>41875008 #
squigz | No.41874680
> Is there a watermark or any other identifiable marker that can be used?

The problem is that this isn't feasible long-term, or even medium-term: as soon as a watermarking system is deployed, a watermark-removal system will follow.

(Happy to be proven wrong)
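To make the fragility argument concrete, here is a toy sketch (my own illustration, not NotebookLM's actual scheme, which is unpublished): a naive watermark that stores a repeating bit pattern in the least-significant bits of 16-bit PCM samples. A single lossy-style requantization pass, of the kind any re-encoder performs, erases it completely.

```python
# Toy LSB watermark on fake 16-bit PCM samples, and why trivial
# re-processing strips it. All names here are hypothetical.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit tag, repeated

def embed(samples, mark=WATERMARK):
    """Overwrite each sample's least-significant bit with the tag."""
    return [(s & ~1) | mark[i % len(mark)] for i, s in enumerate(samples)]

def detect(samples, mark=WATERMARK):
    """Report whether the LSB sequence matches the tag exactly."""
    return all((s & 1) == mark[i % len(mark)] for i, s in enumerate(samples))

def reencode(samples):
    """Simulate a lossy re-encode by dropping the two low bits."""
    return [s & ~3 for s in samples]

audio = list(range(0, 3200, 100))     # stand-in PCM samples
marked = embed(audio)
print(detect(marked))                 # watermark survives a clean copy
print(detect(reencode(marked)))       # one requantize pass erases it
```

Real schemes hide the signal in perceptually robust features rather than raw LSBs, which raises the cost of removal but doesn't eliminate the arms race the comment describes.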