
323 points by lermontov | 2 comments
ndileas No.41906832
I don't mean to disparage this particular instance at all, as it seems pretty great. But I wonder if the rise of LLMs is going to make scams that sound a lot like this much easier in the future. I think at the moment it's hard to make something really sound like a particular author without a lot of work, but that will probably change.
replies(4): >>41907015 #>>41908854 #>>41911825 #>>41913344 #
1. barrkel No.41913344
LLMs tend not to volunteer information without the right prompting. The more you yourself know, the better use you can get out of an LLM, because you can steer it in profitable directions. This article could just as easily have been found by a text search on OCR'd microfilm, and it's hard to imagine a prompt that would be effective in bringing the article to light without already knowing it exists.
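
To make the text-search point concrete: the hard part is the OCR and knowing what to look for, not the search itself. A minimal sketch of that kind of search, assuming pytesseract for the OCR and a folder of page scans (directory name and query are just illustrative):

    from pathlib import Path
    from PIL import Image
    import pytesseract  # requires the Tesseract binary to be installed

    def search_scans(scan_dir, query):
        """OCR each scanned page image and return the pages containing the query."""
        hits = []
        for page in sorted(Path(scan_dir).glob("*.png")):
            text = pytesseract.image_to_string(Image.open(page))
            if query.lower() in text.lower():
                hits.append(page.name)
        return hits

    print(search_scans("microfilm_scans", "never before published"))

Which is exactly the problem: a search like this only surfaces the article if you already have a phrase worth searching for.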
replies(1): >>41914075 #
2. ndileas No.41914075
This is an interesting tidbit about LLMs, and I think it applies to knowledge search generally, whether it's LLM-driven or not. The more you know, the faster you can tell whether a source has anything new to tell you.

But I was imagining creating a new work with an LLM, then billing it as a never-before-seen work from a famous author and raking in the clicks and money.