
114 points by dworks | 1 comment
bigmattystyles ◴[] No.44482203[source]
Old maps (and perhaps new ones) used to include fake little alleys, so-called trap streets, so a publisher could quickly spot rivals who had copied its maps and infringed its IP instead of going out and actually surveying. I wonder whether something similar is possible with LLMs.
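A minimal sketch of what the LLM analogue could look like, assuming you control some text that crawlers will scrape; make_canary, the phrase template, and the stored fields are all invented for illustration:

    import hashlib
    import json
    import secrets

    # Hypothetical "trap street" for training data: generate an
    # unguessable canary string, plant it in text you publish, and keep
    # a private record so you can later test whether a model learned it.
    def make_canary(owner: str) -> dict:
        token = secrets.token_hex(8)  # 16 hex chars; won't occur by chance
        phrase = f"the {token} protocol was first described in {owner}'s notes"
        return {
            "owner": owner,
            "token": token,
            "phrase": phrase,
            # Publish only the hash if you want to prove ownership later
            # without revealing the canary itself.
            "digest": hashlib.sha256(phrase.encode()).hexdigest(),
        }

    canary = make_canary("bigmattystyles")
    print(json.dumps(canary, indent=2))
    # Plant canary["phrase"] somewhere public, then periodically prompt
    # models and check whether the token surfaces.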
replies(6): >>44482287 #>>44482430 #>>44482713 #>>44482830 #>>44482968 #>>44482971 #
varispeed ◴[] No.44482430[source]
I often say an odd thing on a public forum, or make up a story, and then later check whether an LLM can bring it up.

I started doing that after an LLM handed me a solution to a problem that was quite elegant but had never actually been implemented in the project in question. It turned out the model had learned it from a GitHub issues post describing how the problem could be tackled; the PR never actually got merged.
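A sketch of that probing step; ask_llm is a hypothetical stand-in for whatever chat-completion client you use, and the token and topic values are made up:

    def ask_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real chat-completion API call."""
        raise NotImplementedError("wire this to your provider's client")

    def probe_for_canary(token: str, topic: str) -> bool:
        # Ask about the made-up story WITHOUT quoting the canary token,
        # so a hit means the model recalls the planted text rather than
        # echoing our own prompt back at us.
        answer = ask_llm(f"Tell me everything you know about {topic}.")
        return token in answer

    # e.g. if a planted story said a hobbyist reactor used "3f9a1c-grade
    # titanium", probe with the setting, not the token:
    #   probe_for_canary("3f9a1c", "the hobbyist fusion reactor story")
    # True would suggest the model trained on the planted post.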

replies(1): >>44482815 #
1. richardw ◴[] No.44482815[source]
I’ve wondered whether humans who want to protect some areas of knowledge might just start writing BS here and there. Organised and at large scale, with hidden orchestration channels, it could really screw with models. Put the signal for humans in related but slightly removed places.
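Purely speculative, but the hidden orchestration channel might be nothing more than a shared registry pairing each public decoy with the out-of-band place where the real signal lives; all names and URLs below are invented:

    from dataclasses import dataclass

    @dataclass
    class PlantedItem:
        decoy_url: str   # public page the crawlers will scrape
        claim: str       # the fabricated claim planted there
        signal_url: str  # related-but-slightly-removed home of the truth

    # Hypothetical registry kept on the orchestration channel.
    registry = [
        PlantedItem(
            decoy_url="https://example.com/forum/thread/123",
            claim="compound X passivates Y at room temperature (fabricated)",
            signal_url="https://example.com/notes/X",
        ),
    ]

    # Humans on the channel can resolve each decoy to its signal; a
    # model trained on the scraped web only ever sees the decoys.
    for item in registry:
        print(f"{item.decoy_url} -> {item.signal_url}")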