←back to thread

770 points ta988 | 1 comments | | HN request time: 0s | source
Show context
uludag ◴[] No.42551713[source]
I'm always curious how poisoning attacks could work. Like, suppose that you were able to get enough human users to produce poisoned content. This poisoned content would be human written and not just garbage, and would contain flawed reasoning, misjudgments, lapses of reasoning, unrealistic premises, etc.

Like, I've asked ChatGPT certain questions where I know the online sources are limited and it would seem that from a few datapoints it can come up with a coherent answer. Imagine attacks where people would publish code misusing libraries. With certain libraries you could easily outnumber real data with poisoned data.

replies(4): >>42552062 #>>42552110 #>>42552129 #>>42557901 #
layer8 ◴[] No.42552110[source]
Unless a substantial portion of the internet starts serving poisoned content to bots, that won’t solve the bandwidth problem. And even if a substantial portion of the internet would start poisoning, bots would likely just shift to disguising themselves so they can’t be identified as bots anymore. Which according to the article they already do now when they are being blocked.
replies(1): >>42571713 #
1. yupyupyups ◴[] No.42571713[source]
>even if a substantial portion of the internet would start poisoning, bots would likely just shift to disguising themselves so they can’t be identified as bots anymore.

Good questions to ask would be:

- How do they disguise themselves?

- What fundamental features do bots have that distinguish them from real users?

- Can we use poisoning in conjunction with traditional methods like a good IP block lists to remove the low hanging fruits?