
646 points by blendergeek | 3 comments

bflesch
Haha, this would be an amazing way to test the ChatGPT crawler reflective DDoS vulnerability [1] I published last week.

Basically, a single HTTP request to the ChatGPT API can trigger 5,000 HTTP requests from the ChatGPT crawler to a target website.

The vulnerability is/was thoroughly ignored by OpenAI/Microsoft/BugCrowd, but I really wonder what would happen if the ChatGPT crawler interacted with this tarpit several times per second. As the crawler uses various Azure IP ranges, I actually think the tarpit would crash first.

The vulnerability reporting experience with OpenAI / BugCrowd was really horrific. It's always difficult to get attention for DoS/DDoS vulnerabilities, and companies always act like they are not a problem. But if their system goes dark and the CEO calls, then suddenly they accept it as a security vulnerability.

I spent a week trying to reach OpenAI/Microsoft to get this fixed, but I gave up and just published the writeup.

I don't recommend that you exploit this vulnerability, for legal reasons.

[1] https://github.com/bf/security-advisories/blob/main/2025-01-...

JohnMakin
Nice find. I think one of my sites actually got hit recently by something like this. And yeah, this kind of thing should be trivially preventable if they cared at all.
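
If you want a quick check for whether your own site has been hit by a burst like this, counting requests per user agent in the access logs usually makes it obvious. A minimal sketch, assuming an nginx/Apache combined-format log and the filename access.log (both are assumptions, not anything from this thread):

    import re
    import sys
    from collections import Counter

    # In combined log format the last two quoted fields are "referer" "user-agent".
    UA_RE = re.compile(r'"[^"]*" "(?P<ua>[^"]*)"\s*$')

    counts = Counter()
    with open(sys.argv[1] if len(sys.argv) > 1 else "access.log") as log:
        for line in log:
            m = UA_RE.search(line)
            if m:
                counts[m.group("ua")] += 1

    # The ten noisiest user agents; a crawler burst stands out immediately.
    for ua, n in counts.most_common(10):
        print(f"{n:>8}  {ua}")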

zanderwohl
IDK, I feel that if you're making 5,000 HTTP calls to another website, it's kind of good manners to fix that. But OpenAI has never cared about the public commons.

marginalia_nu
Yeah, even beyond common decency, there are pretty strong incentives to fix it, as it's a fantastic way of having your bot's fingerprint end up on Cloudflare's shitlist.

bflesch
Kinda disappointed by Cloudflare - it feels like they only have quite basic logic. Why would anomaly detection not catch these large payloads?

There was a zip-bomb-like attack a year ago where you could send one gigabyte of the letter "A", compressed with Brotli into a very small file size, through Cloudflare to backend servers - basically something like the old HTTP Transfer-Encoding (which has been discontinued).

Attacker --1 KB--> Cloudflare --1 GB--> backend server

Obviously the servers that received the decompressed HTTP request from the Cloudflare web proxies were getting killed, but Cloudflare didn't even accept it as a valid security problem.
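
To get a feel for the amplification ratio: a long run of the same byte compresses to almost nothing with Brotli. A minimal sketch using the third-party brotli package and a 100 MB buffer instead of the full gigabyte, just to keep memory use sane:

    import brotli  # third-party package: pip install brotli

    payload = b"A" * (100 * 1024 * 1024)               # 100 MB of the letter "A"
    compressed = brotli.compress(payload, quality=11)  # highest compression level

    print(f"original:   {len(payload):>12,} bytes")
    print(f"compressed: {len(compressed):>12,} bytes")
    print(f"ratio:      roughly {len(payload) // len(compressed):,}:1")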

AFAIK there was no magic AI security monitoring or anomaly detection that blocked anything. Sometimes I'd love to see the old web application firewall warnings for single and double quotes, just to check whether the thing is still there. But maybe it's a misconfiguration on the Cloudflare user's side, because I can remember they at least had a WAF product in the past.

benregenspan
> But maybe it's a misconfiguration on the Cloudflare user's side, because I can remember they at least had a WAF product in the past

They still have a WAF product, though I don't think anything in the standard managed ruleset will fire just on quotes; the SQLi and XSS checks are a bit more sophisticated than that.

From personal experience, they will fire a lot if someone uses a WAF-protected CMS to write a post about SQL.
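
For what it's worth, that false-positive failure mode is easy to reproduce with a deliberately naive rule (this is just a toy for comparison, not Cloudflare's actual managed ruleset): a harmless CMS post about SQL trips it even though nothing malicious is happening.

    import re

    # Deliberately naive "SQLi" rule: fire on quotes, comment markers, or a few keywords.
    NAIVE_SQLI_RULE = re.compile(r"('|--|\b(select|union|drop)\b)", re.IGNORECASE)

    blog_post = "To list users, run: SELECT * FROM users WHERE name = 'bob';"

    if NAIVE_SQLI_RULE.search(blog_post):
        print("request blocked")  # fires on ordinary prose about SQL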