
646 points | blendergeek | 1 comment
bflesch No.42726827
Haha, this would be an amazing way to test the ChatGPT crawler reflective DDoS vulnerability [1] I published last week.

Basically, a single HTTP request to the ChatGPT API can trigger 5,000 HTTP requests from the ChatGPT crawler to a website.
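To make the amplification shape concrete, here's a rough sketch (endpoint and parameter names are placeholders I made up, not the real API; the actual details are in the writeup [1]): one small request body lists thousands of URLs that all point at the same victim, and the crawler then fetches each one.

    # Illustrative only: hypothetical endpoint/parameter names, not the real API.
    import json

    VICTIM = "https://victim.example"

    # Thousands of URLs that all resolve to the same victim host; varying the
    # query string makes each entry look distinct to the crawler.
    urls = [f"{VICTIM}/?i={i}" for i in range(5000)]
    payload = json.dumps({"urls": urls})

    print(f"attacker sends one request body of ~{len(payload) // 1024} KB")
    print(f"crawler fans out into up to {len(urls)} requests to {VICTIM}")

That asymmetry between one inbound request and thousands of outbound crawler requests is what makes it a reflective amplification vector.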

The vulnerability is/was thoroughly ignored by OpenAI/Microsoft/BugCrowd, but I really wonder what would happen if the ChatGPT crawler interacted with this tarpit several times per second. Since the ChatGPT crawler uses various Azure IP ranges, I actually think the tarpit would crash first.
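For readers unfamiliar with the tarpit side, a minimal toy sketch (my own illustration, not the project discussed in this thread) is a server that accepts each crawler connection and then drips bytes very slowly, holding the connection open for as long as the client will tolerate:

    # Toy tarpit sketch, not the actual project from this thread: hold each
    # connection open with a trickle of bytes so the crawler wastes time and
    # sockets while the server does almost no work.
    import socket
    import threading
    import time

    def drip(conn: socket.socket) -> None:
        try:
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
            while True:          # one byte per second until the client gives up
                conn.sendall(b".")
                time.sleep(1)
        except OSError:
            pass
        finally:
            conn.close()

    def tarpit(host: str = "0.0.0.0", port: int = 8080) -> None:
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=drip, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        tarpit()

Whether a setup like this or the crawler's Azure-backed capacity gives out first is exactly the open question here.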

The vulnerability reporting experience with OpenAI / BugCrowd was really horrific. It's always difficult to get attention for DoS/DDoS vulnerabilities, and companies always act like they are not a problem. But if their system goes dark and the CEO calls, then suddenly they accept it as a security vulnerability.

I spent a week trying to reach OpenAI/Microsoft to get this fixed, but I gave up and just published the writeup.

I don't recommend exploiting this vulnerability, for legal reasons.

[1] https://github.com/bf/security-advisories/blob/main/2025-01-...

hassleblad23 No.42727528
I am not surprised that OpenAI is not interested in fixing this.
bflesch No.42727750
Their security.txt email address replies and asks you to go to BugCrowd. BugCrowd staff are unwilling (or too incompetent) to run a bash curl command to reproduce the issue, while also refusing to forward it to OpenAI.

The support@openai.com address waits an hour before answering with a ChatGPT-generated answer.

Issues raised on GitHub directly with their engineers were not answered.

Also, Microsoft CERT & the Azure security team don't reply or care to respond to such things (maybe due to lack of demonstrated impact).

permo-w No.42729126
why try this hard for a private company that doesn't employ you?
Brian_K_White No.42735360
At least once it's worth going through all the motions to prove whether the process is or is not actually functional, so that they cannot say "no one reported a problem..." about all the problems.

You can't say they don't have a functional process, and that they're lying or being disingenuous when they claim to, if you never actually tried it for real yourself at least once.

bflesch No.42741821
Yes, most of the time you can find someone who cares in the data privacy team, or some random security engineer on social media. But it's a very draining process, especially when it's a tech company where people should be able to quickly grasp the issue at hand.

I tried every single channel I could think of except calling phone numbers from the whois records, so there must've been someone who saw at least one of the mails and decided I was full of shit, so they wouldn't even send a reply.

And if BugCrowd staff, with their boilerplate answers and fantasy nicknames, can't grasp how an HTTP request works, that's a problem of OpenAI choosing them as their vendor. A potential bounty payout is not worth the emotional pain of dealing with this middleman behavior for days at a time.

Maybe I'm getting too old for this :)