Edited November 14 2025:
Added an additional hyperlink to the full report in the initial section
Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"
The assumption that no human could ever (program a computer to) do multiple things per second, or have their code do different things depending on the result of the previous request, is... interesting.
(observation is not original to me, it was someone on Twitter who pointed it out)
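For context, ordinary scripting has done both for decades. A minimal sketch in Python (the endpoint and the matching logic are made up for illustration):

```python
# Hypothetical example: several requests per second, with behavior that
# depends on the result of the previous request. No AI involved.
import time
import requests

BASE_URL = "https://example.com/api/items"  # placeholder endpoint

for item_id in range(100):
    resp = requests.get(f"{BASE_URL}/{item_id}", timeout=5)
    if resp.status_code == 404:
        continue  # nothing there, move on
    if resp.ok and "admin" in resp.text:
        print(f"item {item_id} looks interesting")  # branch on the response
    time.sleep(0.2)  # roughly five requests per second
```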
I've seen printed books checked by paid professionals where a "replace all" had been applied without regard to context, creating a grammar error on every single page. Or ones where everyone just forgot to add page numbers. Or a large cookbook where the index and the page numbers didn't match, making it almost impossible to navigate.
I'm talking about pre-AI work, with a publisher. Apparently it wasn't obvious to them.
One of the things I enjoy about Penn and Teller is that they explain in detail how their point of view differs from the audience's and how they intentionally use that difference in their shows. With that in mind, you might picture your org as the audience, with one perspective diligently looking forward.