If it were slop, they could complain that it was wasting their time on false or unimportant reports; instead they seem to be complaining that the program reported a legitimate security issue?
For a human, generating a bug report requires a little labor, which imposes a natural rate limit on how many reports get submitted and a natural triage of whether the bug is personally worth reporting. It can be worth it if you're prosocially interested in the project, or if your operations depend on it enough that you're willing to pay a little to help it along.
For a large company using LLMs to automatically generate bug reports, the cost is much lower (indeed, it may be profitable longer-term for marketing, finding product niches, refining models, etc.). This creates an asymmetry with the maintainer's side, where the volume and quality of reports directly affect maintainer throughput and quality of life.