If the problem is too many submissions, that would suggest there need to be structures in place to manage that.
Perhaps projects receiving large quantities of updates need triage teams. I suspect most of the submissions are done in good faith.
I can see some people choosing to avoid AI due to the possibility of legal issues. I'm doubtful of the likelihood of such problems, but some people favour eliminating all possibility over minimizing likelihood. The philosopher in me feels like people who think they have eliminated the possibility of something just haven't thought about it enough.
This ignores the fact that many open source projects do not have the resources to dedicate to a large number of contributions. A side effect of LLM-generated code is probably going to be a lot of code. I think this is going to be an issue regardless of the overall quality of that code.
With AI you're going to get job hunters automating PRs for big-name projects so they can list the contributions on their resumes.
The barrier to making a first commit on any project is usually quite high; there are plenty of people who would like to contribute to projects but cannot dedicate the time and effort to pass that initial threshold. This might allow people to contribute at a lower level while gently introducing them to the codebase, where perhaps they might become regular contributors in the future.