To my mind, premoderation is the way. Any new user's submissions go to a premoderation queue for review and are not visible anywhere else. Noise and spam can be rejected automatically; more underhanded stuff gets a manual review. All rejections are silent, except on the rare occasion that a legitimate but naive user has made an honest mistake.
What passes gets published. Users who have passed premoderation without issues, say, 10 times skip the human review step (provided they still pass the automatic filters), so they can talk without any perceptible delay. The most trusted of them even get the privilege to do the human review step themselves %)
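A minimal sketch of that trust ladder, with illustrative names and thresholds (nothing here is a real platform's API):

```python
from dataclasses import dataclass

TRUSTED_AFTER = 10  # approved posts before a user skips human review


@dataclass
class User:
    name: str
    approved_count: int = 0  # posts that passed premoderation


def looks_like_spam(text: str) -> bool:
    # Stand-in for the automatic noise/spam filter.
    return "free crypto" in text.lower()


def submit(user: User, text: str, review_queue: list) -> str:
    if looks_like_spam(text):
        return "rejected"      # silent rejection, no notification
    if user.approved_count >= TRUSTED_AFTER:
        return "published"     # trusted users post with no delay
    review_queue.append((user, text))
    return "queued"            # awaits manual premoderation


def approve(user: User, text: str) -> str:
    # A human reviewer accepts a queued post; the user earns trust.
    user.approved_count += 1
    return "published"
```

The key design point is that rejection and queuing look identical from the outside, so a spammer gets no feedback loop to probe the filter with.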
Meta already uses LLMs to summarize comments and could do this, yet they choose to allow obvious crypto scams, T-shirt scams, and "hey, add me" comments.
A simple LLM prompt like "is this post possibly a scam?", especially for new accounts, would do wonders. GitHub could likely do it too.
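A sketch of what that check could look like. The prompt wording, the account-age signal, and the `ask_llm` callable are all assumptions; `ask_llm` stands in for whatever chat-model wrapper the platform already has:

```python
def build_scam_check(post_text: str, account_age_days: int) -> list[dict]:
    # Illustrative prompt, not a tested production prompt.
    return [
        {"role": "system",
         "content": "Answer only YES or NO: is this post possibly a scam?"},
        {"role": "user",
         "content": f"Account age: {account_age_days} days\nPost: {post_text}"},
    ]


def is_possible_scam(post_text: str, account_age_days: int, ask_llm) -> bool:
    # ask_llm: any callable that sends chat messages to a model
    # and returns its text reply (hosted API or local model alike).
    reply = ask_llm(build_scam_check(post_text, account_age_days))
    return reply.strip().upper().startswith("YES")
```

Flagged posts would then drop into the same premoderation queue rather than being auto-deleted, so a false positive costs the user only a short delay.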
L-MO = Language Model Optimized