I'm not a fan of the OSA, but proponents of it will *keep winning* if you *keep misrepresenting it*.
You can, and should, argue about the effects, but the core of the OSA, and how it can be sold, is this, at several different levels:
One, most detailed.
Sites that provide user to user services have some level of duty of care to their users, like physical sites and events.
They should do risk assessments to see if their users are at risk of being harmed, like physical sites and events.
They should implement mitigations based on those risk assessments. Not to completely remove all possibility of harm, but to lower it.
For example, sites where kids can talk to each other in private chats should have ways for kids to report adults, and moderators to review those reports. Sites where you can share pictures should check for people sharing child porn (if you have a way for a userbase to share encrypted images with each other anonymously, you're going to get child porn on there). Sites aimed at adults with public conversations, like some hobby site with no history of issues and someone checking for spam etc, don't need to do much.
They should re-check things once a year.
That's the selling point - and as much as we can argue about second-order effects (like keeping a list of IDs and what you've watched, overhead, etc.), those statements don't on the face of it seem objectionable.
Two, shorter.
Sites should be responsible about what they do just like shops and other spaces, with risk assessments and more focus when there are kids involved.
Three, shortest.
Facebook should make sure people aren't grooming your kids.
Now, the problem with talking about "a total surveillance police state, where all speech is monitored" is: where does that fit into the explanations above? How do you explain that to even me, a highly technical, terminally online nerd who has read at least a decent chunk of the actual Ofcom guidelines?