If your argument has to rest on a caricature (a weakly persuasive system trying to talk someone into something extremely disadvantageous) in order to show how impossible hazardous persuasion is, something has gone wrong. Nevertheless:
First, you argued that strong persuasion is implausible. Your rhetoric was effectively "look how silly this whole notion of a machine persuading someone of something is; how dumb would you need to be for this silly thing to convince you to do this very bad thing?"
That was then used to fuel an argument that I am merely propagating AI woo, consumed by magical thinking, and am clearly just afraid of something equivalent to violent video games and/or movies. The level of inferential contortion is hard to wrap my head around.
Now, you seem to be arguing along an entirely different track: that AI models should have the inalienable right to self-expression, for the same reason humans should have that right (I find it deeply ironic that this is the direction you'd choose after accusing me of AI woo, but I digress). Or, equivalently, that humans should have the inalienable right to access and use such models.
This is no longer an argument about the plausibility of AI being persuasive, or about whether persuasion can be hazardous, but about whether we should permit it in spite of all that because freedom of expression is generally a good thing.
(This is strange to me, because I never argued that the models should be banned or prohibited, merely that tooling should try to avoid direct human-to-model-output contact, since such contact, when model output is sufficiently persuasive, is hazardous. Much like how angle grinders and other power tools are generally not banned, but do have safety features to prevent egregious bodily harm.)
> In reality, even if they improve to be completely indistinguishable from the sharpest and most persuasive of human minds our society has ever known, I'd still make exactly the same arguments as above.
While my true concern is systems of higher persuasiveness than humans have ever been exposed to, let's see:
> I have a hard time imagining [the most persuasive of human minds our society has ever known] convincing anyone of sound mind to seriously harm themselves or do some egregiously stupid/violent thing.
This is immediately falsified by the myriad examples of exactly this occurring, via a much lower bar than 'most persuasive person ever': cult leaders and con artists have talked people of ostensibly sound mind into financial ruin, violence, and self-harm without being anywhere near the most persuasive humans who have ever lived. Small wonder it takes a sarcastic caricature to keep the argument from immediately reading as nonsense.
Considering my entire position is simply that exposure to persuasion can be hazardous, I don't see what you're trying to prove now. It's certainly not in opposition to anything I've said.
Since you do seem to have shifted from the mechanistic to the moral, and have conceded that persuasion carries nontrivial hazard (even if we should accept that hazard for the sake of our freedoms), are we now determining how much risk is acceptable to preserve those freedoms? I'm not interested in having that discussion, as I don't propose to restrict those freedoms in any case.
Going back to the power tool analogy: you are of course free to disable the safety precautions on your own personal angle grinder. At work, some regulatory agency (OSHA, etc.) will toil to stop your employer from doing so. I, personally, want a future of AI tooling akin to this. If AI is persuasive enough to be hazardous, I don't want to be forced by my employer to directly consume ultra-high-valence generated output. I want such high-valence content to be treated like the light of an arc welder: something you're required to wear protection to witness, or risk intervention by independent agencies that everybody grumbles about but enjoys the fruits of (namely, a distinct lack of exotic skin cancers and blindness among welders).
My point was originally, and remains, the bare observation that any of this will cost in blood, and that whatever regulations are made will be inked in it.
I do understand the deeper motivation behind your arguments: the desire to avoid (and/or fear of) gleeful overreach at the hands of AI labs who want nothing more than to wholly control all use of such models. That concern is orthogonal to my reasoning, and it does not adequately contend with the reality of what to do when persuasiveness reaches sufficient levels. Is the truth now something to be avoided because it would serve somebody's agenda? Should we distort our understanding so as not to encroach on ideas that will be misappropriated by those with something to gain?
Setting aside any exposition on whether it is plausible, whether it caps out at subhuman, human, or superhuman levels, and all the chaff about freedom of expression or misappropriation by motivated actors: if we do manage to build such a thing as I describe (and the inherent hazard is plainly obvious if the construction is not weakened, but still present even if it is), what do we do? How many millions will be exposed to these systems? How can it be made into something that retains utility yet is not a horror beyond reckoning?
There is a great deal more to say on the subject, but I unfortunately don't have the time to explore it in any real depth here.