My approach has been to lock AI assistants (for me, that's just Apple Intelligence, as far as I can help it) out of integrations with the vast majority of apps, and especially chat and email apps.
At some point, some reverse engineer will publish a writeup establishing how local these models actually are, how much data (and maybe even what data) is being sent up to the mothership, and how these integrations are implemented.
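The network half of that question doesn't strictly require a formal writeup. Here's a minimal sketch of the kind of measurement involved, assuming you can route a test device's traffic through mitmproxy with its CA trusted; the host suffixes are illustrative, not exhaustive, and connections using certificate pinning (which many Apple services do) won't decrypt this way, though you'd still see destinations and rough payload sizes at the TLS layer:

```python
# mitm_log.py -- mitmproxy addon sketch: log destination host, method, path,
# and request body size for traffic bound to a watched set of domains.
# Run with: mitmdump -s mitm_log.py
from mitmproxy import http

# Illustrative watchlist; adjust for whatever vendor you're observing.
WATCHED_SUFFIXES = (".apple.com", ".icloud.com")

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if host.endswith(WATCHED_SUFFIXES):
        # raw_content is the request body as bytes, or None if there is none.
        size = len(flow.request.raw_content or b"")
        print(f"{host} {flow.request.method} {flow.request.path} {size} bytes")
```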
It's not perfect, and it only offers a point-in-time view of the situation, but it's the best we can do in an intensely closed-source world. I'd be happier if these companies published the code (regardless of the license) and allowed users to test for build parity.
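To make "build parity" concrete: the test would be rebuilding the published source yourself and byte-comparing your artifact against the shipped binary. A minimal sketch, with hypothetical file paths, assuming the toolchain is reproducible enough for hashes to be meaningful:

```python
# parity_check.py -- compare SHA-256 of a shipped binary vs. a local rebuild.
# Usage: python parity_check.py <shipped_binary> <rebuilt_binary>
import hashlib
import sys

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large binaries don't sit in memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

shipped, rebuilt = sys.argv[1], sys.argv[2]
a, b = sha256(shipped), sha256(rebuilt)
print(f"shipped: {a}\nrebuilt: {b}")
print("parity" if a == b else "NO parity (look for non-reproducible build steps)")
```

Matching hashes would mean the shipped binary really was built from the published code; a mismatch would point at embedded timestamps, signing, or other non-reproducible build steps to chase down.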