Last time FLoC came up, I commented that the idea of FLoC missed the point of why we oppose tracking: https://news.ycombinator.com/item?id=25906791
The EFF writes:
> The power to target is the power to discriminate.
I would extend this point: the power to target based on information the user has not chosen to share is the power to discriminate. Part of recognizing people's agency online is giving them the ability to choose how they present themselves and what they share. It's not inherently wrong for someone to want to signal something about themselves that they find important, or even just convenient to share. But that should always be their choice; it should not be a top-down decision about which information is "safe" or "dangerous".
FLoC has some benefits (although they won't matter once every website decides to use FLoC as a fingerprinting vector), but even granting those benefits, it is still built on the idea that users should not be in charge of their own identities. It's got to be automated, it's got to happen in the background, it's got to use machine learning and be something that users can't inspect. I oppose the philosophy behind both current tracking systems and proposals like FLoC.
Last time this came up I also theorized about what a privacy-respecting version of FLoC could look like for people who do want to see ads or do want personalized content online -- a version of FLoC I would be more supportive of: https://news.ycombinator.com/item?id=25907079
None of those ideas are fleshed out, but they try to get at the fundamental difference between allowing a user to easily signal that they want to see personalized content about shoes, and trying to intuit behind a user's back that they will buy shoes if you show them a particular ad.
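To make that difference concrete, here's a rough sketch of what an explicit, user-declared interest signal might look like. This is purely illustrative: the type names, the sharing model, and the example sites are all hypothetical, not any real or proposed browser API, and not necessarily what the linked comment describes.

```typescript
// Hypothetical sketch of an explicit, user-controlled interest signal.
// None of these names are real browser APIs; they are illustrative only.

// Interests live in a list the user edits directly (e.g. in a browser
// settings page), rather than being inferred from browsing history.
type DeclaredInterest = {
  topic: string;                 // e.g. "running shoes"
  sharedWith: "all" | string[];  // sites the user chose to share this with
};

const userDeclaredInterests: DeclaredInterest[] = [
  { topic: "running shoes", sharedWith: ["example-shoe-shop.com"] },
];

// A site asking for interests gets only what the user chose to share with
// it, and gets nothing at all if the user declared nothing.
function interestsVisibleTo(origin: string): string[] {
  return userDeclaredInterests
    .filter(i => i.sharedWith === "all" || i.sharedWith.includes(origin))
    .map(i => i.topic);
}

console.log(interestsVisibleTo("example-shoe-shop.com")); // ["running shoes"]
console.log(interestsVisibleTo("unrelated-site.com"));    // []
```

The only point of the sketch is that the interest list is something the user writes and scopes themselves, rather than something inferred from their behavior and exposed on their behalf.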