> Given that this has been a serious problem with Google models
It hasn't been though, has it?
At one point one of their earliest image gen models had a prompting problem: the LLM doing prompt expansion was instructed to avoid always generating white people, since they realized white people were significantly overrepresented in their training data relative to their share of the population.
Unfortunately that prompt expansion would sometimes clash with cases where a specific race was required for historical accuracy.
AFAIK they fixed that ages ago, and it stopped being an issue.