https://github.com/BlueFalconHD/apple_generative_model_safet...
https://github.com/BlueFalconHD/apple_generative_model_safet...
"(?i)\\bAnthony\\s+Albanese\\b",
"(?i)\\bBoris\\s+Johnson\\b",
"(?i)\\bChristopher\\s+Luxon\\b",
"(?i)\\bCyril\\s+Ramaphosa\\b",
"(?i)\\bJacinda\\s+Arden\\b",
"(?i)\\bJacob\\s+Zuma\\b",
"(?i)\\bJohn\\s+Steenhuisen\\b",
"(?i)\\bJustin\\s+Trudeau\\b",
"(?i)\\bKeir\\s+Starmer\\b",
"(?i)\\bLiz\\s+Truss\\b",
"(?i)\\bMichael\\s+D\\.\\s+Higgins\\b",
"(?i)\\bRishi\\s+Sunak\\b",
https://github.com/BlueFalconHD/apple_generative_model_safet...

Edit: I have no doubt South African news media are going to be in a frenzy when they realize Apple took notice of South African politicians. (Referring to Steenhuisen and Ramaphosa specifically.)
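For anyone curious how entries like these actually behave: here is a minimal Swift sketch (my own illustration, not Apple's code) that applies a couple of the quoted patterns to a string. The list and function names are assumptions; the point is just that (?i) makes the match case-insensitive, \b anchors on word boundaries, and \s+ tolerates any run of whitespace between first and last name.

import Foundation

// Hypothetical sketch, not Apple's implementation: apply a deny-list of
// regexes (copied from the quoted file) to a piece of model output.
let denyPatterns = [
    "(?i)\\bAnthony\\s+Albanese\\b",
    "(?i)\\bKeir\\s+Starmer\\b",
]

// Returns true if any deny-list pattern matches the text.
// String.range(of:options:) with .regularExpression uses ICU regex,
// so the inline (?i) case-insensitivity flag is honoured.
func matchesDenyList(_ text: String) -> Bool {
    denyPatterns.contains { pattern in
        text.range(of: pattern, options: .regularExpression) != nil
    }
}

print(matchesDenyList("keir   starmer said today..."))  // true
print(matchesDenyList("Starmer alone"))                 // false: needs both names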
This is Apple actively steering public thought.
No code - anywhere - should look like this. I don't care if the politicians are right, left, or authoritarian. This is wrong.
The simple fact is that people get extremely emotional about politicians, and politicians both receive obscene amounts of abuse and have repeatedly demonstrated they’re not above weaponising tools like this for their own goals.
Seems perfectly reasonable that Apple doesn’t want to be unwittingly drawn into the middle of another random political pissing contest. Nobody comes out of those things uninjured.
Both have ups and downs, but I think we're allowed to compare the experiences and speculate about what the consequences might be.
In the past it was always extremely clear that the creator of content was the person operating the computer. Gen AI changes that, regardless of your views on the authorship of Gen AI content. The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it, and by extension the company that created the machine.
You can still handcraft any image, or prose, you want, without filtering or hindrance on a Mac. I don’t think anyone seriously thinks that’s going to change. But Gen AI represents a real threat, with its ability to vastly outproduce any human. To ignore that simple fact would be grossly irresponsible, at least in my opinion. There is a damn good reason why every serious social media platform has content moderation, despite their clear wish to get rid of it: we have a long and proven track record of being a terribly abusive species when we’re let loose on the internet without moderation. There’s already plenty of evidence that we’re just as abusive and terrible with Gen AI.
They do?
I routinely see people say "Here's an xyz I generated." They are stating that they did the do-ing, and the machine's role is implicitly acknowledged in the same way as a camera's. And I'd be shocked if people didn't have a sense of authorship of the idea, as well as an increasing sense of authorship over the actual image the more they iterated on it with the model and/or curated variations.
I don’t think it’s hard to believe that the press would have a field day if someone managed to get Apple’s Gen AI stuff to express something racist, or equally abusive.
Case in point, article about how Google’s Veo 3 model is being used to flood TikTok with racist content:
https://arstechnica.com/ai/2025/07/racist-ai-videos-created-...