
534 points | BlueFalconHD | 1 comment

I managed to reverse engineer the encryption (referred to as "Obfuscation" in the framework) that protects the safety filters for Apple Intelligence models. I have extracted the filters into a repository. I encourage you to take a look around.
bawana | No.44484214
Alexandria Ocasio-Cortez triggers a violation?

https://github.com/BlueFalconHD/apple_generative_model_safet...

michaelt | No.44484528
I assume all the corporate GenAI models have blocks for "photorealistic image of <politician name> being arrested", "<politician name> waving ISIS flag", "<politician name> punching baby" and suchlike.
lupire | No.44484622
Maybe so, but think about how such a thing would be technically implemented, and how it would lead to false positives and false negatives, and what the consequences would be.
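To make the false-positive/false-negative point concrete, here is a minimal sketch of a naive regex-based blocklist, the kind of mechanism a filter file like this could plausibly encode. The patterns, function name, and matching strategy below are all assumptions for illustration, not Apple's actual implementation.

```python
import re

# Hypothetical blocklist entries mimicking name- and phrase-based rules.
# These specific patterns are invented for the example.
BLOCKED_PATTERNS = [
    r"\bocasio[- ]cortez\b",
    r"\bisis flag\b",
]

def violates(prompt: str) -> bool:
    """Return True if any blocklist pattern matches the prompt text."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

# False positive: a benign news query still trips the name rule.
print(violates("Summarize Alexandria Ocasio-Cortez's latest speech"))   # True

# False negative: an abbreviation slips past the exact spelling.
print(violates("photorealistic image of A.O.C. being arrested"))        # False
```

The asymmetry is the core problem lupire is pointing at: broad patterns block harmless prompts mentioning a name at all, while trivial rephrasings evade narrow ones.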