
534 points | BlueFalconHD | 3 comments

I managed to reverse engineer the encryption (referred to as "Obfuscation" in the framework) responsible for managing the safety filters of the Apple Intelligence models. I have extracted the filters into a repository. I encourage you to take a look around.
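The post does not describe how the "Obfuscation" actually works. Purely as an illustrative sketch of the general idea of reversing a weak obfuscation layer (the scheme, key, and function names below are hypothetical and are not Apple's actual implementation), a repeating-key XOR deobfuscator might look like:

```python
# Hypothetical illustration only: a trivial repeating-key XOR "obfuscation".
# Apple's real scheme is not described in the post; the key here is made up.

def deobfuscate(data: bytes, key: bytes) -> bytes:
    """Reverse a repeating-key XOR obfuscation (XOR is its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# For XOR, obfuscation and deobfuscation are the same operation.
obfuscate = deobfuscate

plaintext = b'{"reject": ["example blocked phrase"]}'
key = b"hypothetical-key"
blob = obfuscate(plaintext, key)
assert deobfuscate(blob, key) == plaintext
```

Such schemes hide content from casual inspection but offer no real secrecy: anyone with the binary can recover the key, which is presumably how filters like these get extracted.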
1. Cort3z No.44488496
What are they protecting against, honestly? LLMs should probably have an age limit, and if you are above it, you should be adult enough to understand what this is and how it can be used.

To me, it seems like they only protect against bad press.

replies(2): >>44488509 #>>44488533 #
2. plutokras No.44488509
> What are they protecting against? Honestly.

They are protecting their producer from bad PR.

3. empiko No.44488533
Yes, it is indeed to mitigate bad press. Unfortunately, the discussion about AI is so ridiculous that it is often considered newsworthy when a product generates something funky for a person with a large enough Twitter audience. Nobody wants to answer questions about why their LLM generated it and how they will prevent it in the future.