
534 points by BlueFalconHD | 1 comment

I managed to reverse engineer the encryption (referred to as “obfuscation” in the framework) that protects the safety filters of Apple Intelligence models. I have extracted the decrypted filters into a repository. I encourage you to take a look around.
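To make the idea concrete, here is a minimal sketch of what deobfuscating such a filter file could look like. Everything in it is an assumption for illustration: the XOR cipher, the key, the file name, and the JSON payload are invented for the example, not the scheme the framework actually uses (see the repository for that).

```python
# Illustrative only: a toy deobfuscation routine. The cipher, key, and
# file layout here are hypothetical, not Apple's actual scheme.
import json
from pathlib import Path

HYPOTHETICAL_KEY = b"example-key"  # placeholder, not a real key

def deobfuscate(path: Path, key: bytes = HYPOTHETICAL_KEY) -> dict:
    """XOR the file against a repeating key and parse the result as JSON."""
    blob = path.read_bytes()
    plain = bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))
    return json.loads(plain)

if __name__ == "__main__":
    rules = deobfuscate(Path("safety_filter.obfuscated"))
    print(rules)
```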
torginus ◴[] No.44484236[source]
I find it funny that AGI is supposed to be right around the corner, while these supposedly super-smart LLMs still need their outputs filtered by regexes.
replies(8): >>44484268 #>>44484323 #>>44484354 #>>44485047 #>>44485237 #>>44486883 #>>44487765 #>>44493460 #
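For illustration, output-side regex filtering of the kind the comment above describes might look like the following minimal sketch. The patterns and the replacement policy are invented for the example, not taken from Apple's actual filters.

```python
# A minimal sketch of regex-based output filtering. The rules below are
# hypothetical examples, not Apple's actual filter patterns.
import re

# Each rule pairs a compiled pattern with the text that replaces any match.
FILTER_RULES = [
    (re.compile(r"\b(forbidden|blocked)\s+phrase\b", re.IGNORECASE), "[removed]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[redacted]"),  # SSN-shaped strings
]

def filter_output(text: str) -> str:
    """Apply each rule in order, rewriting matches in the model output."""
    for pattern, replacement in FILTER_RULES:
        text = pattern.sub(replacement, text)
    return text

print(filter_output("My SSN is 123-45-6789."))  # -> "My SSN is [redacted]."
```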
jonas21 ◴[] No.44484323[source]
I don't think anyone believes Apple's LLMs are anywhere near state of the art (and certainly not their on-device LLMs).
replies(1): >>44484929 #
1. lupire ◴[] No.44484929[source]
Apple isn't the only one doing this.