
534 points BlueFalconHD | 1 comment | source

I managed to reverse engineer the encryption (referred to as "obfuscation" in the framework) responsible for protecting the safety filters of Apple Intelligence models. I have extracted the filters into a repository. I encourage you to take a look around.
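For the curious, here is a minimal sketch of what decoding such a file could look like, assuming the obfuscation is a simple repeating-key XOR over a JSON payload. The key, file name, and scheme below are invented placeholders; the actual mechanism in Apple's framework may well differ.

```python
# Hypothetical sketch: decode an "obfuscated" safety-filter file,
# assuming a simple repeating-key XOR scheme over JSON.
# The key and file name are placeholders, not Apple's actual values.
import json
from itertools import cycle
from pathlib import Path

KEY = b"example-static-key"  # placeholder key for illustration only

def deobfuscate(path: Path, key: bytes = KEY) -> dict:
    blob = path.read_bytes()
    # XOR each ciphertext byte with the repeating key
    plaintext = bytes(b ^ k for b, k in zip(blob, cycle(key)))
    # The filter rules are assumed to be plain JSON once decoded
    return json.loads(plaintext)

if __name__ == "__main__":
    rules = deobfuscate(Path("safety_filter.obfuscated"))
    print(json.dumps(rules, indent=2))
```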
torginus ◴[] No.44484236[source]
I find it funny that AGI is supposed to be right around the corner, while these supposedly super smart LLMs still need to get their outputs filtered by regexes.
replies(8): >>44484268 #>>44484323 #>>44484354 #>>44485047 #>>44485237 #>>44486883 #>>44487765 #>>44493460 #
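For illustration, a regex-based output filter of the kind torginus describes might look like the sketch below. The patterns and replacement behavior are invented for the example, not drawn from Apple's actual filter files.

```python
# Illustrative only: a post-hoc regex filter applied to model output.
# Patterns here are made up; real filter lists would be loaded from
# the decoded rule files.
import re

BLOCKLIST = [
    re.compile(r"(?i)\bforbidden phrase\b"),
    re.compile(r"(?i)\bsome brand name\b"),
]

def filter_output(text: str, replacement: str = "[removed]") -> str:
    # Replace every blocklisted match before the text reaches the user
    for pattern in BLOCKLIST:
        text = pattern.sub(replacement, text)
    return text

print(filter_output("The model said a forbidden phrase here."))
```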
bahmboo ◴[] No.44484268[source]
This is just policy and alignment from Apple. Just because the Internet says a bunch of junk doesn't mean you want your model spewing it.
replies(1): >>44484459 #
wistleblowanon ◴[] No.44484459[source]
Sure, but models also can't see any truth on their own. They are literally butchered and lobotomized with filters and such. Even high-IQ people struggle with certain truths after reading a lot; how are these models going to find them with so many filters?
replies(6): >>44484505 #>>44484950 #>>44484951 #>>44485065 #>>44485409 #>>44487139 #
Dylan16807 ◴[] No.44487139[source]
> how are these models going to find them with so many filters?

That's not one of the goals here, and there's no real reason it should be. It's a little assistant feature.