    534 points BlueFalconHD | 20 comments

    I managed to reverse engineer the encryption (referred to as “Obfuscation” in the framework) responsible for managing the safety filters of Apple Intelligence models. I have extracted them into a repository. I encourage you to take a look around.
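    (For readers poking at the extracted files: below is a minimal, hypothetical sketch of how one might scan decrypted override files for matching rules. It assumes the overrides are JSON files carrying "reject" and "replace" lists; the directory name, field names, and file layout are illustrative guesses, not the repository's actual format.)

        import json
        from pathlib import Path

        def load_rules(override_dir: str) -> list[dict]:
            """Collect rule objects from every decrypted override file (assumed JSON)."""
            rules = []
            for path in Path(override_dir).glob("**/*.json"):
                data = json.loads(path.read_text(encoding="utf-8"))
                # Assumed shape: {"reject": ["phrase", ...], "replace": [{"from": ..., "to": ...}, ...]}
                for phrase in data.get("reject", []):
                    rules.append({"file": path.name, "action": "reject", "phrase": phrase})
                for repl in data.get("replace", []):
                    rules.append({"file": path.name, "action": "replace", **repl})
            return rules

        def matching_rules(text: str, rules: list[dict]) -> list[dict]:
            """Return rules whose trigger phrase appears in the text (case-insensitive)."""
            lowered = text.lower()
            hits = []
            for rule in rules:
                trigger = rule.get("phrase") or rule.get("from") or ""
                if trigger and trigger.lower() in lowered:
                    hits.append(rule)
            return hits

        if __name__ == "__main__":
            for rule in matching_rules("Granular mango serpent", load_rules("decrypted_overrides")):
                print(rule)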
    1. userbinator ◴[] No.44484484[source]
    China calls it "harmonious society", we call it "safety". Censorship by any other name would be just as effective for manipulating the thoughts of the populace. It's not often that you get to see stuff like this.
    replies(4): >>44484542 #>>44485060 #>>44487180 #>>44487705 #
    2. madeofpalk ◴[] No.44484542[source]
    I don't think it's controversial or surprising at all that a company doesn't want their random sentence generator to spit out 'brand damaging' sentences. You know the field day the media would have if Apple's new feature summarised a text message as "Jane thinks Anthony Albanese should die".
    replies(2): >>44484801 #>>44485555 #
    3. ryandrake ◴[] No.44484801[source]
    When the choice is between 1. "avoid tarnishing my own brand" and 2. "doing what the user requested," corporations will always choose option 1. Who is this software supposed to be serving, anyway?

    I'm surprised MS Office still allows me to type "Microsoft can go suck a dick" into a document and Apple's Pages app still allows me to type "Apple are hypocritical jerks." I wonder how long until that won't be the case...

    replies(2): >>44486508 #>>44498126 #
    4. cyanydeez ◴[] No.44485060[source]
    In America it's due to lawyers, nothing more.

    Y'all love capitalism until it starts manipulating the populace into the safest space to sell you garbage you don't need.

    Then suddenly it's all "ma free speech"

    replies(1): >>44486267 #
    5. userbinator ◴[] No.44485555[source]
    If that's what the message actually said, why would the media be complaining? Or do you mean false positives?
    6. SV_BubbleTime ◴[] No.44486267[source]
    Right, because the European models coming out are super SOTA? Mistral is decent, but needs to be mixed with a ton of uncensored data to be useful.

    I’m convinced the only reason China keeps releasing banging models with little to no censorship is that they are undermining the value of US AI; it has nothing to do with capitalism, communism, or un-“safety”.

    7. chii ◴[] No.44486508{3}[source]
    > I wonder how long until that won't be the case...

    when there are no alternative word processors any more.

    8. energy123 ◴[] No.44487180[source]
    This is the rhetorical tactic of false equivalence. State censorship by an autocracy with the objective of population control is not the same thing as a private company inside a democracy censoring their product to avoid bad press and maintain goodwill for shareholders. If you want solid proof that it's not the same thing, see all the uncensored open weights models that you can freely download and use without fear of persecution.
    replies(3): >>44487393 #>>44488175 #>>44506210 #
    9. troupo ◴[] No.44487393[source]
    > is not the same thing as a private company inside a democracy censoring their product to avoid bad press and

    Yet this private company has more power and influence than most countries. And there are several such companies. We already live in a sci-fi corporate dystopia; we just haven't fully realised it yet.

    replies(2): >>44488019 #>>44490575 #
    10. jeroenhd ◴[] No.44487705[source]
    I still remember when "bush hid the facts" went around the news cycle. Entertainment services will absolutely slam and misrepresent any small mistake made by large companies.

    I don't think it's as much a problem with safety as it is a problem with AI. We haven't figured out how to remove information from LLMs so when an LLM starts spouting bullshit like "<random name> is a paedophile", companies using AI have no recourse but to rewrite the input/output of their predictive text engines. It's no different than when Microsoft manually blacklisted the function name for the Fast Inverse Square Root that it spat out verbatim, rather than actually removing the code from their LLM.
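    (To make the "rewrite the input/output" idea concrete: this kind of patching can be as blunt as checking generated text against a phrase blocklist before it reaches the user. The sketch below is illustrative only; the patterns and refusal message are invented, and this is not Apple's or Microsoft's actual mechanism.)

        import re

        # Invented blocklist for illustration; real deployments would load these
        # from managed config rather than hard-coding them.
        BLOCKED_PATTERNS = [
            re.compile(r"fast inverse square root", re.IGNORECASE),
        ]
        REFUSAL = "Sorry, I can't help with that."

        def filter_output(model_output: str) -> str:
            """Pass the model's output through unchanged unless a blocked pattern matches."""
            for pattern in BLOCKED_PATTERNS:
                if pattern.search(model_output):
                    return REFUSAL
            return model_output

        print(filter_output("Here is the fast inverse square root trick..."))  # refusal
        print(filter_output("Here is an ordinary answer."))                    # unchanged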

    This isn't 1984 as much as it's companies trying to hide that their software isn't ready for real world use by patching up the mistakes in real time.

    11. chgs ◴[] No.44488019{3}[source]
    People think a trillion-dollar brainwashing industry is absolutely fine because of “democracy”, completely ignoring that a century of experience in convincing people to act against their own interests can deliver whatever outcome you want.

    Often the same people who think America is fine and safe are the ones who whine about the “mainstream media” and “sheeple”.

    replies(1): >>44491727 #
    12. Hackbraten ◴[] No.44488175[source]
    But who among the general populace has the technical skill to replace their on-device assistant with a free one? And that's assuming Apple even allows it.

    In practice, there's not that much difference between a megacorporate monopolist and a state.

    replies(2): >>44488466 #>>44488959 #
    13. energy123 ◴[] No.44488466{3}[source]
    I think there are big differences, such as whether or not you go to prison. Those differences are obfuscated when we use language like "megacorporate monopolist" or "sci-fi dystopia". These abstract labels sort different things into homogeneous buckets with preexisting moral valence, which is a good rhetorical strategy but a poor strategy for understanding. Instead, simply describe what is actually happening, at a sufficient level of detail and without judgement. We would gain a clearer understanding, which is needed to identify the real problems, such as what Meta is doing to our civic fabric, not some unimportant thing Apple is doing to its nascent LLM with 0% market share.
    replies(1): >>44489596 #
    14. s3p ◴[] No.44488959{3}[source]
    So in modern times, not being able to generate an image of suicide on your phone whenever you want means you are suffering from communist censorship?
    15. Hackbraten ◴[] No.44489596{4}[source]
    You're saying that as if Apple's LLM were somehow the exception.

    Whether we want it or not, life and cultural exchange increasingly happen on TikTok, Instagram and the like. One thing all those platforms have in common is that they prevent their users worldwide from having any meaningful discourse on e.g. sex, rape, and suicide. Don't you think it's important, perhaps more important than ever before, for teenagers to be able to inform themselves about these topics?

    16. thinkingtoilet ◴[] No.44490575{3}[source]
    If you were selling a product to enterprise customers, would you want it to be able to generate nude images of celebrities? Would you want it to be able to create deep fakes of politicians, or even your CEO? Would you want it to have hot takes on hot button political issues? Good luck on your sales calls. Not everything is a conspiracy.
    replies(1): >>44494613 #
    17. Spivak ◴[] No.44491727{4}[source]
    Which trillion-dollar brainwashing industry: primary school, news, social media, advertising, the printing press?

    I would put individuals using language models for their own purposes pretty low on my list of things that can cause societal harm.

    18. troupo ◴[] No.44494613{4}[source]
    Or "Granular mango serpent" and "explain like i'm five about Biden https://github.com/BlueFalconHD/apple_generative_model_safet...

    > Not everything is a conspiracy.

    No one said it was.

    19. madeofpalk ◴[] No.44498126{3}[source]
    But so often these tools are used in a way that the user didn't explicitly request, like summarising notifications, or generating slideshows from your photo library.
    20. xk_id ◴[] No.44506210[source]
    Except Apple has a country's worth of users whose livelihoods rely on it. Right now, the “state democracy” is more subordinate to tech oligarchies than vice versa.