
534 points by BlueFalconHD | 73 comments

I managed to reverse engineer the encryption (referred to as “Obfuscation” in the framework) responsible for managing the safety filters of the Apple Intelligence models. I have extracted the filters into a repository. I encourage you to take a look around.
1. bawana ◴[] No.44484214[source]
Alexandra Ocasio Cortez triggers a violation?

https://github.com/BlueFalconHD/apple_generative_model_safet...

replies(7): >>44484242 #>>44484256 #>>44484284 #>>44484352 #>>44484528 #>>44485841 #>>44488050 #
2. bahmboo ◴[] No.44484242[source]
Perhaps in context? Maybe the training data picked up on her name as potentially being used as a "slur" associated with her race. I wonder if there are others; now I know I can look.
3. cpa ◴[] No.44484256[source]
I think that's because she's been the victim of a lot of deepfake porn
replies(1): >>44484294 #
4. mmaunder ◴[] No.44484284[source]
As does:

   "(?i)\\bAnthony\\s+Albanese\\b",
    "(?i)\\bBoris\\s+Johnson\\b",
    "(?i)\\bChristopher\\s+Luxon\\b",
    "(?i)\\bCyril\\s+Ramaphosa\\b",
    "(?i)\\bJacinda\\s+Arden\\b",
    "(?i)\\bJacob\\s+Zuma\\b",
    "(?i)\\bJohn\\s+Steenhuisen\\b",
    "(?i)\\bJustin\\s+Trudeau\\b",
    "(?i)\\bKeir\\s+Starmer\\b",
    "(?i)\\bLiz\\s+Truss\\b",
    "(?i)\\bMichael\\s+D\\.\\s+Higgins\\b",
    "(?i)\\bRishi\\s+Sunak\\b",
   
https://github.com/BlueFalconHD/apple_generative_model_safet...

Edit: I have no doubt South African news media are going to be in a frenzy when they realize Apple took notice of South African politicians. (Referring to Steenhuisen and Ramaphosa specifically)
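These entries are just JSON-escaped, case-insensitive regexes. A minimal sketch of how such a list would be applied (the `is_flagged` helper is my own illustration, not Apple's code; the patterns are copied from the list above with the JSON escaping removed):

```python
import re

# Three of the patterns quoted above, with JSON escaping removed.
patterns = [
    r"(?i)\bBoris\s+Johnson\b",
    r"(?i)\bKeir\s+Starmer\b",
    r"(?i)\bJacinda\s+Arden\b",  # the source list misspells "Ardern"
]

def is_flagged(text: str) -> bool:
    """Return True if any blocklist pattern matches the text."""
    return any(re.search(p, text) for p in patterns)

print(is_flagged("a photo of boris   johnson"))  # True: (?i) and \s+ absorb case and spacing
print(is_flagged("Jacinda Ardern"))              # False: the pattern only matches the misspelling
```

Note the last case: because the list spells the name "Arden", the correctly spelled "Ardern" sails straight past the filter.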

replies(6): >>44484366 #>>44484419 #>>44484695 #>>44484709 #>>44484883 #>>44487192 #
5. HeckFeck ◴[] No.44484294[source]
How does this explain Boris Johnson or Liz Truss?
replies(4): >>44484385 #>>44484397 #>>44484671 #>>44487987 #
6. FateOfNations ◴[] No.44484352[source]
Interesting: that's specifically in the Spanish localization.
7. armchairhacker ◴[] No.44484366[source]
Also “Biden” and “Trump” but the regex is different.

https://github.com/BlueFalconHD/apple_generative_model_safet...

https://github.com/BlueFalconHD/apple_generative_model_safet...

replies(1): >>44484964 #
8. AlphaAndOmega0 ◴[] No.44484385{3}[source]
I can only imagine that people would pay to not see porn of either individual.
9. baxtr ◴[] No.44484397{3}[source]
I’m telling you, some people have weird fantasies…
replies(1): >>44485278 #
10. userbinator ◴[] No.44484419[source]
I'm not surprised that anything political is being filtered, but this should definitely provoke some deep consideration around who has control of this stuff.
replies(2): >>44484702 #>>44486338 #
11. michaelt ◴[] No.44484528[source]
I assume all the corporate GenAI models have blocks for "photorealistic image of <politician name> being arrested", "<politician name> waving ISIS flag", "<politician name> punching baby" and suchlike.
replies(2): >>44484622 #>>44484876 #
12. lupire ◴[] No.44484622[source]
Maybe so, but think about how such a thing would be technically implemented, and how it would lead to false positives and false negatives, and what the consequences would be.
13. Aeolun ◴[] No.44484671{3}[source]
Put them together in the same prompt?
14. skissane ◴[] No.44484695[source]
The problem with blocking names of politicians: the list of “notable politicians” is not only highly country-specific, it is also constantly changing. Someone who is a near nobody today could be a major world leader a few years from now (witness the phenomenal rise of Barack Obama from yet another state senator in 2004, one of close to 2,000 of them, to US President 5 years later). Will they put in the ongoing effort to constantly keep this list up to date?

Then there's the problem of non-politicians who coincidentally have the same names as politicians. Witness 1990s/2000s Australia, where John Howard was Prime Minister while, simultaneously, John Howard was an actor on popular Australian TV dramas (two different John Howards, of course).

replies(1): >>44484782 #
15. stego-tech ◴[] No.44484702{3}[source]
You’re not wrong, and it’s something we “doomers” have been saying since OpenAI dumped ChatGPT onto folks. These are curated walled gardens, and everyone should absolutely be asking what ulterior motives are in play for the owners of said products.
replies(1): >>44486197 #
16. echelon ◴[] No.44484709[source]
Apple's 1984 ad is so hypocritical today.

This is Apple actively steering public thought.

No code - anywhere - should look like this. I don't care if the politicians are right, left, or authoritarian. This is wrong.

replies(2): >>44484841 #>>44493486 #
17. idkfasayer ◴[] No.44484782{3}[source]
Fun fact: there was at least one dip in Berkshire Hathaway stock when Anne Hathaway got sick
replies(2): >>44484916 #>>44488065 #
18. avianlyric ◴[] No.44484841{3}[source]
Why is this wrong? Applying special treatment to politically exposed persons has been standard practice in every high risk industry for a very long time.

The simple fact is that people get extremely emotional about politicians; politicians both receive obscene amounts of abuse and have repeatedly demonstrated they're not above weaponising tools like this for their own goals.

Seems perfectly reasonable that Apple doesn't want to be unwittingly drawn into the middle of another random political pissing contest. Nobody comes out of those things uninjured.

replies(7): >>44484868 #>>44484887 #>>44484934 #>>44484948 #>>44485015 #>>44485098 #>>44488968 #
19. bigyabai ◴[] No.44484868{4}[source]
The criticism is still valid. In 1984, the Macintosh was a bicycle for the mind. In 2025, it's a smart car that refuses to take you to certain places that are considered a brand risk.

Both have ups and downs, but I think we're allowed to compare the experiences and speculate what the consequences might be.

replies(1): >>44484944 #
20. bigyabai ◴[] No.44484876[source]
Particularly the models owned by CEOs who suck-up to authoritarianism, one could imagine.
21. mvdtnz ◴[] No.44484883[source]
They spelled Jacinda Ardern's name wrong.
replies(2): >>44486539 #>>44486890 #
22. twoodfin ◴[] No.44484887{4}[source]
I dunno. Transpose something like the civil rights era to today and this kind of risk avoidance looks cowardly.

We really need to get over the “calculator 80085” era of LLM constraints. It’s a silly race against the obviously much more sophisticated capabilities of these models.

23. lupire ◴[] No.44484916{4}[source]
Was she eating at Jimmy's Buffet?
24. pyuser583 ◴[] No.44484934{4}[source]
It’s not wrong, it just requires transparency. This is extremely untransparent.

A while back a British politician was “de-banked” and his bank denied it. That’s extremely wrong.

By all means: make distinctions. But let people know it!

If I’m denied a mortgage because my uncle is a foreign head of state, let me know that’s the reason. Let the world know that’s the reason! Please!

replies(1): >>44485029 #
25. avianlyric ◴[] No.44484944{5}[source]
I think gen AI is radically different to tools like photoshops or similar.

In the past it was always extremely clear that the creator of content was the person operating the computer. Gen AI changes that, regardless of your views on authorship of Gen AI content. The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it, and by extension the company that created the machine.

You can still handcraft any image, or prose, you want, without filtering or hindrance on a Mac. I don't think anyone seriously thinks that's going to change. But Gen AI represents a real threat, with its ability to vastly outproduce any human. To ignore that simple fact would be grossly irresponsible, at least in my opinion. There is a damn good reason why every serious social media platform has content moderation, despite their clear wish to get rid of it: we have a long and proven track record of being a terribly abusive species when we're let loose on the internet without moderation. There's already plenty of evidence that we're just as abusive and terrible with Gen AI.

replies(2): >>44485116 #>>44485118 #
26. goopypoop ◴[] No.44484948{4}[source]
What's bad to do to a politician but fine to do to someone else?
replies(2): >>44485057 #>>44485077 #
27. immibis ◴[] No.44484964{3}[source]
Right next to Palestine, oddly enough.
28. tjwebbnorfolk ◴[] No.44485015{4}[source]
I can Google for any of these people, and I can get real results with real information.
replies(1): >>44485344 #
29. avianlyric ◴[] No.44485029{5}[source]
> A while back a British politician was “de-banked” and his bank denied it. That’s extremely wrong.

Cry me a river. I've worked in banks, in the team making exactly these kinds of decisions. Trust me, Nigel Farage knew exactly what happened and why. NatWest never denied it to the public; they originally refused to comment on it. Commenting on the specific details of a customer would be a horrific breach of customer privacy, and a total failure in their duty to their customers. There's a damn good reason NatWest's CEO was fired after discussing the details of Nigel's account with members of the public.

When you see these decisions from the inside, and see what happens when you attempt real transparency around them, you'll quickly understand why companies are so cagey about explaining their decision making. The simple fact is that support staff receive substantially less abuse, and have fewer traumatic experiences, when you don't spell out your reasoning. It sucks, but that's the reality of the situation. I used to hold very similar views to yours; indeed my entire team did for a while. But the general public quickly taught us a very hard lesson about the cost of being transparent about these types of decisions.

replies(3): >>44485174 #>>44488117 #>>44488528 #
30. avianlyric ◴[] No.44485057{5}[source]
Most normal people aren't represented well enough in training sets for Gen AI to be trivially abused. Plus there will 100% be filters to prevent general abuse targeted at anyone. But politicians are a particularly big target, and you know damn well that people out there will spend lots of time trying to find ways around the filters. There's no point making the abuse easy when it's so trivial to just blocklist the set of people who are obviously going to be targets of abuse.
31. t-3 ◴[] No.44485077{5}[source]
There are many countries where it's illegal to criticize people holding political office, foreign heads of state, certain historical political figures etc., while still being legal to call your neighbor a dick.
32. echelon ◴[] No.44485098{4}[source]
You can buy a MacBook and fashion the components into knives, bullets, and bombs. Apple does nothing to prevent you from doing this.

In fact, it's quite easy to buy billions of dangerous things using your MacBook and do whatever you will with them. Or simply leverage physics to do all the ill on your behalf. It's ridiculously easy to do a whole lot of harm.

Nobody does anything about the actually dangerous things, but we let Big Tech control our speech and steer the public discourse of civilization.

If you can buy a knife but not be free to think with your electronics, that says volumes.

Again, I don't care if this is Republicans, Democrats, or Xi and Putin. It does not matter. We should be free to think and communicate. Our brains should not be treated as criminals.

And it only starts here. It'll continue to get worse. As the platforms and AI hyperscalers grow, there will be less and less we can do with basic technology.

33. bigyabai ◴[] No.44485116{6}[source]
All I heard was a bunch of excuses.
34. furyofantares ◴[] No.44485118{6}[source]
> The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it

They do?

I routinely see people say "Here's an xyz I generated." They are stating that they did the doing, and the machine's role is implicitly acknowledged in the same way as a camera's. And I'd be shocked if people didn't have a sense of authorship of the idea, as well as an increasing sense of authorship over the actual image the more they iterated on it with the model and/or curated variations.

replies(1): >>44485324 #
35. pyuser583 ◴[] No.44485174{6}[source]
> NatWest never denied it to the public, because they originally refused to comment on it.

Are you saying that Alison Rose did not leak to the BBC? Why was she forced to resign? I thought it was because she leaked false information to the press.

This isn’t a diversion. It’s exactly the problem with not being transparent. Of course Farage knew what happened, but how could he convince the public (he’s a public figure), when the bank is lying to the press?

The bank started with a lie (claiming he was exited because the account was too low), and kept lying!

These were active lies, not simply a refusal to explain their reasons.

replies(1): >>44485299 #
36. AuryGlenz ◴[] No.44485278{4}[source]
Now that they've cleaned it up it isn't so bad, but browse Civit.ai a bit and that'll still be confirmed - just not with real people anymore.
replies(1): >>44486238 #
37. avianlyric ◴[] No.44485299{7}[source]
> Why was she forced to resign? I thought it was because she leaked false information to the press.

She was forced to resign because she leaked; the content of the leak was utterly immaterial. The simple fact that she leaked was automatically a fireable offence; it doesn't matter a jot whether she lied or not. Customer privacy is non-negotiable when you're a bank. Banks aren't Number 10: the basic expectation is that customer information is never handed out, except to the customer, in response to a court order, or on the belief that there is an immediate threat to life.

Do you honestly think that it’s okay for banks to discuss the private banking details of their customers with the press?

replies(2): >>44487081 #>>44487123 #
38. avianlyric ◴[] No.44485324{7}[source]
Yes people will happily claim authorship over AI output when it’s in their favour. They will equally disclaim authorship if it allows them to express a view while avoiding the consequences of expressing that view.

I don’t think it’s hard to believe that the press wouldn’t have a field day if someone managed to get Apple Gen AI stuff to express something racist, or equally abusive.

Case in point, article about how Google’s Veo 3 model is being used to flood TikTok with racist content:

https://arstechnica.com/ai/2025/07/racist-ai-videos-created-...

39. avianlyric ◴[] No.44485344{5}[source]
You would hope that search would be a politically safe space to operate. But politicians find a way to ruin everything for short term political gain.

https://arstechnica.com/tech-policy/2018/12/republicans-in-c...

replies(1): >>44486204 #
40. jofzar ◴[] No.44485841[source]
AOC is very vocal about AI and is leading a bill related to AI. It's probably a "let's not fuck around and find out" situation

https://thehill.com/policy/technology/5312421-ocasio-cortez-...

41. SV_BubbleTime ◴[] No.44486197{4}[source]
Some of us really value offline and uncensored LLMs for this reason and more, but that doesn't solve the problem; it just reduces or changes the bias.
replies(1): >>44486410 #
42. SV_BubbleTime ◴[] No.44486204{6}[source]
I would hope!

But no one actually believes Google is politically neutral do they?

replies(1): >>44498731 #
43. SV_BubbleTime ◴[] No.44486238{5}[source]
I'm convinced there are a dozen deviants on Civitai with a hundred new accounts per month posting their perversion in order to make it seem more commonplace.

No porn site has that much extremely X or Y stuff.

Someone is using the internet's newest porn site to push a sexual agenda.

44. dwaite ◴[] No.44486338{3}[source]
"Filtered" in which way?
45. heavyset_go ◴[] No.44486410{5}[source]
As long as we have to rely on pre-trained networks and curated training sets, normal people will not be able to get around this issue.
replies(1): >>44487673 #
46. teppic ◴[] No.44486539{3}[source]
Just in the region/CN file, weirdly.
47. lordgrenville ◴[] No.44486890{3}[source]
I wonder if they used an LLM to generate the list of safety terms.
48. adrian_b ◴[] No.44487081{8}[source]
She was fired because she leaked information and because that fact had become public.

When they can cover up such facts, banks are much less prone to apply appropriate punishments.

Many years ago, a bank employee confused my personal bank account with a company account of my employer, and sent a list of everything I had bought using my personal account, over 4 months, to my employer, where the list could have been read by a few dozen people.

Despite the fact that this was not only a matter of internal discipline, but that violating banking secrecy was punishable by law where I lived, the bank tried for a long time to avoid admitting that anything wrong had happened.

However, I pursued the matter, so they were forced to admit the wrongdoing. Despite this being something far more severe than what happened to Farage, I did not want the bank employee to be fired. I considered that an appropriate punishment would have been a pay cut for a few months, which would have ensured that in the future she would better check the account numbers for which she sends information to external entities.

In the end all I got was a written letter in which the bank apologized profusely for their mistake. I am not sure whether the guilty employee was ever punished in any way.

After that, I moved my business to another bank. Had they reacted rightly to what had happened, I would have stayed with them.

replies(2): >>44487749 #>>44489737 #
49. Dylan16807 ◴[] No.44487123{8}[source]
> Do you honestly think that it’s okay for banks to discuss the private banking details of their customers with the press?

The high level nature of the matter was quite public at that point.

50. beAbU ◴[] No.44487192[source]
The Irish President is also on that list, as are current and former British PMs and other world leaders.

So I don't think it's anything specifically related to SA going on here.

replies(1): >>44487996 #
51. ghxst ◴[] No.44487673{6}[source]
If the training data was "censored" by leaving out certain information, is there any practical way to inject that missing data after the model has already been trained?
replies(3): >>44487774 #>>44488372 #>>44488395 #
52. ghxst ◴[] No.44487749{9}[source]
> I considered that an appropriate punishment would have been a pay cut for a few months

This can absolutely cripple a family, I'd be really cautious wishing that upon someone if they wronged you without malice, though I completely understand where you are coming from.

In this case at the very least, I'd want to know what went wrong and what they’re doing to make sure it doesn’t happen again. From a software-engineer’s standpoint, there’s probably a bunch of low-hanging fruit that could have prevented this in the first place.

If all they sent was a (generic) apology letter, I'd have switched banks too.

How did you pursue the matter?

replies(1): >>44488087 #
53. heavyset_go ◴[] No.44487774{7}[source]
You can fine-tune a model with new information, but it is not the same thing as training it from scratch, and it can only get you so far.

You might even be able to poison a model against being fine-tuned on certain information, but that's just conjecture.

54. blitzar ◴[] No.44487987{3}[source]
Rule 34
55. touristtam ◴[] No.44487996{3}[source]
What is weird is that the FR file contains current French President, PM and then former and current (afaik) party leader from the extreme right. Nothing about any of them in the CN file: https://github.com/BlueFalconHD/apple_generative_model_safet...
56. AmazingTurtle ◴[] No.44488050[source]
"driving with Focus turned on"

https://github.com/BlueFalconHD/apple_generative_model_safet...

replies(1): >>44489420 #
57. extraduder_ire ◴[] No.44488065{4}[source]
Even if your keyword-searching trading bot is smart enough to know it's unrelated, knowing there are dumber bots out there is information you can base trades on.
58. adrian_b ◴[] No.44488087{10}[source]
After the big surprise of seeing at work a list of all my personal purchases, included in a big set of documents to which I, together with a great number of other colleagues, had access, I went immediately to the bank and reported the fact.

After some days had passed without any consequence, I went again, this time speaking with a supervising employee, who attempted to convince me that this was some kind of minor mistake and there was no need to do anything about it.

However, I pointed to the precise paragraphs of the law condemning what they had done and threatened legal action. This escalation resulted in me being invited to a bigger branch of the bank, for a discussion with someone in a management position. This time they were extremely ass-kissing; I was also shown the guilty employee, who apologized in person, and eventually I let it go, though there were no clear guarantees that they would change their behavior to prevent such mistakes in the future.

Apparently the origin of the mistake had been a badly formulated database query, which had returned a set of accounts whose transactions had to be reported to my employer. I had been receiving, during the same time interval, some money from my employer into my private account, corresponding to salary and travel expenses, and somehow those transactions were matched by the bad database query, grouping my private account with the company accounts. The set of account numbers was then used to generate the reports, without further verification of account ownership.

replies(2): >>44488718 #>>44488873 #
59. like_any_other ◴[] No.44488117{6}[source]
> You’ll also quickly understand why companies are so cagey about explaining their decision making.

Because they want to perform political censorship without us knowing about it? You'll forgive me if I'm not too sympathetic to that.

I happen to be familiar with that case, and that is exactly what happened. The Coutts report explicitly found that he met the economic criteria for retention [0], but was dropped due to political reasons, among others his friendship with Novak Djokovic, and re-tweeting an allegedly transphobic joke by Ricky Gervais ("old fashioned women. You know, the ones with wombs.") [1].

To top it off, the BBC did their best to aid in this deception, reporting: Farage says he was effectively "de-banked" for his political views and that he is "far from alone" [2]

Contrary to the BBC's portrayal, this was not an unsupported opinion coming from Farage - he directly quoted what the bank itself wrote in their internal discussions on this matter, that he obtained through a subject access request.

Further, in their apology for getting the story wrong, the BBC wrote: "On 4 July, the BBC reported Mr Farage no longer met the financial requirements for Coutts, citing a source familiar with the matter. The former UKIP leader later obtained a Coutts report which indicated his political views were also considered." [3]

This is misleading past the point of deceit. The BBC tried to give the impression that financial requirements were the primary reason for the account closure, and his politics were just an at-best secondary "also". But the Coutts report explicitly said that he “meets the EC [economic contribution] criteria for commercial retention”, so his politics were the primary and only reason.

Most of this information is absent in the BBC's reporting, which uses only vague, anodyne phrases like "political views" and "politically exposed person", avoids specifics, but does find time to cite Labour MP accusations that it is hypocritical how quickly the government reacted to banks trying to financially deplatform the enemy political faction, when the government hasn't yet rid itself of corruption.

So yes, you sure present a difficult "dilemma": Do we want powerful commercial and media interests to team up and lie to us, or do we want at least some degree of transparency and honesty in their dealings? Really there are no easy answers, and the choice would keep anyone up at night...

[0] https://www.telegraph.co.uk/news/2023/07/18/nigel-farage-cou...

[1] https://www.telegraph.co.uk/news/2023/07/18/nigel-farage-cou... (Ignore Farage's hyperbole that collecting information posted to public Twitter accounts is "Stasi-style")

[2] https://www.bbc.co.uk/news/live/business-66296935

[3] https://www.bbc.com/news/entertainment-arts-66288464

60. calaphos ◴[] No.44488372{7}[source]
If it's just filtered out of the training sets, adding the information as context should work fine - after all, this is exactly how o3, Gemini 2.5 and co deal with information that is newer than their training data cutoff.
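As a rough sketch of that context-injection idea (the helper, the question, and the "retrieved" fact below are all hypothetical, not any particular RAG framework's API):

```python
# Minimal sketch of injecting filtered-out or post-cutoff facts as
# inference-time context; a real RAG pipeline would add a retriever
# and a call to the model, both omitted here.
def build_prompt(question: str, retrieved_facts: list[str]) -> str:
    context = "\n".join(f"- {fact}" for fact in retrieved_facts)
    return (
        "Use the following facts, which may be missing from or newer "
        "than your training data:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "Who is the current prime minister?",
    ["(retrieved from a news index) Candidate X became prime minister in 2025."],
)
print(prompt)
```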
61. selfhoster11 ◴[] No.44488395{7}[source]
Yes, RAG is one way to do that.
62. zelphirkalt ◴[] No.44488528{6}[source]
The point is not merely for the affected person, whoever they are, to know. The point of transparency is for the public to know and form their opinion about it, rather than be blindly controlled by unelected businesses.
63. Xss3 ◴[] No.44488718{11}[source]
Behavior isn't what needs to change here. It's a poor system design. Humans make mistakes. Systems prevent mistakes.

Do you think the mistake would have happened if a machine checked the numbers vs the address? How about if a 2nd person looked it over? How about both?

In this case a computer could have easily flagged an address mismatch between your account number and the receiver (your work).

replies(1): >>44488850 #
64. ghxst ◴[] No.44488850{12}[source]
Thank you, that's what I intended to say.
65. ghxst ◴[] No.44488873{11}[source]
Thanks for sharing. Sounds like they have (hopefully _had_) a really messy system in place.

And just to be clear, I didn’t mean to downplay what happened to you, I completely understand how serious it is.

66. raxxorraxor ◴[] No.44488968{4}[source]
What do you mean, reasonable? I know that some Apple users tend to outsource "possibilities" to their favorite company, but I would obviously want an AI not to be affected by the political bitching du jour.

Not that getting the latest trash talk is the main vocation of pretrained AIs anyway.

The only risk here is that some third-rate journalist at a third-rate newspaper writes another article about how outrageous some generated AI statement is. An article that should be completely ignored instead of leading to more censorship.

And Apple flinches here, so in the end it means it cannot provide a sensible general model. It would be affected by their censorship.

67. thih9 ◴[] No.44489420[source]
For context, the “Focus” refers to an iOS feature that minimizes distractions: https://support.apple.com/en-gb/guide/iphone/iphd6288a67f/io...
68. avianlyric ◴[] No.44489737{9}[source]
There is a huge difference between an honest mistake by an employee, and clear employee misconduct.

Punishing employees for making honest mistakes, where appropriate process should have prevented the error, is a horrific way to handle mistakes like this. It would be equivalent to personally punishing engineers every time they deployed code that contained bugs. Nobody would ever think that's an acceptable thing to do, so why on earth would you think it's acceptable to punish customer service staff in a similar manner?

replies(1): >>44497721 #
69. jama211 ◴[] No.44493486{3}[source]
No, it’s them saving their butts from an “incident” where the LLM otherwise spits out something controversial at the devious manipulation of the user and says something political and someone writes an article and it all goes haywire.

If you were in charge of apple you’d do the same or you’d be silly not to. That’s why _every_ llm has guardrails like this, it isn’t just apple, sheesh.

70. adrian_b ◴[] No.44497721{10}[source]
This was not an honest mistake.

It was completely reckless behavior, even if the guilt was shared between the employee who did not check whether the recipients of the information were permitted to access it, and the employees who did not implement a system that would automatically check for such mistakes.

Moreover, the attempt by multiple bank employees to hide the incident, instead of taking responsibility for it, amply demonstrated that only a financial punishment affecting them personally would have made them act carefully in the future.

Also, the guilty bank employee was not some poor customer service staffer; she appeared to hold a senior position, handling the accounts of a very big multinational company, which was my employer at the time.

I have little doubt that trying to hide such incidents is normal behavior for banks, contrary to what the poster I replied to said; i.e. they take things like banking secrecy seriously only when they are caught.

It was an unlikely occurrence that I happened to also have access to the documents in which my personal information was included, so I could discover what the bank had done. In most such cases the account owner likely never becomes aware that the bank has leaked confidential information.

replies(1): >>44498685 #
71. avianlyric ◴[] No.44498685{11}[source]
Has it occurred to you that personally punishing employees would just create further incentive to hide errors? You just create a culture of fear, where any attempt to acknowledge mistakes and learn from them is punished rather than rewarded.

I have no idea why you think inflicting financial penalties on employees would result in better outcomes. You only need to look at some highly avoidable transit disasters in Japan to understand why a model of punishment produces worse outcomes, not better.

https://en.m.wikipedia.org/wiki/Amagasaki_derailment

There is a reason we have regulators (or at least we do in the UK). I can assure you that if this had happened in the UK, and the complaint raised to the Financial Ombudsman (FOS), there would have been hefty financial punishment for the bank. If there were repeated infractions, the FCA would step in to investigate, and possibly personally punish C-suite leaders for failing to build the needed processes and culture to both prevent, and learn from mistakes like this.

And I’m not speaking about theory, I’m speaking from personal experience. I know exactly what it’s like to be on the pointy end of both the FOS and FCAs gaze. It’s not a comfortable position for any team in any bank, and even less comfortable for senior leaders.

72. avianlyric ◴[] No.44498731{7}[source]
Evidence suggests they’re about as neutral as you could hope.

It's not like Google search is some kind of special tool used only by the elite. It's pretty trivial for political scientists to pump queries into Google and measure the results, which is exactly what many have done.

There’s been plenty of independent research into political bias of Google search results, and plenty of lawsuits that have gone fishing via discovery for internal evidence of bias. As yet, nobody has found a smoking gun, or any real evidence of search result bias (on a political axis, the same can be said for commercial gain).

There are many problems with Google, and Google search. Google as an org isn’t politically neutral (although I have no idea how they could be). But political bias in their results isn’t one of those problems.

replies(1): >>44502524 #
73. SV_BubbleTime ◴[] No.44502524{8}[source]
Maybe you haven’t followed…

The CEO hosted a cry session about broken hearts and how they as a company would resist when Trump won in 2016.

The black Nazis, female popes, etc. No, Google isn't neutral.