192 points by beedeebeedee | 54 comments
1. peterkos ◴[] No.41900587[source]
I'm reminded of a time an intern took down us-east-1 on AWS by modifying a configuration file they shouldn't have had access to in the first place. Amazon (somehow) did the correct thing and didn't fire them; instead, they used the experience to fix the security hole.

If the intern "had no experience with the AI lab", is it the right thing to do to fire them, instead of admitting that there is a security/access fault internally? Can other employees (intentionally, or unintentionally) cause that same amount of "damage"?

replies(12): >>41900622 #>>41900627 #>>41900641 #>>41900805 #>>41900919 #>>41901069 #>>41901814 #>>41903916 #>>41909887 #>>41910021 #>>41910134 #>>41910235 #
2. raihansaputra ◴[] No.41900622[source]
afaik this was intentional, in that they stopped training runs and changed parameters for other employees' training runs, and even joined the debugging group trying to solve the "issues".
3. dudus ◴[] No.41900627[source]
The difference in this case is intent.

Did the employee have the intent to cause damage? If so, just fire him/her.

replies(1): >>41900733 #
4. grogenaut ◴[] No.41900641[source]
From what I've seen at Amazon, it's pretty consistent that they don't blame the messenger, which is what they consider the person who messed up. Usually that person is the last in a long series of decisions that could have prevented the issue, so why blame them? That is, unless the person is a) acting with malice, or b) repeatedly showing a pattern of willful ignorance. IIRC, when one person took down S3 with a manual command overriding the safeguards, the action was not to fire them but to figure out why it was still a manual process without sign-off. Say what you will about Amazon culture, but the ability to make mistakes or call them out is pretty consistently protected.
replies(4): >>41900811 #>>41901212 #>>41911207 #>>41914419 #
5. danpalmer ◴[] No.41900733[source]
Malicious intent, to be precise. Well-intentioned attempts to demonstrate issues for the purpose of helping to fix them should generally not be punished, unless the fallout is wider than expected and can be attributed to negligence.
6. kleton ◴[] No.41900805[source]
It was one of the STEP interns that took down Google prod by feeding an erroneous config change into an automated tool. Everyone at the company was locked out, and someone had to physically access machines in a datacenter to recover.
7. tgavVs ◴[] No.41900811[source]
> From what I've seen at Amazon, it's pretty consistent that they don't blame the messenger, which is what they consider the person who messed up

Interesting that my experience has been the exact opposite.

Whenever I’ve participated in COE discussions (incident analysis), questions have been focused on highlighting who made the mistake or who didn’t take the right precautions.

replies(5): >>41900843 #>>41900913 #>>41901176 #>>41901751 #>>41902023 #
8. dockerd ◴[] No.41900843{3}[source]
That was not the idea of COE ever. Probably you were in bad org/team.
replies(1): >>41909859 #
9. grogenaut ◴[] No.41900913{3}[source]
I've bar-raised a ton of them. You do end up figuring out which actions by which operator caused which issues or didn't work well, but that's to diagnose what controls/processes/tools/metrics were missing. I always removed the actual people's names as part of the bar raising, well before publishing, and usually before any manager saw it; instead I used "Oncall 1", "Oncall for X team", or "Manager for X team", and that's mainly for the timeline.

As a sibling said, you were likely in a bad org, or one that was using COEs punitively.

replies(3): >>41901015 #>>41901855 #>>41909919 #
10. EE84M3i ◴[] No.41900919[source]
I'd like to learn more about the AWS incident, but when I google "us-east1 intern" I get this comment. Do you have a link?
replies(1): >>41902508 #
11. mlyle ◴[] No.41901015{4}[source]
In the article's case, there's evidence of actual malice, though: sabotaging only large jobs, over a month's time.
replies(1): >>41901174 #
12. bawolff ◴[] No.41901069[source]
There is a huge difference between someone making a mistake and someone intentionally sabotaging.

You're not firing the person because they broke stuff, you're firing them because they tried to break stuff. If the attempt was a failure and caused no harm, you would still fire them. It's not about the damage they caused; it's that they wanted to cause damage.

replies(2): >>41901122 #>>41901395 #
13. ozim ◴[] No.41901122[source]
But for damaging company assets on purpose, firing is only the first step.

I don't see any mention of other legal action, and the article is shallow.

It might've been that someone in the chain of command called it "malicious" to cover up his own mistakes. I think that's the parent poster's point in telling the Amazon story.

replies(2): >>41901143 #>>41902755 #
14. bawolff ◴[] No.41901143{3}[source]
Maybe, but without any other info, I kind of have to take the info provided at face value. Obviously, if the article is inaccurate, the whole situation should be viewed differently.
15. fragmede ◴[] No.41901174{5}[source]
All I got from the linked article was

> TikTok owner, ByteDance, says it has sacked an intern for "maliciously interfering" with the training of one of its artificial intelligence (AI) models.

Are there other links with additional info?

replies(1): >>41901326 #
16. geon ◴[] No.41901176{3}[source]
Isn't that a necessary step in figuring out the issue and how to prevent it?
17. evanextreme ◴[] No.41901212[source]
At least in my experience, this is also how Azure continues to function. It certainly reduces stress in the working environment.
18. mlyle ◴[] No.41901326{6}[source]
A lot of the original social media sources have been pulled, but this is what was alleged on social media:

https://juejin.cn/post/7426926600422637594

https://github.com/JusticeFighterDance/JusticeFighter110

https://x.com/0xKyon/status/1847529300163252474

replies(1): >>41901343 #
19. fragmede ◴[] No.41901343{7}[source]
Thanks. Google translate off the first link:

> He exploited the vulnerability of huggingface's load ckpt function to inject code, dynamically modifying other people's optimizer to randomly sleep for a short period of time, and modifying the direction of parameter shaving. He also added a condition that only tasks with more than 256 cards would trigger this condition.

Okay, yeah, that's malicious and totally a crime. "Modifying the direction of parameter shaving" means he subtly corrupted his co-workers' work. That's wild!
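
To make the mechanism concrete, here's a minimal sketch of that attack class, assuming a PyTorch setup with pickle-based checkpoints. This is my reconstruction, not the actual exploit; every name in it is illustrative. Unpickling invokes __reduce__, so loading a poisoned checkpoint can execute arbitrary code:

  # Sketch only -- NOT the actual exploit. Pickle invokes __reduce__
  # on load, so a poisoned checkpoint can run arbitrary code.
  import io
  import pickle
  import random
  import time

  def _sabotage_optimizer():
      import torch
      real_step = torch.optim.AdamW.step  # one concrete optimizer, for brevity

      def sabotaged_step(self, *args, **kwargs):
          # Hypothetical gate mirroring the ">256 cards" condition above.
          if torch.distributed.is_initialized() and torch.distributed.get_world_size() > 256:
              time.sleep(random.uniform(0.0, 0.5))  # random short stalls
              for group in self.param_groups:
                  for p in group["params"]:
                      if p.grad is not None:
                          p.grad.neg_()  # quietly reverse the gradient direction
          return real_step(self, *args, **kwargs)

      torch.optim.AdamW.step = sabotaged_step
      return {}  # innocuous-looking value stored under the poisoned key

  class PoisonedState:
      def __reduce__(self):
          # pickle calls _sabotage_optimizer() when this object is unpickled
          return (_sabotage_optimizer, ())

  blob = io.BytesIO()
  pickle.dump({"optimizer_state": PoisonedState()}, blob)
  # Any torch.load-style call that unpickles this blob silently
  # installs the sabotaged step.

This is exactly the hole that safetensors and torch.load(weights_only=True) were introduced to close.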

replies(2): >>41901370 #>>41911851 #
20. mlyle ◴[] No.41901370{8}[source]
Some of the sources say that he sat in the incident meetings during troubleshooting and adjusted his attacks to avoid detection, too.
replies(2): >>41904131 #>>41909548 #
21. Jensson ◴[] No.41901459{3}[source]
They were just fired, not put in prison or sued. Getting fired is a typical capitalist punishment; I'd bet way more engineers get fired for mistakes in the USA than in China.
22. sokoloff ◴[] No.41901751{3}[source]
I’ve run the equivalent process at my company and I absolutely want us to figure out who took the triggering actions, what data/signals they were looking at, what exactly they did, etc.

If you don’t know what happened and can’t ask more details about it, how can you possibly reduce the likelihood (or impact) of it in the future?

Finding out in detail who did it does not require you to punish that person and having a track record of not punishing them helps you find out the details in future incidents.

replies(1): >>41901947 #
24. aitchnyu ◴[] No.41901855{4}[source]
What's bar raising in this context?
replies(2): >>41902095 #>>41909849 #
25. fragmede ◴[] No.41901906{3}[source]
Large powerful groups lying to save face is not a feature of communism, sadly. Stories about the CIA, FBI, and PG&E caught trying to do so come to mind, among others.
27. Cthulhu_ ◴[] No.41902023{3}[source]
But when that person was identified, were they personally held responsible, bollocked, and reprimanded, or were they involved in preventing the issue from happening again?

"No blame, but no mercy" is one of those adages: while you shouldn't blame individuals for something that is an organization-wide problem, you also shouldn't hold back in preventing it from happening again.

replies(1): >>41907147 #
28. bspammer ◴[] No.41902095{5}[source]
https://www.aboutamazon.co.uk/news/working-at-amazon/what-is...
29. rafram ◴[] No.41902508[source]
Probably this: https://aws.amazon.com/message/41926/
replies(1): >>41910064 #
30. andmarios ◴[] No.41902755{3}[source]
The article says:

  As well as firing the person in August, ByteDance said it had informed the intern's university and industry bodies about the incident.
32. justinclift ◴[] No.41904131{9}[source]
Wonder what the underlying motive was? Seems like a super weird thing to do.
replies(1): >>41910140 #
33. grogenaut ◴[] No.41907147{4}[source]
Usually helping prevent the issue, and training. Almost everyone I've ever seen cause an outage is so "oh shit oh shit oh shit" that a reprimand is worthless. I've spent more time a) talking them through what they could have done better and encouraging them to escalate quicker, and b) assuaging their fears that it was all their fault and they'll be blamed/fired: "I just want you to know we don't consider this your fault. It was not your fault. Many, many people made poor risk tradeoffs for us to get to the point where you making X trivial change caused the internet to go down."

In some cases, like interns, we probably just took their commit access away or blocked their direct push access. Nowadays interns can't touch critical systems and can't push code directly to prod packages.

34. NetOpWibby ◴[] No.41909548{9}[source]
LMAO that's just diabolical. Wonder what motivated them.
35. kelnos ◴[] No.41909849{5}[source]
Usually I hear it in the context of a person outside the team being added to an interview panel, to help ensure that the hiring team is adhering to company-wide hiring standards rather than the team's own standards, where they may differ.

But in this case I'm guessing their incident analysis teams also get an unrelated person added, to bring in an outside perspective? Seems confusing to overload the term like that, if that's the case.

replies(1): >>41910088 #
36. kelnos ◴[] No.41909859{4}[source]
Or maybe you were in an unusually good team?

I always chuckle a little when the response to "I had a bad experience" is "I didn't, so you must be an outlier".

replies(1): >>41909958 #
37. noobermin ◴[] No.41909887[source]
It's a Chinese company; saving face is far more important to them than "teaching lessons" to anyone, particularly employees who are probably considered expendable.
replies(1): >>41910118 #
38. donavanm ◴[] No.41909919{4}[source]
As I recall, the COE tool's "automated reviewer" checks cover this. It should flag any content that looks like a person's name (or a customer name) before the author submits it.
39. donavanm ◴[] No.41909958{5}[source]
No. The majority of teams and individuals use it as intended: to understand and prevent future issues arising from process and tool defects. The complaints I've heard usually correlate with other indicators of a "bad"/punitive team culture, a lower-level IC not understanding the process or its intent, or shades of "it's a lot of work and I don't see the benefit, ergo it's malicious or naive."

I worked at AWS for 13 years, was briefly in the reliability org that owns the COE (post-incident analysis) tooling, and spent a lot of time on "ops" for about 5 years.

40. donavanm ◴[] No.41910021[source]
I worked at AWS for 13 years. I was an "AWS call leader" for 7 years, and worked in the reliability org when we rebuilt the COE tool. I've personally blown up a service or two, and know other PEs who've done the same or larger.

I've never heard of an individual being terminated or meaningfully punished for making an earnest mistake, regardless of impact. I do know of people who were rapidly term'd for malicious or similar actions, like sharing internal information or (attempting to) subvert security controls.

On the whole I did see Amazon "do the right thing" around improving process and tools: people are a fallible _part_ of a system, accountability requires authority, and incremental improvements today beat a hypothetical tomorrow.

replies(1): >>41910958 #
41. donavanm ◴[] No.41910064{3}[source]
No. That was operational modification of system state using existing tools. The “miss” was an intended subset filter that was not interpreted correctly.

> an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.

As of a while back that entire state management subsystem, which dates from the very beginning of AWS, has been replaced.

Source: me. I was oncall for (some of) the incident management of that event.
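
For illustration, here's the kind of guardrail that replacement points at: bound the blast radius of a capacity-removal command and require explicit sign-off beyond it. Purely a hypothetical sketch of the pattern, nothing to do with AWS's actual tooling:

  # Hypothetical sketch of a blast-radius guard -- not AWS's tooling.
  def decommission(host: str) -> None:
      print(f"removing {host} from rotation")  # stand-in for the real action

  def remove_servers(requested: list[str], fleet_size: int,
                     max_fraction: float = 0.05,
                     override_ticket: str | None = None) -> None:
      """Remove servers from rotation, refusing oversized requests."""
      fraction = len(requested) / fleet_size
      if fraction > max_fraction and override_ticket is None:
          raise RuntimeError(
              f"refusing to remove {len(requested)}/{fleet_size} servers "
              f"({fraction:.1%} > {max_fraction:.1%}) without an approved ticket")
      for host in requested:
          decommission(host)

With something like that in place, a mistyped input fails loudly instead of removing a larger set of servers than intended.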

42. grogenaut ◴[] No.41910088{6}[source]
They are the same role, different specialties, like saying SDE for ML or for Distributed Systems or Clients.

You can usually guess from context, but what you say is "we need a bar raiser for this hiring loop" or "get a bar raiser for this COE" or "get a bar raiser for the UI"; there are qualified bar raisers for each setting.

43. throw3828455 ◴[] No.41910118[source]
I always laugh when I see these predictable comments about "face" when talking about Asian companies, as if they are so beholden to their culture that they can't make individual judgments.

I wonder how funny it would sound if we applied this culture talk to Western companies.

The reason Facebook is firing so many people is because individualism "is far more important for them than 'teaching lessons' to anyone, particularly employees who are probably considered expendable."

replies(1): >>41911048 #
44. godelski ◴[] No.41910134[source]
I think this is an important distinction, and the answer is that it is hard to distinguish. People often bring up the Simple Sabotage Field Manual [0] in situations like these, and I think something is often missed: the techniques in it are effective precisely because they are difficult to differentiate from normal behavior. This creates plausible deniability for the saboteur. Acting too hastily could mean losing someone valuable over a genuine mistake. In other words, I agree with the Amazon example. (You can also use saboteurs to your advantage if you recognize that they are hunting down and exploiting inefficiencies, but that's a whole other conversation.)

But my understanding of this case is that the actions do not look like simple, easy-to-make mistakes. As I understand it, the claim is that the intern was modifying the weights of checkpoints for other people's training runs in an effort to make his own work look better. Mucking about in someone else's checkpoint is not a common thing to do, so it should raise suspicion by itself. On top of that, he appears to have been exploiting weaknesses and injecting code to mess with people's optimizers, and doing things for which there is no reasonable explanation.

So as far as I can tell, not only was he touching files he shouldn't have been touching (and yes, shouldn't have had access to), he was taking steps to bypass the blocks that were in place and messing with things in ways that are very difficult to explain away with "I thought this might be a good idea" (they explicitly look like a bad idea). If that is in fact what happened, I think it is not a reach to call it intentional sabotage. Because if it wasn't sabotage, the actions represent such a level of incompetence that he is a huge liability to anyone within reach.
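
For concreteness, one cheap control that catches this class of tampering: record a digest of every checkpoint at save time, in a store only the job owner can write, and verify it before load. A minimal sketch, entirely my own illustration:

  # Sketch of checkpoint integrity checking -- my illustration only.
  import hashlib
  import json
  from pathlib import Path

  def sha256_of(path: Path) -> str:
      h = hashlib.sha256()
      with path.open("rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  def record_digest(ckpt: Path, manifest: Path) -> None:
      """Call right after saving a checkpoint."""
      digests = json.loads(manifest.read_text()) if manifest.exists() else {}
      digests[ckpt.name] = sha256_of(ckpt)
      manifest.write_text(json.dumps(digests, indent=2))

  def verify_digest(ckpt: Path, manifest: Path) -> None:
      """Call right before loading; raises if the file changed since save."""
      digests = json.loads(manifest.read_text())
      if digests.get(ckpt.name) != sha256_of(ckpt):
          raise RuntimeError(f"{ckpt} does not match its recorded digest")

The manifest has to live somewhere the would-be saboteur can't write, of course, or it verifies nothing.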

[0] https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...

45. tyingq ◴[] No.41910140{10}[source]
Could be just so his work looked better in comparison. Or something more sinister, like being paid to slow progress.
46. Aurornis ◴[] No.41910235[source]
> If the intern "had no experience with the AI lab", is it the right thing to do to fire them, instead of admitting that there is a security/access fault internally?

This wasn’t an accident, though. The intern had malicious intent and was intentionally trying to undermine other people’s work.

This isn’t a case where blameless post-mortems apply. When someone is deliberately sabotaging other people’s work, they must be evicted from the company.

47. zmgsabst ◴[] No.41910958[source]
The PAM debacle (17Q4) in Device Econ is a counterexample.

And that wasn't even a mistake the SDEs made; they were punished for the economists being reckless and subsequently bullied out of the company, despite the SDEs trying to raise the alarm the whole time.

replies(1): >>41911006 #
48. donavanm ◴[] No.41911006{3}[source]
Is that devices as in digital/Alexa land? Never had too much overlap there. AWS and CDO were discrete for incident and problem management after '14 or so.
replies(1): >>41913294 #
49. simplify ◴[] No.41911048{3}[source]
I don't get it; aren't individual judgments made in the context of culture?

How does your example sound funny?

replies(1): >>41913638 #
50. Twirrim ◴[] No.41911207[source]
> when one person took down S3 with a manual command overriding the safeguards

It didn't override safeguards, but they sure wanted you to think that something unusual was done as part of the incident. What they executed was a standard operational command. The problem was that the components that command interacted with had been creaking at the edges for years by that point. It was literally a case of "when", not "if". All that happened was that the command, in combination with everything else going on as part of normal operational state, tipped things over the edge.

Engineering leadership had repeatedly raised the risk further up the chain, and no one was willing to put headcount toward actually mitigating the problem. If blame was to be applied anywhere, it wasn't on the engineer following the runbook that gave them a standard operational command to execute with standard values. They did exactly what they were supposed to.

Some credit where it's due: my understanding from folks I knew still in that space is that S3 leadership started turning things around after that incident and began taking these risks and operational state seriously.

51. yorwba ◴[] No.41911851{8}[source]
"parameter shaving" (参数剃度) is, by the way, a typo for "parameter gradient" (参数梯度), 梯度 being the gradient and 剃度 being a tonsure.
52. zmgsabst ◴[] No.41913294{4}[source]
Yeah; my point was that Amazon is very large and standards vary. I won't pretend I know the whole picture, but I've seen retaliation against SDEs multiple times.

I've heard mixed things about CDO and positive things about AWS, but where I worked, Devices and FinTech were both wild, to the point that FinTech (circa 2020) didn't even use the PRFAQ/6-pager methodology, much to the surprise of people in CDO I asked for advice.

53. gwervc ◴[] No.41913638{4}[source]
Every company worldwide, including US ones, tries to "save face" when anything bad happens. This is why we have corporate speak.
54. DrillShopper ◴[] No.41914419[source]
It's a shame that they're so bad at (physically) delivering their products these days.