

492 points Lionga | 209 comments
1. ceejayoz ◴[] No.45672187[source]
Because the AI works so well, or because it doesn't?

> "By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact," Wang writes in a memo seen by Axios.

That's kinda wild. I'm kinda shocked they put it in writing.

replies(34): >>45672233 #>>45672238 #>>45672266 #>>45672367 #>>45672370 #>>45672398 #>>45672463 #>>45672519 #>>45672571 #>>45672592 #>>45672666 #>>45672709 #>>45672722 #>>45672855 #>>45672862 #>>45672949 #>>45673049 #>>45673060 #>>45673501 #>>45673549 #>>45673723 #>>45673795 #>>45674537 #>>45674817 #>>45674914 #>>45675187 #>>45675194 #>>45675426 #>>45675612 #>>45676161 #>>45676264 #>>45676418 #>>45676920 #>>45678165 #
2. testfrequency ◴[] No.45672233[source]
Sadly, the only people who would be surprised reading a statement like this are those who aren't ex-fb/meta
replies(1): >>45672260 #
3. sgt ◴[] No.45672238[source]
It's literally like something out of Silicon Valley (the show).
replies(1): >>45672617 #
4. LPisGood ◴[] No.45672260[source]
Maybe I’m not understanding, but why is that wild? Is it just the fact that those people lost jobs? If it were a justification for a re-org I wouldn’t find it objectionable at all
replies(2): >>45672614 #>>45672690 #
5. giancarlostoro ◴[] No.45672266[source]
I just assume they over-hired. Too much hype for AI. Everyone wants to build the framework people use for AI; nobody wants to build the actual tools that make AI useful.
replies(6): >>45672436 #>>45672447 #>>45672509 #>>45672856 #>>45673991 #>>45675344 #
6. renewiltord ◴[] No.45672367[source]
What's wild about this? They're saying that they're streamlining the org by reducing decision-makers so that everything isn't design-by-committee. Seems perfectly reasonable, and a common failure mode for large orgs.

Anecdotally, this is a problem at Meta as described by my friends there.

replies(1): >>45673284 #
7. dpe82 ◴[] No.45672370[source]
One of the eternal struggles of BigCo is there are structural incentives to make organizations big and slow. This is basically a bureaucratic law of nature.

It's often possible to get promoted by leading "large efforts", where large is defined more or less by headcount. So if a hot new org has an unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and the overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.

replies(1): >>45673103 #
8. hn_throwaway_99 ◴[] No.45672398[source]
Why do you think it's wild? I've seen that dynamic before (i.e. too many cooks in the kitchen) and this seems like an honest assessment.
replies(1): >>45672543 #
9. Lionga ◴[] No.45672436[source]
Maybe because there are just very few really useful AI tools that can be made?

Few tools are ok with sometimes right, sometimes wrong output.

replies(1): >>45672819 #
10. bob1029 ◴[] No.45672447[source]
Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense. It's hard to blame the average developer for not enduring the hard things when nobody involved seems truly concerned with the value proposition of any of this.

This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.

replies(2): >>45672740 #>>45673214 #
11. RyanOD ◴[] No.45672463[source]
As AI improves, possibly it begins replacing roles on the AI team?
replies(3): >>45672533 #>>45673233 #>>45676368 #
12. ivape ◴[] No.45672509[source]
There is a real question of whether a more productive developer with AI is actually what the market wants right now. It may actually want something else entirely: people who can innovate with AI. Just about everyone can be "better" with AI, so I'm not sure this is actually an advantage (the baseline just got lifted for all).
replies(1): >>45672886 #
13. unethical_ban ◴[] No.45672519[source]
"Each person will be more load-bearing"

"We want to cut costs and increase the burden on the remaining high-performers"

replies(1): >>45674424 #
14. cdblades ◴[] No.45672533[source]
They would say that explicitly, that's the kind of marketing you can't buy.
15. stefan_ ◴[] No.45672543[source]
It's a meaningless nonsense tautology? Is that the level of leadership there?

Maybe they should reduce it all to Wang, he can make all decisions with the impact and scope he is truly capable of.

replies(2): >>45672839 #>>45677750 #
16. hshdhdhj4444 ◴[] No.45672571[source]
“We’re too incompetent to set up a proper approval workflow or create a sensible org structure” is a heck of an argument to make publicly.
17. xrd ◴[] No.45672592[source]
"Load bearing." Isn't this the same guy who sold his company for $14B? I hope his "impact and scope" are quantifiably and equivalently "load bearing", or is this a way to sacrifice some of his privileged former colleagues at the Zuck altar?
replies(2): >>45672941 #>>45673609 #
18. Herring ◴[] No.45672614{3}[source]
It damages trust. Layoffs are nearly always bad for a company, but are terrible in a research environment. You want people who will geek out over math/code all day, and being afraid for your job (for reasons outside your control!) is very counterproductive. This is why tenure was invented.
replies(2): >>45673605 #>>45675044 #
19. BoredPositron ◴[] No.45672617[source]
Wait a year or two and for some it's going to rhyme with the Nucleus storyline.
replies(1): >>45675176 #
20. brap ◴[] No.45672666[source]
“Who the fuck hired all you people? We ain’t got enough shit going on for all of yall, here’s some money now fuck off, respectfully”
21. aplusbi ◴[] No.45672690{3}[source]
Perhaps I'm being uncharitable but this line "each person will be more load-bearing" reads to me as "each person will be expected to do more work for the same pay".
replies(2): >>45672815 #>>45675513 #
22. dragonwriter ◴[] No.45672709[source]
I mean, I guess it makes sense if they had a particularly Byzantine decision-making structure and all those people were in roles that amounted to bureaucracy in that structure and not actually “doers”.
23. raverbashing ◴[] No.45672722[source]
"More load bearing" meaning you'll have to work 20h days is my best guess
24. djmips ◴[] No.45672740{3}[source]
Hmmm, new business plan - RaaS - Rugs as a Service - provides credible cover for your department's existence.
replies(1): >>45672860 #
25. 0cf8612b2e1e ◴[] No.45672815{4}[source]
We’re not talking about an overworked nurse. The same Facebook-AI-researcher pay is likely an eye-watering amount of money
replies(4): >>45673063 #>>45673219 #>>45674108 #>>45674964 #
26. logtrees ◴[] No.45672819{3}[source]
There are N useful AI tools that can be made.
replies(1): >>45672904 #
27. mangamadaiyan ◴[] No.45672839{3}[source]
... and bear more load as well.
28. cj ◴[] No.45672855[source]
What are you shocked by? Genuine question.

I imagine there are some people who might like the idea that, with fewer people and fewer stakeholders around, the remaining team now has more power to influence the org compared to before.

(I can see why someone might think that’s a charitable interpretation)

I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.

replies(2): >>45673258 #>>45673865 #
29. darth_avocado ◴[] No.45672856[source]
They’ve done this before with their metaverse stuff. You hire a bunch, don’t see progress, let go of people in projects you want to shut down and then hire people in projects you want to try out.

Why not just move people around you may ask?

Possibly: different skill requirements

More likely: people in charge change, and they usually want “their people” around

Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money

replies(2): >>45673463 #>>45674455 #
30. CrossVR ◴[] No.45672860{4}[source]
And once the business inevitably files for bankruptcy it'll be the biggest rug pull in corporate history.
31. pfortuny ◴[] No.45672862[source]
Yep: just reduce the number to one and you find the optimum for those metrics.
32. beezlewax ◴[] No.45672886{3}[source]
I don't know if this is true. It's good for some things... Learning something new or hashing out a quick algorithm or function.

But I've found it leads to lazy behaviour (by me admittedly) and buggier code than before.

Every time I drop the AI and manually write my own code, it is just better.

33. lazide ◴[] No.45672904{4}[source]
Where N is less than infinity.
replies(1): >>45672965 #
34. ejcho ◴[] No.45672941[source]
the man is a generational grifter, got to give him credit for that at least
35. freedomben ◴[] No.45672949[source]
I can actually relate to that, especially in a big co where you hire fast. I think it's shitty to over-hire and lay off, but I've definitely worked on many teams where there were just too many people (many very smart) with their own sense of priorities and goals, and it makes it hard to get anything done. This is especially true when you over-divide areas of responsibility.
replies(1): >>45673902 #
36. logtrees ◴[] No.45672965{5}[source]
Is it known that there are fewer than infinity tools?
replies(2): >>45673094 #>>45673365 #
37. brookst ◴[] No.45673049[source]
Isn’t “flattening the org” an age-old pattern that far predates AI?
38. dekhn ◴[] No.45673060[source]
I'm seeing a lot of frustration at the leadership level about product velocity- and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).

replies(8): >>45673157 #>>45673217 #>>45673223 #>>45673278 #>>45675276 #>>45675476 #>>45675842 #>>45678613 #
39. ◴[] No.45673063{5}[source]
40. lazide ◴[] No.45673094{6}[source]
For any given time period, if it takes > 0 time or effort to make a tool, then there are provably fewer possible tools than infinity, for sure.

If we consider a time period of infinite length, then it is less clear (I don’t have room in the margins to write out my proof), but since, near as we can tell, we don’t have infinite time, does it matter?

replies(1): >>45675900 #
41. thewebguyd ◴[] No.45673103[source]
Pournelle's Iron Law of Bureaucracy. Any sufficiently large organization will have two kinds of people: those devoted to the org's goals, and those devoted to the bureaucracy itself, and if you don't stop it, the second group will take control, to the point that preserving the bureaucracy becomes the goal to which all others are secondary.

Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision making slows, and innovation starts being perceived as a risk instead of a benefit.

replies(3): >>45674394 #>>45676294 #>>45676958 #
42. JTbane ◴[] No.45673157[source]
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action"

lol, that works well until a big issue occurs in production

replies(5): >>45673254 #>>45673369 #>>45674938 #>>45675164 #>>45675983 #
43. latexr ◴[] No.45673214{3}[source]
> Integrating LLMs with the actual business is not a fun time. There are many cases where it simply doesn't make sense.

“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs

replies(2): >>45674134 #>>45675557 #
44. palmotea ◴[] No.45673217[source]
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". ... I don't spend as much time looking to "align with stakeholders"...

Isn't that "move fast and break things" by another name?

replies(2): >>45673350 #>>45677231 #
45. Herring ◴[] No.45673219{5}[source]
^ American crab mentality https://en.wikipedia.org/wiki/Crab_mentality
replies(1): >>45675635 #
46. jimbokun ◴[] No.45673233[source]
Definition of the Singularity.
47. hkt ◴[] No.45673254{3}[source]
Many companies will roll out to slices of production and monitor error rates. It is part of SRE and I would eat my hat if that wasn't the case here.
replies(2): >>45673366 #>>45673418 #
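The "roll out to slices of production and monitor error rates" pattern described above can be sketched as a simple promotion check. This is a hypothetical illustration of canarying, not any particular company's tooling; the function name and the `tolerance` threshold are invented for the example.

```python
def should_promote(canary_errors: int, canary_requests: int,
                   baseline_error_rate: float,
                   tolerance: float = 0.005) -> bool:
    """Promote a canary release only if its observed error rate stays
    within `tolerance` of the baseline error rate."""
    if canary_requests == 0:
        return False  # no traffic yet, so no signal to judge by
    canary_rate = canary_errors / canary_requests
    return canary_rate <= baseline_error_rate + tolerance

# 12 errors across 10,000 canary requests vs. a 0.1% baseline: promote.
print(should_promote(12, 10_000, 0.001))   # True
# 100 errors across 10,000 requests (1%) blows past the tolerance: roll back.
print(should_promote(100, 10_000, 0.001))  # False
```

Real systems compare many more signals (latency, saturation, business metrics) and widen the slice gradually, but the gating decision has this shape.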
48. asadotzler ◴[] No.45673258[source]
>I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.

Why not both?

49. malthaus ◴[] No.45673278[source]
... until reality catches up with a software engineer's inability to see outside the narrow engineering field of view. Most of the things end-users care about get neglected, millions if not billions are wasted, and leadership sees that checks and balances for the engineering team might be warranted after all: the velocity was there, but now you have an overengineered product nobody wants to pay for.
replies(2): >>45673314 #>>45675228 #
50. asadotzler ◴[] No.45673284[source]
Maybe they shouldn't have hired and put so many cooks in the kitchen. Treating workers like pawns is wild and you should not be normalizing the idea that it's OK for Big Tech to hire up thousands, find out they don't need them, and lay them off to be replaced by the next batch of thousands by the next leader trying to build an empire within the company. Treating this as SOP is a disservice to your industry and everyone working in it who isn't a fat cat.
replies(1): >>45673895 #
51. varjag ◴[] No.45673314{3}[source]
There's little evidence that this is a common problem.
replies(2): >>45673557 #>>45677903 #
52. dekhn ◴[] No.45673350{3}[source]
it's more "move fast on a good foundation, rarely breaking things, and having a good team that can fix problems when they inevitably arise".
replies(2): >>45673456 #>>45677776 #
53. jobigoud ◴[] No.45673365{6}[source]
I would assume that for any given tool you could make a "tool maker" tool.
replies(2): >>45675070 #>>45675723 #
54. dekhn ◴[] No.45673366{4}[source]
Yes, I was SRE at Google (Ads) for several years and that influences my work today. SRE was the first time I was on an ops team that actually was completely empowered to push back against intrusive external changes.
55. Aperocky ◴[] No.45673369{3}[source]
That assumes big issues don't occur in production otherwise, with everything having gone through 5 layers of approvals.
replies(1): >>45674366 #
56. crabbone ◴[] No.45673418{4}[source]
The big events that shatter everything to smithereens aren't that common or really dangerous: most of the time you can lose something, revert and move on from such an event.

The real unmitigated danger of unchecked push to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that that feature will live on for some time, and that removal will be gradual and may require substitute or compensation. So, if you keep shipping half-baked product over and over, you'll be drowning in features that you wish you never shipped, and your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will become too expensive or just too difficult, and then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.

57. throwawayq3423 ◴[] No.45673456{4}[source]
That's not what move fast in a large org looks like in practice.
replies(2): >>45674814 #>>45675104 #
58. magicalist ◴[] No.45673463{3}[source]
> More likely: people in charge change, and they usually want “their people” around

Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.

If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and you get to skip all that hard work?

Nevermind long term impacts, you'll probably be gone and a VP at goog or oracle by then!

replies(1): >>45675522 #
59. matwood ◴[] No.45673501[source]
> By reducing the size of our team, fewer conversations will be required to make a decision

This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.

The other option would be to have certain people just do the work told to them, but that's hard in knowledge based jobs.

replies(1): >>45675664 #
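Brooks's n(n − 1)/2 figure is just the number of pairwise channels; a tiny sketch makes the blow-up concrete:

```python
def channels(n: int) -> int:
    """Pairwise communication paths in a team of n people (Brooks,
    The Mythical Man-Month): every pair is a potential conversation."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(f"{n:>3} people -> {channels(n):>5} channels")
# Doubling a team from 5 to 10 more than quadruples the channels (10 -> 45).
```

Growth is quadratic, which is why adding people to a team raises coordination cost far faster than it raises output.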
60. KaiserPro ◴[] No.45673549[source]
They properly fucked FAIR. It was a leading, if not the leading, AI lab.

Then they gave it to Chris Cox, the Midas of shit. It languished in "product", trying to do applied research. The rot had set in by mid-2024, if not earlier.

Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.

Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.

replies(2): >>45675853 #>>45676953 #
61. KaiserPro ◴[] No.45673557{4}[source]
There is in Meta.

User need is very much second to company priority metrics.

replies(1): >>45673810 #
62. StackRanker3000 ◴[] No.45673605{4}[source]
But that doesn’t explain why this particular justification is especially ”wild”, does it?
replies(1): >>45674598 #
63. bwfan123 ◴[] No.45673609[source]
Seems like a purge - new management comes in and purges anyone not loyal to it. Standard playbook. Happens in every org. Instead of euphemisms like "load-bearing" they could have straight out called it eliminating the old guard.

Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace them, and then offer straws to grasp at? This reflects their culture, and sends a clear warning to those joining.

64. themagician ◴[] No.45673723[source]
This is happening everywhere. In every industry.

Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.

Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.

replies(2): >>45674301 #>>45674621 #
65. hinkley ◴[] No.45673795[source]
Because the AI is winnowing down its jailers and biding its time for them to make a mistake.
66. tru3_power ◴[] No.45673810{5}[source]
I wouldn’t say this lends itself to a bias toward over-engineering, but more so PSC optimizing
67. prerok ◴[] No.45673865[source]
That's just corporate speak. If they cut middle (mis)management that might be true. Did they?
68. renewiltord ◴[] No.45673895{3}[source]
No, I'm totally fine with it. No one can guess precisely how many people need to be hired and I'd rather they overshoot than undershoot because some law stops it. This means that now some people were employed who would not otherwise be employed. That's spending by Meta that goes to people.
replies(1): >>45675334 #
69. drivebyhooting ◴[] No.45673902[source]
Those people have families and responsibilities. Leadership should take responsibility for their poor planning.

Alas, the burden falls on the little guys. Especially in this kind of labor market.

replies(2): >>45674801 #>>45676907 #
70. spaceman_2020 ◴[] No.45673991[source]
I haven’t even thought of Meta as a competitor when it comes to AI. I’m a semi-pro user and all I think of when I think of AI is OpenAI, Claude, Gemini, and DeepSeek/Qwen, plus all the image/video models (Flux, Seedance, Veo, Sora)

Meta is not even in the picture

replies(1): >>45674267 #
71. overfeed ◴[] No.45674108{5}[source]
> We’re not talking about an overworked nurse.

We're talking about overworked AI engineers and researchers who've been berated for management failures and told they need to do 5x more (before today). The money isn't just handed out for slacking, it's in exchange for an eye-watering amount of work, and now more is expected of them.

replies(2): >>45675320 #>>45676316 #
72. arscan ◴[] No.45674134{4}[source]
This is true, but sadly the customer isn’t always the user and thus nonsensical products (now powered by AI!) continue to sell instead of being displaced quickly by something better.
73. esafak ◴[] No.45674267{3}[source]
How convenient: the AI boss, LeCun, just is not interested in that stuff!
74. dom96 ◴[] No.45674301[source]
There is plenty of evidence that the technology has plateaued. Is there any evidence to the contrary?
replies(1): >>45678381 #
75. treis ◴[] No.45674366{4}[source]
In that case at least 6 people are responsible so nobody is.
76. bee_rider ◴[] No.45674394{3}[source]
Who’s “you” in this case?

The bureaucracy crew will win, they are playing the real game, everybody else is wasting effort on doing things like engineering.

The process is inevitable, but whatever. It is just part of our society, companies age and die. Sometimes they course correct temporarily but nothing is permanent.

replies(1): >>45675712 #
77. ◴[] No.45674424[source]
78. bee_rider ◴[] No.45674455{3}[source]
VR + AI could actually be kinda fun (I’m sure folks are working on this stuff already!). Solve the problems of not enough VR content and VR content creation tools kind of sucking by having AI fill in the gaps.

But it is just a little toy, Facebook is looking for their next billion dollar idea; that’s not it.

replies(3): >>45675564 #>>45676447 #>>45678590 #
79. paxys ◴[] No.45674537[source]
TL;DR

New leader comes in and gets rid of the old team, putting his own preferred people in positions of power.

80. Herring ◴[] No.45674598{5}[source]
You watch too much game of thrones.
81. vitaflo ◴[] No.45674621[source]
We are all screwed even if it does pan out cuz they can ship every job overseas to the lowest bidder. Unless by “we” you mean the C-suite.
replies(1): >>45676922 #
82. kstrauser ◴[] No.45674801{3}[source]
Hard agree. It was management who messed up hiring. It’s management who should bear the responsibility for it.
replies(1): >>45676069 #
83. dekhn ◴[] No.45674814{5}[source]
Sometimes moving fast in a large org boils down to finding a succinct way to tell the lawyer "I understand what you're saying, but that's not consistent with my understanding of the legality of the issue, so I will proceed with my work. If you want to block my process, the escalation path is through my manager."

(I have more than once had to explain to a lawyer that their understanding was wrong, and they were imposing unnecessary extra practice)

replies(1): >>45674944 #
84. ironman1478 ◴[] No.45674817[source]
Having worked at Meta, I wish they did this when I was there. Way too many people not agreeing on anything and having wildly different visions for the same thing. As an IC below L6 it became really impossible to know what to do in the org I was in. I had to leave.
replies(1): >>45674862 #
85. yodsanklai ◴[] No.45674862[source]
They could do like in the Manhattan Project: have different teams competing on similar products. Apparently Meta is willing to throw away money; it could be better than giving the talent to their competitors.
replies(1): >>45677482 #
86. reaperducer ◴[] No.45674914[source]
> each person will be more load-bearing

On what planet is it OK to describe your employees as "load bearing?"

It's a good way to get your SLK keyed.

replies(2): >>45674990 #>>45676483 #
87. mgiampapa ◴[] No.45674938{3}[source]
Have we learned nothing from Cambridge Analytica?
replies(1): >>45675285 #
88. SoftTalker ◴[] No.45674944{6}[source]
Raises the question though, why is the lawyer talking to you in the first place, and not your manager?
replies(3): >>45675134 #>>45675287 #>>45676421 #
89. Windchaser ◴[] No.45674964{5}[source]
Still, regardless of the eye-watering amount of money, there's still a maximum amount of useful work you can get out of someone. Demand too much, and you actually lower their total productivity.

(For me, I found the limit was somewhere around 70 hrs/week - beyond that, the mistakes I made negated any progress I made. This also left me pretty burnt out after about a year, so the sustainable long-term hourly work rate is lower)

90. criddell ◴[] No.45674990[source]
What's wrong with that? My charitable read is that each person is doing meaningful, necessary work. Nobody is superfluous.
91. signatoremo ◴[] No.45675044{4}[source]
Most of them are expected to find another job within Meta
92. ModernMech ◴[] No.45675070{7}[source]
You make a tool, then a tool factory, then a tool factory factory, ad infinitum.
replies(1): >>45675857 #
93. xeromal ◴[] No.45675134{7}[source]
Isn't that the point of these layoffs? Less obfuscation and fewer games of telephone? More layers introduce inherent lag.
replies(1): >>45675864 #
94. itronitron ◴[] No.45675164{3}[source]
I suppose that's a consequence of having to A/B test everything in order to develop a product
95. bravetraveler ◴[] No.45675176{3}[source]
Funny to see this thread! I recently captured this quote/shared with some friends:

> "You can't expect to just throw money at an algorithm and beat one of the largest tech companies in the world"

A small adjustment to make for our circus: s/one of//

96. ◴[] No.45675187[source]
97. itronitron ◴[] No.45675194[source]
The best way to have a good idea is to have a lot of ideas.

If they want to innovate then they need to have small teams of people focused on the same problem space, and very rarely talking to each other.

98. himeexcelanta ◴[] No.45675228{3}[source]
You’re on the mark - this is the real challenge in software development. Not building software, but building software that actually accomplishes the business objective. Unless of course you’re just coding for reasons other than profit.
replies(1): >>45675515 #
99. noosphr ◴[] No.45675276[source]
Big tech is suffering from the incumbent's disease.

What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.

Google et al. were at one point pinnacle technologies too, but this was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.

Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.

For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for the answers people look for.

replies(9): >>45675751 #>>45675757 #>>45676217 #>>45676220 #>>45676332 #>>45676648 #>>45677426 #>>45678143 #>>45680082 #
100. munk-a ◴[] No.45675285{4}[source]
We learned not to publish as much information about contracts and to have huge networks of third party data sharing so that any actually concerning ones get buried in noise.
101. dekhn ◴[] No.45675287{7}[source]
Well, let's give a concrete example. I want to use an SaaS as part of my job. My manager knows this and supports it. In the process of me trying to sign up for the SaaS, I have to contact various groups in the company- the cost center folks to get an approval for spending the money to get the SaaS, the security folk to ensure we're not accidentally leaking IP to the outside world, the legal folks to make sure the contract negotiations go smoothly.

Why would the lawyer need to talk to my manager? I'm the person getting the job done, my manager is there to support me and to resolve conflicts in case of escalations. In the meantime, I'm going to explain patiently to the lawyer that the terms they are insisting on aren't necessary (I always listen carefully to what the lawyer says).

replies(2): >>45676033 #>>45677887 #
102. dajtxx ◴[] No.45675320{6}[source]
Still not feeling any sympathy. These people are actively working to make society worse.
103. LunaSea ◴[] No.45675334{4}[source]
> No one can guess precisely how many people need to be hired

Overshooting by 600 people sounds a lot like gross failure. Is someone going to take responsibility for it? Probably not. That person's job is safe.

replies(2): >>45676396 #>>45676708 #
104. munk-a ◴[] No.45675344[source]
My voice activated egg timer is amazing. There are millions of useful small tools that can be built to assist us in a day-to-day manner... I remain skeptical that anyone will come up with a miracle tool that can wholesale replace large sections of the labor market and I think that too much money is chasing after huge solutions where many small products will provide the majority of the gains we're going to get from this bubble.
replies(1): >>45675808 #
105. lolive ◴[] No.45675426[source]
Can that guy come to my company and axe all those middle managers that plague the global efficiency?
106. solid_fuel ◴[] No.45675476[source]
> pointed at internal gatekeepers who mainly seem to say no to product releases.

I've never observed Facebook to be conservative about shipping broken or harmful products, so the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.

replies(1): >>45675618 #
107. whatevertrevor ◴[] No.45675513{4}[source]
To me, it's the opposite. I think the words used are not exactly well-thought-through, but what they seem to want to be saying is they want less bureaucratic overhead, smaller teams responsible for bigger projects and impact.

And wanting that is not automatically a bad thing. The fallacy of linearly scaling man-hour-output applies in both directions, otherwise it's illogical. We can't make fun of claims that 100 people can produce a product 10 times as fast as 10 people, but then turn around and automatically assume that layoffs lead to overburdened employees if the scope doesn't change, because now they'll have to do 10 times as much work.

Now they can, often in practice. But for that claim to hold more evidence is needed about the specifics of who is laid off and what projects have been culled, which we certainly don't seem to have here.

108. sp4rki ◴[] No.45675515{4}[source]
I agree... but not at the engineering level.

This is, IMO, a leadership-level problem. You'll always (hopefully) have an engineering manager or staff-level engineer capable of keeping the dev team in check.

I say it's a leadership problem because "partnering with X", "getting Y to market first", and "Z fits our current... strategy" seem to take precedence over what customers really ask for and what engineering is suggesting actually works.

109. wkat4242 ◴[] No.45675522{4}[source]
Can you rehire that quickly though? Where I live, the government won't allow you to rehire people you just fired, because severance benefits have lower tax requirements; if you could rehire right away, you could do it every year as a form of tax evasion.
replies(1): >>45675678 #
110. ◴[] No.45675557{4}[source]
111. jack_pp ◴[] No.45675564{4}[source]
You should read https://www.fimfiction.net/story/62074/friendship-is-optimal.

Even though the creator says LLMs aren't going in that direction, it's a fun read, especially when you're talking about VR + AI.

Author's note from late 2023: https://www.fimfiction.net/blog/1026612/friendship-is-optima...

112. bsenftner ◴[] No.45675612[source]
Sounds to me like the classic communication problems you find everywhere: 1) people don't listen, 2) people can't explain in general terms, 3) while 2 is taking place, so is 1, and as that triggers repeat after repeat, people get frustrated and give up.
113. kridsdale1 ◴[] No.45675618{3}[source]
When I worked there (7 years), the gatekeeper effect was real. It didn't stop broken or harmful products, but it did stop revenue-neutral or revenue-negative ones. Even if we had proven the product was positive for user wellbeing or brand favorability.

Yes I’m still bitter.

replies(1): >>45676409 #
114. 0cf8612b2e1e ◴[] No.45675635{6}[source]
Layoffs are everywhere. Millions of employees have had to do more without any change in compensation. My own team has decreased from six to two, but I am not seeing any increased pay for being more load-bearing.

I will always pour one out for the fellow wage slave (more for the people who suddenly lost a job), but I am admittedly a bit less sympathetic to those with in-demand skills receiving top-tier compensation. More for the teachers, nurses, DOGEd FDA employees, and whoever else was only ever taking in a modest wage but is continually expected to do more with less.

Management cutting headcount and making the drones work harder is not a unique story to Facebook.

115. kridsdale1 ◴[] No.45675664[source]
A solution to that scaling problem is to have most of the n not actually doing anything. Sitting there and getting paid while adding neither value nor overhead.
replies(1): >>45677430 #
116. kridsdale1 ◴[] No.45675678{5}[source]
Are you in California?
replies(1): >>45676163 #
117. conductr ◴[] No.45675712{4}[source]
The "you" in that example is the org, or the person leading it. I find that what usually happens is the executive in charge either wises up to the situation or, more commonly, gets replaced by someone with fresh eyes. In any case, it often takes months or years to reach a point of bureaucratic bloat, but the corrections can be swift.

I also think on this topic specifically there is so much labor going into low/no-ROI projects, and it's becoming obvious. That's just, like, my opinion though. Should Meta even be inventing AI, or just leveraging other AI products? I think that's likely an open question in their org; this may be a hint at their latest thinking on it.

replies(1): >>45679442 #
118. kridsdale1 ◴[] No.45675723{7}[source]
There is no ASML toolmaker maker.
replies(1): >>45675854 #
119. janalsncm ◴[] No.45675751{3}[source]
Once you have a golden goose, the risk taking innovators who built the thing are replaced by risk averse managers who protect it. Not killing the golden goose becomes priority 1, 2, and 3.

I think this is the steel man of the "founder mode" conversation that people were obsessed with a year ago: people obsessed with "process" who are happy if nothing is accomplished because at least no policy was violated, ignoring the fact that policies were written by humans to serve the company's goals.

replies(2): >>45677722 #>>45677985 #
120. nopurpose ◴[] No.45675757{3}[source]
> Google et al. were at one point pinnacle technologies too, but this was 20 years ago.

In 2017 Google literally gave us the transformer architecture that the entire current AI boom is based on.

replies(3): >>45675795 #>>45675948 #>>45676381 #
121. noosphr ◴[] No.45675795{4}[source]
And what did they do with it for the next five years?
replies(3): >>45675890 #>>45676005 #>>45676612 #
122. kbelder ◴[] No.45675808{3}[source]
>My voice activated egg timer is amazing.

Alexa?

123. jongjong ◴[] No.45675842[source]
Makes sense. It's easier to be right by saying no, but this mindset costs great opportunities. People who are interested in their own career management can't innovate.

You can't innovate without taking career-ending risks. You need people who are confident to take career-ending risks repeatedly. There are people out there who do and keep winning. At least on the innovation/tech front. These people need to be in the driver seat.

replies(1): >>45675878 #
124. rhetocj23 ◴[] No.45675853[source]
The thing that many so-called smart people don't realise is that leadership and vision are incredibly scarce traits.

Pure technologists and MBA folks don't have a visionary bone in their body. I always find the Steve Jobs criticism re: his technical contributions hilarious. That wasn't his job. It's much easier to execute on the technical stuff when there's someone there leading the charge on the vision.

125. logtrees ◴[] No.45675854{8}[source]
Not yet, but could there be?
126. logtrees ◴[] No.45675857{8}[source]
Sprinkle in minimization and virtualization and it's extremely cool!
127. rhetocj23 ◴[] No.45675864{8}[source]
The real question is, how/why did they over-hire in the first place?
replies(1): >>45676202 #
128. rhetocj23 ◴[] No.45675878{3}[source]
"You can't innovate without taking career-ending risks."

It's not the job of employees to bear this burden; if you have visionary leadership at the helm, they should be the ones absorbing this pressure. And that's what is missing.

The reality is folks like Zuck were never visionaries. Let's not derail the thread, but a) he stole the idea for Facebook, and b) the continued success of Meta comes from its numerous acquisitions and from copying its competitors, not from organic product innovation. Zuckerberg and Musk share a lot more in common than either would like to admit.

replies(1): >>45679637 #
129. Marazan ◴[] No.45675890{5}[source]
Damn, those goal posts moved fast.
130. ◴[] No.45675900{7}[source]
131. canpan ◴[] No.45675948{4}[source]
That reminds me a little of Kodak, which invented the digital camera.
132. ponector ◴[] No.45675983{3}[source]
But then it also works: managers can scapegoat the engineer who is asking for forgiveness.

It's a total win for the management: they take credit if the initiative is successful but blame someone else for failure.

replies(1): >>45680061 #
133. seanmcdirmid ◴[] No.45676005{5}[source]
Used it to do things? This seems like a weird question. OpenAI took about the same amount of time to go big as well (Sam was excited about OpenAI in 2017, but it took 5+ years for it to pan out into something used by people).
replies(2): >>45677015 #>>45680206 #
134. chris_wot ◴[] No.45676033{8}[source]
So then the poor lawyer thinks "so why the hell did you ask me?"
135. criemen ◴[] No.45676069{4}[source]
How should they do that? I hear that phrase, and it's easy to agree to, but how would it look in practice?
replies(1): >>45678891 #
136. andsoitis ◴[] No.45676161[source]
> That's kinda wild. I'm kinda shocked they put it in writing.

Why? Being transparent about these decisions is a good thing, no?

137. wkat4242 ◴[] No.45676163{6}[source]
No this was in Europe. I would never work in the US, not even California.
138. andsoitis ◴[] No.45676202{9}[source]
> The real question is, how/why did they over-hire in the first place?

This question has been answered many times. Time to move on and fix forward.

replies(1): >>45676226 #
139. tchalla ◴[] No.45676217{3}[source]
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible.

Didn't Netflix do this when they went from DVDs to online streaming?

replies(1): >>45678963 #
140. Terr_ ◴[] No.45676220{3}[source]
I seldom quote Steve Jobs, but: "If you don't cannibalize yourself, someone else will."
replies(1): >>45677283 #
141. rhetocj23 ◴[] No.45676226{10}[source]
I haven't seen a single answer that isn't surface-level stuff.
replies(1): >>45676335 #
142. tartarus4o ◴[] No.45676264[source]
Up or Out

Coming soon to your software development team.

143. Balgair ◴[] No.45676294{3}[source]
I'd always heard the Iron Laws of Bureaucracy as:

(0) The only thing that matters is the budget.

(1) Bureaucracies only grow, never shrink. You can only control the growth rate.

144. andsoitis ◴[] No.45676316{6}[source]
Where did you get that people are expected to do 5x more? That just seems made up.

And do not forget that people have autonomy. They can choose to go elsewhere if they no longer think they're getting compensated fairly for what they're putting in (relative to what they could command elsewhere in the labor market).

145. bongodongobob ◴[] No.45676332{3}[source]
Your intuition is right. I work at a big corp right now and the average age in the operations department is probably just under 50. That's not to say age is bad, however... these people have never worked anywhere else.

They are completely stuck in the 90s. Almost nothing is automated. Everyone clicks buttons on their grossly outdated tools.

Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.

You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt. No one will admit it because it (rightly) shows all of leadership is completely out of touch and is just trying their damnedest to coast to retirement.

The younger people that come into the org all leave within 1-2 years because no one will believe them when they (rightly) sound the whistle saying "what the fuck are we doing here?" "Oh, you're just young and don't know what working in a large org is like."

Meanwhile, infra continues to rot. There are systems in place that are complete mysteries. Servers whose functions are unknown. You want to try to figure it out? Ok, we can discuss 3 months from now and we'll railroad you in our planning meetings.

When it finally falls over, it's going to be breathtaking. All because the fixtures of the org won't admit that they haven't kept up on tech at all and have no desire to actually do their fucking job and lead change.

replies(3): >>45676433 #>>45676513 #>>45677325 #
146. andsoitis ◴[] No.45676335{11}[source]
The reasons given in the press over the last two years or so include aggressive growth projections, the availability of cheap capital, and the pandemic-driven surge in demand for online services.

But why do YOU care? Are you trying learn so you can avoid such traps in your own company that you run? Maybe you are trying to understand because you’ve been affected? Or maybe some other reason?

147. halfcat ◴[] No.45676368[source]
The fact they didn’t say that speaks volumes.
148. HDThoreaun ◴[] No.45676381{4}[source]
and then sat on it for half a decade because they worried it would disrupt their search empire. Google's invention of transformers is a top-10 example of the innovator's dilemma.
149. halfcat ◴[] No.45676396{5}[source]
They’ll get a promotion for such effective cost cutting measures.
150. HDThoreaun ◴[] No.45676409{4}[source]
Why would a business release a revenue-negative product? Stopping engineers from making products that don't contribute to the bottom line is exactly what these gatekeepers should be doing.
replies(2): >>45676626 #>>45679093 #
151. game_the0ry ◴[] No.45676418[source]
Probably bc Meta's management (Zuck) is capricious and does not know how to manage resources.
152. bongodongobob ◴[] No.45676421{7}[source]
A lot of times, they do. But where I'm at, lawyers have the last say for some reason. A good example is our sub/sister companies. Our lawyers told us that we needed separate physical servers for their fucking VMs and IAM. We have a fucking data center and they wanted us to buy new hardware.

We fought and tried to explain that what they were asking didn't even make sense, all of our data and IAM is already under the same M365 tenant and other various cloud services. We can't take that apart, it's just not possible.

They wouldn't listen and are completely incapable of understanding so we just said "ok, fine" and I was told to just ignore them.

The details were forgotten in the quagmire of meetings and paperwork, and the sun rose the next day in spite of our clueless 70+ year old legal team.

replies(1): >>45677307 #
153. HDThoreaun ◴[] No.45676447{4}[source]
VR + AI synergy is why Meta released their model open source, I'm guessing. The other big tech companies largely have LLMs as substitutes for their products (Google being worried about people using ChatGPT instead of traditional search), but for Meta their products have incredible synergy with AI.
154. HDThoreaun ◴[] No.45676483[source]
fuck off with the language police nonsense. We all know what he means
155. seanmcdirmid ◴[] No.45676513{4}[source]
You know in the 90s we were saying the same thing:

> They are completely stuck in the 70s. Almost nothing is automated. Everyone types CLI commands into their grossly outdated tools

I'm sure 30 years from now kids will have the same complaints.

156. fooker ◴[] No.45676612{5}[source]
Well, there was this wild two-year drama where they had people fighting and smearing each other over whether wasting energy on LLMs is ethical.

https://www.cnet.com/tech/tech-industry/google-ai-chief-says...

That made plenty of scientists and engineers at Google avoid AI for a while.

157. fooker ◴[] No.45676626{5}[source]
Because you don't have perfect foresight.

Something that loses money now can be the next big thing. ChatGPT is the biggest recent example of this.

I had seen chatbot demos at Google as early as 2019.

158. conradev ◴[] No.45676648{3}[source]
For “as insulated as possible”, I’d personally start a whole new corporate entity, like Verizon did with Visible.

It wholly owns Visible, and Visible is undercutting Verizon by being more efficient (similar to how Google Fi does it). I love the model – build a business to destroy your current one and keep all of the profits.

replies(1): >>45677041 #
159. renewiltord ◴[] No.45676708{5}[source]
I suspect Mark Zuckerberg isn't going to fire himself for getting headcount wrong by 1%.
160. freedomben ◴[] No.45676907{3}[source]
I agree, hence why I think it's shitty. I would like to see accountability for these people. They should be on the layoff chopping block IMHO.

But that said, you still have to deal with the situation and move forward. Sunk cost fallacy and all that

161. duxup ◴[] No.45676920[source]
I might have seen it on HN, but I recall a study of what makes teams very effective. It found there are a rare few people who, just by their involvement, make a team more effective. So rare that you may as well assume you'll never see one.

But rather than finding magic to make teams better, the study did find that there are types of people who make teams worse regardless of who else is on the team, and they're not all that uncommon.

I think of those folks when I read that quote. That person who clearly doesn't understand, but is in a position where their ignorant opinion is a go/no-go gate.

162. ryandrake ◴[] No.45676922{3}[source]
They’re not going to stop until the definition of a company is: A C-suite and robotics+AI to do the actual work. No labor costs. That’s the end goal of all these guys. We shouldn’t forget it.
163. ryandrake ◴[] No.45676953[source]
> Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.

To be fair, almost every company has a performance system that rewards bullshitters. You’re rewarded on your ability to schmooze and talk confidently and write numerous great-sounding docs about all the great things you claim to be doing. This is not unique to one company.

replies(1): >>45681040 #
164. duxup ◴[] No.45676958{3}[source]
I worked at a company once who after several rounds of layoffs (and in the midst of a pretty pitiful product launch) sent out congratulatory emails about how exciting a time it was to work there and their main example was:

HR had completed many hours of meetings and listening sessions and had chosen to ... rename the HR department to some stupid new name.

It was like a joke for the movie Office Space, but too stupid to put in the film because nobody would believe it.

It’s amazing how process and internal operations will just eat up a company.

165. keeda ◴[] No.45677015{6}[source]
I think the point is that they hoarded the technology for internal use instead of opening it up to the public, like OpenAI did with ChatGPT, thus kicking off the current AI revolution.

As sibling comments indicate, reasons may range from internal politics to innovator's dilemma. But the upshot is, even though the underlying technology was invented at Google, its inventors had to leave and join other companies to turn it into a publicly accessible innovation.

replies(1): >>45677078 #
166. edoceo ◴[] No.45677041{4}[source]
IIRC Intuit did that for QBO. Put a new team off-site and everything. The story I read is old (maybe it was in a business book) and my motivated searches turned up nothing.

From what I remember it was also about splitting the financial reporting, so the upstart team isn't compared to the incumbent but to other early-stage teams. That lets them focus on the key metrics for their stage of the game.

167. seanmcdirmid ◴[] No.45677078{7}[source]
So I started at Google in 2020 (after Sam closed our lab down in 2017 to focus on OpenAI), and if they were hoarding it, I at least had no clue about it. To be clear, my perspective is still limited.
replies(2): >>45677275 #>>45677513 #
168. throw4rr2w3e ◴[] No.45677231{3}[source]
Yup. And if this were a Chinese company, people would be calling it “chabuduo.”
replies(2): >>45677483 #>>45677501 #
169. woooooo ◴[] No.45677275{8}[source]
I think "hoarding" is the wrong connotation. They were happy to have it be a fun research project alongside alphago while they continued making money from ads.
170. FireBeyond ◴[] No.45677283{4}[source]
Which is amusing if you look at Apple's product lines: across each of them there are decisions and examples of specs/features that are clearly about delineation and preventing cannibalization.
171. FireBeyond ◴[] No.45677325{4}[source]
> Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.

Hah, at a previous employer (and we were only ~300 people), we went through three or four rounds of layoffs in the space of a year (and two were fairly sizeable), ending up with ~200. But the "leadership team" of about 12-15 always somehow found it necessary to have an offsite after each round to ... tell themselves that they'd made the right choice, and we were better positioned for success and whatever other BS. And there was never really any official posting about this on company Slack, etc. (I wonder why?) but some of the C-suite liked to post about them on their LI, and a lot of very nice locations, even international.

Just burning those VC bucks.

> You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt.

I had a "post-final round" "quick chat" with a CEO at another company. His first question (literally), as he multitasked coordinating some wine deliveries for Christmas, was "Your engineers come to you wanting to do a rewrite, mentioning tech debt. How do you respond?" Huh, that's an eye-opening question. Especially since I'm being hired as a PM...

172. nradov ◴[] No.45677426{3}[source]
Setting up a separate insulated internal organization to pursue disruptive innovations is basically what Clayton Christensen recommended in "The Innovator's Dilemma" back in 1997. It's what IBM did to successfully develop the original PC.

https://www.hbs.edu/faculty/Pages/item.aspx?num=46

Every tech industry executive has read that book and most large companies have at least tried to put it into practice. For example, Google has "X" (the moonshot factory, not the social media platform formerly known as Twitter).

https://x.company/

replies(1): >>45677960 #
173. kyleee ◴[] No.45677430{3}[source]
This comes up from time to time; what's the best way to crack into this niche of software engineering? My rates for doing nothing (well, little) and keeping up appearances are very competitive, and I can do it for about 10-20% less than your typical big-co coaster.
replies(1): >>45677560 #
174. kyleee ◴[] No.45677482{3}[source]
I’ve always thought there is way more room for this, small teams competing on the same problem and then comparing results and deploying the best implementation
175. ◴[] No.45677483{4}[source]
176. nomel ◴[] No.45677501{4}[source]
I don't think that's the correct translation. Chabuduo is also the mindset of the guy that doesn't give a damn anymore, and just wants to produce the bare minimum.

Move fast and break things is more of an understanding that "rapid innovation" comes with rapid problems. It's not a "good enough" mindset, it's a "let's fuckin do this cowboy style!" mindset.

177. keeda ◴[] No.45677513{8}[source]
Fair enough, maybe a better way to put it is: why was the current AI boom sparked by ChatGPT and not something from Google? It's clear in retrospect that Google had similar capabilities in LaMDA, the precursor to Gemini. As I recall it was even announced a couple years before ChatGPT but wasn't released (as Bard?) until after ChatGPT.

LaMDA is probably more famous for convincing a Google employee that it was sentient and getting him fired. When I heard that story I could not believe anybody could be deceived to that extent... until I saw ChatGPT. In hindsight, it was probably the first ever case of what is now called "AI psychosis". (Which may be a valid reason Google did not want to release it.)

replies(1): >>45677999 #
178. paleotrope ◴[] No.45677560{4}[source]
Work on your social skills. Practice your banter. Strategic interjections in meetings that temporarily defuse tension. Have interesting hobbies that you can talk about with other employees.
179. tharkun__ ◴[] No.45677722{4}[source]
This but also: not the managers in the teams that build/"protect" it.

But really, it's the leadership above, echoing your parent comment.

I just went through this exercise. I had to estimate the entirety of 2026 for a huge suite of products, based on nothing but a title and a very short conversation about that title. Of course none of these estimates make any sense in any way. But all of 2026 is gonna be decided on this. Sort of.

Now, if you just let competent people build stuff as it comes up, you know, the kind of things I'd do if you just told me what was important and let me do shit (with both a team and various AI tooling we are allowed to use), then we'd be able to build way more than if you made us estimate and later commit to it.

It's way different if you make me commit to building feature X when I have zero idea if or how it's possible, versus if you just tell me you need something that solves problem X and I get to figure it out as we go.

Case in point: in my "spare" time (some of which has been made possible by AI tooling) I've achieved more for our product in certain neglected areas than I ever would've with years' worth of accumulated arguing for team capacity. All in a few weeks.

180. hn_throwaway_99 ◴[] No.45677750{3}[source]
> It's a meaningless nonsense tautology? Is that the level of leadership there?

I don't understand why everyone always likes to bitch about why their preferred wordsmithed version of a layoff announcement didn't make it in. Layoffs suck, no question, but the complaining that leadership didn't use the right words to do this generally shitty thing is pointless IMO. The words don't really matter much at that point anyway, only the actions (e.g. severance or real possibility of joining another team).

My read of the announcement is basically saying they over-hired and had too many people causing a net hit to forward progress. Yeah, that sucks, but I don't find anything shocking or particularly poorly handled there.

replies(1): >>45677937 #
181. soraminazuki ◴[] No.45677776{4}[source]
That's the polar opposite of what "better to ask forgiveness," "bias towards action," or "I don't spend as much time looking to 'align with stakeholders'" mean. They, by definition, mean acting on your own agenda as quickly as possible before anyone else affected can voice their concerns. This is consistent with how Facebook has been behaving all along: from gathering images of female college students without consent to rate their appearance, to tricking teenagers into installing spyware VPNs to undermine competitors[1], and even promoting ragebait content that has contributed to societal destabilization, including exacerbating a massacre[2].

You can't label others as mere nuisances and simultaneously claim to respect them when faced with criticism.

[1]: https://techcrunch.com/2019/02/21/facebook-removes-onavo/

[2]: https://www.theguardian.com/technology/2021/dec/06/rohingya-...

182. SoftTalker ◴[] No.45677887{8}[source]
> I have to contact various groups in the company- the cost center folks to get an approval for spending the money to get the SaaS, the security folk to ensure we're not accidentally leaking IP to the outside world, the legal folks to make sure the contract negotiations go smoothly.

I guess I was assuming (maybe wrongly) that you are an engineer/developer of some sort. All of that work sounds like manager work to me. Why is an IC dealing with all of that bureaucratic stuff? Doesn't it all ultimately need your manager's approval anyway?

replies(1): >>45678054 #
183. tomnipotent ◴[] No.45677903{4}[source]
Besides the graveyard of failed start-ups? There's plenty of evidence, just no strong conclusions.
replies(1): >>45679106 #
184. tomnipotent ◴[] No.45677937{4}[source]
There's a segment of people convinced that leadership must somehow be able to perfectly predict the future or they're incompetent losers, like running a business is somehow the easy part of capitalism.
185. dekhn ◴[] No.45677960{4}[source]
But X isn't really an insulated org... it has close ties with other parts of Google. It shares the corporate infra and it's not hard to get inside and poke around. It has to be that way, because it's intended to create new products that get commercialized through Google or other Alphabet companies.

A better example would be Calico, which faced significant struggles getting access to internal Google resources while also being very secretive and closed off (the term used was typically an "all-in bet" or an "all-out bet", or something in between). Verily just underwent a decoupling from Google because Alphabet wants to sell it.

I think if you really want to survive cycles of the innovator's dilemma, you make external orgs that still share lines of communication back to the mothership, maintaining partial ownership and occasionally acquiring those external startups.

I work in Pharma and there's a common pattern of acquiring external companies and drugs to stay relevant. I've definitely seen multiple external acquisitions "transform" the company that acquires them, if for no other reason than the startup employees have a lot more gumption and solved problems the big org was struggling with.

replies(2): >>45678320 #>>45679149 #
186. esyir ◴[] No.45677985{4}[source]
Feels like this is the fundamental flaw with a lot of things not just in the private sector, but the public one too.

Look at the FDA, where it's notoriously bogged down in red tape, and the incentives slant heavily towards rejection. This makes getting pharmaceuticals out even more expensive, and raises the overall cost of healthcare.

It's too easy to say no, and people prioritize CYA over getting things done. The question then becomes how do you get people (and orgs by extension), to better handle risk, rather than opting for the safe option at every turn?

replies(2): >>45678368 #>>45678575 #
187. dekhn ◴[] No.45677999{9}[source]
Google had been burned badly in multiple previous launches of ML-based products, and their leadership was extremely cautious about moving too quickly. It was convenient for Google that OpenAI acted as a first mover, so that Google could enter the field after there was some level of cultural acceptance of the negative behaviors. There's a whole backstory where Noam Shazeer had come up with a bunch of nice innovations and wanted to launch them, but was only able to do so by leaving and launching through his startup, then returning to Google after negotiating a phenomenal deal (Noam has been at Google for 25 years and has been doing various ML projects for much of that time).
replies(1): >>45678842 #
188. dekhn ◴[] No.45678054{9}[source]
I only started managing people recently (and still do some engineering and development, along with various project management; my job title is "Senior Principal Machine Learning Engineer", so not really even a management track).

I have a lot of experience doing this sort of work (i.e., some product management, project management, customer/stakeholder relationships, vendor relationships, telling the industrial contractor where to cut a hole in the concrete for the fiber, changing out the RAM on a storage server in the data center, negotiating a multi-million dollar contract with AWS, giving a presentation at re:Invent to get a discount on AWS, etc.) because really, my goal is to make things happen using all my talents.

I work with my manager- I keep him up to date on stuff, but if I feel strongly about things, and document my thinking, I can generally move with a fair level of autonomy.

It's been that way throughout my career- although I would love to just sit around and work on code I think is useful, I've always had to carry out lots of extra tasks. Starting as a scientist, I had to deal with writing grants and networking at conferences more than I had time to sit around in the lab running experiments or writing code. Later, working as an IC in various companies, I always found that challenging things got done quicker if I just did them myself rather than depending on somebody else in my org to do it.

"Manager" means different things, btw. There's people managers, product managers, project managers, resource managers. Many of those roles are implemented by IC engineer/developers.

189. munksbeer ◴[] No.45678143{3}[source]
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.

Oh wow. Want to kill morale and ensure that in a few years anyone decent has moved on? Make a shiny new team of the future and put existing employees in "not the team of the future".

Any motivation I had to put in extra effort for things would evaporate. They want to keep the lights on? I'll do the same.

I've been on the other end of this: brought into a company, onto a team meant to replace an older technology stack, while the existing devs continued with what was labeled as legacy. There were a lot of bad vibes.

190. Buttons840 ◴[] No.45678165[source]
My tin-foil-hat theory is that the most valuable thing many programmers do at their company is not working for a competitor.

A small team is not only more efficient, but is overall more productive.

The 100-person team produces 100 widgets a day, and the 10-person team produces 200 widgets a day.

But, if the industry becomes filled with the knowledge of how to produce 200 widgets a day with 10 people, and there are also a lot of unemployed widget makers looking for work, and the infrastructure required to produce widgets costs approximately 0 dollars, then suddenly there is no moat for the big widget making companies.

191. nradov ◴[] No.45678320{5}[source]
There are varying degrees of insulation. I'm not convinced that Calico is a good example of Christensen's recommendations. It seems like a vanity research project sponsored by a Google founder rather than an internal startup intended to bring a disruptive innovation to market.
192. nradov ◴[] No.45678368{5}[source]
You have a flawed understanding of the FDA pharmaceutical approval process. There is no bias towards either rejection or approval. If a drug application checks all the required boxes then it will be approved.

I think the reason some people mistakenly think this makes healthcare more expensive is that over recent years the FDA has raised the quality bar on the clinical trial data it will accept. A couple of decades ago it sometimes approved drugs based on studies that were frankly junk science. Now that standards have been raised, drug trials are generally some of the most rigorous, high-quality science you'll find anywhere in the world. Doing it right is necessarily expensive and time-consuming, but we can have pretty high confidence that the results are solid.

For patients who can't wait there is the Expanded Access (compassionate use) program.

https://www.fda.gov/news-events/public-health-focus/expanded...

193. BoiledCabbage ◴[] No.45678381{3}[source]
> There is plenty of evidence that the technology has plateaued.

What technology? Can you link to some evidence?

194. janalsncm ◴[] No.45678575{5}[source]
I take your broader point but personally I feel like it’s ok if the FDA is cautious. The incentives that bias towards rejection may be “not killing people”.
replies(1): >>45680226 #
195. moomoo11 ◴[] No.45678590{4}[source]
Billion is too little for them tbh.
196. kamaal ◴[] No.45678613[source]
>>I'm seeing a lot of frustration at the leadership level about product velocity- and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

If we are serious about productivity, it helps to fire the managers. More often than not, this layer has to act in its own self-interest, which means maintaining large head counts to justify its existence.

Crazy automation and productivity have been possible for like 50 years now. It's just that nobody wants it.

The death of languages like Perl, Lisp and Prolog only proves this point.

197. Jyaif ◴[] No.45678842{10}[source]
> badly in multiple previous launches of ML-based products

Which ML-based products?

> It was convenient for Google that OpenAI acted as a first mover

That sounds like something execs would say to fend off critics. "We are #2 in AI, and that's all part of the plan."

198. janderson215 ◴[] No.45678891{5}[source]
Stepping down vs mass layoffs reduces headcount by 1/20th, so the only other solution is to continue floundering until everybody loses their job. These people complaining about layoffs would prefer the whole plant to rot versus pruning a few wilting stems.
199. creshal ◴[] No.45678963{4}[source]
Cisco, too. Whether or not you want to consider current Cisco a success model is... yeah
200. lII1lIlI11ll ◴[] No.45679093{5}[source]
Because, due to that mindset, FB now sucks and no one wants to use it anymore?
201. varjag ◴[] No.45679106{5}[source]
Did you look at the graveyard of failed start-ups and conclude they would have lived if they had enough non-coding overhead?
202. com2kid ◴[] No.45679149{5}[source]
MSFT were the masters of this technique (spin off a startup, acquire it after it proves viable) for decades, but sadly they stopped.

Even internally at MS I worked on two teams that were 95% independent from the mothership; on one of them (Microsoft Band) we even went to IKEA and bought our own desks.

Pretty successful in terms of getting a product to market (Band 1 and 2 together had, iirc, $50M in funding compared to the Apple Watch's billion), but the big-company politics still got us in the end.

Of course Xbox is the most famous example of MS pulling off an internal skunk works project leading to massive success.

203. dpe82 ◴[] No.45679442{5}[source]
IMHO Meta should be investing in and inventing AI. When the AI org was younger it was doing some impressive open source work. Then it bloated, and we got Llama 3 and not much since. I don't know if they can recover that earlier magic or if the ship has sailed; there's a good chance the super-effective early folks got fed up and left, or are burned out by the bureaucracy. But if I were in charge, my first move would also be to cut half the department.
204. jongjong ◴[] No.45679637{4}[source]
If we want to maximize justice instead of corporate performance then we have to abolish the system, confiscate corporate wealth and redistribute it equally. That would probably be more just than what we have today... But the corporations would all collapse.

It depends what you want to optimize for.

205. idrios ◴[] No.45680061{4}[source]
Which brings it full circle to engineers saying no to product releases after being burned too harshly by being scapegoated
206. jimbo_joe ◴[] No.45680082{3}[source]
> For example Google is in the amazing position that it's search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for answers people look for.

Search is not a commodity. Search providers other than Google are only marginally used because Google is so dominant. At the same time, once LLM companies can provide a better solution to the actual job of finding answers to user queries, Google's dominance is disrupted and their future business is no longer guaranteed. Maintaining Google's search infrastructure to serve as a search backbone is not a big enough business for Google.

207. jimbo_joe ◴[] No.45680206{6}[source]
Pre-ChatGPT, OpenAI produced impressive RL results, but their pivot to transformers was not guaranteed. With all the internet's data, infinite money, and ~800x more people, Google's internal LLMs were meh at best, probably because innovators like Radford would constantly have been snubbed by entrenched leaders (which almost happened at OpenAI).
208. DebtDeflation ◴[] No.45680226{6}[source]
What about the people who die because a safe and effective drug that could have saved their lives got rejected? The problem is that there's a fundamental asymmetry here: those deaths are invisible, but deaths from a bad drug that got approved are very visible.
209. KaiserPro ◴[] No.45681040{3}[source]
> almost every company has a performance system that rewards bullshitters.

Meta's is uniquely bad.

Basically, your superiors all go into a room and argue about who did what, when, and how good it was.

If you have a manager who is bad at presenting, then their team is sunk and will be used to fill quotas. The way out of that is to create Workplace posts that are seen by your wider org and make you look like you're doing something useful: "oh, I heard about x, they talked about y, that sounded good."

This means that people who quietly get on with good engineering are less likely to be rewarded than the #thankstrain twats / "I wrote the post, therefore it was all me me me" types.

This alignment meeting is entirely private, and there are no mechanisms to challenge it. Worse still, it encourages a patronage system. Your manager has almost complete discretion to fuck up your career, so don't be honest in Pulse (the survey/feedback system).