> “By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.
That's kind of wild. I'm shocked they put it in writing.
Anecdotally, this is a problem at Meta as described by my friends there.
It's often possible to get promoted by leading "large efforts" where large is defined more or less by headcount. So if a hot new org has unlimited HC budget, all the incentives push managers to complicate things as much as possible to create justification for more heads. Good for savvy managers, bad for the company and overall effort. My impression is this is what happened at Meta's AI org, and VR/AR before that.
That does not mean that nothing did, but this indicates to me that FAIR work never actually made it out of the lab and basically everything that LeCun has been working on has been shelved.
That makes sense to me as he and most of the AI divas have focused on their “Governor of AI” roles instead of innovating in production
I’ll be interested to see how this shakes out for who is leading AI at Meta going forward
This issue can be extended to many areas in technology. There is a shocking lack of effective leadership when it comes to application of technology to the business. The latest wave of tech has made it easier than ever to trick non-technical leaders into believing that everything is going well. There are so many rugs you can hide things under these days.
If you're not swimming in their river, or you weren't responsible for their spill, who cares?
But it spreads into other rivers and suddenly you have a mess
In this analogy the chemical spill - for those who don't have Meta accounts, or sorry, guess you do, we've made one for you, so sorry - is valuation
"We want to cut costs and increase the burden on the remaining high-performers"
It's coming any day now!
> "... each person will be more load-bearing and have more scope and impact,” Wang writes
It's only a matter of time before the superintelligence decides to lay off the managers too. Soon Mr. Wang will be gone and we'll see press releases like:
> “By reducing the size of our team, fewer conversations will be required to make a decision, so the logical step I took was to reduce the team size to 0” ... AI superintelligence, which now runs Meta, declared in an interview with Axios.
Maybe they should have just announced the layoffs without specifying the division?
Add that to “corporate personhood” and what do we get?
How gracious.
I imagine there are some people who might like the idea that, with fewer people and stakeholders around, the remaining team now has more power to influence the org compared to before.
(I can see why someone might think that’s a charitable interpretation)
I personally didn’t read it as “everyone will now work more hours per day”. I read it as “each individual will now have more power in the org” which doesn’t sound terrible.
Why not just move people around you may ask?
Possibly: different skill requirements
More likely: people in charge change, and they usually want “their people” around
Most definitely: the people being let go were hired when the stock price was lower, making their compensation much higher. Getting new people in at a high stock price allows the company to save money.
I'm loving this juxtaposition of companies hyping up imminent epoch-defining AGI, while simultaneously dedicating resources to building TikTok But Worse or adding erotica support to ChatGPT. Interesting priorities.
But I've found it leads to lazy behaviour (by me admittedly) and buggier code than before.
Every time I drop the AI and manually write my own code, it is just better.
Well, all the people with no jobs are going to need something to fill their time.
https://www.datacenterdynamics.com/en/news/meta-brings-data-...
But maybe not:
https://open.substack.com/pub/datacenterrichness/p/meta-empt...
Other options are Ohio or Louisiana.
My (completely uninformed, spitballing) thinking is that Facebook doesn't care that much about AI for end users. The benefit here is for their ads business, etc.
Unclear if they have been successful at all so far.
I got serious uncanny valley vibes from that quote as well. Can anyone prove that "Alexandr Wang" is an actual human, and not just a server rack with a legless avatar in the Metaverse?
And now they're relying on these newcomers to purge the old Meta-styled employees and, by extension, the culture they'd promoted.
My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).
If we consider a time period of infinite length, then it is less clear (I don’t have room in the margins to write out my proof), but since, near as we can tell, we don’t have infinite time, does it matter?
Self-preservation takes over at that point, and the bureaucratic org starts prioritizing its own survival over anything else. Product work instead becomes defensive operations, decision making slows, and innovation starts being perceived as a risk instead of a benefit.
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs
Isn't that "move fast and break things" by another name?
ChatGPT is the one on everyone's lips outside of technology, and in the media. They have a platform by which to push some kind of assistant, but where is it? I log into Facebook and it's buried in the sidebar as Meta AI. Why aren't they shoving it down my throat? They have a huge platform of advertisers who'd be more than happy to inject ads into the AI. (I should note I hope they don't do this - but it's inevitable).
Why not both?
For ChatGPT I have a lower bar because it is easier to avoid.
The real unmitigated danger of unchecked push to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that that feature will live on for some time, and that removal will be gradual and may require substitute or compensation. So, if you keep shipping half-baked product over and over, you'll be drowning in features that you wish you never shipped, and your support team will be overloaded, and, eventually, the product will become such a mess that developing it further will become too expensive or just too difficult, and then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.
- OpenAI's mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.
- Google's mission is to organise the world's information and make it universally accessible and useful.
- Meta's mission is to build the future of human connection and the technology that makes it possible.
Let's just take these three companies and their self-defined mission statements. I see what Google and OpenAI are after. Is there any case for anyone to make, inside or outside Meta, that AI is needed to build the future of human connection? What problem is Meta trying to solve with their billions of investment in "super" intelligence? I genuinely have no idea, and they probably don't either. Which is why they would be laying off 600 people a week after paying a billion dollars to some guy for working on the same stuff.
EDIT: everyone commenting that mission statements are PR fluff. Fine. What is a productive way they can use LLMs in any of their flagship products today?
Also, planning reorgs is a ton of work when you never bothered to learn what anyone does and have no real vision for what they should be doing.
If your paycheck goes up no matter what, why not just fire a bunch of them, shamelessly rehire the ones who turned out to be essential (luckily the job market isn't great), declare victory regardless of outcome, and you get to skip all that hard work?
Never mind long-term impacts; you'll probably be gone and a VP at Google or Oracle by then!
But application work is toil and requires knowing the question set even with AI help; that doesn't bode well for teams whose goal is owning and profiting from super AI that can do everything.
But maybe something will change? Maybe adversarial agents will see improvements like the alpha go moment?
Not sure of the exact numbers; given it was within a single department the cuts were not big, but they were definitely swift and deep.
As an outside observer, Zuck has always been a sociopath, but he was also always very calculated. However over the past few months he seems to be getting much more erratic and, well... "Elon-y" with this GenAI thing. I wonder what he's seeing that is causing this behavior.
(Crossposted from dupe at https://news.ycombinator.com/item?id=45669719)
This was noted a long time ago by Brooks in the Mythical Man-Month. Every person added to a team increases the communication overhead (n(n − 1)/2). Teams should only be as big as they absolutely need to be. I've always been amazed that big tech gets anything done at all.
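To put numbers on it, here's a minimal sketch (plain Python; the only input is the n(n − 1)/2 formula above) of how fast the pairwise channel count grows:

    # Pairwise communication channels in a team of n people: n(n - 1) / 2
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 50, 100):
        print(f"{n:>3} people -> {channels(n):>5} channels")
    # 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950

Going from 10 people to 100 multiplies the channel count by over 100x, which is exactly Brooks' point.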
The other option would be to have certain people just do the work told to them, but that's hard in knowledge based jobs.
Many here were in LLMs.
Even the porn industry can't seem to monetize AI, so I doubt OpenAI who knows jack shit about this space will be able to.
Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.
I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.
I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?
I just don't understand how smart people think this is going to work out at all.
The only thing worse than a bubble? Two bubbles.
Then they gave it to Chris Cox, the Midas of shit. It languished in "product" trying to do applied research. The rot had set in by mid-2024, if not earlier.
Then someone convinced Zuck that he needed whatever that new kid is, and the rest is history.
Meta has too many staff, exceptionally poor leadership, and a performance system that rewards bullshitters.
Microsoft has filled in their entire product line with Copilot, Google is filling everything with Gemini, Apple has platforms but no AI, and OpenAI is firing on all cylinders.. at least in terms of mindshare and AUMs.
Other than that, I guess AI would have to be used in their ad platform, perhaps for better targeting. Ad targeting is absolutely atrocious right now, at least for me personally.
Let me summarise their real missions:
1. Power and money
2. Power and money
3. Power and money
How does AI help them make money and gain more power?
I can give you a few ways...
Also, why go through a layoff and then reassign staff to other roles? Is it to first disgrace people, and then offer straws to grasp at? This reflects their culture, and sends a clear warning to those joining.
I think there are some firms with special knowledge: Google, possibly OpenAI/Anthropic, possibly the Chinese firms, possibly Mistral too, but no one has enough unique stuff to really stand out.
The biggest things were those six months before people figured out how o1 worked, and the short time before people figured out how Google and possibly OpenAI solved 5/6 of the 2025 IMO problems.
I've been lucky to work in high-quality teams where nepotism hasn't been a concern, but I do understand where it's coming from (bad as it is).
Our economy is being propped up by this. From manufacturing to software engineering, this is how the US economy is continuing to "flourish" from a macroeconomic perspective. Margin is being preserved by reducing liabilities and relying on a combination of increased workload and automation that is "good enough" to get to the next step—but assumes there is a next step and we can get there. Sustainable over the short term. Winning strategy if AGI can be achieved. Catastrophic failure if it turns out the technology has plateaued.
Maximum leverage. This is the American way, honestly. We are all kind of screwed if AI doesn't pan out.
But even Meta's PR dept seems clueless on answering "How Meta is going to get more Power and Money through AI"
We keep trying to progressively tax money in the US to reduce the social imbalance. We can’t figure out how to tax power and the people with power like it that way. If you have power you can get money. But it’s also relatively straightforward to arrange to keep the money that you have.
But they don’t really need to.
No, Facebook's strategy has always been the inverse of this. When they support technologies like this they're 'commoditizing the complement', they're driving the commercial value of the thing they don't have to zero so the thing they actually do sell (a human network) differentiates them. Same reason they're quite big on open source, it eliminates their biggest competitors advantages.
There is a language these people speak, and some physical primate posturing they do, which my brain just can’t emulate.
Alas, the burden falls on the little guys. Especially in this kind of labor market.
Ads are their product mostly, though they are also trying to get into consumer hardware.
Meta's actual mission is to keep people on the platform and to do whatever can be done so users do not leave the platform. I find that from this perspective, Meta's actions make more sense.
Just like Adam Neumann, who was reinventing the concept of workspaces as a community.
Just like Elizabeth Holmes, who was revolutionizing blood testing.
Just like SBF, who pioneered a new model for altruistic capitalism.
And so many others.
Beware of prophets selling you on the idea that they alone can do something nobody has ever done before.
* LLM translation is far better than any other kind of translation. Inter-language communication is obviously directly related to human connection.
* Diffusion models allow people to express themselves in new ways. People use image macros and image memes to communicate already.
In fact, I am disappointed that no one has the imagination to do this. I get it. You guys all want to cosplay as oppressed Marxist-Leninists having defoliants dropped on you by the United Fruit Company. But you could at least try the mildest attempt at exercising your minds.
Meta is not even in the picture
https://www.ycombinator.com/companies/magnetic/jobs/77FvOwO-...
For the past few decades, the ways and the degree to which we have been genuinely trying (at the government level) to "progressively tax money" in the US have been failing and falling, respectively.
If we were genuinely serious about the kind of progressive taxation you're talking about, capital gains taxes (and other kinds of taxes on non-labor income) would be much, much higher than standard income tax. As it stands, the reverse is true.
That https://character.ai is so enormously popular with people under the age of 25 suggests that this is the future. Meta is certainly looking at it with great interest, but also with concern: it represents a threat to Meta.
Years ago, when Meta felt that Instagram was a threat, they bought Instagram.
If they don't think they can buy https://character.ai then they need to develop their own version of it.
In fact, they are the #1 or #2 place in the world to sell an ad depending on who you ask. If the future turns out to be LLM-driven, all that ad-money is going to go to OpenAI or worse to Google; leaving Zuck with no revenue.
So why are they after AI? Because they are in the business of selling eyeball placement, and an LLM becoming the de facto platform would eat into their margins.
We're talking about overworked AI engineers and researchers who've been berated for management failures and told they need to do 5x more (before today). The money isn't just handed out for slacking, it's in exchange for an eye-watering amount of work, and now more is expected of them.
More like "scientific research regurgitators".
My life after LLMs is not the same anymore. Literally.
Money is a measure of power, but it is not in fact power.
The bureaucracy crew will win, they are playing the real game, everybody else is wasting effort on doing things like engineering.
The process is inevitable, but whatever. It is just part of our society, companies age and die. Sometimes they course correct temporarily but nothing is permanent.
See https://hbr.org/2008/02/the-founders-dilemma
or the fact that John D. Rockefeller was furious that Standard Oil got split up despite the stock going up and making him much richer.
It's not so clear what motivates the very rich. If I doubled my income I might go on a really fancy vacation and get that Olympus 4/3 body I've been looking at and the fast 70-300mm lens for my Sony, etc. If Elon Musk changes his income it won't affect his lifestyle. As the leader of a corporation you're supposed to behave as if your utility function of money was linear because that represents your shareholders but a person like Musk might be very happy to spend $40B to advance his power and/or feeling of power.
The questions in the original comment were really about the "how", and are still worth considering.
But it is just a little toy; Facebook is looking for their next billion-dollar idea, and that's not it.
The previous couple of crops of smart people grew up in a world that could still easily be improved, and they set about doing just that. The current crop of smart people grew up in a world with a very large number of people and they want a bigger slice of it. There are only a couple of solutions to that and it's pretty clear to me which way they've picked.
They don't need to 'keep the economy running' for that much longer to get their way.
Side note, has Black Mirror done this yet, or are they still stuck on "what if you are the computer" for the 34th time?
Last survey I saw said regression was still the most-used technique, with SVMs more used than LLMs. I figured combining those types of tools with LLM tech, especially for specifying or training them, is a better investment than replacing them. There are people doing that (see the sketch after this comment).
Now, I could see Facebook itself thinking LLMs are the most important if they're writing all the code, tests, and diagnostics, and doing moderation, customer service, etc. Essentially, running the operational side of what generates revenue. They're also willing to spend a lot of money to make that good enough for their use case.
That said, their financial bets make me wonder if they're driven by imagination more than hard analyses.
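As a hedged sketch of that "combine, don't replace" idea: use an LLM once to weak-label examples, then train and serve a cheap classical model (an SVM here). Everything below is illustrative — `llm_weak_label` is a hypothetical stand-in for a real LLM call, and the spam data is made up:

    # Weak supervision: an LLM labels examples once; a cheap SVM serves predictions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    def llm_weak_label(text: str) -> int:
        # Hypothetical stand-in for an LLM prompt like "Is this spam? Answer 1 or 0."
        return 1 if "free money" in text.lower() else 0

    texts = ["free money, click now", "meeting moved to 3pm",
             "FREE MONEY inside!!!", "lunch tomorrow?"]
    labels = [llm_weak_label(t) for t in texts]  # one-time LLM labeling cost

    vec = TfidfVectorizer()
    clf = LinearSVC().fit(vec.fit_transform(texts), labels)
    print(clf.predict(vec.transform(["win free money today"])))  # cheap at inference

The design point: the LLM cost is paid once at training time, while inference stays on the cheap classical model.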
That said, I am not cynical about mission statements like that per se; I do think that making large organizations work towards a common goal is a very difficult problem. Unless you're going to have a hierarchical command-and-control system in place, you need to do it through shared culture and mission.
To be clear: I'm not arguing that everyone at OpenAI or Meta is a bad person, I don't think that's true. Most of their employees are probably normal people. But seriously, you have to tell me what you guys are smoking if a mission statement causes you to update in any direction whatsoever. I can hardly think of anything more devoid of content.
America's elected leaders also have power to punish & bring oligarchs to book legally, but they mostly interact symbiotically, exchanging campaign contributions and board seats for preferential treatment, favorable policy, etc.
Putin can order any out-of-line oligarch to be disposed of, but the economic & coercive arms of the Russian State still see themselves as two sides of the same coin.
So, yes: coercive power can still make billionaires face the wall (Russian revolution, etc.) but they mostly prefer to work together. Money and power are a continuum like spacetime.
Then there's also the reputational harm if Meta acquires them and the journalists write about the bad things that happen on that platform.
Meta arguably achieved this with the initial versions of their products, but even AI aside, they're mostly disconnecting humans now. I post much less on Instagram and Facebook now that they almost never show my content to my own friends or followers, and show them ads and influencer crap instead, so it's basically talking to a wall in an app. Add to this that companies like Meta are all forcing PIP quotas and mass layoffs, which in turn cause everyone in my social circle to work 996.
So they have not only taken away online connections to real humans, they have ALSO taken away offline connections to real humans because nobody has time to meet in real life anymore. Win-win for them, I guess.
There is a whole field of research called post-scarcity economics. https://en.wikipedia.org/wiki/Post-scarcity
tldr; it's not as bad as you think, but the transition is going to be bad (for some of us).
(I have more than once had to explain to a lawyer that their understanding was wrong, and they were imposing unnecessary extra practice)
On what planet is it OK to describe your employees as "load bearing?"
It's a good way to get your SLK keyed.
Yes. The further up the ladder you go, the more this is pounded into your head. I was in a few Big Tech and this is how you write your self-assessment. "Increased $$$ revenue due to higher user engagement, shipped xxx product that generated xxx sales etc".
If you're level 1/2 engineer, sure. You get sold on the company mission. But once you're in senior level, you are exposed to how the product/features will maximize the company's financial and market position. How each engineer's hours are directly benefiting the company.
> Were you ever part of a team and felt good about the work you were doing together? Maybe some startups or non-profits can have this (like Wikipedia or Craigslist), but definitely not OpenAI, Google and Meta.
(For me, I found the limit was somewhere around 70 hrs/week - beyond that, the mistakes I made negated any progress I made. This also left me pretty burnt out after about a year, so the sustainable long-term hourly work rate is lower)
The AI party is coming to an end. Those without clear ROI are ripe for the chopping block.
> "You can't expect to just throw money at an algorithm and beat one of the largest tech companies in the world"
A small adjustment to make for our circus: s/one of//
If they want to innovate then they need to have small teams of people focused on the same problem space, and very rarely talking to each other.
What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.
Google et al. were at one point pinnacle technologies too, but this was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.
Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.
For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for the answers people look for.
Why would the lawyer need to talk to my manager? I'm the person getting the job done, my manager is there to support me and to resolve conflicts in case of escalations. In the meantime, I'm going to explain patiently to the lawyer that the terms they are insisting on aren't necessary (I always listen carefully to what the lawyer says).
That's the thing: they aren't looking at the big picture or the long term. They are looking to get a slice of the pie after seeing companies like Tesla and Uber milk the market for billions. In a market where everything from shelter to food is blowing up in cost, people struggle to provide for themselves or have a life similar to their parents'.
It certainly feels like the end of an era to see Meta increasingly diminishing the role of FAIR. Strategically it might not have been ideal for LeCun to be so openly and aggressively critical of this current generation of AI (even if history will very likely prove him correct).
The models have increased greatly in capabilities, but the competitors have simply kept up, and it's not apparent that they won't continue to do that. Furthermore, the breakthroughs, i.e. fundamentally better models, can happen anywhere people can and do try out new architectures, and that can be in surprisingly small places.
It's mostly about culture and being willing to experiment on something which is often very thankless since most radical ideas do not give an improvement.
I've never observed facebook to be conservative about shipping broken or harmful products, the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.
And wanting that is not automatically a bad thing. The fallacy of linearly scaling man-hour-output applies in both directions, otherwise it's illogical. We can't make fun of claims that 100 people can produce a product 10 times as fast as 10 people, but then turn around and automatically assume that layoffs lead to overburdened employees if the scope doesn't change, because now they'll have to do 10 times as much work.
Now they can, often in practice. But for that claim to hold, more evidence is needed about the specifics of who is laid off and what projects have been culled, which we certainly don't seem to have here.
This is, IMO, a leadership-level problem. You'll always (hopefully) have an engineering manager or staff-level engineer capable of keeping the dev team in check.
I say it's a leadership problem because "partnering with X", "getting Y to market first", and "Z fits our current... strategy" seem to take precedence over what customers really ask for and what engineering is suggesting actually works.
Even though the creator says LLMs aren't going in that direction, it's a fun read, especially when you're talking about VR + AI.
Author's note from late 2023: https://www.fimfiction.net/blog/1026612/friendship-is-optima...
It's really time for this bubble to collapse so we can go back to working on things that actually make sense rather than ticking boxes.
Yes I’m still bitter.
I will always pour one out for the fellow wage slave (more for the people who suddenly lost a job), but I am admittedly a bit less sympathetic to those with in-demand skills receiving top-tier compensation. More for the teachers, nurses, DOGEd FDA employees, and whoever else was only ever taking in a more modest wage but is continually expected to do more with less.
Management cutting headcount and making the drones work harder is not a unique story to Facebook.
I don't know how it's possible that companies like Meta could get away with having non-technical people as HR. They need all their HR people to be top software engineers.
You need coding geniuses just to be doing the hiring... And I don't mean people who can solve leetcode puzzles quickly. You need people with a proven track record of solving real, difficult problems. Fully completed projects. And that's just to qualify for the HR team, IMO... Not worthy enough to be contributing code to such an important project. If you don't treat the project as if it is highly important, it won't be.
Normal HR people just fill the company with political nonsense.
This. 100% This.
As an early-stage VC, I'd say the foundational model story is largely over, and understanding how to apply models to applications, or how to protect applications leveraging models, is the name of the game now.
> Maybe adversarial agents will see improvements...
There is increased appetite now to invest in those models that are taking a reasoning-and-RL approach.
I also think, on this topic specifically, there is so much labor going into low/no-ROI projects, and it's becoming obvious. That's just, like, my opinion though. Should Meta even be inventing AI, or just leveraging other AI products? I think that's likely an open question in their org; this may be a hint at their latest thoughts on it.
I think this is the steel man of the “founder mode” conversation that people were obsessed with a year ago: people obsessed with “process” who are happy if nothing is accomplished because at least no policy was violated, ignoring the fact that policies were written by humans to serve the company’s goals.
You can't innovate without taking career-ending risks. You need people who are confident enough to take career-ending risks repeatedly. There are people out there who do, and keep winning, at least on the innovation/tech front. These people need to be in the driver's seat.
Pure technologists and MBA folks don't have a visionary bone in their body. I always find the Steve Jobs criticism re: his technical contributions hilarious. That wasn't his job. It's much easier to execute on the technical stuff when there's someone there leading the charge on the vision.
It's not the job of employees to bear this burden; if you have visionary leadership at the helm, they should be the ones absorbing this pressure. And that's what is missing.
The reality is folks like Zuck were never visionaries. Let's not derail the thread, but a) he stole the idea for Facebook, and b) the continued success of Meta comes from its numerous acquisitions and copying of its competitors, not from organic product innovation. Zuckerberg and Musk share a lot more in common than both would like to admit.
It's kind of the other way around, isn't it? Meta has the posts of a billion users with which to train LLMs, so they're in a better position to make them than most others. As for what to do with it then, isn't that that pretty similar no matter who you are?
On top of that, sites are having problems with people going to LLMs instead of going to the site, e.g. why ask a question on Facebook to get an answer tomorrow if ChatGPT can tell you right now? So they either need to get in on the new thing which is threatening to eat their lunch, or they need to commoditize it sufficiently that there isn't a major incumbent competitor poised to sit between the users and themselves, extracting a margin from the users or, worse, from themselves for directing user traffic their way instead of to whoever outbids them.
Meta is paying Anthropic to give its devs access to Claude because it's that much better than their internal models. You think that's a marketing problem?
Please keep in mind that at these maximums, taxes are still progressive just probably not as much as you want. You really want to make taxes more progressive? Either get rid of SS or make it taxable on all income. SS contributions are by far the least progressive part of the tax code.
One friend told me she feels that every time you reapply internally, you end up, as the newest team member, first on the chopping block for the next round of cuts anyway, with no time to make an impact, so she will just take the redundancy money this time. Lots of Meta employees now just expect such rounds of job cuts prior to earnings calls, and she has had enough of the stress.
Didn't Netflix do this when they went from DVDs to online streaming?
Fixed that for you.
Shipping a TikTok clone slop app and a keylogger browser while incinerating money and simultaneously talking a big game how AGI is imminent are the opposite of leadership or strategy.
More like acts of desperation to fill massively oversized shoes.
The ones who have been shipping quality consistently are the Chinese AI labs.
It's been cited as unshakable truth many times, including just before places like Washington State significantly raised their top tax brackets—and saw approximately zero rich people leave.
There's a lot of widely-believed economic theory underlying our current practice that's based on extremely shaky ground.
As for how SS taxes are handled, I'm 100% in agreement with you.
Coming soon to your software development team.
This is R&D. You want a skunkworks culture where you have the best people in the world trying as many new things as possible, and failure is fine as long as it's interesting failure.
Not a culture where every development requires a permission slip from ten other teams, and/or everyone is worried if they'll still have a job a month from now.
And do not forget that people have autonomy. They can choose to go elsewhere if they no longer think they’re getting compensated fairly for what they are putting in (and competing for with others in the labor market)
They are completely stuck in the 90s. Almost nothing is automated. Everyone clicks buttons on their grossly outdated tools.
Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.
You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt. No one will admit it because it (rightly) shows all of leadership is completely out of touch and is just trying their damnedest to coast to retirement.
The younger people that come into the org all leave within 1-2 years because no one will believe them when they (rightly) sound the whistle saying "what the fuck are we doing here?" "Oh, you're just young and don't know what working in a large org is like."
Meanwhile, infra continues to rot. There are systems in place that are complete mysteries. Servers whose functions are unknown. You want to try to figure it out? Ok, we can discuss 3 months from now and we'll railroad you in our planning meetings.
When it finally falls over, it's going to be breathtaking. All because the fixtures of the org won't admit that they haven't kept up on tech at all and have no desire to actually do their fucking job and lead change.
But why do YOU care? Are you trying learn so you can avoid such traps in your own company that you run? Maybe you are trying to understand because you’ve been affected? Or maybe some other reason?
We fought and tried to explain that what they were asking didn't even make sense, all of our data and IAM is already under the same M365 tenant and other various cloud services. We can't take that apart, it's just not possible.
They wouldn't listen and are completely incapable of understanding so we just said "ok, fine" and I was told to just ignore them.
The details were forgotten in the quagmire of meetings and paperwork, and the sun rose the next day in spite of our clueless 70+ year old legal team.
I can’t speak to the purely financial side, but it’s definitely possible they’re overextended.
> They are completely stuck in the 70s. Almost nothing is automated. Everyone types CLI commands into their grossly outdated tools
I'm sure 30 years from now kids will have the same complaints.
https://www.cnet.com/tech/tech-industry/google-ai-chief-says...
That made plenty of scientists and engineers at Google avoid AI for a while.
> FAANG typo, or is there a new acronym?
FAIR is the Meta AI unit (Fundamental AI Research) at issue, as spelled out in the second sentence of the article.
It wholly owns Visible, and Visible is undercutting Verizon by being more efficient (similar to how Google Fi does it). I love the model – build a business to destroy your current one and keep all of the profits.
But that said, you still have to deal with the situation and move forward. Sunk cost fallacy and all that
But rather than finding magic to make teams better, they did find that there were types of people who make teams worse regardless of anyone else on the team, and they're not all that uncommon.
I think of those folks when I read that quote: that person who clearly doesn't understand, but is in a position where their ignorant opinion is a go/no-go gate.
To be fair, almost every company has a performance system that rewards bullshitters. You’re rewarded on your ability to schmooze and talk confidently and write numerous great-sounding docs about all the great things you claim to be doing. This is not unique to one company.
HR had completed many hours of meetings and listening sessions and had chosen to ... rename the HR department to some stupid new name.
It was like a joke for the movie Office Space, but too stupid to put in the film because nobody would believe it.
It’s amazing how process and internal operations will just eat up a company.
As sibling comments indicate, reasons may range from internal politics to innovator's dilemma. But the upshot is, even though the underlying technology was invented at Google, its inventors had to leave and join other companies to turn it into a publicly accessible innovation.
From what I remember it was also about splitting the financial reporting, so the upstart team isn't compared to the incumbent but to other early teams. Lets them focus on the key metrics for their stage of the game.
Hah, at a previous employer (and we were only ~300 people), we went through three or four rounds of layoffs in the space of a year (and two were fairly sizeable), ending up with ~200. But the "leadership team" of about 12-15 always somehow found it necessary to have an offsite after each round to ... tell themselves that they'd made the right choice, and we were better positioned for success and whatever other BS. And there was never really any official posting about this on company Slack, etc. (I wonder why?) but some of the C-suite liked to post about them on their LI, and a lot of very nice locations, even international.
Just burning those VC bucks.
> You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt.
I had a "post-final round" "quick chat" with a CEO at another company. His first question (literally), as he multitasked coordinating some wine deliveries for Christmas, was "Your engineers come to you wanting to do a rewrite, mentioning tech debt. How do you respond?" Huh, that's an eye-opening question. Especially since I'm being hired as a PM...
My wife left Meta Reality Labs in summer 2024 precisely because it seemed dysfunctional. I can see how the Llama division could have ended up in a similar state if it adopted similar management practices.
https://www.hbs.edu/faculty/Pages/item.aspx?num=46
Every tech industry executive has read that book and most large companies have at least tried to put it into practice. For example, Google has "X" (the moonshot factory, not the social media platform formerly known as Twitter).
Move fast and break things is more of an understanding that "rapid innovation" comes with rapid problems. It's not a "good enough" mindset, it's a "let's fuckin do this cowboy style!" mindset.
LaMDA is probably more famous for convincing a Google employee that it was sentient and getting him fired. When I heard that story I could not believe anybody could be deceived to that extent... until I saw ChatGPT. In hindsight, it was probably the first ever case of what is now called "AI psychosis". (Which may be a valid reason Google did not want to release it.)
But really, it's leadership above, echoing your parent comment.
I just went through this exercise. I had to estimate the entirety of 2026 for a huge suite of products based on nothing but a title and a very short conversation about it. Of course none of these estimates make any sense in any way. But all of 2026 is gonna be decided on this. Sort of.
Now, if you just let us build shit as it comes up, by competent people - you know, the kind of thing I'd do if you just told me what was important and let me do shit (with both a team and various AI tooling we are allowed to use) - then we'd be able to build way more than if you made us estimate and then later commit to it.
It's way different if you make me commit to building feature X when I have zero idea if and how to make it possible, versus if you just tell me you need something that solves problem X and I get to figure it out as we go.
Case in point: In my "spare" time (some of which has been made possible by AI tooling) I've achieved more for our product in certain neglected areas than I ever would've achieved with years worth of accumulated arguing for team capacity. All in a few weeks.
I don't understand why everyone always likes to bitch that their preferred wordsmithed version of a layoff announcement didn't make it in. Layoffs suck, no question, but complaining that leadership didn't use the right words to do this generally shitty thing is pointless IMO. The words don't really matter much at that point anyway, only the actions (e.g. severance or a real possibility of joining another team).
My read of the announcement is basically saying they over-hired and had too many people causing a net hit to forward progress. Yeah, that sucks, but I don't find anything shocking or particularly poorly handled there.
You can't label others as a mere nuisance and simultaneously claim to respect them when faced with criticism.
I guess I was assuming (maybe wrongly) that you are an engineer/developer of some sort. All of that work sounds like manager work to me. Why is an IC dealing with all of that bureaucratic stuff? Doesn't it all ultimately need your manager's approval anyway?
A better example would be Calico, which faced significant struggles getting access to internal Google resources while also being very secretive and closed off (the term used was typically an "all-in bet" or an "all-out bet", or something in between). Verily just underwent a decoupling from Google because Alphabet wants to sell it.
I think if you really want to survive cycles of the innovator's dilemma, you make external orgs that still share lines of communications back to the mothership, maintaining partial ownership, and occasionally acquiring these external startups.
I work in Pharma and there's a common pattern of acquiring external companies and drugs to stay relevant. I've definitely seen multiple external acquisitions "transform" the company that acquires them, if for no other reason than the startup employees have a lot more gumption and solved problems the big org was struggling with.
https://www.goodreads.com/quotes/437536-many-men-of-course-b...
Look at the FDA, where it's notoriously bogged down in red tape, and the incentives slant heavily towards rejection. This makes getting pharmaceuticals out even more expensive, and raises the overall cost of healthcare.
It's too easy to say no, and people prioritize CYA over getting things done. The question then becomes how do you get people (and orgs by extension), to better handle risk, rather than opting for the safe option at every turn?
I have a lot of experience doing this sort of work (i.e., some product management, project management, customer/stakeholder relationships, vendor relationships, telling the industrial contractor where to cut a hole in the concrete for the fiber, changing out the RAM on a storage server in the data center, negotiating a multi-million-dollar contract with AWS, giving a presentation at re:Invent to get a discount on AWS, etc.) because really, my goal is to make things happen using all my talents.
I work with my manager: I keep him up to date on stuff, but if I feel strongly about things and document my thinking, I can generally move with a fair level of autonomy.
It's been that way throughout my career- although I would love to just sit around and work on code I think is useful, I've always had to carry out lots of extra tasks. Starting as a scientist, I had to deal with writing grants and networking at conferences more than I had time to sit around in the lab running experiments or writing code. Later, working as an IC in various companies, I always found that challenging things got done quicker if I just did them myself rather than depending on somebody else in my org to do it.
"Manager" means different things, btw. There's people managers, product managers, project managers, resource managers. Many of those roles are implemented by IC engineer/developers.
Really, this is "blockchain" all over again, but 10x worse.
Oh wow. Want to kill morale and ensure that in a few years anyone decent has moved on? Make a shiny new team of the future and put existing employees in "not the team of the future".
Any motivation I had to put in extra effort for things would evaporate. They want to keep the lights on? I'll do the same.
I've been on the other end of this, brought in to a company, for a team to replace an older technology stack, while the existing devs continued with what was labeled as legacy. There was a lot of bad vibe.
A small team is not only more efficient, but is overall more productive.
The 100-person team produces 100 widgets a day, and the 10-person team produces 200 widgets a day.
But, if the industry becomes filled with the knowledge of how to produce 200 widgets a day with 10 people, and there are also a lot of unemployed widget makers looking for work, and the infrastructure required to produce widgets costs approximately 0 dollars, then suddenly there is no moat for the big widget making companies.
Given that MSL is more product-oriented, let's see how it goes.
I think the reason why some people mistakenly think this makes healthcare more expensive is that over recent years the FDA has raised the quality bar on the clinical trials data they will accept. A couple decades ago they sometimes approved drugs based on studies that were frankly junk science. Now that standards have been raised, drug trials are generally some of the most rigorous, high-quality science you'll find anywhere in the world. Doing it right is necessarily expensive and time consuming but we can have pretty high confidence that the results are solid.
For patients who can't wait there is the Expanded Access (compassionate use) program.
https://www.fda.gov/news-events/public-health-focus/expanded...
What technology? Can you link to some evidence?
If we are serious about productivity, it helps to fire the managers. More often than not, this layer has to act in its own self-interest, which means maintaining large headcounts to justify its existence.
Crazy automation and productivity has been possible for like 50 years now. It's just that nobody wants it.
The death of languages like Perl, Lisp, and Prolog only proves this point.
Which ML-based products?
> It was convenient for Google that OpenAI acted as a first mover
That sounds like something execs would say to fend off critics: "We are #2 in AI, and that's all part of the plan"
Even internal to MS, I worked on 2 teams that were 95% independent from the mothership; on one of them (Microsoft Band) we even went to IKEA and bought our own desks.
Pretty successful in regards to getting a product to market (Band 1 and 2 all up had, IIRC, $50M in funding compared to the Apple Watch's billion), but the big-company politics still got us in the end.
Of course Xbox is the most famous example of MS pulling off an internal skunk works project leading to massive success.
Put another way, you need to have an answer to the question: why should I work towards optimizing the success of this business rather than another one's?
If there isn't a great answer to this, you'll have employees with no shared sense of direction and no motivation.