
492 points by Lionga | 85 comments
ceejayoz ◴[] No.45672187[source]
Because the AI works so well, or because it doesn't?

> ”By reducing the size of our team, fewer conversations will be required to make a decision, and each person will be more load-bearing and have more scope and impact,” Wang writes in a memo seen by Axios.

That's kinda wild. I'm kinda shocked they put it in writing.

replies(34): >>45672233 #>>45672238 #>>45672266 #>>45672367 #>>45672370 #>>45672398 #>>45672463 #>>45672519 #>>45672571 #>>45672592 #>>45672666 #>>45672709 #>>45672722 #>>45672855 #>>45672862 #>>45672949 #>>45673049 #>>45673060 #>>45673501 #>>45673549 #>>45673723 #>>45673795 #>>45674537 #>>45674817 #>>45674914 #>>45675187 #>>45675194 #>>45675426 #>>45675612 #>>45676161 #>>45676264 #>>45676418 #>>45676920 #>>45678165 #
1. dekhn ◴[] No.45673060[source]
I'm seeing a lot of frustration at the leadership level about product velocity- and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". There are definitely limits on this, but it's been helpful when dealing with various internal negotiations. I don't spend as much time looking to "align with stakeholders", I just go ahead and do things my decades of experience have taught me are the right paths (while also using my experience to know when I can't just push things through).

replies(8): >>45673157 #>>45673217 #>>45673223 #>>45673278 #>>45675276 #>>45675476 #>>45675842 #>>45678613 #
2. JTbane ◴[] No.45673157[source]
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action"

lol, that works well until a big issue occurs in production

replies(5): >>45673254 #>>45673369 #>>45674938 #>>45675164 #>>45675983 #
3. palmotea ◴[] No.45673217[source]
> My leadership is currently promoting "better to ask forgiveness", or put another way: "a bias towards action". ... I don't spend as much time looking to "align with stakeholders"...

Isn't that "move fast and break things" by another name?

replies(2): >>45673350 #>>45677231 #
4. hkt ◴[] No.45673254[source]
Many companies roll out to slices of production and monitor error rates. It's part of SRE practice, and I would eat my hat if that weren't the case here.
replies(2): >>45673366 #>>45673418 #
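A minimal sketch of that kind of sliced rollout. All names here are hypothetical; a real setup would query a metrics system for the error rate and reconfigure a load balancer for each traffic slice:

```python
import random

# Canary rollout: ramp traffic to the new release in slices, checking the
# observed error rate at each step and rolling back if it exceeds budget.
ROLLOUT_SLICES = [0.01, 0.05, 0.25, 1.0]  # fraction of traffic on new release
ERROR_BUDGET = 0.002                       # max tolerated error rate per slice

def observed_error_rate(fraction):
    """Stand-in for a real metrics query (e.g. errors/requests over 10 min)."""
    return random.uniform(0.0, 0.001)

def canary_rollout():
    for fraction in ROLLOUT_SLICES:
        # A real system would call something like set_traffic_fraction(fraction)
        # here, then wait for enough traffic to make the metric meaningful.
        rate = observed_error_rate(fraction)
        if rate > ERROR_BUDGET:
            # Regression detected: shift traffic back to the old release.
            return ("rolled_back", fraction)
    return ("complete", 1.0)

status, fraction = canary_rollout()
```

The point of the slice list is that a bad release burns only 1% of traffic before the first check, not 100%.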
5. malthaus ◴[] No.45673278[source]
... until reality catches up with a software engineer's inability to see outside the narrow engineering field of view. Most of what end-users care about gets neglected, millions if not billions are wasted, and leadership decides that checks and balances for the engineering team might be warranted after all: the velocity was there, but now you have an overengineered product nobody wants to pay for.
replies(2): >>45673314 #>>45675228 #
6. varjag ◴[] No.45673314[source]
There's little evidence that this is a common problem.
replies(2): >>45673557 #>>45677903 #
7. dekhn ◴[] No.45673350[source]
it's more "move fast on a good foundation, rarely break things, and keep a good team that can fix problems when they inevitably arise".
replies(2): >>45673456 #>>45677776 #
8. dekhn ◴[] No.45673366{3}[source]
Yes, I was SRE at Google (Ads) for several years and that influences my work today. SRE was the first time I was on an ops team that actually was completely empowered to push back against intrusive external changes.
9. Aperocky ◴[] No.45673369[source]
That assumes big issues don't otherwise occur in production, with everything having gone through 5 layers of approvals.
replies(1): >>45674366 #
10. crabbone ◴[] No.45673418{3}[source]
The big events that shatter everything to smithereens aren't that common or really dangerous: most of the time you can lose something, revert and move on from such an event.

The real unmitigated danger of unchecked pushes to production is the velocity with which this generates technical debt. Shipping something implicitly promises the user that the feature will live on for some time, and that removal will be gradual and may require a substitute or compensation. So if you keep shipping half-baked product over and over, you'll be drowning in features you wish you had never shipped, your support team will be overloaded, and eventually the product will become such a mess that developing it further is too expensive or just too difficult. Then you'll have to spend a lot of money and time doing it all over... and it's also possible you won't have that much money and time.

11. throwawayq3423 ◴[] No.45673456{3}[source]
That's not what move fast in a large org looks like in practice.
replies(2): >>45674814 #>>45675104 #
12. KaiserPro ◴[] No.45673557{3}[source]
There is at Meta.

User need is very much second to company priority metrics.

replies(1): >>45673810 #
13. tru3_power ◴[] No.45673810{4}[source]
I wouldn’t say this leads to a bias toward over-engineering so much as toward PSC optimizing (gaming Meta's performance review cycle).
14. treis ◴[] No.45674366{3}[source]
In that case at least 6 people are responsible so nobody is.
15. dekhn ◴[] No.45674814{4}[source]
Sometimes moving fast in a large org boils down to finding a succinct way to tell the lawyer "I understand what you're saying, but that's not consistent with my understanding of the legality of the issue, so I will proceed with my work. If you want to block my process, the escalation path is through my manager."

(I have more than once had to explain to a lawyer that their understanding was wrong, and that they were imposing unnecessary extra process)

replies(1): >>45674944 #
16. mgiampapa ◴[] No.45674938[source]
Have we learned nothing from Cambridge Analytica?
replies(1): >>45675285 #
17. SoftTalker ◴[] No.45674944{5}[source]
Raises the question though, why is the lawyer talking to you in the first place, and not your manager?
replies(3): >>45675134 #>>45675287 #>>45676421 #
18. xeromal ◴[] No.45675134{6}[source]
Isn't that the point of these layoffs? Less obfuscation and fewer games of telephone? More layers introduce inherent lag.
replies(1): >>45675864 #
19. itronitron ◴[] No.45675164[source]
I suppose that's a consequence of having to A/B test everything in order to develop a product
20. himeexcelanta ◴[] No.45675228[source]
You’re on the mark - this is the real challenge in software development. Not building software, but building software that actually accomplishes the business objective. Unless, of course, you’re just coding for reasons other than profit.
replies(1): >>45675515 #
21. noosphr ◴[] No.45675276[source]
Big tech is suffering from the incumbent's disease.

What worked well for extracting profits from stable cash cows doesn't work in fields that are moving rapidly.

Google et al. were at one point pinnacle technologies too, but this was 20 years ago. Everyone who knew how to work in that environment has moved on or moved up.

Were I the CEO of a company like that, I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative, however, is to fail for certain.

For example, Google is in the amazing position that its search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time its flagship product can be a search AI that uses those queries as citations for the answers people look for.

replies(9): >>45675751 #>>45675757 #>>45676217 #>>45676220 #>>45676332 #>>45676648 #>>45677426 #>>45678143 #>>45680082 #
22. munk-a ◴[] No.45675285{3}[source]
We learned not to publish as much information about contracts and to have huge networks of third party data sharing so that any actually concerning ones get buried in noise.
23. dekhn ◴[] No.45675287{6}[source]
Well, let's give a concrete example. I want to use an SaaS as part of my job. My manager knows this and supports it. In the process of me trying to sign up for the SaaS, I have to contact various groups in the company- the cost center folks to get an approval for spending the money to get the SaaS, the security folk to ensure we're not accidentally leaking IP to the outside world, the legal folks to make sure the contract negotiations go smoothly.

Why would the lawyer need to talk to my manager? I'm the person getting the job done, my manager is there to support me and to resolve conflicts in case of escalations. In the meantime, I'm going to explain patiently to the lawyer that the terms they are insisting on aren't necessary (I always listen carefully to what the lawyer says).

replies(2): >>45676033 #>>45677887 #
24. solid_fuel ◴[] No.45675476[source]
> pointed at internal gatekeepers who mainly seem to say no to product releases.

I've never observed Facebook to be conservative about shipping broken or harmful products, so the releases must be pretty bad if internal stakeholders are pushing back. I'm sure there will be no harmful consequences from leadership ignoring these internal warnings.

replies(1): >>45675618 #
25. sp4rki ◴[] No.45675515{3}[source]
I agree... but not at the engineering level.

This is, IMO, a leadership-level problem. You'll always (hopefully) have an engineering manager or staff-level engineer capable of keeping the dev team in check.

I say it's a leadership problem because "partnering with X", "getting Y to market first", and "Z fits our current... strategy" seem to take precedence over what customers really ask for and what engineering is suggesting actually works.

26. kridsdale1 ◴[] No.45675618[source]
When I worked there (7 years), the gatekeeper effect was real. It didn’t stop broken or harmful, but it did stop revenue neutral or revenue negative. Even if we had proven the product was positive to user wellbeing or brand-favorability.

Yes I’m still bitter.

replies(1): >>45676409 #
27. janalsncm ◴[] No.45675751[source]
Once you have a golden goose, the risk taking innovators who built the thing are replaced by risk averse managers who protect it. Not killing the golden goose becomes priority 1, 2, and 3.

I think this is the steelman of the “founder mode” conversation people were obsessed with a year ago. People obsessed with “process” are happy if nothing is accomplished because at least no policy was violated, ignoring the fact that policies were written by humans to serve the company’s goals.

replies(2): >>45677722 #>>45677985 #
28. nopurpose ◴[] No.45675757[source]
> Google et al. were at one point pinnacle technologies too, but this was 20 years ago.

In 2017 Google literally gave us the transformer architecture the entire current AI boom is based on.

replies(3): >>45675795 #>>45675948 #>>45676381 #
29. noosphr ◴[] No.45675795{3}[source]
And what did they do with it for the next five years?
replies(3): >>45675890 #>>45676005 #>>45676612 #
30. jongjong ◴[] No.45675842[source]
Makes sense. It's easier to be right by saying no, but that mindset costs great opportunities. People who are focused on managing their own careers can't innovate.

You can't innovate without taking career-ending risks. You need people who are confident to take career-ending risks repeatedly. There are people out there who do and keep winning. At least on the innovation/tech front. These people need to be in the driver seat.

replies(1): >>45675878 #
31. rhetocj23 ◴[] No.45675864{7}[source]
The real question is, how/why did they over-hire in the first place?
replies(1): >>45676202 #
32. rhetocj23 ◴[] No.45675878[source]
"You can't innovate without taking career-ending risks."

It's not the job of employees to bear this burden - if you have visionary leadership at the helm, they should be the ones absorbing this pressure. And that's what is missing.

The reality is folks like Zuck were never visionaries. Let's not derail the thread, but a) he stole the idea for Facebook, and b) the continued success of Meta comes from its numerous acquisitions and copying its competitors, not from organic product innovation. Zuckerberg and Musk share a lot more in common than either would like to admit.

replies(1): >>45679637 #
33. Marazan ◴[] No.45675890{4}[source]
Damn, those goal posts moved fast.
34. canpan ◴[] No.45675948{3}[source]
That reminds me a little of Kodak, which invented the digital camera.
35. ponector ◴[] No.45675983[source]
But then it also works out for them: managers can scapegoat the engineer who asked for forgiveness.

It's a total win for the management: they take credit if the initiative is successful but blame someone else for failure.

replies(1): >>45680061 #
36. seanmcdirmid ◴[] No.45676005{4}[source]
Used it to do things? This seems like a weird question. OpenAI took about the same amount of time to go big as well (Sam was excited about OpenAI in 2017, but it took 5+ years for it to pan out into something used by people).
replies(2): >>45677015 #>>45680206 #
37. chris_wot ◴[] No.45676033{7}[source]
So then the poor lawyer thinks "so why the hell did you ask me?"
38. andsoitis ◴[] No.45676202{8}[source]
> The real question is, how/why did they over-hire in the first place

This question has been answered many times. Time to move on and fix forward.

replies(1): >>45676226 #
39. tchalla ◴[] No.45676217[source]
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible.

Didn't Netflix do this when they went from DVDs to online streaming?

replies(1): >>45678963 #
40. Terr_ ◴[] No.45676220[source]
I seldom quote Steve Jobs, but: "If you don't cannibalize yourself, someone else will."
replies(1): >>45677283 #
41. rhetocj23 ◴[] No.45676226{9}[source]
I haven't seen a single answer that isn't surface-level stuff.
replies(1): >>45676335 #
42. bongodongobob ◴[] No.45676332[source]
Your intuition is right. I work at a big corp right now and the average age in the operations department is probably just under 50. That's not to say age is bad, however... these people have never worked anywhere else.

They are completely stuck in the 90s. Almost nothing is automated. Everyone clicks buttons on their grossly outdated tools.

Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.

You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt. No one will admit it because it (rightly) shows all of leadership is completely out of touch and is just trying their damnedest to coast to retirement.

The younger people that come into the org all leave within 1-2 years because no one will believe them when they (rightly) sound the whistle saying "what the fuck are we doing here?" "Oh, you're just young and don't know what working in a large org is like."

Meanwhile, infra continues to rot. There are systems in place that are complete mysteries. Servers whose functions are unknown. You want to try to figure it out? Ok, we can discuss 3 months from now and we'll railroad you in our planning meetings.

When it finally falls over, it's going to be breathtaking. All because the fixtures of the org won't admit that they haven't kept up on tech at all and have no desire to actually do their fucking job and lead change.

replies(3): >>45676433 #>>45676513 #>>45677325 #
43. andsoitis ◴[] No.45676335{10}[source]
Reasons given in the press over the last two years or so include aggressive growth projections, the availability of cheap capital, and the pandemic-driven surge in demand for online services.

But why do YOU care? Are you trying to learn so you can avoid such traps in a company you run? Maybe you're trying to understand because you’ve been affected? Or maybe some other reason?

44. HDThoreaun ◴[] No.45676381{3}[source]
and then sat on it for half a decade because they worried it would disrupt their search empire. Google's invention of transformers is a top-10 example of the innovator's dilemma.
45. HDThoreaun ◴[] No.45676409{3}[source]
Why would a business release a revenue-negative product? Stopping engineers from making products that don't contribute to the bottom line is exactly what these gatekeepers should be doing.
replies(2): >>45676626 #>>45679093 #
46. bongodongobob ◴[] No.45676421{6}[source]
A lot of times, they do. But where I'm at, lawyers have the last say for some reason. A good example is our sub/sister companies. Our lawyers told us that we needed separate physical servers for their fucking VMs and IAM. We have a fucking data center and they wanted us to buy new hardware.

We fought and tried to explain that what they were asking didn't even make sense, all of our data and IAM is already under the same M365 tenant and other various cloud services. We can't take that apart, it's just not possible.

They wouldn't listen and are completely incapable of understanding so we just said "ok, fine" and I was told to just ignore them.

The details were forgotten in the quagmire of meetings and paperwork, and the sun rose the next day in spite of our clueless 70+ year old legal team.

replies(1): >>45677307 #
47. seanmcdirmid ◴[] No.45676513{3}[source]
You know in the 90s we were saying the same thing:

> They are completely stuck in the 70s. Almost nothing is automated. Everyone types CLI commands into their grossly outdated tools

I'm sure 30 years from now kids will have the same complaints.

48. fooker ◴[] No.45676612{4}[source]
Well, there was this wild two year drama where they had people fight and smear each other over whether wasting energy for LLMs is ethical.

https://www.cnet.com/tech/tech-industry/google-ai-chief-says...

That made plenty of scientists and engineers at google avoid AI for a while.

49. fooker ◴[] No.45676626{4}[source]
Because you don't have perfect foresight.

Something that loses money now can be the next big thing. ChatGPT is the biggest recent example of this.

I had seen chatbot demos at Google as early as 2019.

50. conradev ◴[] No.45676648[source]
For “as insulated as possible”, I’d personally start a whole new corporate entity, like Verizon did with Visible.

It wholly owns Visible, and Visible is undercutting Verizon by being more efficient (similar to how Google Fi does it). I love the model – build a business to destroy your current one and keep all of the profits.

replies(1): >>45677041 #
51. keeda ◴[] No.45677015{5}[source]
I think the point is that they hoarded the technology for internal use instead of opening it up to the public, like OpenAI did with ChatGPT, thus kicking off the current AI revolution.

As sibling comments indicate, reasons may range from internal politics to innovator's dilemma. But the upshot is, even though the underlying technology was invented at Google, its inventors had to leave and join other companies to turn it into a publicly accessible innovation.

replies(1): >>45677078 #
52. edoceo ◴[] No.45677041{3}[source]
IIRC Intuit did that for QBO. Put a new team off-site and everything. The story I read is old (maybe was a business book) and my motivated searches gave nothing.

From what I remember it was also about splitting the financial reporting - so the upstart team isn't compared to the incumbent but to other early-stage teams. That lets them focus on the key metrics for their stage of the game.

53. seanmcdirmid ◴[] No.45677078{6}[source]
So I started at Google in 2020 (after Sam closed our lab down in 2017 to focus on OpenAI), and if they were hoarding it, I at least had no clue about it. To be clear, my perspective is still limited.
replies(2): >>45677275 #>>45677513 #
54. throw4rr2w3e ◴[] No.45677231[source]
Yup. And if this were a Chinese company, people would be calling it “chabuduo.”
replies(2): >>45677483 #>>45677501 #
55. woooooo ◴[] No.45677275{7}[source]
I think "hoarding" is the wrong connotation. They were happy to have it be a fun research project alongside alphago while they continued making money from ads.
56. FireBeyond ◴[] No.45677283{3}[source]
Which is amusing if you look at Apple's product lines: there are several decisions and examples across each with specs/features that are clearly about delineation and preventing cannibalization.
57. FireBeyond ◴[] No.45677325{3}[source]
> Meetings upon meetings upon meetings because we are so top heavy that if they weren't constantly in meetings, I honestly don't know what leadership would do all day.

Hah, at a previous employer (and we were only ~300 people), we went through three or four rounds of layoffs in the space of a year (and two were fairly sizeable), ending up with ~200. But the "leadership team" of about 12-15 always somehow found it necessary to have an offsite after each round to ... tell themselves that they'd made the right choice, and we were better positioned for success and whatever other BS. And there was never really any official posting about this on company Slack, etc. (I wonder why?) but some of the C-suite liked to post about them on their LI, and a lot of very nice locations, even international.

Just burning those VC bucks.

> You have to go through a change committee to do basic maintenance. Director levels gatekeep core tools and tech. Lower levels are blamed when projects faceplant because of decades of technical debt.

I had a "post-final round" "quick chat" with a CEO at another company. His first question (literally), as he multitasked coordinating some wine deliveries for Christmas, was "Your engineers come to you wanting to do a rewrite, mentioning tech debt. How do you respond?" Huh, that's an eye-opening question. Especially since I'm being hired as a PM...

58. nradov ◴[] No.45677426[source]
Setting up a separate insulated internal organization to pursue disruptive innovations is basically what Clayton Christensen recommended in "The Innovator's Dilemma" back in 1997. It's what IBM did to successfully develop the original PC.

https://www.hbs.edu/faculty/Pages/item.aspx?num=46

Every tech industry executive has read that book and most large companies have at least tried to put it into practice. For example, Google has "X" (the moonshot factory, not the social media platform formerly known as Twitter).

https://x.company/

replies(1): >>45677960 #
59. ◴[] No.45677483{3}[source]
60. nomel ◴[] No.45677501{3}[source]
I don't think that's the correct translation. Chabuduo is also the mindset of the guy that doesn't give a damn anymore, and just wants to produce the bare minimum.

Move fast and break things is more of an understanding that "rapid innovation" comes with rapid problems. It's not a "good enough" mindset, it's a "let's fuckin do this cowboy style!" mindset.

61. keeda ◴[] No.45677513{7}[source]
Fair enough, maybe a better way to put it is: why was the current AI boom sparked by ChatGPT and not something from Google? It's clear in retrospect that Google had similar capabilities in LaMDA, the precursor to Gemini. As I recall it was even announced a couple years before ChatGPT but wasn't released (as Bard?) until after ChatGPT.

LaMDA is probably more famous for convincing a Google employee that it was sentient and getting him fired. When I heard that story I could not believe anybody could be deceived to that extent... until I saw ChatGPT. In hindsight, it was probably the first ever case of what is now called "AI psychosis". (Which may be a valid reason Google did not want to release it.)

replies(1): >>45677999 #
62. tharkun__ ◴[] No.45677722{3}[source]
This but also: not the managers in the teams that build/"protect" it.

But really, leadership above, echoing your parents.

I just went through this exercise. I had to estimate the entirety of 2026 for a huge suite of products based on nothing but a title and a very short conversation about it. Of course none of these estimates make any sense in any way. But all of 2026 is gonna be decided on this. Sort of.

Now, if you just let us build shit as it comes up, by competent people - you know, the kind of things that I'd do if you just told me what was important and let me do shit (with both a team and various AI tooling we are allowed to use) then we'd be able to build way more than if you made us estimate and then later commit to it.

It's way different if you make me commit to building feature X when I have zero idea if and how to make it possible, versus if you just tell me you need something that solves problem X and I get to figure it out as we go.

Case in point: In my "spare" time (some of which has been made possible by AI tooling) I've achieved more for our product in certain neglected areas than I ever would've achieved with years worth of accumulated arguing for team capacity. All in a few weeks.

63. soraminazuki ◴[] No.45677776{3}[source]
That's the polar opposite of what "better to ask forgiveness," "bias towards action," or "I don't spend as much time looking to 'align with stakeholders'" mean. They, by definition, mean acting on your own agenda as quickly as possible before anyone else affected can voice their concerns. This is consistent with how Facebook has been behaving all along: from gathering images of female college students without consent to rate their appearance, to tricking teenagers into installing spyware VPNs to undermine competitors[1], and even promoting ragebait content that has contributed to societal destabilization, including exacerbating a massacre[2].

You can't label others as a mere nuisance and simultaneously claim to respect them when faced with criticism.

[1]: https://techcrunch.com/2019/02/21/facebook-removes-onavo/

[2]: https://www.theguardian.com/technology/2021/dec/06/rohingya-...

64. SoftTalker ◴[] No.45677887{7}[source]
> I have to contact various groups in the company- the cost center folks to get an approval for spending the money to get the SaaS, the security folk to ensure we're not accidentally leaking IP to the outside world, the legal folks to make sure the contract negotiations go smoothly.

I guess I was assuming (maybe wrongly) that you are an engineer/developer of some sort. All of that work sounds like manager work to me. Why is an IC dealing with all of that bureaucratic stuff? Doesn't it all ultimately need your manager's approval anyway?

replies(1): >>45678054 #
65. tomnipotent ◴[] No.45677903{3}[source]
Besides the graveyard of failed start-ups? There's plenty of evidence, just no strong conclusions.
replies(1): >>45679106 #
66. dekhn ◴[] No.45677960{3}[source]
but X isn't really an insulated org - it has close ties with other parts of Google. It shares the corporate infra, and it's not hard to get inside and poke around. It has to be, because it's intended to create new products that get commercialized through Google or other Alphabet companies.

A better example would be Calico, which faced significant struggles getting access to internal Google resources while also being very secretive and closed off (the term used was typically an "all-in bet" or an "all-out bet", or something in between). Verily just underwent a decoupling from Google because Alphabet wants to sell it.

I think if you really want to survive cycles of the innovator's dilemma, you make external orgs that still share lines of communication back to the mothership, maintain partial ownership, and occasionally acquire these external startups.

I work in Pharma and there's a common pattern of acquiring external companies and drugs to stay relevant. I've definitely seen multiple external acquisitions "transform" the company that acquires them, if for no other reason than the startup employees have a lot more gumption and solved problems the big org was struggling with.

replies(2): >>45678320 #>>45679149 #
67. esyir ◴[] No.45677985{3}[source]
Feels like this is the fundamental flaw with a lot of things not just in the private sector, but the public one too.

Look at the FDA, where it's notoriously bogged down in red tape, and the incentives slant heavily towards rejection. This makes getting pharmaceuticals out even more expensive, and raises the overall cost of healthcare.

It's too easy to say no, and people prioritize CYA over getting things done. The question then becomes how do you get people (and orgs by extension), to better handle risk, rather than opting for the safe option at every turn?

replies(2): >>45678368 #>>45678575 #
68. dekhn ◴[] No.45677999{8}[source]
Google had been burned badly in multiple previous launches of ML-based products and their leadership was extremely cautious about moving too quickly. It was convenient for Google that OpenAI acted as a first mover so that Google could enter the field after there was some level of cultural acceptance of the negative behaviors. There's a whole backstory where Noam Shazeer had come up with a bunch of nice innovations and wanted to launch them, but was only able to do so by leaving and launching through his startup- and then returned to Google, negotiating a phenomenal deal (Noam has been at Google for 25 years and has been doing various ML projects for much of that time).
replies(1): >>45678842 #
69. dekhn ◴[] No.45678054{8}[source]
I only started managing people recently (and still do some engineering and development, along with various project management) - my job title is "Senior Principal Machine Learning Engineer", so it's not really even a management track.

I have a lot of experience doing this sort of work (i.e., some product management, project management, customer/stakeholder relationships, vendor relationships, telling the industrial contractor where to cut a hole in the concrete for the fiber, changing out the RAM on a storage server in the data center, negotiating a multi-million-dollar contract with AWS, giving a presentation at re:Invent to get a discount on AWS, etc.) because really, my goal is to make things happen using all my talents.

I work with my manager- I keep him up to date on stuff, but if I feel strongly about things, and document my thinking, I can generally move with a fair level of autonomy.

It's been that way throughout my career- although I would love to just sit around and work on code I think is useful, I've always had to carry out lots of extra tasks. Starting as a scientist, I had to deal with writing grants and networking at conferences more than I had time to sit around in the lab running experiments or writing code. Later, working as an IC in various companies, I always found that challenging things got done quicker if I just did them myself rather than depending on somebody else in my org to do it.

"Manager" means different things, btw. There's people managers, product managers, project managers, resource managers. Many of those roles are implemented by IC engineer/developers.

70. munksbeer ◴[] No.45678143[source]
> Were I the CEO of a company like that I'd reduce headcount in the legacy orgs, transition them to maintenance mode, and start new orgs within the company that are as insulated from legacy as possible. This will not be an easy transition, and will probably fail. The alternative however is to definitely fail.

Oh wow. Want to kill morale and ensure that in a few years anyone decent has moved on? Make a shiny new team of the future and put existing employees in "not the team of the future".

Any motivation I had to put in extra effort for things would evaporate. They want to keep the lights on? I'll do the same.

I've been on the other end of this: brought into a company, onto a team meant to replace an older technology stack, while the existing devs continued with what was labeled legacy. There were a lot of bad vibes.

71. nradov ◴[] No.45678320{4}[source]
There are varying degrees of insulation. I'm not convinced that Calico is a good example of Christensen's recommendations. It seems like a vanity research project sponsored by a Google founder rather than an internal startup intended to bring a disruptive innovation to market.
72. nradov ◴[] No.45678368{4}[source]
You have a flawed understanding of the FDA pharmaceutical approval process. There is no bias towards either rejection or approval. If a drug application checks all the required boxes then it will be approved.

I think the reason why some people mistakenly think this makes healthcare more expensive is that over recent years the FDA has raised the quality bar on the clinical trials data they will accept. A couple decades ago they sometimes approved drugs based on studies that were frankly junk science. Now that standards have been raised, drug trials are generally some of the most rigorous, high-quality science you'll find anywhere in the world. Doing it right is necessarily expensive and time consuming but we can have pretty high confidence that the results are solid.

For patients who can't wait there is the Expanded Access (compassionate use) program.

https://www.fda.gov/news-events/public-health-focus/expanded...

73. janalsncm ◴[] No.45678575{4}[source]
I take your broader point but personally I feel like it’s ok if the FDA is cautious. The incentives that bias towards rejection may be “not killing people”.
replies(1): >>45680226 #
74. kamaal ◴[] No.45678613[source]
>>I'm seeing a lot of frustration at the leadership level about product velocity- and much of the frustration is pointed at internal gatekeepers who mainly seem to say no to product releases.

If we are serious about productivity, it helps to fire the managers. More often than not, this layer has to act in its own self-interest, which means maintaining large head counts to justify their existence.

Crazy automation and productivity have been possible for like 50 years now. It's just that nobody wants it.

The death of languages like Perl, Lisp and Prolog only proves this point.

75. Jyaif ◴[] No.45678842{9}[source]
> badly in multiple previous launches of ML-based products

Which ML-based products?

> It was convenient for Google that OpenAI acted as a first mover

That sounds like something execs would say to fend off critics. "We are #2 in AI, and that's all part of the plan"

76. creshal ◴[] No.45678963{3}[source]
Cisco, too. Whether or not you want to consider current Cisco a success model is... yeah
77. lII1lIlI11ll ◴[] No.45679093{4}[source]
Because due to that mindset now FB sucks and no one wants to use it anymore?
78. varjag ◴[] No.45679106{4}[source]
Did you look at the graveyard of failed start-ups and conclude they would have lived if they had enough non-coding overhead?
79. com2kid ◴[] No.45679149{4}[source]
MSFT were the masters of this technique (spin off a startup, acquire it after it proves viable) for decades, but sadly they stopped.

Even internally at MS, I worked on 2 teams that were 95% independent from the mothership; on one of them (Microsoft Band) we even went to IKEA and bought our own desks.

Pretty successful in terms of getting a product to market (Band 1 and 2 combined had, iirc, $50M in funding compared to the Apple Watch's billion), but big-company politics still got us in the end.

Of course Xbox is the most famous example of MS pulling off an internal skunk works project leading to massive success.

80. jongjong ◴[] No.45679637{3}[source]
If we want to maximize justice instead of corporate performance then we have to abolish the system, confiscate corporate wealth and redistribute it equally. That would probably be more just than what we have today... But the corporations would all collapse.

It depends what you want to optimize for.

replies(1): >>45681221 #
81. idrios ◴[] No.45680061{3}[source]
Which brings it full circle to engineers saying no to product releases after being burned too harshly by being scapegoated
82. jimbo_joe ◴[] No.45680082[source]
> For example Google is in the amazing position that it's search can become a commodity that prints a modest amount of money forever as the default search engine for LLM queries, while at the same time their flagship product can be a search AI that uses those queries as citations for answers people look for.

Search is not a commodity. Search providers other than Google are only marginally used because Google is so dominant. At the same time, when LLM companies can start providing a better solution to the actual job of finding answers to user queries, Google's dominance is disrupted and their future business is no longer guaranteed. Maintaining Google's search infra just to serve as a search backbone is not a big enough business for Google.

83. jimbo_joe ◴[] No.45680206{5}[source]
Pre-ChatGPT, OpenAI produced impressive RL results, but their pivot to transformers was not guaranteed. With all the internet's data, infinite money, and ~800x more people, Google's internal LLMs were meh at best, probably because innovators like Radford would constantly be snubbed by entrenched leaders (which almost happened at OpenAI).
84. DebtDeflation ◴[] No.45680226{5}[source]
What about the people who die because a safe and effective drug that could have saved their life got rejected? The problem is that there's a fundamental asymmetry here - those deaths are invisible but deaths from a bad drug that got approved are very visible.
85. danaris ◴[] No.45681221{4}[source]
That doesn't mean there isn't a middle ground, y'know—where company/division/organization leaders both advance ideas and take responsibility for them.