
The man who killed Google Search?

(www.wheresyoured.at)
1884 points | elorant | 5 comments
gregw134 ◴[] No.40136741[source]
Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued against the other search leads that Google should use less machine-learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left complexity exploded, with every team launching as many deep learning projects as they can (just like every other large tech company has).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues which often don't show up in the metrics, and which compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that has been reordering top results for 15% of queries since 2015. I handed it off when I left but have no idea whether anyone actually fixed it.
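A toy illustration of how an off-by-one like that can hide: the sketch below is hypothetical Python, not Google's code, and `rerank`, the scores, and the boosts are all invented. The bug applies each positional boost one slot late, so the ordering only changes for queries where the shifted boost happens to matter, which is exactly the kind of thing aggregate metrics can miss.

```python
# Hypothetical sketch: an off-by-one in a boost lookup silently
# reorders top results for some inputs. Not Google's actual code.

def rerank(scores, boosts, bug=False):
    """Return doc indices sorted by boosted score, descending.

    Intended: the doc at position i receives boosts[i]. With bug=True,
    the lookup is shifted by one, so each doc gets its neighbor's
    boost instead.
    """
    boosted = []
    for i, s in enumerate(scores):
        j = i + 1 if bug else i  # the off-by-one
        boosted.append(s + (boosts[j] if j < len(boosts) else 0.0))
    return sorted(range(len(scores)), key=lambda i: boosted[i], reverse=True)

base = [1.0, 0.9, 0.8]    # base relevance for three docs
boosts = [0.0, 0.3, 0.0]  # positional boost from some other signal

print(rerank(base, boosts))            # [1, 0, 2]: intended order
print(rerank(base, boosts, bug=True))  # [0, 1, 2]: the bug flips the top two
```

For inputs where the boosts are uniform, the buggy and intended orderings coincide, which is why such a bug can sit in 15% of queries without tripping any alarm.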

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch go check it out.

replies(11): >>40136833 #>>40136879 #>>40137570 #>>40137898 #>>40137957 #>>40138051 #>>40140388 #>>40140614 #>>40141596 #>>40146159 #>>40166064 #
banish-m4 ◴[] No.40137898[source]
Thanks for writing this insightful piece.

These are the pathologies of big companies that fail to break themselves up into smaller, non-siloed entities the way Virgin Group does. Maintaining the ways of a successful, growing startup, and fighting against politics, bureaucracy, fiefdoms, and burgeoning codebases, is difficult, but it is a better path than chasing short-term profits and massive codebases, succumbing to institutional inertia, and dealing with corporate bullshit that gets in the way of the customer experience and pushes out solid technical ICs and leaders.

I'm surprised there aren't more people on here who decide "F-it, MAANG megacorps are too risky, too backwards, and no longer representative of their roots" and form worker-owned co-ops to do what MAANGs do, only better, with long-term business sustainability, long tenure, startup-era employee perks, and a positive civil culture as their central mission.

replies(5): >>40138159 #>>40138551 #>>40139151 #>>40140147 #>>40140217 #
1. godelski ◴[] No.40139151[source]
What's odd to me is how metricized everything is. Over-metricization is clearly the downfall of any system that aims to look meritocratic, because metrics are limited, and they are often far easier to game than to satisfy through the intended means.

An example of this I see is how new leaders come in and cut costs hard. But the previous leader did this too (and the one before them), so the system/group/company is already fairly lean. To get anywhere near similar reductions or cost savings, they typically have to cut more than fat. And it's clear that many big corps aren't running with enough fat in the first place (you want some fat! You just don't want to be obese!). This seems to create a pattern that ends up being indistinguishable from "That worked! Let's not do that anymore."

replies(2): >>40140242 #>>40163330 #
2. jaynate ◴[] No.40140242[source]
Agreed that you have to mix the qualitative with the quantitative, but the best metrics systems don't measure just one quantity metric; each quantity metric should be paired with a quality metric.

Example: User Growth & Customer Engagement

You have to have both user growth and retention. If you looked at just one or the other, you'd be missing half the equation.
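A minimal sketch of what pairing the two might look like (illustrative Python; the function, period data, and user ids are all made up): report headline growth only alongside retention computed from the same activity sets.

```python
# Illustrative only: pair a quantity metric (growth) with a quality
# metric (retention) so neither is reported in isolation.

def growth_and_retention(prev_users, curr_users):
    """prev_users/curr_users: sets of user ids active in consecutive periods.

    Returns (growth, retention) as fractions of the previous period's users.
    """
    growth = (len(curr_users) - len(prev_users)) / len(prev_users)
    retention = len(prev_users & curr_users) / len(prev_users)
    return growth, retention

jan = {"a", "b", "c", "d"}
feb = {"a", "b", "e", "f", "g", "h"}
g, r = growth_and_retention(jan, feb)
print(f"growth {g:+.0%}, retention {r:.0%}")  # growth +50%, retention 50%
```

Here the headline "+50% growth" looks great on its own, but half the original users churned, which is the missing half of the equation.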

replies(2): >>40143399 #>>40163425 #
3. DanielHB ◴[] No.40143399[source]
I think a good portion of the problem is that there are three groups involved in metrics:

1) People setting the metrics

2) People implementing/calculating the metrics

3) People working on improving the metrics (ie product work)

2 is especially complicated for a lot of software products because the metrics can sometimes be really hard to measure and easy to tweak/manipulate. For example, the MAU figures from the Twitter buyout that Musk keeps complaining about, or Blizzard constantly switching their MAU definition.
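A toy illustration of how malleable a metric like MAU is (hypothetical Python; the event log and both definitions are invented): counting any event versus requiring activity on multiple distinct days produces very different headline numbers from the same data.

```python
# Illustrative only: two plausible "MAU" definitions over the same
# event log yield different headline numbers. All data is made up.

events = [  # (user_id, day_of_month) for one month
    ("u1", 1), ("u1", 2), ("u1", 9),
    ("u2", 3),
    ("u3", 5), ("u3", 20),
    ("u4", 7),
]

def mau_any_event(events):
    """Definition A: any event in the month makes a user 'active'."""
    return len({user for user, _ in events})

def mau_multi_day(events, min_days=2):
    """Definition B: 'active' means events on at least min_days distinct days."""
    days_by_user = {}
    for user, day in events:
        days_by_user.setdefault(user, set()).add(day)
    return sum(1 for days in days_by_user.values() if len(days) >= min_days)

print(mau_any_event(events))   # 4
print(mau_multi_day(events))   # 2 -- same data, half the "MAU"
```

Whoever controls which definition ships controls the number, which is the argument for separating groups 1 and 2 from the people graded on the result.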

Often 2 and 3 are the same people, and 1 is almost always upper management. I argue that 1 and 2 should be a single group of people that doesn't work on the product at all, is not directly subject to upper management, and is not tracked by the metrics it implements (or by any metrics at all).

4. banish-m4 ◴[] No.40163330[source]
Oh god. The blind faith in reductive, objectivist, rationalist meritocracy: that somehow "everything can be measured perfectly" and "whatever happens is completely unbiased, as prescribed by a black-and-white, mechanical formula". No, sorry, that's insufficiently holistic in accounting for intangibles and supportive effort, and it's a throwback ideology that should've died in the 1920s. Some degree of discretion is needed, because there is no shortcut to "measuring" performance.
5. banish-m4 ◴[] No.40163425[source]
Absurdity, unfairness, and failure often result from selective blindness to reality, whether willful or unintentional. Hyperlogical people sometimes lack empathy, struggle to conceive of or understand ambiguous situations, politics, biases, human factors, and nonfunctional requirements, or prefer to trivialize them. Always keep looking for your own and your organization's blind spots.