The man who killed Google Search?

(www.wheresyoured.at)
1884 points | elorant | 1 comment
gregw134 ◴[] No.40136741[source]
Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued against the other search leads that Google should use less machine-learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left complexity exploded, with every team launching as many deep learning projects as they can (just like every other large tech company has).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues which often don't show up in the metrics, and which compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that had been reordering top results for 15% of queries since 2015. I handed it off when I left but have no idea whether anyone actually fixed it.
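To make the failure mode concrete, here is a hypothetical sketch (not the actual formula or Google code; `rerank`, `scores`, and `boosts` are all invented for illustration) of how an off-by-one in a score-blending step can silently reorder results while every individual number still looks plausible:

```python
# Hypothetical reranker: each candidate's base score gets a
# position-dependent boost added before the final sort. An off-by-one
# in the boost lookup applies each slot's boost to the wrong document.

def rerank(scores, boosts, bug=False):
    """Return result indices ordered by score + positional boost."""
    def boosted(i):
        j = i + 1 if bug else i           # off-by-one: reads the next slot
        j = min(j, len(boosts) - 1)       # clamp to stay in bounds
        return scores[i] + boosts[j]
    return sorted(range(len(scores)), key=boosted, reverse=True)

scores = [10.0, 9.5, 9.0]
boosts = [0.0, 0.6, 0.0]   # intended to lift the second result slightly

print(rerank(scores, boosts))            # [1, 0, 2] -- boost promotes doc 1
print(rerank(scores, boosts, bug=True))  # [0, 1, 2] -- boost landed on doc 0
```

No exception is raised and no metric obviously breaks; the top results are simply in a different order than the formula's author intended, which is exactly the kind of bug that hides for years.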

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch go check it out.

1. baryphonic ◴[] No.40140614[source]
I'm glad you shared this.

My prior before reading this article was that an uncritical over-reliance on ML was responsible for the enshittification of Google Search (and Google as a whole). Google seemed to give ML models carte blanche, rather than using the 80-20 rule to handle the boring common cases while leaving the hard stuff to the humans.

I now think it's possible both explanations are true. After all, what better way to mask a product's descent into garbage than more and more of the core algorithm being a black box? Managers can easily take credit for its successes and blame the opacity for failures. After all, the "code yellow" was called in the first place because search growth was apparently stagnant. Why was that? We're the analysts manufacturing a crisis, or has search already declined to some extent?