
The man who killed Google Search?

(www.wheresyoured.at)
1884 points | elorant | 2 comments
gregw134 No.40136741
Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal, who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued, against the other search leads, that Google should use less machine learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left, complexity has exploded, with every team launching as many deep-learning projects as it can (just like every other large tech company).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues that often don't show up in the metrics and that compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that has been reordering top results for 15% of queries since 2015. I handed it off when I left, but I have no idea whether anyone actually fixed it.
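To see how an off-by-one in a ranking formula can quietly reorder top results without failing any metric, here is a minimal sketch. All names and the scoring scheme are invented for illustration; this is not Google's actual code.

```python
def rerank_top(results, scores, k):
    """Intended behavior: re-sort the first k results by an auxiliary score."""
    head = sorted(results[:k], key=lambda r: scores[r], reverse=True)
    return head + results[k:]

def rerank_top_buggy(results, scores, k):
    """Off-by-one: slices only k-1 items, so the k-th result is never
    considered for promotion and can stay mis-ranked indefinitely."""
    head = sorted(results[:k - 1], key=lambda r: scores[r], reverse=True)
    return head + results[k - 1:]

results = ["a", "b", "c", "d"]
scores = {"a": 0.2, "b": 0.9, "c": 0.95, "d": 0.1}

print(rerank_top(results, scores, 3))        # ['c', 'b', 'a', 'd']
print(rerank_top_buggy(results, scores, 3))  # ['b', 'a', 'c', 'd']
```

On many inputs the two functions agree, which is exactly why a bug like this can survive for years: it only surfaces on the fraction of queries where the boundary result should have been promoted.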

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch, go check it out.

replies(11): >>40136833, >>40136879, >>40137570, >>40137898, >>40137957, >>40138051, >>40140388, >>40140614, >>40141596, >>40146159, >>40166064
1. AlbertCory No.40136879
Amit was definitely against ML, long before "AI" had become a buzzword.
replies(1): >>40138025
2. mike_hearn No.40138025
He wasn't the only one. I built a couple of systems there that integrated with the accounts system, and "no ML" was an explicit upfront design decision. It was never regretted, and although I'm sure they've put ML in since, last I heard (as of a few years ago) the core was still pages and pages of hand-written logic.

I've got nothing against ML in principle, but if the model doesn't do the right thing, you can just end up stuck. It also often burns a lot of resources to learn something that was obvious to human domain experts anyway. Plus there are the understandability issues.
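The debuggability argument for hand-written logic can be illustrated with a small sketch. Every rule, threshold, and signal name below is invented; the point is only that each decision carries an explicit, traceable reason, so when an outcome looks wrong you know exactly which rule fired, something an opaque model score cannot tell you.

```python
def assess_login(attempt):
    """Rule-based risk check: returns (decision, reason) so every
    outcome is explainable to a human engineer. Rules are hypothetical."""
    if attempt["failed_attempts"] > 5:
        return ("challenge", "too many recent failures")
    if attempt["new_device"] and attempt["new_country"]:
        return ("challenge", "unfamiliar device and location")
    if attempt["new_country"]:
        return ("notify", "unfamiliar location")
    return ("allow", "no risk signals")

print(assess_login({"failed_attempts": 0,
                    "new_device": True,
                    "new_country": True}))
# ('challenge', 'unfamiliar device and location')
```

If this policy misfires, the fix is a one-line edit to a named rule; with a learned model, the equivalent fix means retraining and hoping the behavior changes in the way you intended.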