
The man who killed Google Search?

(www.wheresyoured.at)
1884 points elorant | 2 comments | | HN request time: 0.627s | source
gregw134 ◴[] No.40136741[source]
Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued against the other search leads that Google should use less machine-learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left complexity exploded, with every team launching as many deep learning projects as they can (just like every other large tech company has).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues which often don't show up in the metrics, and which compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that has been reordering top results for 15% of queries since 2015. I handed it off when I left but have no idea whether anyone actually fixed it or not.
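To make the failure mode concrete, here is a minimal hypothetical sketch (nothing like Google's actual ranking code; all names are invented) of how a single off-by-one index in a reranking formula can silently reorder top results while every individual component still "works":

```python
# Hypothetical sketch of an off-by-one bug in a reranking step.
# `boost_scores[i]` is meant to be the boost for `results[i]`.

def rerank(results, boost_scores):
    """Reorder results by a per-result boost score."""
    reranked = []
    for i, doc in enumerate(results):
        # BUG (off-by-one): reads the *next* result's boost, shifting every
        # boost by one position. Intended: boost_scores[i].
        boost = boost_scores[i + 1] if i + 1 < len(boost_scores) else 0.0
        reranked.append((doc, boost))
    reranked.sort(key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in reranked]

docs = ["a", "b", "c"]
boosts = [0.9, 0.1, 0.5]  # "a" has the highest boost and should rank first
print(rerank(docs, boosts))  # prints ['b', 'a', 'c'] -- "b" wrongly promoted
```

Nothing crashes and no metric obviously flags it; the results are merely wrong in a way only a human reading the formula would catch, which is exactly the debuggability argument above.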

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch go check it out.

replies(11): >>40136833 #>>40136879 #>>40137570 #>>40137898 #>>40137957 #>>40138051 #>>40140388 #>>40140614 #>>40141596 #>>40146159 #>>40166064 #
JohnFen ◴[] No.40136833[source]
> where he argued against the other search leads that Google should use less machine-learning

This echoes my personal experience with the decline of Google search better than TFA does: the decline seems connected to the increasing use of ML, in that the more of it Google put in, the worse my results got.

replies(3): >>40137620 #>>40137737 #>>40137885 #
potatolicious ◴[] No.40137620[source]
It's also a good lesson for the new AI cycle we're in now. Often inserting ML subsystems into your broader system just makes it go from "deterministically but fixably bad" to "mysteriously and unfixably bad".
replies(5): >>40137968 #>>40138119 #>>40138995 #>>40139020 #>>40147693 #
munk-a ◴[] No.40138119[source]
I think - I hope, rather - that technically minded people who are advocating for the use of ML understand the shortcomings and hallucinations... but we need to be frank about the fact that the business layer above us (with a few rare exceptions) absolutely does not understand the limitations of AI and views it as a magic box where they type in "Write me a story about a bunny" and get twelve paragraphs of text out. As someone working in a healthcare-adjacent field I've seen the glint in executives' eyes when talking about AI, and it can provide real benefits in data summarization and annotation assistance... but there are limits to what you should trust it with, and if it's something big-I Important then you'll always want to have a human vetting step.
replies(4): >>40138577 #>>40138723 #>>40138897 #>>40139084 #
munificent ◴[] No.40138723[source]
> I hope, rather - that technically minded people who are advocating for the use of ML understand the shortcomings and hallucinations.

The people I see who are most excited about ML are business types who just see it as a black box that makes stock valuation go vroom.

The people who deeply love building things, who really enjoy the process of making itself, are profoundly sceptical.

I look at generative AI as sort of like an army of free interns. If your idea of a fun way to make a thing is to dictate orders to a horde of well-meaning but untrained, highly caffeinated interns, then using generative AI to make your thing is probably thrilling. You get to feel like an executive producer who can make a lot of stuff happen by simply prompting someone/something to do your bidding.

But if you actually care about the grit and texture of actual creation, then that workflow isn't exactly appealing.

replies(2): >>40138898 #>>40139496 #
fragmede ◴[] No.40139496[source]
We get it, you're skeptical of the current hype bubble. But that's one helluva no true Scotsman you've got going on there. Because a true builder, one that deeply loves building things wouldn't want to use text to create an image. Anyone who does is a business type or an executive producer. A true builder wouldn't think about what they want to do in such nasty thing as words. Creation comes from the soul, which we all know machines, and business people, don't have.

Using English, instead of C, to get a computer to do something doesn't turn you into a bureaucrat any more than using Python or JavaScript instead does.

Only a person that truly loves building things, far deeper than you'll ever know, someone that's never programmed in a compiled language, would get that.

replies(4): >>40139565 #>>40139626 #>>40140078 #>>40140255 #
1. xarope ◴[] No.40140078[source]
Using English has been tried many times in the history of computing; COBOL and SQL, to name just a couple.

We still needed domain experts back then and, IMHO, will for years/decades to come.
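The SQL case illustrates the point well: a query can read like plain English and still trip up a non-expert. A small sketch (using Python's built-in sqlite3; the table and data are invented for illustration) of SQL's three-valued NULL logic surprising a naive reading:

```python
# SQL reads like English, but correct use still needs domain expertise.
# Here a seemingly obvious query silently drops a row because of NULL logic.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, country TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ann", "US"), ("bob", "DE"), ("cho", None)])

# "Every user not in the US" -- a plain-English reading expects bob AND cho.
# But NULL <> 'US' evaluates to UNKNOWN, not TRUE, so cho is filtered out.
rows = conn.execute("SELECT name FROM users WHERE country <> 'US'").fetchall()
print(rows)  # prints [('bob',)] -- cho silently missing
```

The syntax is English-like, but knowing *why* cho disappears is exactly the kind of domain expertise that never went away.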

replies(1): >>40140262 #
2. WWLink ◴[] No.40140262[source]
Or you can draw pretty pictures in LabVIEW lol