
The man who killed Google Search?

(www.wheresyoured.at)
1884 points by elorant | 8 comments
gregw134 ◴[] No.40136741[source]
Ex-Google search engineer here (2019-2023). I know a lot of the veteran engineers were upset when Ben Gomes got shunted off. Probably the bigger change, from what I've heard, was losing Amit Singhal who led Search until 2016. Amit fought against creeping complexity. There is a semi-famous internal document he wrote where he argued against the other search leads that Google should use less machine-learning, or at least contain it as much as possible, so that ranking stays debuggable and understandable by human search engineers. My impression is that since he left complexity exploded, with every team launching as many deep learning projects as they can (just like every other large tech company has).

The problem, though, is that the older systems had obvious problems, while the newer systems have hidden bugs and conceptual issues which often don't show up in the metrics, and which compound over time as more complexity is layered on. For example: I found an off-by-one error deep in a formula from an old launch that has been reordering top results for 15% of queries since 2015. I handed it off when I left but have no idea whether anyone actually fixed it or not.
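
A purely hypothetical sketch (not Google's code, just an illustration of the bug class): an off-by-one in a reranking step can silently shuffle top results without raising any error.

    # Hypothetical reranker: boosts are keyed from 1, but the buggy loop
    # enumerates from 0, so each result picks up its neighbour's boost.
    def rerank_top_n(results, boosts, n=10):
        top, rest = results[:n], results[n:]
        rescored = [(score + boosts.get(i, 0.0), doc)
                    for i, (score, doc) in enumerate(top)]  # should be enumerate(top, start=1)
        rescored.sort(reverse=True)
        return rescored + rest

    results = [(0.90, "a"), (0.88, "b"), (0.87, "c")]
    boosts = {1: 0.0, 2: 0.05, 3: 0.0}
    # The boost meant for "b" lands on "c", so "c" jumps to the top;
    # nothing crashes, and most aggregate metrics won't flag it.
    print(rerank_top_n(results, boosts, n=3))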

I wrote up all of the search bugs I was aware of in an internal document called "second page navboost", so if anyone working on search at Google reads this and needs a launch, go check it out.

replies(11): >>40136833 #>>40136879 #>>40137570 #>>40137898 #>>40137957 #>>40138051 #>>40140388 #>>40140614 #>>40141596 #>>40146159 #>>40166064 #
JohnFen ◴[] No.40136833[source]
> where he argued against the other search leads that Google should use less machine-learning

This echoes my personal experience with the decline of Google search better than TFA does: it seems connected to the increasing use of ML, in that the more of it Google put in, the worse my results got.

replies(3): >>40137620 #>>40137737 #>>40137885 #
jokoon ◴[] No.40137885[source]
that's not something ML people would like to hear
replies(2): >>40137911 #>>40144802 #
oblio ◴[] No.40137911[source]
Is ML the new SOAP? It looks like a silver bullet, and five years later you're drowning in complexity for no discernible reason.
replies(5): >>40137975 #>>40137976 #>>40138686 #>>40139546 #>>40141708 #
1. ajross ◴[] No.40139546[source]
So... obviously SOAP was dumb[1], and lots of people saw that at the time. But SOAP was dumb in obvious ways, and it failed for obvious reasons, and really no one was surprised at all.

ML isn't like that. It's new. It's different. It may not succeed in the ways we expect; it may even look dumb in hindsight. But it absolutely represents a genuinely new paradigm for computing and is worth studying and understanding on that basis. We look back to SOAP and see something that might as well be forgotten. We'll never look back to the dawn of AI and forget what it was about.

[1] For anyone who missed that particular long-sunken boat, SOAP was an RPC protocol like any other. Yes, that's really all it was. It did nothing special, or well, or that you couldn't do via trivially accessible alternative means. All it had was the right adjective ("XML" in this case) for the moment. It's otherwise forgettable, and forgotten.
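
To make the footnote concrete, a hedged sketch: a SOAP call really was just an HTTP POST with an XML envelope wrapped around a method name and its arguments. The endpoint, method, and namespace below are made up.

    import requests

    # Hypothetical SOAP request: XML wrapping an ordinary RPC call.
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.com/stocks">
          <symbol>GOOG</symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    resp = requests.post(
        "http://example.com/soap/endpoint",
        data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/stocks/GetQuote"},
    )
    print(resp.text)  # the reply is another XML envelope wrapping the return value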

replies(3): >>40140801 #>>40144845 #>>40147759 #
2. tensor ◴[] No.40140801[source]
ML has already succeeded to the point that it is ubiquitous and taken for granted. OCR, voice recognition, spam filters, and many other now boring technologies are all based on ML.

Anyone claiming it’s some sort of snake oil shouldn’t be taken seriously. Certainly the current hype around it has given rise to many inappropriate applications, but it’s a wildly successful and ubiquitous technology class that has no replacement.

replies(3): >>40141824 #>>40142095 #>>40144116 #
3. yen223 ◴[] No.40141824[source]
Thank you for this.

Reading these comments, I thought I had stepped into some alternate timeline where we don't already have widespread ML all over the place.

Like, nobody has done rules-based image recognition for a decade now!

4. oblio ◴[] No.40142095[source]
That ML I have no problem with.

This new ML that's supposed to be the basis for an entire new economic wave, that I mostly dislike.

But I guess that's how we build new things... We explore and throw away 80% of what we've built.

5. Nullabillity ◴[] No.40144116[source]
Call me back when you have voice recognition that doesn't constantly fail spectacularly.
replies(1): >>40144538 #
6. tensor ◴[] No.40144538{3}[source]
Voice recognition will never be rule-based.
7. ◴[] No.40144845[source]
8. x0x0 ◴[] No.40147759[source]
Yeah, I'm staring at my use of ChatGPT to write a 50-line Python program that connected to a local SQLite db and ran a query; for each element returned, made an API call or ran a query against a remote Postgres db; depending on the results of that API call, made another API call; saved the results to a file; and presented the results in a table.

ChatGPT generated the entirety of the above, with me tweaking one line of code and putting the creds in. I could have written all of it myself, but it probably would have taken 20-30 minutes. With ChatGPT I banged it out in under a minute, helped a colleague out, and went on my way.
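
For the curious, a rough sketch of the kind of script described above; the db paths, table names, API endpoints, and credentials are all placeholders, not the actual program.

    import sqlite3
    import requests
    import psycopg2
    from tabulate import tabulate

    LOCAL_DB = "local.db"                                            # placeholder
    PG_DSN = "host=pg.example.com dbname=app user=me password=REDACTED"  # placeholder
    API = "https://api.example.com"                                  # placeholder

    rows_out = []

    # 1. Query the local SQLite db.
    local = sqlite3.connect(LOCAL_DB)
    items = local.execute("SELECT id, name FROM items").fetchall()

    pg = psycopg2.connect(PG_DSN)

    for item_id, name in items:
        # 2. For each row, hit the API, or fall back to the remote Postgres db.
        resp = requests.get(f"{API}/items/{item_id}", timeout=10)
        if resp.ok:
            status = resp.json().get("status")
        else:
            with pg.cursor() as cur:
                cur.execute("SELECT status FROM items WHERE id = %s", (item_id,))
                row = cur.fetchone()
                status = row[0] if row else None

        # 3. Depending on that result, make a follow-up API call.
        detail = None
        if status == "active":
            detail = requests.get(f"{API}/items/{item_id}/detail", timeout=10).json()

        rows_out.append((item_id, name, status, detail))

    # 4. Save the results to a file and print them as a table.
    with open("results.tsv", "w") as f:
        for row in rows_out:
            f.write("\t".join(str(x) for x in row) + "\n")

    print(tabulate(rows_out, headers=["id", "name", "status", "detail"]))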

ChatGPT absolutely is a real advancement. Before they released GPT-4, there was no tech in the world that could do what it did.