
1743 points by caspii | 4 comments
ilamont ◴[] No.27428272[source]
Same story for various Wordpress plugins and widgety things that live in site footers.

Google has turned into a cesspool. Half the time I find myself having to do ridiculous search contortions to get somewhat useful results: appending site:.edu or site:.gov to search strings, restricting by time period to weed out new "articles" that have been SEOed to the hilt, or excluding Yelp and other chronic abusers that hijack local business results.
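
Concretely, the contortions look something like this (the operators are standard Google syntax; the queries themselves are just examples):

    python packaging tutorial site:.edu
    marathon training plan before:2019
    best pizza portland -site:yelp.com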

replies(19): >>27428410 #>>27428439 #>>27428441 #>>27428466 #>>27428594 #>>27428652 #>>27428717 #>>27428807 #>>27429076 #>>27429483 #>>27429797 #>>27429818 #>>27429843 #>>27429859 #>>27430023 #>>27430207 #>>27430285 #>>27430707 #>>27430783 #
elchupanebre ◴[] No.27430207[source]
The reason for that is actually rational: when Amit Singhal was in charge, the search rules were written by hand. Once he was fired, the Search Quality team switched to machine learning. The ML was better in many ways: it produced higher-quality results with a lot less effort. It just had one possibly fatal flaw: if some result was wrong, there was no recourse. And that's what you are observing now: search quality is good or excellent most of the time, but sometimes it's very bad and Google can't fix it.
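
A rough sketch of that difference, with made-up code rather than anything from Google's actual stack: with hand-written rules, a bad result traces back to a rule someone can edit; with a learned scorer, there is no single knob to turn.

    # Hypothetical sketch, not Google's actual code: hand-written
    # ranking rules versus an opaque learned scorer.

    KNOWN_SPAM_DOMAINS = {"spam-example.com"}  # a human can edit this set

    def rule_based_score(doc: dict, query: str) -> float:
        """Every score component traces back to a rule someone can edit."""
        score = 0.0
        if query.lower() in doc["title"].lower():
            score += 2.0   # rule: query appears in the title
        if doc["domain"].endswith((".edu", ".gov")):
            score += 1.0   # rule: boost authoritative TLDs
        if doc["domain"] in KNOWN_SPAM_DOMAINS:
            score -= 10.0  # rule: demote known abusers by hand
        return score

    def extract_features(doc: dict, query: str) -> list:
        # Stand-in feature extractor for the sketch.
        return [len(doc["title"]), float(query.lower() in doc["title"].lower())]

    def ml_score(doc: dict, query: str, model) -> float:
        """The score comes out of an opaque model: if one result is
        wrong, there is no rule to edit, only retraining and hoping."""
        return model.predict([extract_features(doc, query)])[0]
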
replies(5): >>27430295 #>>27430301 #>>27430306 #>>27430308 #>>27430753 #
robbrown451 ◴[] No.27430295[source]
I wouldn't call that rational. There is no reason you can't apply human weighting on top of ML.

Honestly, I don't believe for a minute that they "can't fix it." They do this sort of thing all the time; for instance, when ML showed dark-skinned people for a search for "gorilla", they obviously had recourse.
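
For what it's worth, "human weighting on top of ML" could be as simple as an override table consulted after the model scores. The names and entries below are purely illustrative, not anything Google has documented:

    # Illustrative sketch only: a hand-maintained override table
    # applied on top of whatever score the ML model produces.

    HUMAN_OVERRIDES = {
        # (query, domain) -> score adjustment; entries added by humans
        ("gorilla", "bad-results.example.com"): -100.0,
        ("pizza near me", "yelp.com"): -5.0,  # demote a chronic hijacker
    }

    def final_score(query: str, doc: dict, ml_score: float) -> float:
        """Human weighting on top of ML: the model proposes, the table disposes."""
        return ml_score + HUMAN_OVERRIDES.get((query, doc["domain"]), 0.0)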

replies(1): >>27430378 #
1. htrp ◴[] No.27430378[source]
You do know that Google basically slapped a patch on that one, right?

https://www.theverge.com/2018/1/12/16882408/google-racist-go...

replies(2): >>27430763 #>>27438366 #
2. brigandish ◴[] No.27430763[source]
I’m confused. I read that article and it has this:

> But, as a new report from Wired shows, nearly three years on and Google hasn’t really fixed anything. The company has simply blocked its image recognition algorithms from identifying gorillas altogether — preferring, presumably, to limit the service rather than risk another miscategorization.

Is that not an example of human intervention in ML?

3. robbrown451 ◴[] No.27438366[source]
Yes, but then they fixed it, right?
replies(1): >>27441012 #
4. htrp ◴[] No.27441012[source]
Fixing it right would be re-training the ML algo.... they basically told the algo to never ID anything as a gorilla (even actual gorillas)