SEO used to be extremely gameable (domain age, keyword stuffing, backlinks), but those levers aren't as obvious now, if they still work at all.
The format goes like this: "Lately people have been searching for XYZ, but is it safe to search for XYZ? What do experts say about XYZ? To find out, continue reading our article."
Then it's followed by a wall of text made of keywords (in sentences that don't make sense); if you're lucky, the opening hours (which are often inaccurate) appear somewhere down the page.
And it doesn't stop there. Even actual news articles are written for the consumption of the Google bot: the sentences often don't make sense and are repeated multiple times with synonyms swapped in, padding the piece into a lengthy article with no meat beyond the title.
I argue that the problem is not SEO experts with low ethics; the problem is how the business is structured. SEO experts don't do it for the sake of the art, they do it because they are paid to. They are paid because it has a positive ROI: it brings eyeballs, advertisers pay Google for eyeballs, and Google pays those who generate them.
Isn't it better for Google and everyone involved if you can't find what you're looking for? Continuing your search brings more eyeballs. It's not like you're going to switch to Bing, and you're not going to abandon the internet for a library either.
I've noticed a rise in that as well. For some searches, such spam is all I get. And I think this is a problem in every language Google supports.
There's even malware that infects websites and generates such content; I'm not sure what the point of that is. Does anyone know?
One day Google may introduce multiple search rankings, where one ranks the SEO stuff and another the "useful things". But I'm not holding my breath.
From personal experience, I switched to another tool (DDG) a couple of years ago. When I occasionally try Google, for 95% of common requests I'm appalled by the results: the top is only SEO garbage. For very specific and precise searches (where people are not trying to game the system), Google is still the best, though.
Good thing /etc/hosts has no size limit.
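In case anyone wants to do the same, here's a minimal sketch of such a blocklist (the domains are invented for illustration; note that /etc/hosts matches exact hostnames only, no wildcards):

    # /etc/hosts: route known SEO-spam domains to nowhere
    0.0.0.0  recipe-padding.example
    0.0.0.0  opening-hours-spam.example
    0.0.0.0  release-date-rumours.example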
What about trust-based systems? You choose whom you trust and get results they have vetted as not being SEO garbage, like trust rings. Where the system can't do it alone, user-centric feedback might work, and it would give interesting inputs beyond the standard metrics Google already uses.
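A rough sketch of what I have in mind (every name, weight, and URL below is made up; this is not a real API):

    # Hypothetical re-ranker: blend the engine's own score with
    # endorsements from the user's trust ring. Purely illustrative.

    def rerank(results, trust_ring, endorsements, alpha=0.5):
        # results: list of (url, engine_score in [0, 1])
        # trust_ring: {user: weight} chosen by this user
        # endorsements: {url: set of users who flagged it as not-spam}
        total = sum(trust_ring.values()) or 1.0

        def trust_score(url):
            voters = endorsements.get(url, set())
            return sum(trust_ring.get(u, 0.0) for u in voters) / total

        return sorted(results,
                      key=lambda r: alpha * r[1] + (1 - alpha) * trust_score(r[0]),
                      reverse=True)

    ring = {"alice": 1.0, "bob": 0.5}
    votes = {"https://good-howto.example/arm32": {"alice", "bob"}}
    hits = [("https://seo-farm.example/arm32", 0.9),
            ("https://good-howto.example/arm32", 0.7)]
    print(rerank(hits, ring, votes))  # the endorsed page comes out on top

The interesting design question is how far trust propagates: friends-of-friends with decaying weights starts to look like personalized PageRank.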
I changed the default search engine from Google to Bing and DDG in all browsers. Google does have better results, so sometimes I still need to use them. But for 90% of generic queries such as the weather, product information, or finding a company's website, Bing is good enough.
The result was astonishing: on the first page, most results were the same, differing only in order. For instance, Google's first result came up second on the first page of the company's search engine. Overall, the difference was mostly in the presentation, not in the results.
There was something spartan about Google's page UI that made it more credible and informative. At the time, for most people including academics, they were the good guys and we (the telcos) were the bad guys.
I guess academics' advice was very influential on the young adults who would shape the web in the following years.
I also guess France Telecom's erratic management had something to do with the demise of Voila.fr.
I suspect this is actually one of those fundamentally hard problems.
1. Old domain names bought solely for their old SEO rank.
2. Apps on mobile app stores are sold, and updates begin to include shady privacy-invading malware.
3. Old free software projects on various registries (npm etc.) are sold, with the same result as (2).
A recipe is ultimately a list of ingredients, concise instructions, and maybe a picture or two. It should be trivial to train a classifier to detect SEO padding in this context.
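A minimal sketch of such a classifier with scikit-learn, assuming a labeled corpus of pages (the two example pages and labels below are invented):

    # Toy classifier: concise recipe vs. SEO-padded page.
    # Real training data would be scraped and hand-labeled.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pages = [
        "Ingredients: flour, eggs, milk. Whisk, rest 30 min, fry thin.",
        "Pancakes have a long history. Is it safe to eat pancakes? "
        "What do experts say about pancakes? Continue reading to find out.",
    ]
    labels = [0, 1]  # 0 = concise recipe, 1 = SEO padding

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(pages, labels)
    print(model.predict(["Ingredients: rice, water. Simmer 12 minutes."]))

In practice you'd probably add structural signals too (prose length relative to the ingredient list, ad density), but even bag-of-words features would likely separate instructions from padding.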
I don't think Google really has an incentive to do this: SEO spam typically carries ads (often Google's own) along with Analytics or Google Tag Manager, all of which benefit Google, so prioritizing better results would work against their bottom line.
Otherwise, it really seems like a cat-and-mouse game. Another option may be to force SEO content to be indistinguishable from the best content. Is that the current goal?
Entertainment/news sites are chock full of pages like "<whatever>: what we know so far, release date, cast, will it be renewed, has it been cancelled...", pages that spend many paragraphs saying "we know nothing; plucking crap out of thin air, we could guess something-or-other, but that remains to be confirmed". A new news story, film, show, or even just a hint of something, and the pages go up to try to capture early clicks. Irritatingly, they are often not updated quickly when real information becomes available or changes (particularly over the last year, which has affected release dates). I have several sites DNS-blocked because that annoys me less than landing on one of these useless, out-of-date pages more often than not when I follow one of their links.
BTW, the news websites in question aren't doing it only for opening times but for any popular search phrase they can come up with. It would be such a shame if outlets like the BBC, WSJ and others adopted that kind of SEO.
Maybe it's just because I'm searching for technical stuff, but DDG and Google are both a big source of frustration for me.
DDG thinks I mistyped most of my queries and desperately tries to correct my "mistake", because surely nobody is really searching for documentation about ARM32 bootloaders; they must have meant a webshop that sells 32 different ARMchairs and ARMy boots.
Google understands my input at least half the time, but uses that power to show me websites that scrape articles/keywords and run GPT on them, or some great new Medium blog post consisting of two paragraphs: a copied Wikipedia summary of what ARM is and build instructions pasted from a GitHub README.
I've tried searching github.com itself, but that's just a nice way to find out that apparently most of the data they host is scraped websites, ML model inputs, or dictionaries, and that they will happily show me all 9K forks of the one repo with the highest density of these keywords.
/rant
The best is when the minus operator acts more like a plus or quotes: you exclude a term and get results that prominently contain it anyway.
A decade from now, Google will have made no improvement.
So, if Google altered their algorithm such that "recipe" content had to be shorter-form in order to perform better in SERPs, how would this change anything? The sites that profit from search traffic would be the ones with their fingers on the pulse of the algorithm, and the resources to instantly alter their content in order to ensure that they continued to rank for the terms that were driving traffic.
It would need an option to ignore any form of news media in search results.