> Is that WP Core or a result of plugins?
A combination of all of it ... Take into account that it's been 8 years since I last worked in PHP and WordPress, so maybe things have improved, but I doubt it, as some issues are structural.
* PHP is a fire-and-forget programming language. Every request starts from scratch, with no data persisting between requests (unless you offload to an external cache server). The result is that the whole page gets re-rendered by PHP on every request.
* Then we have WP core, which is not exactly shy about its database calls. The way it stores data in a key/value system really hurts performance. Remember what I said above about PHP: if your theme is heavy, the language has to redo all those calls on every single request.
* Followed by ... plugins that are, let's just say, not always optimally written. Plugins are often the main reason you see so many leaked databases on the internet.
The issue with WP is that its design is roughly 25 years old. It gained most of its popularity because it was free and you were able to extend it with plugins. But it's that same plugin system that made it harder for the WP developers to really tackle the performance issues, as breaking a ton of plugins often means losing market share.
The main reason WP has survived the growth in web traffic is that PHP itself got roughly 3x faster over the years, combined with server hardware getting faster and faster. It also helped that cache plugins exist for WP.
But now, as you have noticed, when you have a ton of passive or aggressive scrapers hitting WP websites, the cache plugins that have been the main protection layer keeping WP sites functional cannot handle it. Scrapers hit every page, even pages that are unpopular/archived/... and would normally never get cached. Because you're getting hit on exactly those uncached pages, the fundamental weakness of WP shows.
The only way you can even slightly deal with this kind of behavior (beyond just blocking scrapers) is by increasing your database memory limits by a lot, so you're not constantly swapping; increasing how many pages your WP cache plugin holds, so more stays in memory; and probably also increasing the number of PHP workers your server can run, more DB connections, ...
But that assumes you control your WP hosting environment. And the companies that host 100,000 or millions of sites are not exactly motivated to throw tons of money at the problem. They prefer that you "upgrade" to more expensive packages that only partially mitigate the issue.
In general, everybody is f___ed ... The amount of data scraping is only going to get worse.
Especially now that LLMs have tool usage, as in, they can search the internet for information themselves. This is going to result in tens of millions of requests from LLMs. Somebody asking for a cookie recipe may trigger dozens of page hits in a second, whereas a normal user in the past first did a Google search (which hits Google's cache) and only then opened a page, ... not what they wanted, went back, tried somewhere else. What used to be 10 requests spread over multiple sites and a 5-10 minute time frame is now going to be dozens of parallel requests per second.
LLMs are great search engines, but as the tech moves onto consumer-level hardware, you're only going to see this getting worse.
The solution is a fundamental rework of a lot of websites. One of the main reasons I switched away from PHP years ago, and eventually settled on Go, was that even back then we were already hitting these limits. It's one of the reasons Facebook made Hack (PHP with persistence and other optimizations). Rendering complete pages on every request is just giving away performance. Not being able to cache data in-process, ... you get the point.
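To make the "cache data in-process" point concrete: in a long-running Go server a handler can keep data in memory between requests, which PHP's per-request model cannot do without an external cache. A minimal sketch, with made-up names and a 30-second TTL just for illustration:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

// inMemoryCache survives between requests because the Go process stays alive,
// unlike PHP where every request starts with empty state.
type inMemoryCache struct {
	mu      sync.RWMutex
	value   string
	expires time.Time
}

func (c *inMemoryCache) get(build func() string, ttl time.Duration) string {
	c.mu.RLock()
	if time.Now().Before(c.expires) {
		v := c.value
		c.mu.RUnlock()
		return v // served straight from process memory: no DB calls, no re-render
	}
	c.mu.RUnlock()

	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Now().Before(c.expires) { // another request may have refreshed it already
		return c.value
	}
	c.value = build() // the expensive work (DB queries, templating) runs once per TTL
	c.expires = time.Now().Add(ttl)
	return c.value
}

func main() {
	cache := &inMemoryCache{}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		page := cache.get(func() string {
			// imagine the DB queries + template rendering a CMS would do here
			return fmt.Sprintf("rendered at %s", time.Now().Format(time.RFC3339))
		}, 30*time.Second)
		fmt.Fprint(w, page)
	})
	http.ListenAndServe(":8080", nil)
}
```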
> This is actually an interesting question, I do wonder if WP users are over-represented in these complaints and if there's a potential solution there. If AI scrapers can be detected, you can serve them content that's cached for much longer because I doubt either party cares for temporally-sensitive content (like flash sales).
The issue is not cached content; it's that they go for all the data in your database. They do not care if your articles are from 1999.
The only way you can really solve this is by having API endpoints on every website, where scrapers can feed on your database data directly (so you avoid rendering complete pages), AND where they can poll something like /api/articles/latest-changed.
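A rough sketch of what such an endpoint could look like. The route, field names, and `since` parameter are placeholders of my own, not an existing standard; the point is that one cheap indexed query replaces rendering full pages:

```go
package main

import (
	"encoding/json"
	"net/http"
	"time"
)

// Article is a hypothetical minimal payload: enough for a scraper to decide
// what changed, without the site rendering a full HTML page.
type Article struct {
	ID        int       `json:"id"`
	Title     string    `json:"title"`
	URL       string    `json:"url"`
	UpdatedAt time.Time `json:"updated_at"`
}

func latestChangedHandler(w http.ResponseWriter, r *http.Request) {
	// Optional ?since=RFC3339 parameter so a scraper only pulls what is new.
	since, _ := time.Parse(time.RFC3339, r.URL.Query().Get("since"))

	// In a real site this would be one cheap indexed query along the lines of:
	//   SELECT id, title, url, updated_at FROM articles
	//   WHERE updated_at > ? ORDER BY updated_at DESC LIMIT 100
	articles := fetchChangedSince(since)

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(articles)
}

// fetchChangedSince is a stub; wire it up to your actual data store.
func fetchChangedSince(since time.Time) []Article {
	return []Article{}
}

func main() {
	http.HandleFunc("/api/articles/latest-changed", latestChangedHandler)
	http.ListenAndServe(":8080", nil)
}
```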
And that assumes this gets standardized across the industry. Because if it's not, it's just easier for scrapers to go after all the pages anyway.
FYI: I wrote my own scraper in Go. On a dual-core VPS that costs 3 euro a month, it can do 10,000 scrapes per second (we are talking direct HTTP scrapes, not driving a browser to deal with JS detection).
Now, do you want to guess the resource usage on your WP server if I let it run wild? ;) You're probably going to spend 10 to 50x more money just to feed my scraper without me taking your website down.
Now, do I do 10,000 requests per second? No ... Because even 1 req/s per website is still 86,400 page hits per day. And because I combined this with looking for pages like "latest xxxx" on each site and caching that content, I knew I only needed to scrape X new pages every 24h. So it took me a month or three to scrape some big websites, and after that you barely even see me, as I am only pulling page updates.
But that takes work! You need to design this for every website, and some websites do not have any good spot to hook into for a low-resource "is there something new" check.
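As a rough illustration of that kind of throttled, incremental scraping (the URLs are placeholders, the regex stands in for per-site parsing, and the `seen` map stands in for whatever on-disk store a real scraper would use):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"regexp"
	"time"
)

// seen is the local record of URLs already scraped; persisting it is what
// lets the scraper fetch only what is new each day.
var seen = map[string]bool{}

// linkRe is a crude link extractor; a real scraper parses each site's HTML.
var linkRe = regexp.MustCompile(`href="(https?://example\.com/[^"]+)"`)

func fetch(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	ticker := time.NewTicker(time.Second) // throttle: roughly 1 request per second
	defer ticker.Stop()

	for range ticker.C {
		// Hit one cheap "latest updates" page instead of crawling everything.
		index, err := fetch("https://example.com/latest")
		if err != nil {
			continue
		}
		for _, m := range linkRe.FindAllStringSubmatch(index, -1) {
			url := m[1]
			if seen[url] {
				continue // already cached, no request needed
			}
			<-ticker.C // stay under the rate limit for the detail pages too
			if page, err := fetch(url); err == nil {
				seen[url] = true
				fmt.Printf("scraped %s (%d bytes)\n", url, len(page))
			}
		}
	}
}
```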
And I'm not even talking about websites that actively try to make scraping difficult (constantly changing tags, dynamic HTML blocks on every render, JS blocking, forced captchas), which ironically hurts them more, as it can result in full rescrapes of their sites.
So ironically, the easiest solution for less scrupulous scrapers is to simply throw resources at the issue. Why bother with an "is there something new" check for every website when you can just rescrape every page link you find with a dumb scraper, compare the result against the checksum in your local cache, and update your copy when it changed? And that is how you get those over-aggressive scrapers that DDoS websites. Combine that with half the internet being WP websites ... lol
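The checksum comparison that makes this dumb approach cheap is trivial; roughly something like this, with the in-memory map standing in for whatever local store a real scraper would use:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
)

// checksums maps URL -> SHA-256 of the last scraped body; in a real scraper
// this would live in a local database or on disk.
var checksums = map[string]string{}

// changed fetches a page and reports whether its content differs from the
// locally cached checksum, updating the cache when it does.
func changed(url string) (bool, error) {
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}

	sum := sha256.Sum256(body)
	digest := hex.EncodeToString(sum[:])
	if checksums[url] == digest {
		return false, nil // same content as last time, nothing to re-store
	}
	checksums[url] = digest
	return true, nil
}

func main() {
	if isNew, err := changed("https://example.com/some-page"); err == nil && isNew {
		fmt.Println("page changed, update local copy")
	}
}
```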
The amount of resources needed to scrape is so small, and the more you try to prevent scrapers, the more you're going to hinder your own customers / legit users.
And again, this is just me scraping some novel/manga websites for my own private usage / datahoarding. The big boys have access to complete IP blocks, can resort to using residential IPs (as some sites detect whether you're coming from a leased datacenter IP or a home ISP IP), and have way more resources available to them.
This has been way too long, but the only way to win against scrapers is a standardized way to do legit scraping. Ironically, we used to have this with RSS feeds years ago, but everybody gave up on them. When you offer an easier endpoint for scrapers, a lot of them have less incentive to scrape your every page. Will there be bad guys? Yep, but then it becomes easier to target just them until they also comply.
But the internet will need to change into something new to survive this new era ... And I think standardized API endpoints will be that change. Or everybody goes behind login pages, but yeah, good luck with that, because even those are very easy to bypass with automated account creation.
Yeah, everybody is going to be f___ed, because small websites can forget about making money with advertising. The revenue model is also going to change. We already see this with Reddit selling their data directly to Google.
And this has been way too much text.