This might make an arbitrary number go up in test suites, at the cost of massively increasing build complexity and making the project harder to work on, all for minimal if any improvement for the hypothetical end user (who is subject to much greater forces outside the developer's control, like their network speed).
I see so much stuff like this, then regularly see websites that are riddled with what I would consider to be very basic user interface and state management errors. It's absolutely infuriating.
To me thinking about how CSS loads is task #1, but I probably have some unique needs.
We were losing clients because our web offering scored poorly on page speed tests. Page speed is said to be part of how a page is ranked (depending on who you ask), so it can affect SEO, which makes it very important to our clients. It's not my job to explain how I think SEO works; it's my job to make our clients happy.
I had to design a whole new system to get page speed scores to 100% on Google Lighthouse, which many of our clients were using to test their site's performance. When building a site optimized for performance, how the CSS, JS, and everything else loads needs to be thought about before implementing the pages; it can be pretty difficult to optimize these things after the fact. We made pretty much everything on the page load inline, including JS and CSS, and the CSS for what displays "above the fold" loads above the HTML it styles. Everything "below the fold" gets loaded below the fold. No FOUC, nothing blocking the rendering of the page, and no extra HTTP requests to load any of the content. A lot of the JS "below the fold" does not even get evaluated until it is scrolled into view, because that evaluation can also slow down page load. I took all the advice Google Lighthouse was giving me and implemented our pages in a way that satisfies it completely. It wasn't really that difficult, but it required changing my thinking about how to approach building websites.
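A minimal sketch of that kind of pattern (not the actual system described above; the class names, IDs, and the IntersectionObserver trick are just one illustrative way to do it): critical CSS inlined in the head, below-the-fold CSS inlined next to the markup it styles, and below-the-fold JS inlined but only evaluated once it scrolls into view.

    <!doctype html>
    <html>
    <head>
      <!-- Critical CSS inlined so above-the-fold content renders
           with no extra request and no FOUC. -->
      <style>
        .hero { min-height: 100vh; font: 16px/1.5 sans-serif; }
      </style>
    </head>
    <body>
      <section class="hero">Above-the-fold content</section>

      <!-- Below-the-fold CSS inlined just above the markup it styles. -->
      <style>
        .widget { padding: 2rem; }
      </style>
      <section class="widget" id="widget">Below-the-fold content</section>

      <!-- Inlined but inert: an unrecognized type keeps the browser
           from evaluating this script during page load. -->
      <script type="text/lazy" id="widget-code">
        document.getElementById('widget').textContent = 'Widget initialized';
      </script>

      <script>
        // Evaluate the inlined widget code only once it scrolls into view.
        var widget = document.getElementById('widget');
        new IntersectionObserver(function (entries, observer) {
          if (entries.some(function (e) { return e.isIntersecting; })) {
            observer.disconnect();
            var s = document.createElement('script');
            s.textContent = document.getElementById('widget-code').textContent;
            document.body.appendChild(s); // runs the deferred code now
          }
        }, { rootMargin: '200px' }).observe(widget);
      </script>
    </body>
    </html>

With everything inlined like this, the only request needed for first paint is the HTML document itself.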
We were coming from a system we didn't control, where they decided to load all of the CSS for the entire website on every page, which amounted to about 3-4 MB of CSS alone, and the JavaScript was even worse. There was never an attempt to optimize that system from the start, and now, many years later, they can't seem to optimize it at all. I won't name the system because we still build on it, but it's a real problem for us when a client compares their SEO and page speed scores to their competitors' and then leaves us for our competitors, who score only a bit better on page speed.
If performance is the goal, there is no such thing as premature optimization; it has to be thought about from the start. So far our clients have been very happy with their 100% page speed scores (100% even on mobile), and our competition can't come anywhere close unless they put in the work and start thinking differently about the problem.
I actually tried the tool that is the subject of this post on my sites, and it wouldn't work, likely because there is nothing to optimize: we simply don't make any HTTP requests for CSS, and the CSS we need "above the fold" is already "above the fold". I tried it on one of the old pages and it did give a result, but I don't need it because we don't build pages the way we used to anymore.
We were doing this optimization more than a decade ago when I worked at HuffPost.
On the contrary, the more complex the CSS is or the more resources are loaded, the less worthwhile this would be.
The thing I think they are trying to optimize is latency due to round-trip time (RTT). When you first request the HTML file, the browser needs to read it before knowing the next thing to request, and that next request requires another round trip to the server, which has latency (pesky speed of light). The larger your (critical) CSS, the more expensive this optimization is, so the less likely it is to be a net benefit.
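A rough back-of-the-envelope illustration (the 50 ms RTT figure and the file name are made up, and connection setup is ignored):

    <!-- External critical CSS: the browser fetches the HTML, parses it,
         then must make a second render-blocking request before first
         paint. At ~50 ms RTT that is at least one extra round trip
         (~50 ms) before anything renders. -->
    <link rel="stylesheet" href="critical.css">

    <!-- Inlined critical CSS: the styles arrive in the same response
         as the HTML, so no second round trip blocks rendering. The
         trade-off is a larger HTML payload on every page view, which
         is why the win shrinks as the critical CSS grows. -->
    <style>
      body { margin: 0; font: 16px/1.5 sans-serif; }
    </style>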
Over-obsession with KPIs/arbitrary numbers is one of the side-effects of managerial culture that badly needs to die.