
Critical CSS

(critical-css-extractor.kigo.studio)
234 points stevenpotts | 13 comments
1. oneeyedpigeon ◴[] No.43903048[source]
Feels like premature optimisation to me. Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile? Maybe with the most complex web apps, I guess, but for almost all cases, I would have thought writing clean CSS, HTML, and JavaScript would render this unnecessary or even counterproductive.
replies(7): >>43903086 #>>43904549 #>>43906407 #>>43907541 #>>43908043 #>>43908178 #>>43928444 #
2. Gabrys1 ◴[] No.43903086[source]
I would have paid good money for this tool ~12 years ago. We had a site with an enormous amount of CSS that had accumulated over the years, and it was really unclear which rules were and which weren't critical.
replies(1): >>43920470 #
3. dan-bailey ◴[] No.43904549[source]
Oh my god, yes, this is useful. I do some freelance dev work for a small marketing agency, and I inherit a lot of WordPress sites that show all the hallmarks of passing through multiple developers/agencies over the years, and the CSS and JavaScript are *always* crufty with years of accumulated bad practices. I'm eager to try this.
4. dimmke ◴[] No.43906407[source]
Seriously. When I look at the modern state of front-end development, it's actually fucking bonkers to me. Stuff like Lighthouse has caused people to reach for optimizations that are completely absurd.

This might make an arbitrary number go up in test suites, at the cost of massively increasing build complexity and reducing the ease of working on the project, all for a minimal (if any) improvement for the hypothetical end user, who will be subject to much greater forces outside the developer's control, like their network speed.

I see so much stuff like this, then regularly see websites that are riddled with what I would consider to be very basic user interface and state management errors. It's absolutely infuriating.

replies(2): >>43909128 #>>43928484 #
5. leptons ◴[] No.43907541[source]
>Feels like premature optimisation to me.

To me, thinking about how CSS loads is task #1, but I probably have some unique needs.

We were losing clients due to our web offering scoring poorly on page speed tests. Page speed is part of how a page is ranked and can affect SEO (depending on who you ask), so it is very important to our clients. It's not my job to explain how I think SEO works; it's my job to make our clients happy.

I had to design a whole new system to get page speed scores to 100% on Google Lighthouse, which many of our clients were using to test their sites' performance. When creating a site optimized for performance, how the CSS and JS and everything else loads needs to be thought about before implementing the pages. It can be pretty difficult to optimize these things after the fact. We made pretty much everything on the page load inline, including JS and CSS, and the CSS for what displays "above the fold" loads above the HTML it styles. Everything "below the fold" gets loaded below the fold. No FOUC, nothing blocking the rendering of the page, and no extra HTTP requests are made to load any of the content. A lot of the JS "below the fold" does not even get evaluated until it is scrolled into view, because that can also slow down page load speed. I took all the advice Google Lighthouse was giving me and implemented our pages in a way that satisfies it completely. It wasn't really that difficult, but it required changing my thinking about how to approach building websites.
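A minimal sketch of that general shape (hypothetical markup, not leptons' actual system) is below: critical styles are inlined in the head, below-the-fold styles sit next to the markup they style, and a below-the-fold script is parked in a non-executing tag until an IntersectionObserver reports that its section is visible.

    <!doctype html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>Example page</title>
      <!-- Critical CSS inlined: first paint needs no extra request -->
      <style>
        body { margin: 0; font-family: system-ui, sans-serif; }
        .hero { min-height: 80vh; background: #112; color: #fff; }
      </style>
    </head>
    <body>
      <div class="hero">Above-the-fold content, styled by the inline CSS above.</div>

      <!-- Below-the-fold section: its styles travel with it instead of sitting in the head -->
      <section id="reviews">
        <style>#reviews { padding: 2rem; background: #eee; }</style>
        <p>Below-the-fold content.</p>
      </section>

      <!-- Script for the section, parked so the browser won't evaluate it at load time -->
      <script id="reviews-js" type="text/plain">
        document.querySelector('#reviews p').textContent += ' (enhanced on scroll)';
      </script>

      <script>
        // Evaluate the parked script only once the section scrolls into view.
        const section = document.getElementById('reviews');
        new IntersectionObserver((entries, obs) => {
          if (entries.some(e => e.isIntersecting)) {
            new Function(document.getElementById('reviews-js').textContent)();
            obs.disconnect();
          }
        }).observe(section);
      </script>
    </body>
    </html>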

We were coming from a system that we didn't control, where they decided to load all of the CSS for the entire website on every page, which amounted to about 3 to 4 MB of CSS alone, and the JavaScript was even worse. There was never an attempt to optimize that system from the start, and now, many years later, they can't seem to optimize it at all. I won't name that system because we still build on it, but it's a real problem for us when a client compares their SEO and page speed scores to their competitors' and then leaves us for our competitors, who score only a bit better for page speed.

If performance is the goal, there is no such thing as premature optimization; it has to be thought about from the start. So far our clients have been very happy with their 100% page speed scores (100% even on mobile), and our competition can't come anywhere close unless they put in the work and start thinking differently about the problem.

I actually tried the tool that is the subject of this post on my sites, and it wouldn't work - likely because there is nothing to optimize. We simply don't do any HTTP requests for CSS, and the CSS we need "above the fold" is already "above the fold". I tried it on one of the old pages and it did give a result, but I don't need it because we don't build pages like we used to anymore.

replies(1): >>43928506 #
6. acjohnson55 ◴[] No.43908043[source]
For many sites, this probably is a premature optimization. But for sites that live off of click-through, like news/media, getting the text on screen is critical. Bounce rate starts to go up and ad revenue drops as soon as page loads are less than "immediate", which is about 1 second. The full page can actually be quite heavy once all the ads, scripts, and media load.

We were doing this optimization more than a decade ago when I worked at HuffPost.

7. bawolff ◴[] No.43908178[source]
> Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile?

On the contrary, the more complex the CSS is or the more resources are loaded, the less worthwhile this would be.

The thing I think they are trying to optimize is latency due to RTT. When you first request the HTML file, the browser needs to read it before knowing the next thing to request. This requires a round trip to the server, which has latency (pesky speed of light). The larger your (critical) CSS, the more expensive this optimisation is, so the less likely it is a net benefit.
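For context, the pattern these extractors target (a generic sketch; the file name is made up) is to inline the critical rules so first paint doesn't wait on that second round trip, and fetch the full stylesheet without blocking rendering:

    <head>
      <!-- Critical rules inlined: first paint needs no extra request -->
      <style>
        body { margin: 0; font-family: system-ui, sans-serif; }
        .masthead { min-height: 50vh; }
      </style>

      <!-- Full stylesheet fetched without blocking render, applied once loaded -->
      <link rel="preload" href="/css/site.css" as="style"
            onload="this.onload=null; this.rel='stylesheet'">
      <noscript><link rel="stylesheet" href="/css/site.css"></noscript>
    </head>

The trade-off bawolff describes follows from this: the inlined block rides along on every HTML response and isn't cached separately, so the larger the critical CSS, the smaller the win.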

8. rglover ◴[] No.43909128[source]
Yup. Give people a number or stat to obsess over and they'll obsess over it (while ignoring the more meaningful work like stability and fixing real, user-facing bugs).

Over-obsession with KPIs/arbitrary numbers is one of the side-effects of managerial culture that badly needs to die.

replies(1): >>43912163 #
9. mediumsmart ◴[] No.43912163{3}[source]
It’s just a few meaningful numbers: 0 accessibility errors, A+ for the securityheaders, a flawless result on webkolls 5july net, plus below 1 second loading time on PageSpeed mobile. Once that has been achieved, obsessing over stabilizing a flaky bloat pudding while patching over bugs aka features that annoy any user will have died.
10. korm ◴[] No.43920470[source]
The mod_pagespeed filter "prioritize_critical_css" was released exactly 12 years ago, in early May 2013. At least 3 more popular critical CSS tools were released the following year, integrating with Grunt, Gulp, and later Webpack.
11. stevenpotts ◴[] No.43928444[source]
Oh, writing clean CSS, HTML, and JS is THE WAY TO GO, but you might inherit a messy project, download a template, or even work on a project you coded poorly.
12. stevenpotts ◴[] No.43928484[source]
I know... to be fair, I did test this for my use cases on older phones with throttled, slower connections, and it did improve the UX. But I get what you're saying; I think it also depends on your target audience. Who cares if your site is poorly graded by Lighthouse if your user base has high-end devices in places with great internet? Not even Google cares, since the Core Web Vitals show up in green.
13. stevenpotts ◴[] No.43928506[source]
nice!