Whether or not Google deprecating XSLT is a “political” decision (in the author's words), I can't say for sure, but I can imagine running the Chrome project and steering for more simplicity.
Yes, it's a problem that Chrome has too much market share, but XSLT's removal isn't a good demonstration of that.
[1] Yes, I already know about your one European law example that you only found out exists because of this drama.
Removing a feature that is used, while possibly making Chrome more "simple", also forces all the users of that feature to react to it, lest their efforts be lost to incompatibility. There is no way this cannot be a political decision, given that either way one side will have to cope with the downsides of whatever is (or isn't) done.
PS: I don't know how much the feature is actually used, but my rationale should apply to any X where X is a feature considered to be pruned.
If there isn't enough usage of a feature to justify prioritizing engineering hours to it instead of other features, so it's removed, that's just a regular business-as-usual decision. Nothing "political" about it. It's straightforward cost-benefit.
However, if the decision is based on factors beyond simple cost-benefit -- maintaining or removing a feature because it makes some influential group happy, or because it's part of a larger strategic plan to help or harm something else -- then we call that a political decision.
That's how the term "political decision" is used in this kind of context, and what it means.
What example is that?
This has to be proven by Google (and other browser vendors), not by people coming up with examples. The guy pushing "intent to deprecate" didn't even know about the most popular current usage (displaying podcast RSS feeds) until after posting the issue and until after people started posting examples: https://github.com/whatwg/html/issues/11523#issuecomment-315...
Meanwhile Google's own document says that's not how you approach deprecation: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
Also, "no one uses it" is rich considering that XSLT's usage is 10x the usage of features Google has no trouble shoving into the browser and maintaining. Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with USB https://chromestatus.com/metrics/feature/timeline/popularity... or WebTransport: https://chromestatus.com/metrics/feature/timeline/popularity... or even MIDI (also supported by Firerox) https://chromestatus.com/metrics/feature/timeline/popularity....
XSLT deprecation is a symptom of how browser vendors, and especially Google, couldn't give two shits about the stated purposes of the web.
To quote Rich Harris from the time when Google rushed to remove alert/confirm: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones" https://dev.to/richharris/stay-alert-d
> I will keep using XSLT, and in fact will look for new opportunities to rely on it.
This is the closest I’ve seen, but it’s not an explanation of why it was important before the deprecation. It’s a declaration that they’re using it as an act of rebellion.
Edit: then account for the fact that this rare breed of content uploader doesn't use an FTP client... there's absolutely no reason to have FTP client code in a browser. It's an attack surface that is utterly unnecessary.
Comparing absolute usage of an old standard to newer niche features isn’t useful. The USB feature is niche, but very useful and helpful for pages setting up a device. I wouldn’t expect it to show up on a large percentage of page loads.
XSLT was supposed to be a broad standard with applications beyond single setup pages. The fact that those two features are used similarly despite one supposedly being a broad standard and the other being a niche feature that only gets used in unique cases (device setup or debugging) is only supportive of deprecating XSLT, IMO
I've used XSLT plenty for transforming XML data for enterprises but that's all backend stuff.
Until this whole kerfuffle I never knew there was support for it in the browser in the first place. Nor, it seems, did most people.
If there's some enterprise software that uses it to transform some XML that an API produces into something else client-side, relying on a polyfill seems perfectly reasonable. Or just move that data transformation to the back-end.
Having flashbacks of “<!--[if IE 6]> <script src="fix-ie6.js"></script> <![endif]-->”
You say that like it's a bad thing. The proposal was already accepted. The most useful way to get feedback about which sites would break is to actually make a build without XSLT support and see what breaks.
But I suppose forcing one's self to use XSLT just to spite Google would constitute its own punishment.
Actually, you can make an RSS feed user-browsable by using JavaScript instead. You can even run XSLT in JavaScript, which is what Google's polyfill does.
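A minimal sketch of that approach, assuming a feed and a stylesheet you can fetch separately (the URLs are placeholders); it uses the XSLTProcessor interface, which is also what the polyfill re-implements:

async function renderFeed(feedUrl, xslUrl) {
  // Parse both documents as XML
  const parse = (text) => new DOMParser().parseFromString(text, "application/xml");
  const [feed, xsl] = await Promise.all([
    fetch(feedUrl).then(r => r.text()).then(parse),
    fetch(xslUrl).then(r => r.text()).then(parse),
  ]);
  // Run the XSLT and splice the result into the page
  const proc = new XSLTProcessor();
  proc.importStylesheet(xsl);
  document.body.replaceChildren(proc.transformToFragment(feed, document));
}

renderFeed("/feed.xml", "/feed.xsl");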
I've written thousands of lines of XSLT. JavaScript is better than XSLT in every way, which is why JavaScript has thrived and XSLT has dwindled.
This is why XSLT has got to go: https://www.offensivecon.org/speakers/2025/ivan-fratric.html
So, if XSLT sees 10x the usage of USB, we can consider it a "niche technology that is 10x as useful as USB".
> The fact that those two features are used similarly
You mean USB is used on 10x fewer pages than XSLT despite HN telling me every time that it is an absolutely essential technology for PWAs or something.
It can work great when you have XML you want to present nicely in a browser by transforming it into XHTML while still serving the browser the original XML. One use I had was to show the contents of RSS/Atom feeds as a nice page in a browser.
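For anyone who hasn't seen the mechanism: it's just a processing instruction at the top of the feed pointing at a stylesheet, roughly like this (file names made up):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <item><title>First post</title><link>https://example.org/first</link></item>
  </channel>
</rss>

The browser fetches /feed.xsl, applies it, and renders the resulting (X)HTML, while the same URL still serves the raw XML to feed readers.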
But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility. If you make a web site which only relies on web standards (i.e. not vendor specific features or 3rd party plugins), you can/could expect it to keep working forever. Browser makers choosing to break that "contract" is bad for the internet regardless of how popular XSLT is.
Oh, and as the linked article points out, the attack surface concerns are obviously bad faith. The polyfill means browser makers could choose to sandbox it in a way that would be no less robust than their existing JS runtime.
Then why is Google actively shoving multiple hardware APIs into the browser (against the objection of other vendors) if their usage is 10x less than that of XSLT?
They have no trouble finding the resources to develop and maintain those.
> Google is willing to remove standards-compliant XML support as well.
> They're the same picture.
To spell it out, "if it's inconvenient, it goes", is something that the _owner_ does. The culture of the web was "the owners are those who run the web sites, the servants are the software that provides an entry point to the web (read or publish or both)". This kind of "well, it's dashed inconvenient to maintain a WASM layer for a dependency that is not safe to vendor any more as a C dependency" is not the kind of servant-oriented mentality that made the web great, not just as a platform to build on, but as a platform to emulate.
The post also fails to mention that all browsers want to remove XSLT. The topic was brought up in several meetings by Firefox reps. It's not a Google conspiracy.
I also see that the site is written in XHTML and think the author must just really love XML, and doesn't realize that most browser maintainers think that XHTML is a mistake and failure. Being strict on input and failing to render anything on an error is antithetical to the "user agent" philosophy that says the browser should try to render something useful to the user anyway. Forgiving HTML is just better suited for the messy web. I bet this fuels some of their anger here.
That’s not true for XSLT, except in the super-niche case of formatting RSS prettily via linking to XSLT like a stylesheet, and the intersection of “people who consume RSS” and “people who regularly consume it directly through the browser” has to be vanishingly small.
I'm not so sure that's problematic. Probably browsers just aren't a great platform for doing a lot of XML processing at this point.
Preserving the half-implemented frozen state of the early 2000s doesn't really serve anyone except those maintaining legacy applications from that era. I can see why they are pulling out complex C++ code related to all this.
It's the natural conclusion of XHTML being sidelined in favor of HTML 5 about 15-20 years ago. The whole web service bubble, bloated namespace processing, and all the other complexity that came with that just has a lot of gnarly libraries associated with it. The world kind of has moved on since then.
From a security point of view it's probably a good idea to reduce the attack surface a bit by moving to a Rust based implementation. What use cases remain for XML parsing in a browser if XSLT support is removed? I guess some parsing from javascript. In which case you could argue that the usual solution in the JS world of using polyfills and e.g. wasm libraries might provide a valid/good enough alternative or migration path.
Try having an opposition party that isn't appointing judges like Amit Mehta. Or pardoning torturers, and people who engineered the financial crash, and people who illegally spied on everyone, etc., etc. But good luck with that, we can't even break up a frozen potato monopoly.
No, this is wrong.
Maintaining XSLT support has a cost, both in providing an attack surface and in employee-hours just to keep it around. Suppose it is not used at all, then removing it would be unquestionably good, as cost & attack surface would go down with no downside. Obviously it's not the case that it has zero usage, so it comes down to a cost-benefit question, which is where popularity comes in.
For the specific use case of showing RSS and Atom feeds in the browser, it seems like a better solution would be to have built-in support in the browser, rather than relying on the use of XSLT.
Obviously not in every way. XSLT is declarative and builds pretty naturally off of HTML for someone who doesn't know any programming languages. It gives a very low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements. You link a template just like you link a CSS file to reuse styles. Obvious.
The equivalent Javascript functionality's documentation[0] starts going on about classes and callbacks and shadow DOM, which is by contrast not at all approachable for someone who just wants to make a web page. Obviously Javascript is necessary if you want to make a web application, but those are incredibly rare, and it's expected that you'll need a programmer if you need to make an application.
Part of the death of the open web is that the companies that control the web's direction don't care about empowering individuals to do simple things in a simple way without their involvement. Since there's no simple, open way to make your own page that people can subscribe to (RSS support having been removed from browsers instead of expanded upon for e.g. a live home page), everyone needs to be on e.g. Facebook.
It's the same with how they make it a pain to just copy your music onto your phone or backup your photos off of it, but instead you can pay them monthly for streaming and cloud storage.
[0] https://developer.mozilla.org/en-US/docs/Web/API/Web_compone...
Their board syphons the little money that is left out of their "foundation + corporation" combo, and they keep cutting people from the Firefox dev team every year. Of course they don't want to maintain pieces of web standards if it means an extra million for their board members.
When you have something that's been around for a long time and still shows virtually no usage, it's fine to pull the plug. It's a kind of evolution. You can kill things that are proven to be unpopular, while building things and giving them the time to see if they become popular.
That's what product feature iteration is.
I know that other independent browsers that I used to use back in the day just gave up because the pace of divergence pushed by the major implementations meant that it wasn't feasible to keep up independently.
I still miss Konqueror.
XSLT RIP
Dropping XSLT is about something different. It's not bad in an obvious way. It's things like code complexity vs applicability. It's definitely not as clear of an argument to me, and I haven't touched XSLT in the past 20 years of web development, so I am not sure about the trade-offs.
They both were just responding to similar market demands because end users didn't want to use RSS. Users want to use social media instead.
>This is a trillion-dollar ad company who has been actively destroying the open web for over a decade
Google has both done more for and invested more into progressing the open web than anyone else.
>The WHATWG aim is to turn the Web into an application delivery platform
This is what web developers want, and browsers are reacting to the natural demands of developers, who are reacting to demands of users. It was an evolutionary process that got it to that state.
>but with their dependency on the Blink rendering engine, controlled by Google, they won't be able to do anything but cave
Blink is open source and modular. Maintaining a fork is much less effort than the alternative of maintaining a different browser engine.
Even the browsers created by individuals or small groups don't have, as far as I've ever seen, a "servant-oriented mindset": like all software projects, they are ultimately developed and supported at the discretion of their developer(s).
This is how you get interesting quirks like Opera including torrent support natively, or Brave bundling its own advertising/cryptocurrency thing.
The fact that you put "contract" in quotes suggests that you know there really is no such thing.
Backwards compatibility is a feature. One that needs to be actively valued, developed and maintained. It requires resources. There really is no "the web platform." We have web browsers, servers, client devices, telecommunications infrastructure - including routers and data centres, protocols... all produced and maintained by individual parties that are trying to achieve various degrees of interoperability between each other and all of which have their own priorities, values and interests.
The fact that the Internet has been able to become what it is, despite the foundational technologies that it was built upon - none of which anticipated the usage requirements placed on their current versions - really ought to be labelled one of the wonders of the world.
I learned to program in the early to mid 1990s. Back then, there was no "cloud", we didn't call anything a "web application" but I cut my teeth doing the 1990s equivalent of building online tools and "web apps." Because everything was self-hosted, the companies I worked for valued portability because there was customer demand. Standardization was sought as a way to streamline business efficiency. As a young developer, I came to value standardization for the benefits that it offered me as a developer.
But back then, as well as today, if you looked at the very recent history of computing, you had big endian vs little endian CPUs to support, you had a dozen flavours of proprietary UNIX operating systems - each with their own vendor-lock-in features; while SQL was standard, every single RDBMS vendor had their own proprietary features that they were all too happy for you to use in order to try and lock consumers into their systems.
It can be argued that part of what has made Microsoft Windows so popular throughout the ages is the tremendous amount of effort that Microsoft goes through to support backwards compatibility. But even despite that effort, backwards compatibility with applications built for earlier versions of Windows can still be hit or miss.
For better or worse, breaking changes are just part and parcel of computing. To try and impose some concept of a "contract" on the Internet to support backwards compatibility, even if you mean it purely figuratively, is a bit silly. The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers. If only an extreme minority of "customers" require native xslt support in the web browser, to use today's example, it makes zero business sense to pour resources into maintaining it.
Disclaimer: I work on Chrome and have occasionally dabbled in libxml2/libxslt in the past, but I'm not directly involved in any of the current work.
Functional programming languages can often feel declarative. When XSL is doing trivial, functional transformations and you keep your hands off of xsl:for-each, XSL feels declarative and doesn't feel that bad.
The problem is: no clean API is perfectly shaped for UI, so you always wind up having to do arbitrary, non-trivial transformations with tricky uses of for-each to make the output HTML satisfy user requirements.
XSL's "escape hatch" is to allow arbitrary Turing-complete transformations, with <xsl:variable>, <xsl:for-each>, and <xsl:if>. This makes easy transformations easy and hard transformations possible.
XSL's escape hatch is always needed, but it's absolutely terrible, especially compared to JS, especially compared to modern frameworks. This is why JS remained popular, but XSL dwindled.
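Roughly the kind of escape-hatch code I mean (a made-up example, not from any real stylesheet):

<xsl:template match="order">
  <xsl:variable name="total" select="sum(item/@price)"/>
  <ul>
    <xsl:for-each select="item">
      <xsl:sort select="@price" data-type="number" order="descending"/>
      <xsl:if test="@price &gt; 0">
        <li><xsl:value-of select="@name"/>: <xsl:value-of select="@price"/></li>
      </xsl:if>
    </xsl:for-each>
  </ul>
  <p>Total: <xsl:value-of select="$total"/></p>
</xsl:template>

Harmless at this size, but once the output has to match arbitrary UI requirements this style spreads everywhere.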
> It gives a low-effort but fairly high power (especially considering its neglect) on-ramp to templated web pages with no build steps or special server software (e.g. PHP, Ruby) that you need to maintain. It's an extremely natural fit if you want to add new custom HTML elements.
JavaScript is a much better low-effort high-power on-ramp to templated web pages with no build steps or server software. JavaScript is the natural fit for adding custom HTML elements (web components).
Seriously, XSLT is worse than JavaScript in every way, even at the stuff that XSLT is best at. Performance/bloat? Worse. Security? MUCH worse. Learnability / language design? Unimaginably worse.
EDIT: You edited your post, but the Custom Element API is for interactive client-side components. If you just want to transform some HTML on the page into other HTML as the page loads, you can use querySelectorAll, the jQuery way.
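Something like this sketch, with invented element names:

document.addEventListener("DOMContentLoaded", () => {
  // Rewrite every <time datetime="..."> into a human-readable date, jQuery-style
  document.querySelectorAll("time[datetime]").forEach(el => {
    el.textContent = new Date(el.getAttribute("datetime")).toLocaleDateString();
  });
});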
> This is what web developers want
I don't think it is what web developers want, it is what customers expect.
Of course there are plenty of situations where the page is totally bloated and could be much leaner, but the overall trend to build web applications instead of web pages is dictated by user expectations and, as a consequence, requirements.
As someone who's interested in sustainable open source development, I also find the circumstances around the deprecation to be interesting and worth talking about. The XSLT implementation used by all the browsers is a 25 year old C library whose maintainer recently resigned due to having to constantly deal with security bugs reported by large companies who don't provide any financial contribution or meaningful assistance to the project. It seems like the browser vendors were fine with the status quo of having XSLT support as long as they didn't have to contribute any resources to it. As soon as that free maintenance went away and they were faced with either paying someone to continue maintenance or writing a new XSLT library in a safer language, they weren't willing to pay the market value for what it would cost to do this and decided to drop the feature instead.
This is patently false. It is much better for security if you use one of the many memory-safe implementations of it. This is like saying “SSL is insecure because I use an implementation with bugs”. No, the technology is fine. It's your buggy implementation that's the problem.
Have you ever met a single non-programmer who successfully picked up XSLT of their own volition and used it productively?
I'd be willing to bet good money that the Venn diagram of users that fit the intersection of "authoring content for the web", "care about separating content from HTML", "comfortable with HTML", "not comfortable with JavaScript", and "able to ramp up on XSLT" is pretty small.
At some point, we have to just decide "sorry, this use case is too marginal for every browser to maintain this complexity forever".
The fact that the web's new owners have decided that making web pages is too marginal a use-case for the Web Platform is my point.
Wow. I can see the proposed scrapping of XSLT being a huge problem for all of the seven people who do this.
This is why so many people find this objectionable. If you want to have a basic blog, you need some HTML documents and an RSS/Atom feed. The technologies required to do this are HTML for the documents and XSLT to format the feed. Google is now removing one of those technologies, which makes it essentially impossible to serve a truly static website.
If someone wants to make a web page they need to learn HTML and CSS.
Why would adding a fragile and little-used technology like XSLT help?
That's what CSS does.
And no, it really isn't a cost-benefit question. Or if you'd prefer, the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ cost. As it stood, as a web developer you only needed to make sure that your code followed standards and it would continue to work. If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular (or rather, whether browser makers will continue to _think_ they're popular, which is very much not the same thing).
How so? You're just generating static pages. Generate ones that work.
I'm convinced Mozilla is purposefully engineered to be rudderless: the C-suite draws down huge salaries and approves dumb, mission-orthogonal objectives, in order to keep Mozilla itself impotent in ever threatening Google.
Mozilla is Google's antitrust litigation sponge. But it's also kept dumb and obedient. Google would never want Mozilla to actually be a threat.
If Mozilla had ever wanted a healthy side business, it wasn't in Pocket, XR/VR, or AI. It would have been in building a DevEx platform around MDN and Rust. It would have synergized with their core web mission. Those people have since been let go.
https://www.youtube.com/watch?v=U1kc7fcF5Ao
But it is quite interesting, and learning about the security problems of the document() function in particular (described @ 19:40-25:38) made me feel more convinced that removing XSLT is a good decision.
The WHATWG is focused on maintaining specs that browsers intend to implement and maintain. When Chrome, Firefox, and Safari agree to remove XSLT, that effectively decides its removal from the WHATWG spec.
I wouldn't put too much weight behind who originally proposed the removal. It's a pretty small world when it comes to web specifications, the discussions likely started between vendors before one decided to propose it.
This is an attempt to rewrite history.
Early browsers like NCSA Mosaic were never even released as Open Source Software.
Netscape Navigator made headlines by offering a free version for academic or non-profit use, but they wanted to charge as much as $99 (in 1995 dollars!) for the browser.
Microsoft got in trouble for bundling a web browser with their operating system.
The current world where we have true open source browser options like Chromium is probably closer to a true open web than what some people have retconned the early days of the web as being.
How does that become a market demand to remove RSS? There are tons of features within browsers which most users don't use. But they do no harm staying there.
Your old job's broken workflow is not a good reason for keeping a fundamentally broken protocol that relies on allowing Remote Code Execution as a privileged user around.
See all these "static site generators" everyone's into these days? We used those in the mid-90s. They were called "Makefiles".
HN has historically been relatively free of such dogma, but it seems times are changing, even here
Which seems to be a sane decision given the XML language allows for data blow-ups[^0]. I'm not sure what specific subset of XML `xml-rs` implements, but to me it seems insane to fully implement XML because of this.
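The canonical blow-up is the "billion laughs" document, where a few hundred bytes of nested entity definitions expand to gigabytes when a naive parser resolves them:

<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
  <!-- ...continuing to &lol9; yields roughly a billion copies of "lol" -->
]>
<lolz>&lol2;</lolz>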
It's also 32 million lines of code which is borderline prohibitive to maintain if you're planning any importantly different browser architecture, without a business plan or significant funding.
There are lots of things that are perfectly forkable and maintainable, and the world is better for them (shoutout Nextcloud and the various Syncthing forks). But Chromium, insofar as it's a test of the health and openness of the software ecosystem, is I think not much of a positive signal, on account of what it would realistically require to fork and maintain for any non-trivial repurposing.
What’s happening is that Google (along with Mozilla and Safari) are changing the html spec to drop support for xslt. If you want to argue that this is bad because it “breaks the web”, that’s fine, but it has nothing at all to do with whether the web is “open”. The open web means anyone can run a web server. Anyone can write a web site. Anyone can build their own compatible browser (hypothetically; this has become prohibitively expensive). It means anyone can use the tech, not that the tech includes everything possible.
If you want to complain about Google harming the open web, there are some real examples out there. Google Reader deprecation probably hurt RSS more than anything else. AMP was/is an attempt to give Google tighter control over more web traffic. Chrome extension changes were pushed through seemingly to give Google tighter control over ad blockers. Gemini in the search results is an attempt to keep Google users from ever actually clicking through to web sites for information.
XSLT in the browser has been dead for years. The reality is that no browser developer has cared about xslt since 1.0. Don’t blame Google for the death of xslt when xslt 2.0 was standardized before Chrome was even released and no one else cared enough to implement it. The removal of xslt doesn’t change the openness of the web and the reality is that it breaks very little while eliminating a source of real security errors.
If browser vendors had made it easy for mainstream users, would there have been as much "market demand"?
Between killing off Google Reader and failing to support RSS/Atom, Google handed social media to Facebook et al.
XSLT is a functional transform language. The equivalent JavaScript would be something like registry of pure functions of Node -> Node and associated selectors and a TreeWalker that walks the XML document, invokes matching functions, and emits the result into a new document.
Or you could consume the XML as data into a set of React functions.
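A rough sketch of that registry idea (selectors and output elements are invented, and assume an XML document parsed with DOMParser):

const rules = [
  { match: "channel > title", render: n => el("h1", n.textContent) },
  { match: "item > title",    render: n => el("p",  n.textContent) },
];

function el(tag, text) {
  const node = document.createElement(tag);
  node.textContent = text;
  return node;
}

function transform(xmlDoc) {
  const out = document.createDocumentFragment();
  const walker = xmlDoc.createTreeWalker(xmlDoc.documentElement, NodeFilter.SHOW_ELEMENT);
  for (let n = walker.currentNode; n; n = walker.nextNode()) {
    const rule = rules.find(r => n.matches(r.match));
    if (rule) out.appendChild(rule.render(n));
  }
  return out; // append to document.body, or serialize into a new document
}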
Seems like getting rid of XSLT (and offering a polyfill replacement) is just a move in the direction of stopping applications from pushing their complexity into the browser.
I made a website to promote using XSLT for RSS/Atom feeds. Look at the before/after screenshots: which one will scare off a non-techie user?
I think XML has some good features, but in general infatuation with it as either a key representation or key transmission protocol has waned over the years. Everything I see on the wire these days is JSON or some flavor of binary RPC like protobuffer; I hardly ever see XML on the wire anymore.
I don't see any evidence supporting your assertion of them acting in bad faith, so I didn't reply to the point. Sandboxes are not perfect, they don't transform insecure code into perfectly secure code. And as I've said, it's not only a security risk, it's also a maintenance cost: maintaining the integration, building the software, and testing it, is not free either.
It's fine to disagree on the costs/benefits and where you draw the line on supporting the removal, but fundamentally it's just a cost-benefit question. I don't see anyone at Chrome acting in bad faith with regards to XSLT removal. The drama here is really overblown.
> the _indirect_ costs of breaking backwards compatibility are much higher than the _direct_ cost ... If the browser makers can decide to deprecate those standards, developers have to instead attempt to divine whether or not the features they want to use will remain popular.
This seems overly dramatic. It's a small streamlining of an important software, by removing an expensive feature with almost zero usage. No one actually cares about this feature, they just like screaming at Google. (To be fair, so do I! But you gotta pick your battles, and this particular argument is a dud.)
One could also make that case about Microsoft with Microsoft Office in the '90s. Embrace, extend, extinguish always involves being a contributor in the beginning.
>Blink is open source and modular. Maintaining a fork is much less effort than the alternative of maintaining a different browser engine.
Yeah and winning Asia Physical 100 is easier than winning a World's Strongest Man competition, and standing in a frying pan is preferable to jumping in a fire.
I'm baffled by appeals to the open source nature of Blink and Chromium to suggest that they're positive indicators of an open web that any random Joe could jump in and participate in. That's only the case if you're capable of the monumental weightlifting that comes with the task.
From what I gather here, XSLT's functionality OTOH is easily replaced, and unlike the useful hardware support you're raging against, is a behemoth to support.
Admittedly this was 20ish years ago, but I used to teach the business analysts XSLT so they could create/edit/format their own reports.
At the time Crystal Reports had become crazy expensive so I developed a system that would send the data to the browser as XML and then an XSLT to format the report. It provided basic interactivity and could be edited by people other than me. Also, if I remember, at the time it only worked in IE because it was the only browser with the transform function.
It involved driving a stake through the heart of Google Reader, perhaps the most widely used RSS reader on the planet, with ripple effects that led to the de-emphasis of RSS across the internet. Starting the historical timeline after those choices and summarizing it as an absence of market demand overlooks the fact that intentional choices were made on this front to roll it back rather than to emphasize it and make it accessible.
Hey fam. I remember NPAPI. I wrote a very large NPAPI plugin.
The problem with NPAPI is that it lets people run arbitrary code as your browser. It was barely sandboxed. At best, it let any plugin do its level best to crash your browser session. At worst, it's a third-party binary blob you can't inspect running in the same thing you use to control your bank account.
NPAPI died for a good reason, and it has little to do with someone wanting to control your experience and everything to do with protecting you, the user, from bad actors. I think the author tips their hand a little too far here; the world they're envisioning is one where the elite hackers among us get to keep using the web and everyone else just gets owned by mechanisms they can't understand, and that's fine because it lets us be "wild" and "free" like we were in the nineties and early aughts again. Coupled with the author's downplaying of the security concerns in the XSLT lib, the author seems comfortable with the notion that security is less important than features, and I think there's a good reason that the major browser creators and maintainers disagree.
The author's dream, at the bottom, "a mesh of building blocks," is asking dozens upon dozens upon dozens of independent operators to put binary blobs in your browser outside the security sandbox. We stopped doing that for very, very good reasons.
Google is killing the open web - https://news.ycombinator.com/item?id=44949857 - Aug 2025 (181 comments)
Also related. Others?
XSLT RIP - https://news.ycombinator.com/item?id=45873434 - Nov 2025 (459 comments)
Removing XSLT for a more secure browser - https://news.ycombinator.com/item?id=45823059 - Nov 2025 (337 comments)
Intent to Deprecate and Remove XSLT - https://news.ycombinator.com/item?id=45779261 - Nov 2025 (149 comments)
XSLT removal will break multiple government and regulatory sites - https://news.ycombinator.com/item?id=44987346 - Aug 2025 (146 comments)
Google did not unilaterally decide to kill XSLT - https://news.ycombinator.com/item?id=44987239 - Aug 2025 (128 comments)
"Remove mentions of XSLT from the html spec" - https://news.ycombinator.com/item?id=44952185 - Aug 2025 (535 comments)
Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599 - Aug 2025 (96 comments)
It's in quotes because people seem keen to remind everyone that there's no legal obligation on the part of the browser makers not to break backwards compatibility. The reasoning seems to be that if we can't sue google for a given action, that action must be fine and the people objecting to it must be wrong. I take a rather dim view of this line of reasoning.
> The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers.
As you yourself pointed out, the web is a giant pile of cobbled together technologies that all seemed like a good idea at the time. If breaking changes were an option, there is a _long_ list of potential deprecations to pick from which would greatly simplify development of both browsers and websites/apps. Further, new features/standards would be able to be added with much less care, since if problems were found in those standards they could be removed/reworked. Despite those huge benefits, no such changes are/should be made, because the costs of breaking backwards compatibility are just that high. Maintaining the implied promise that software written for the web will continue to work is a business requirement, because it's crucial for the long term health of the ecosystem.
Other vectors probably means a single vector: external entities, where a) you process untrusted XML on a server and b) allow the processor to read external entities. This is not a bug, but early versions of XML processors may lack an option to disallow access to external entities. This also has been fixed.
XSLT has no exploits at all; that is, no features that can be misused.
Mozilla…are they actually competing? Like really and truly.
>usage of Google Reader has declined
https://googlereader.blogspot.com/2013/03/powering-down-goog...
1. Google has engaged in a lot of anticompetitive behavior to maintain and extend their web monopoly.
2. Removing XSLT support from browsers is a good idea that is widely supported by all major browser vendors.
Looking only at how many sites use a feature gives you an incomplete view. If a feature were only used by Wikipedia, it'd still be inappropriate to deprecate it with a breaking change and a short (1yr) migration window. You work with the important users to retire it and then start pulling the plug publicly to notify everyone you might have missed.
"Such vision is in direct contrast with that of the Web as a repository of knowledge, a vast vault of interconnected documents whose value emerges from organic connections, personalization, variety, curation and user control. But who in the WHATWG today would defend such vision?"
"Maybe what we need is a new browser war. Not one of corporation versus corporation -doubly more so when all currently involved parties are allied in their efforts to enclose the Web than in fostering an open and independent one- but one of users versus corporations, a war to take back control of the Web and its tools."
It should be up to the www user not the web developer to determine how they prefer the documents to appear on their screen
Contrast this with one or a few software programs, i.e, essentially a predetermined selection (no choice), that purport to offer all possible preferences to all www users, i.e., the so-called "modern" browser. These programs are distributed by companies that sell ad services and their business partners (Mozilla)
Documents can be published in a "neutral" format, JSON or whatever, and users can choose to convert this, if desired, to whatever format they prefer. This is more or less the direction the web has taken however at present the conversion is generally being performed by web developers using (frequently obfuscated) Javascript, intended to be outside the control of the user
Although from a technical standpoint, there is nothing that requires (a) document retrieval and (b) document display to be performed by the same program, commercial interests have tried to force users toward using one program for everything (a "do everything program")^1
When users run "do everything programs" from companies selling ad services and their business partners to perform both (a) and (b), they end up receiving "documents" they never requested (ads) and getting tracked
If users want such "do everything" corporate browsers, if they prefer "do everything programs", then they are free to choose them, but there should be other choices and it should be illegal to discriminate against other software as long as rules of "netiquette" are followed. A requirement to use some "do everything program" is not a valid rule
"There's more to the Internet than the World Wide Web built around the HTTP protocol and the HTML file format. There used to be a lot of the Internet beyond the Web, and while much of it still remains as little more than a shadow of the past, largely eclipsed by the Web and what has been built on top of it (not all of it good) outside of some modest revivals, there's also new parts of it that have tried to learn from the past, and build towards something different."
Internet subscribers pay a relatively high price for access in many countries
According to one RFC author the www became the "the new waist"
But to use expensive internet access only for "the web", especially a 100% commercial, obsessively surveilled one filled with ads, is also a "waste", IMHO
1. Perhaps the opposite of "do one thing well". America's top trillionaire wants to create another of these "do everything programs", one to rule them all. These "do everything programs" will always exist but they should never be the only viable options. They should never be "required"
But it doesn't really make a difference to my broader point that browser devs have never had "servant-mindset"
I use RSS all the time... To keep up-to-date on podcasts. But for keeping up to date on news, people use social media. RSS isn't the missing piece of the puzzle for changing that, an app on top of RSS is. And in the absence of Reader, nothing has shown up to fill that role that can compete with just trading gossip on Facebook.
It really feels like the developer has over-constrained the problem to work with browsers as they are right now in this context.
And, indeed, if the protocol was one killer app deprecation and removal away from being obsolete, the problem was the use case, not the protocol.
(Personally, I don't think RSS is dead; it's very much alive in podcasting. What's dead is people consuming content from specific sites as a subscription model instead of getting most of their input slop-melanged in through their social media feeds; they don't care about the source of the info, they just want the info. I don't think that's something we fix with improved RSS support; it's a behavior issue looking for a better experience than Facebook, not for everyone to wake up one day and decide to install their own feed reader and stop browsing Facebook or Twitter or even Mastodon for links all day).
Safari is what I'm concerned about. Without Apple's monopoly control, Safari is guaranteed to be a dead engine. WebKit isn't well-enough supported on Linux and Windows to compete against Blink and Gecko, which suggests that Safari is the most expendable engine of the three.
Say I have an XML document that uses XSLT, how do I modify it to apply your suggestion?
I've previously suggested the XML stylesheet tag should allow
<?xml-stylesheet type="application/javascript" href="https://example.org/script.js"?>
which would then allow the script to use the service-worker APIs to intercept and transform the request. But with the implementation available today, I see no way to provide a first-class XSLT-like experience with JS.
I'm not good enough with XSLT to know if it is worth creating the problem that fits the solution.
Would you also suggest I use separate URLs for HTTP/2 and HTTP/1.1? Maybe for a gzipped response vs a raw response?
It's the same content, just supplied in a different format. It should be the same URL.
Which build process are you talking about? Which XSLT library would you recommend for running on microcontrollers?
HN still has less dogma than Reddit, but it's closer than it used to be in my estimation. Reddit is still getting more dogma each day, but HN is slowly catching up.
I don't know where to turn to for online discourse that is at least mostly free from dogma these days. This used to be it.
I had also defined a "hashed:" scheme for specifying the hash of the file that is referenced by the URL, and this is a scheme that includes another URL. (The "jar:" scheme is another one that also includes other URL, and is used for referencing files within a ZIP archive.)
If browser makers had simply said that maintaining all the web standards was too much work and they were opting to deprecate parts of it, I'd likely still object but I wouldn't be calling it bad faith. As it stands, however, they and their defenders continue to cite alleged security problems as one of if not the primary reason to remove XSLT. This alleged security justification is a lie. We know it's a lie because there exists a trivial way to virtually completely remove the security burden presented by XSLT to browser maintainers without deprecating it, and the Chrome team is well aware of this option. There is no significant difference in security between "shipping an existing polyfill which implements XSLT from inside the browser's sandbox instead of outside it" and "removing all support for XSLT", so security isn't the reason they're very deliberately choosing the latter over the former.
> This seems overly dramatic. It's a small streamlining of an important software, by removing an expensive feature with almost zero usage
This isn't a counter argument, you've just repeated your point that XSLT (allegedly) isn't sufficiently well used to justify maintaining it, ignoring the fact that said tradeoff being made by browser maintainers in the first place is a problem.
Now this obviously isn't critical infrastructure, but it sucks getting stepped on and I'm getting stepped on by the removal of XSLT.
(However, I also think that generally you should not require too many features, if it can be avoided, whether those features are JavaScripts, TLS, WebAssembly, CSS, and XSLT. However, they can be useful in many circumstances despite that.)
They're just turning up the heat, even more so since AI became a thing.
And the same is true in the other direction, I want RSS to be a success but that would hinge on affirmative choices by major actors in the space choosing to sustain it.
I am sympathetic to the stance of the article, but this line really turned me off and made me wonder if I was giving the writer too much credit. This kind of "if you're not with me, then you suck" outlook is childish and off-putting.
I know it's hard for some terminally political people to understand, but some of us really, really think it's a strength to work with teammates who hold different opinions than our own.
However, JS is not 100% backwards compatible either; it is largely backwards compatible in many cases, but there are rare cases of bug fixes or deprecated APIs being removed that break old code, and even then this is not really JS itself, it's more like web/engine standards.
But in the case of docs (e.g. XSL-FO for DocBook, DITA, etc.), XSLT does actually separate content from styling.
JS is backwards compatible: new engines support code using old features.
JS is not forward compatible: old engines don't support code using new features.
Regarding your iPad woes, the problem is not the engine but websites breaking compat with it.
The distinction matters as it means that once a website is published it will keep working. The only way to break an existing website is to publish a new version usually. The XSLT situation is note-worthy as it's an exception to this rule.
<xsl:template match="abc">
<def ghi="jkl"/>
</xsl:template>
This is one of the simplest ways to do things. With JavaScript you what? Call methods? createElement("def").setAttribute("ghi", "jkl")?
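Spelled out, the JS version of that one template is something like this (a sketch, using the same made-up element names):

document.querySelectorAll("abc").forEach(abc => {
  const def = document.createElement("def");
  def.setAttribute("ghi", "jkl");
  abc.replaceWith(def);
});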
There is a ton of "template engines" (all strictly worse than XSLT); why people keep writing them? Why people invented JSX with all the complicated machinery if plain JavaScript is better?This is a fantasy world that does not exist.
People used PHP, or a tool which created HTML (DreamWeaver), or a website, or maybe a LLM today.
That seems to fail Occam's razor pretty hard, given the competing hypotheses for each of their decisions include "Mozilla staff think they're doing a smart thing but they're wrong" and "Mozilla staff are doing a smart thing, it's just not what you would have done".
And then you want to merely render this semantically rich document into HTML. This is where XSLT comes in.
It seems like most open source projects either have:
1. A singular developer, who controls what contributions are accepted and sets the direction of the project, or
2. An in-group / foundation / organization / etc. that does the same.
Do you have an example of an open source project whose roadmap is community-driven, any more than Google or Mozilla accepts bug reports and feature requests and patches and then decides if they want to merge them?
That statement was accurate enough if you’re willing to read actively and provide people with the most minimal benefit of the doubt.
1. As we're seeing here, browser developers determine what content the browser will parse and process. This happens in both directions: tons of what is now common JS/CSS shipped first as browser-specific behavior that was then standardized, and also browsers have dropped support for gopher, for SSLv2, and Flash, among other things.
2. Browsers often explicitly provide a transformation point where users can modify content. Ad blockers work specifically because the browser is not a "servant" of whatever the server returns.
3. Plenty of content can be hosted on servers but not understood or rendered by browsers. I joked about Opera elsewhere on the thread, which notably included a torrent client, but Chrome/Firefox/Safari did not: torrent files served by the server weren't run in those browsers.
What in particular do you find objectionable about this implementation? It's only claiming to be an XML parser, it isn't claiming to validate against a DTD or Schema.
The XML standard is very complex and broad; I would be surprised if anyone has implemented it in its entirety beyond a company like Microsoft or Oracle. Even then I would question it.
At the end of the day, much of XML is hard if not impossible to use or maintain. A lot of it was defined without much thought given to practicality, and most developers will never have to deal with a lot of its eccentricities.
Users and web developers seemed much less on board though[1][2], enough that Google referenced that in their announcement.
[1] https://github.com/whatwg/html/issues/11578 [2] https://github.com/whatwg/html/issues/11523
The first sentence isn't wrong, but the last sentence is confused in the same way that people who assume that Wikimedia employees have been largely responsible for the content on Wikipedia are confused about how stuff actually makes it into Wikipedia. In reality, WMF's biggest contribution is providing infrastructure costs and paying engineers to develop the Mediawiki platform that Wikipedia uses.
Likewise, a bunch of the people who built up MDN weren't and never could be "let go", because they were never employed by Mozilla to work on MDN to begin with.
(There's another problem, too, which is that in addition to selling short a lot of people who are responsible for making MDN as useful as it is but never got paid for it, it presupposes that those who were being paid to work on MDN shouldn't have been let go.)
> google has been the party leading the charge arguing for the removal.
and
> many here seem to think that was largely driven by google though that's speculation
I'm saying that I don't see any evidence that this was "driven by google". All the evidence I see is that Google, Mozilla, and Apple were all pretty immediately in agreement that removing XSLT was the move they all wanted to make.
You're telling us that we shouldn't think too hard about the fact that a Mozilla staffer opened the request for removal, and that we should notice that Google "led the charge". It would be interesting if somebody could back that up with something besides vibes, because I don't even see how there was a charge to lead. Among the groups that agreed, that agreement appears to have been quick and unanimous.
I responded essentially “it was indeed also the browser”, which it seems you agree with so I don’t know what you’re even trying to argue about.
> willing to read actively and provide people with the most minimal benefit of the doubt.
Indeed
There can be two kinds of extensions, sandboxed VM codes (e.g. WebAssembly) and native codes; the app store will only allow sandboxed VM codes, and any native codes that you might want must be installed and configured manually.
There is also the issue of such things as: identification of file formats (such as MIME), character sets, proxies, etc.
I had made up the Scorpion protocol and file format, which is intended to be between Gemini and "WWW as it should be if it was designed better". This uses ULFI rather than MIME (to avoid some of the issues of MIME), and supports TRON character code, and the Scorpion conversion file can be used to specify a way to handle unknown file formats (there are several ways that this can be specified, including by a uxn code).
So, an implementation can be versatile to support things that can be useful beyond only MIME and Unicode etc.
Adding some additional optional specifications to WWW might also help, e.g. a way to specify that certain parts of the document are supposed to be overridden by the user specifications in the client when they are available (although in some cases the client could guess, e.g. if a CSS only selects by HTML commands and media queries and not by anything else (or no CSS at all), then it should be considered unnecessary and the user's specifications of CSS can be used instead if they have been specified). Something like the Scorpion conversion file would be another possibility to have, possibly by adding a response header.
The previous "Google is killing the open web" article also mentions some similar things, but also a few others:
> in 2015, the WHATWG introduces the Fetch API, purportedly intended as the modern replacement for the old XMLHttpRequest; prominently missing from the new specification is any mention or methods to manage XML documents, in favor of JSON
Handling XML or JSON should probably better be a separate function than the function for downloading files, though. (Also, DER is better for many things)
> in 2024, Google discontinues the possibility to submit RSS feeds for review to be included in Google News
This is not an issue having to do with web browsers, although it is related to the issues that do have to do with web browsers (not) handling RSS.
> in 2025, Google announces a change in their Chrome Root Program Policy that within 2026 they will stop supporting certificate with an Extended Key Usage that includes any usage other than server [...]; this effectively kills certificates commonly used for mutual authentication
While I think they should not have stopped supporting such certificates (whoever the certificate is issued to probably should better make their own decision), it is usually helpful to use different certificates for client authentication anyways, so this is not quite as bad as they say, although it is still bad.
(X.509 client authentication would also have many other benefits, which I had described several times in the past.)
> in 2021, Google tried to remove [alert(), prompt(), and confirm()], again citing “security” as reason, despite the proposed changes being much more extensive than the purported security threat, and better solutions being proposed
Another issue is blocking events and JavaScript execution (which can sometimes be desirable; in the case of frames it should be better to only block one frame though), and modal dialog boxes potentially blocking other functions in the browser (which is undesirable). For the former case, there are other things that can be done, though, such as a JavaScript object that controls the execution of another JavaScript context which can then be suspended like a generator function (without needing to be a generator function).
> I don't recall a part of the web where browser developers were viewed as not having agency
Being a servant isn't "not having agency", it's "who do I exercise my agency on behalf of". Tools don't have agency, servants do.
(Also, they could make XSLT (and many other things that are built-in) into an extension instead, therefore making the core system simpler.)
Hi! I'm a non-programmer who picked up XSLT of my own volition and spent the last five-ish years using it to write a website. I even put up all the code on github: https://github.com/zmodemorg/wyrm.org
I spent a few weeks converting the site to use a static site generator, and there were a lot of things I could do in XSLT that I can't really do in the generator, which sucks. I'd revert the entire website in a heartbeat if I knew that XSLT support would actually stick around (in fact, that's one of the reasons I started with XSLT in the first place, I didn't think that support would go away any time soon, but here we are)
A few years ago I bought a bunch of Skylanders for practically nothing when the toys to life fad faded away. To keep track of everything I made a quick and dirty XSLT script that sorted and organized the list of figures and formatted each one based on their 'element'. That would have been murderous to do in plain HTML and CSS: https://wyrm.org/inventory/skylanders.xml
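The core of it is just xsl:sort plus attribute templates, roughly like this (a sketch with guessed element names, not the actual file):

<xsl:template match="/skylanders">
  <table>
    <xsl:for-each select="figure">
      <xsl:sort select="element"/>
      <xsl:sort select="name"/>
      <tr class="{element}">
        <td><xsl:value-of select="name"/></td>
        <td><xsl:value-of select="element"/></td>
      </tr>
    </xsl:for-each>
  </table>
</xsl:template>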
There are reasons to do this sometimes, but usually it would be better to put them inside of the security sandbox (if the security sandbox can be designed in a good way).
The user (or system administrator) could manually install and configure any native code extensions (without needing to recompile the entire browser), but sandboxed VM codes would also be available and would be used for most stuff, rather than the native code.
I used it to develop a website because I'm not a programmer, but I still want to have some basic templates on my webpage without having to set up a dev environment or a static site generator. XML and XSLT extend HTML _just enough_ to let me do some fun things without me having to become a full-on programmer.
I guess if you don't use social media or facebook you're out of luck?
And, indeed, part of the deprecation of XSLT proposal involves, in essence, moving XSLT processing from the browser-native layer to wasm as a polyfill that a site author can opt into.
I don't know that road maps are any more or less "community driven" than anything else, given the nature of their structures, but one can draw a distinction between them and projects with a heavy degree of corporate alignment, like React (Facebook) or Swift (Apple).
I'm agreeable enough to your characterization of open source projects. It's broad but, I think, charitably interpreted, true enough. But I think you can look at the range of projects and see ones that are multi stakeholder vs those with consolidated control and their degree of alignment with specific corporate missions.
When Google tries to, or is able to, muscle through Manifest v3, or FLoC, or AMP, it's not trying to model a benevolent actor standing on open source principles.
(It could still try to render in case of an error, but display the error message as well, perhaps.)
There's a lot of back and forth on every discussion about XSLT removal. I don't know if I would categorize that as 'without upsetting too many people'
What I'm saying, though, is if you don't use social media at this point you're already an outlier (I am, it should be noted, using the term broadly: you are using social media. Right now. Hacker News is in the same category as Facebook, Twitter, Mastodon, et al. in this context: it's a place you go to get information instead of using a collection of RSS feeds, and I think the reason people do this instead of that may be instructive as to the ultimate fate of RSS for that use-case).
Open source principles have to do with the source being available and users being able to access/use/modify the source. Chrome is an open source project.
To try to expand "open source principles" to suggest that if the guiding entity is a corporation and they have a heavy hand in how they steer their own project, they're not meeting those principles, is just incorrect.
The average open source project is run by a person or group with a set of goals/intentions for the project, and they make decisions about the project based on those goals. That includes sometimes taking input from users and sometimes ignoring it.
> People see Google doing anything and automatically assume it's a bad thing and that it's only happening because Google are evil.
Sure, but a person also needs to be conscious of the role that this perception plays in securing premature dismissal of anyone who ventures to criticize.
(In quoting your comment above, I've deliberately separated the first sentence from the second. Notice how easily the observation of the phenomenon described in the second sentence can be used to undergird the first claim, even though the first claim doesn't actually follow as a necessary consequence from the second.)
I eventually started using server-side XSL processing (https://nginx.org/en/docs/http/ngx_http_xslt_module.html) because I wanted my site to be viewable in text-based browsers, too, but it uses the same XSLT library that the browsers use and I don't know how long it's going to be around.
I think you're entirely missing the point of RSS by saying that. RSS doesn't and shouldn't require JavaScript.
Now, feeds could somehow be written in some bastard HTML5 directly, but please don't bring JavaScript into that debate.
XSLT lets you transform an XML document into an HTML presentation without needing JavaScript; that's its purpose.
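(For illustration, a minimal sketch of the kind of stylesheet meant here, assuming a plain RSS 2.0 feed; the channel/item/title/link names are standard RSS, everything else is made up:)
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- Turn the feed's channel into a simple HTML page -->
  <xsl:template match="/rss/channel">
    <html>
      <head><title><xsl:value-of select="title"/></title></head>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <ul><xsl:apply-templates select="item"/></ul>
      </body>
    </html>
  </xsl:template>
  <!-- Each feed item becomes a linked list entry -->
  <xsl:template match="item">
    <li><a href="{link}"><xsl:value-of select="title"/></a></li>
  </xsl:template>
</xsl:stylesheet>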
The circulation for my local newspaper is so small that they now get printed at a press a hundred miles away and are shipped in every morning to the handful of subscribers who are left. I don't even know the last time I saw a physical newspaper in person.
> Hacker News... it's a place you go to get information instead of using a collection of RSS feeds
No, it's a place I go to _in addition_ to RSS feeds. An anonymous news aggregator with web forum attached isn't really social media. Maybe some people hang out here to socialize, but that's not a use case for me
Contrast that with the other use case you dabble in (the one that makes you an outlier): pulling content from specific sources via RSS (I'm going to assume sources generating original content, not themselves link aggregators, otherwise this topic is moot). Most people see that as redundant if they have access to something like HN, or Fark, or Reddit, or Facebook. RSS readers alone, in general, don't let you share your thoughts with other people reading the article, so they're not as popular a tool.
The difference between HTTP/2 and HTTP/1.1 is exactly like the difference between plugging your PC in with a green cable or a red cable. The client neither knows nor cares.
> It's the same content, just supplied in a different format. It should be the same URL.
So what do I put as the URL of an MP3 and an Ogg of the same song? It's the same content, just supplied in a different format.
Just like protocol negotiation, HTTP has format negotiation and XML postprocessing for exactly the same reason.
> So what do I put as the URL of an MP3 and an Ogg of the same song? It's the same content, just supplied in a different format
Whatever you want? If I access example.org/example.png, most websites will return a webp or avif instead if my browser supports it.
Similarly, it makes sense to return an XML with XSLT for most browsers and a degraded experience with just a simple text file for legacy browsers such as NCSA Mosaic or 2027's Google Chrome.
I wouldn’t spend 5 minutes making that feed look pretty for browser users because no one will ever see it. I don’t know who these mythical visitors are who 1) know what RSS is and 2) want to look at it in Chrome or Safari or Firefox.
You can't polyfill many things. Should we just dump everything into the browser? Well, Google certainly thinks so. But that makes the question about "but this feature is unused, why support it" moot.
And Google has no intention to support a polyfill, or ship it with the browser. The same person who didn't even know that XSLT is used on podcast sites scribbled together some code, said "here, it's easy", and that's it.
And the main metric they use for deprecations is the number of sites/page uses. So even that doesn't work in favor of all the hardware APIs (and a few hundred others) that Google just shoved into the browser.
At least there's consensus on removing XSLT, right? But there are many, many objections about USB, HID, etc. And still that doesn't stop Google from developing, shipping and maintaining them.
Basically, the entire discussion around XSLT struck a nerve partly because all of the arguments can immediately be applied to any number of APIs that browsers, and especially Chrome, have no trouble shipping. And that comes on top of the mismanaged disaster that was the attempt to remove alert/confirm several years ago (also, "used on few sites", "security risk", "simpler code", "full browser consensus" etc.)
Let me quote from my comment, again:
--- start quote ---
The guy pushing "intent to deprecate" didn't even know about the most popular current usage (displaying podcast RSS feeds) until after posting the issue and until after people started posting examples
--- end quote ---
I would like to see more evidence than "we couldn't care less, remove it" before a consensus on removal, before an "intent to deprecate" and before opening a PR to Chrome removing the feature.
Funnily enough, even the "browser consensus" looks like this: "WebKit is cautiously supportive. We'd probably wait for one implementation to fully remove support": https://github.com/whatwg/html/issues/11523#issuecomment-314...
BTW. Literally the only "evidence" originally presented was "nearly 100% of sites use JS, while 1/10000 of those use XSLT.": https://github.com/whatwg/html/issues/11523#issuecomment-315... which was immediately called into question: https://github.com/whatwg/html/issues/11523#issuecomment-315... and https://github.com/whatwg/html/issues/11523#issuecomment-315... and that's before we account for google's own docs saying they have a blind spot in the enterprise/corporate setting where people suspect the usage may be higher.
Also, as I say. I think the main issue isn't XSLT itself. XSLT is a symptom.
> The Internet is for End Users
> This document explains why the IAB believes that, when there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users. It also explores how the IETF can more effectively achieve this.
Google does lead the charge on it, immediately having a PR to remove it from Chromium and stating intent to remove even though the guy pushing it didn't even know about XSLT uses before he even opened either of them.
XSLT is a symptom of how browser vendors approach the web these days. And yes, Google are the worst of them.
--- start quote ---
In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone. Of course, it is preferred to make things better for multiple constituencies at once.
--- end quote ---
However, the needs of browser implementers have long been the one and only priority.
Oh. It's also Google's own policy for deprecation: https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
--- start quote ---
First and foremost we have a responsibility to users of Chromium-based browsers to ensure they can expect the web at large to continue to work correctly.
The primary signal we use is the fraction of page views impacted in Chrome, usually computed via Blink’s UseCounter UMA metrics. As a general rule of thumb, 0.1% of PageVisits (1 in 1000) is large, while 0.001% is considered small but non-trivial. Anything below about 0.00001% (1 in 10 million) is generally considered trivial. There are around 771 billion web pages viewed in Chrome every month (not counting other Chromium-based browsers). So seriously breaking even 0.0001% still results in someone being frustrated every 3 seconds, and so not to be taken lightly!
--- end quote ---
RFC 8890 doesn't suggest anything that overlaps with my understanding of what the word "servant" means or implies. The library in my town endeavors to make decisions that promote the knowledge and education of people in my town. But I wouldn't characterize them as having a "servant-mindset". Maybe the person above meant "service"?
FWIW, Google/Mozilla/Apple appear to believe they're making the correct decision for the benefit of end users, by removing code that is infrequently used, unmaintained, and thus primarily a security risk for the majority of their users.
I don't see the XML-based SVG image format going anywhere.
The ODF, EPUB, and other formats also use XML. Those are not dying.
None of the above reads like a "servant-oriented mindset". It reads like "this is the framework by which we decide what's valuable". And by that framework, they're saying that keeping XSLT around is not the right call. You can disagree with that, but nothing you've quoted suggests that they're trying to prioritize any group over the majority of their users.
Think about it from a non-technical user's perspective: they click on an RSS link and get a wall of XML text. What are they going to do? Back button and move on. How are they ever going to get introduced to RSS and feed readers and the like?
I think a lot of feeds never get hit by a browser because there isn't a hyperlink to them. For example: HN has feeds, but no link in the HTML body, so I'm pretty confident they don't get browser hits. And no one who doesn't already know about feeds will ever use them.
What about people who don't "1) Know what RSS is"???
And what if you could make it friendly for them in 4 minutes? You could, by dropping in an XSLT file and adding a single line to the XML file. I bet you could do it in 3 minutes.
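(For concreteness, that single line is the xml-stylesheet processing instruction at the top of the feed; the stylesheet name here is just an example:)
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="feed.xsl"?>
<rss version="2.0">
  ...
</rss>
The browser fetches feed.xsl and renders the transformed result instead of the raw XML, while feed readers generally ignore the instruction and keep consuming the feed as-is.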
Just having users submit links that other users can comment on doesn't make it social media. I can't follow particular users or topics, I can't leave myself a note about some user that I've had a positive or negative experience with, I can't ignore someone who I don't want to read, etc. Heck, usernames are so de-emphasized on this site that I almost always forget that they're there.
The XSLT view of the RSS feed so people (especially newcomers) aren't met with a wall of XML text. It should still be a valid XML feed.
Plus it needs to work with static site generators.
"But for keeping up to date on news, people use link aggregation boards where other users post links to stuff on the web and then talk to each other about them. RSS isn't the missing piece of the puzzle for changing that, an app on top of RSS is. And in the absence of Reader, nothing has shown up to fill that role that can compete with just trading gossip on Hacker News."
... that would be the same point. RSS, by itself, is a protocol for finding out that some site created new content, and it's just not particularly compelling by itself for the average user when they can use "link aggregation boards where other users post links to stuff on the web and then talk to each other about them" instead.
what's missing is social infrastructure to direct attention to this (and maybe it's missing because people are too dumb when it comes to adblockers, or they are not bothered that much, or ...)
and of course, also maintaining a fork that does the usual convenience features/services that Google couples to Chrome is hard and obviously this has antitrust implications, but nowadays not enough people care about this either
Even cheaper than shipping the client an XML and an XSLT is just shipping them the HTML the XSLT would output in the first place.
I am sharing my view, though, that Google engineers have made up the majority of the browser-engineer comments I've seen arguing for removing XSLT.
This appears to be what they are doing, in fact!
> What’s happening is that Google (along with Mozilla and Safari) are changing the html spec to drop support for xslt. If you want to argue that this is bad because it “breaks the web”, that’s fine,
Not only does it not break the web, they are flat out lying about that being the reason they’re doing it. That is also very dangerous.
You’re doing a lot of sideways handwaving to say killing off this specific technology is not killing the open web, but others are.
XSLT is not a source of security errors, and this is your disingenuous argument from last time (please state if you work for any of these companies). Libxslt has security vulnerabilities, not XSLT itself. Furthermore, there are replacement processors they could contribute to and implement, and a myriad of other solutions, but they have chosen to kill it instead.
That is killing the Open web.
But do you see how removing a feature from a major browser makes it seem like RSS did just go away and how RSS will eventually go away?
What a terrible disingenuous argument. Anyone not in line with big tech deserves to be pushed aside eh?
Without built-in support, XSLT is inconvenient. Without built-in support, things like WebUSB cannot possibly exist.
That’s why I think they can’t be compared directly.
<script src="https://example.org/script.js"
xmlns="http://www.w3.org/1999/xhtml"></script>
You can also put CSS in there, like this:
<style xmlns="http://www.w3.org/1999/xhtml">
* { color: red; }
</style>
Or like this:
<link href="https://example.org/style.css"
rel="stylesheet" xmlns="http://www.w3.org/1999/xhtml"/>
But instead I’ll point out that the W3C no longer maintains the HTML spec. They ceded that to the WHATWG, which was spun up by the major browser developers in response to the stagnation and what amounted to abandonment of HTML by the W3C.
On second thought, that wouldn't allow me to modify the DOM before it's being parsed, I'd have to wipe the DOM and polyfill the entire page load, right?
<?xml-stylesheet type="application/javascript" href="https://example.org/script.js"?>
which would then allow the script to use the service-worker APIs to intercept and transform the request.
Your definition of “open web” appears to be “never deprecating a feature ever”. And it’s fine that you want browsers to support features forever. I don’t think that has anything to do with the open web though. Exactly like the author of this blog post, you believe things that were never even part of the “web”, such as gopher, should be supported in the name of an “open web”.
> Not only does it not break the web, they are flat out lying about that being the reason they’re doing it.
The library is known to have multiple security vulnerabilities. They have declared that it is not sustainable to maintain this dependency. And they have also declared that it’s not worth replacing it. I don’t see the lie in that. I don’t think anyone is claiming that they actually cannot support xslt. They are saying that it requires more investment to support, and the ROI is too low.
I also clarified this exact point last time. You are willfully misunderstanding the messaging because acknowledging the engineering trade offs here would force you to consider that this isn’t just an issue of lazy developers or evil PMs as you also claimed.
> please state if you work for any of these companies
I work for Microsoft who I don’t believe has chimed in on this conversation, though if Chromium removes it, Edge presumably will too. I have no visibility into the Edge position on this feature, though.
They're not even removing the ability for the browser to render XML. They're just removing an in-browser formatter for XML (a feature that can be supported by server-side rendering or client-side polyfill).
Honestly the one thing I don’t begrudge them is taking Google’s money to make them the default search engine. That’s a very easy deal with the devil to make especially because it’s so trivial to change your default search engine which I imagine a large percentage of Firefox users do with glee. But what they have focused on over the last couple of years has been very strange to watch.
I know Proton gets mixed feelings around here, but to me it’s always seemed like Proton and Mozilla should be more coordinated. Feel like they could do a lot of interesting things together
And no, XSLT doesn't have much to do with how much RSS thrives or not. RSS is basically consumed by RSS reader backends, not directly by users on their browser.
One of the web platform's problems is that it accumulates untold cruft from every failed experiment. The entire XHTML exercise turned out to be an expensive mistake, but we can't remove that because too many pages depend on it, and it ended up in a whole lot of places, including the EPUB definition. But at least XSLT could get removed. Yay for that.
However, what I do care about is that it _remains viewable and usable_. Imagine if Microsoft Word one day decided you couldn't open .doc or .rtf files from the early 2000's? The browser vendors have decided that the web is now an application delivery platform where developers must polyfill backwards compatibility, past documents be damned.
And just as the article drives the point home, it doesn't have to be this way. They could just provide the polyfill within the browser, negating any purported security issues with ancient XML libraries.
... Actually, that seems like a fine idea...
Passionate nerds giving a shit can build a far more rosy world than whatever that represents, so I don’t see why anyone should give a damn if this happens to be somewhat niche.
Moreover, Google's own doc says that even 0.0001% shouldn't be taken lightly.
As I keep saying, the person who's pushing for XSLT removal didn't even know about XSLT uses until after he posted "intent to remove", and the PR to remove to Chrome. And the usage stats he used have been questioned: https://news.ycombinator.com/item?id=45958966
I didn't look at all documents, but Working Mode describing how specs are added or removed doesn't mention users even once. It's all about implementors: https://whatwg.org/working-mode
https://github.com/dfabulich/style-xml-feeds-without-xslt
Google has recommended a polyfill for XSLT ever since they announced their plan to remove it. https://developer.chrome.com/docs/web-platform/deprecating-x...
I’m not surprised they focus on implementors in “working mode”, though. WHATWG specifically started because implementers felt like the W3C was holding back web apps. And it kind of was.
WHATWG seemed to be created with an intent to return to the earlier days of browser development, where implementors would build the stuff they felt was important and tell other implementors how to be compatible. Less talking and more shipping.
I used to work in a web dev job where when they brought in "time tracking" they wanted everyone to update a spreadsheet with what they were doing every half an hour. A spreadsheet, as literally a .xls, on a shared Windows drive. Everyone spent more time waiting for access to the spreadsheet than they did doing any work.
This situation persisted for about two weeks, and the manager who came up with the genius idea lasted about two weeks longer than that, before we eventually told the other managers we were downing tools and leaving if he didn't either get "promoted to customer" or lay off the charlie during work hours.
So, you need a lot of cleverness on the server to detect which format the client needs, and return the correct thing?
Kind of not the same situation as emitting an XML file and a chunk of XSLT with it, really.
If you're going to make the server clever, why not just make the server clever enough to return either an RSS feed or an HTML page depending on what it guesses the client wants?
Can you describe any real-world application where ftp is the best solution for a problem anyone has right now?
Consider the impact of an internet-exposed service that allows unauthenticated clients to remotely run code as root on your server.
You can then recurse wide. In theory it's best to allow only X placeables of up to Y size.
The point is, Doctype/External entities do a similar thing to XSLT/XSD (replacing elements with other elements), but in a positively ancient way.
XXE injection (which comes in several flavors), remote DTD retrieval, and quadratic blowup (a sort of twin to the billion laughs attack).
You aren't wrong though. They all live in <!DOCTYPE definition. Hence, my puzzlement.
Why process it at all? If this is as security focused as Google claims, fill the DOCTYPE with molten tungsten and throw it into the Mariana Trench. The external entities definition makes XSLT look well designed in comparison.
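(For reference, the DOCTYPE machinery being discussed looks like the sketch below; this is a textbook illustration, not taken from any real site. An external entity pulls its replacement text from outside the document, and nested internal entities are what make expansion attacks like billion laughs possible.)
<?xml version="1.0"?>
<!DOCTYPE note [
  <!-- external (SYSTEM) entity: replacement text is fetched from elsewhere -->
  <!ENTITY ext SYSTEM "https://example.org/snippet.txt">
  <!-- nested internal entities: each level multiplies the previous one -->
  <!ENTITY a "ha">
  <!ENTITY b "&a;&a;&a;&a;&a;&a;&a;&a;&a;&a;">
]>
<note>&ext; &b;</note>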
There's no cleverness involved, this is an inherent part of the HTTP protocol. But Chrome still advertises full support for XHTML and XML:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
But importantly, for audio/video files, that's still just serving static files, which is very different from having to dynamically generate different files.
> when Mozilla bent over to Google's pressure to kill off RSS by removing the “Live Bookmarks” features from the browser
If, hypothetically, the browsers stop supporting the format, nothing stops dedicated RSS/Atom/JSON feed readers from working as normal. Might be my myopic point of view, but most users who still use the standard have predominantly used this approach since the Google Reader days.
Nonetheless, by that same argument you could just kill HN off. A lot of projects have a benefit that far outweighs their raw usage numbers.
Maybe! How much Javascript would I have to learn before I could come up with a 'trivial' solution?
> the hundreds of lines of XSL you wrote.
Those hundreds of lines are the same copy/pasted if statement with 5 different conditions. For each game, I create a table by: alphabetizing the XML > going through the list searching for figures that match the game > each time I find one go through the color list to find the color to use for the table row. There are 10 color choices per game, which means that I repeated a 10-choice if statement 5 times.
There's nothing difficult here, it's just verbose.
I migrated to an Org-mode-based workflow a couple of weeks ago because I can see the writing on the wall, but most of the XML and XSLT files are still in place because cool URIs don't change(1).
Who knows how many other XML and XSLT-based sites still exist on the internet because Google refuses to index that content
Anything is better than nothing, if anyone actually listens to the feedback they get instead of taking it and ignoring it.
<table>
<tr><th>Skylanders figure</th><th>Note</th></tr>
<xsl:apply-templates select="skylanders/figure[name/@series=1]">
<xsl:sort select="name"/>
</xsl:apply-templates>
</table>
If you further refactor the XML, you could do e.g.:
<skylanders>
<figure name="Hijinx" element="undead" series="3" note=""/>
<figure name="Eye Small" element="undead" series="3" note=""/>
<figure name="Air Screamer" element="air" series="3" note="Storm Warning"/>
...
<figure name="Blast Zone" element="fire" series="2" note="Bottom only"/>
<series name="Skylanders Giants" id="1"/>
<series name="Skylanders SWAP Force" id="2"/>
<series name="Skylanders Trap Team" id="3"/>
<series name="Skylanders Superchargers" id="4"/>
<series name="Skylanders Imaginators" id="5"/>
</skylanders>
And then you can entirely eliminate the verbosity in your XSL. The templates become:
<xsl:template match="series">
<h2><xsl:value-of select="@name"/></h2>
<table>
<tr><th>Skylanders figure</th><th>Note</th></tr>
<xsl:apply-templates select="/skylanders/figure[@series=current()/@id]"><xsl:sort select="name"/></xsl:apply-templates>
</table>
</xsl:template>
<xsl:template match="figure">
<tr class="element-{ @element }">
<td><xsl:value-of select="@name" /></td>
<td><xsl:value-of select="@note" /></td>
</tr>
</xsl:template>
...
<style>
.element-air td { background: skyblue; color: black; }
.element-dark td { background: dimgrey; color: black; }
.element-earth td { background: saddlebrown; color: white; }
.element-fire td { background: firebrick; color: white; }
.element-life td { background: darkgreen; color: white; }
.element-light td { background: ivory; color: black; }
.element-magic td { background: purple; color: white; }
.element-tech td { background: orangered; color: white; }
.element-undead td { background: midnightblue; color: white; }
.element-water td { background: blue; color: white; }
.element-none td { background: black; color: white; }
</style>
...
Then in your body area:
<xsl:apply-templates select="skylanders/series"><xsl:sort select="id"/></xsl:apply-templates>
You can actually use XSL to do the XML refactor too! ChatGPT happily obliges a template to do so:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes" />
<xsl:template match="/skylanders">
<skylanders>
<xsl:apply-templates select="figure"/>
</skylanders>
</xsl:template>
<xsl:template match="figure">
<figure>
<xsl:attribute name="name">
<xsl:value-of select="name"/>
</xsl:attribute>
<xsl:attribute name="element">
<xsl:call-template name="convert-element">
<xsl:with-param name="code" select="name/@element"/>
</xsl:call-template>
</xsl:attribute>
<xsl:attribute name="series">
<xsl:value-of select="name/@series"/>
</xsl:attribute>
<xsl:attribute name="note">
<xsl:value-of select="normalize-space(note)"/>
</xsl:attribute>
</figure>
</xsl:template>
<xsl:template name="convert-element">
<xsl:param name="code"/>
<xsl:choose>
<xsl:when test="$code = 0">air</xsl:when>
<xsl:when test="$code = 1">dark</xsl:when>
<xsl:when test="$code = 2">earth</xsl:when>
<xsl:when test="$code = 3">fire</xsl:when>
<xsl:when test="$code = 4">life</xsl:when>
<xsl:when test="$code = 5">light</xsl:when>
<xsl:when test="$code = 6">magic</xsl:when>
<xsl:when test="$code = 7">tech</xsl:when>
<xsl:when test="$code = 8">undead</xsl:when>
<xsl:when test="$code = 9">water</xsl:when>
<xsl:when test="$code = 10">none</xsl:when>
<xsl:otherwise>unknown</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
Then `xsltproc refactor.xsl skylanders.xml > skylanders-refactored.xml`
As I've said elsewhere, I like XSL for its beginner-approachability, so not doing a bunch of factoring is fine, but I also like it for its power: such factoring into simple templates is possible once you wrap your head around the idea (as with CSS). Using for-each or choose should be a sign you're doing it wrong. Ideally if you did your data model well, you just do simple template expansions everywhere.
Come up with the worst possible way to present information over a web page.
What device with 2kB of RAM is going to generate any kind of useful RSS feed? Why would you not use something more capable, which is not only going to have more memory but also a lower power consumption?
Because it was a shitty company and I only worked there for one month. I absolutely hate any type of time tracking or attempts to micromanage.
Such devices usually don't generate RSS feeds, but e.g. sensor measurements as XML (which can be processed directly, or opened in a browser with XSLT to generate a website and an SVG chart from it)
> Why would you not use something more capable, which is not only going to have more memory but also a lower power consumption?
Because anything else will have >100× more power consumption?
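(A sketch of the XSLT-to-SVG idea mentioned above, assuming a made-up <readings> document where each <reading> carries a numeric value attribute between 0 and 100:)
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:svg="http://www.w3.org/2000/svg">
  <xsl:output method="xml"/>
  <!-- Wrap all readings in a single SVG element -->
  <xsl:template match="/readings">
    <svg:svg width="400" height="110">
      <xsl:apply-templates select="reading"/>
    </svg:svg>
  </xsl:template>
  <!-- Each reading becomes one bar; position() spaces the bars out -->
  <xsl:template match="reading">
    <svg:rect x="{(position() - 1) * 20}" y="{100 - @value}"
              width="15" height="{@value}" fill="steelblue"/>
  </xsl:template>
</xsl:stylesheet>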
Yes, but the "bug" here was "a single website is broken". Here, we are talking about an outcome that will break many websites (more than removing USB support would break) and that is considered acceptable.
> That's a proposal for Javascript, whose controlling body is TC39
Yes, and the culture of TC39 used to be the culture of those who develop tools for using the web (don't break the Space Jam website, etc.)
Also, the entire measurement is fundamentally just part of the decision. Removing Flash broke tons of sites, and it was done anyways because Flash was a nightmare.
It is rather terrible to have two different pages, because that requires either server or toolchain support, and complicates testing. The XSLT approach was tried, tested, and KISS – provided you didn't have any insecure/secure context mismatches, or CORS issues, which would stop the XSL stylesheet from loading. (But that's less likely to spontaneously go wrong than an update to a PHP extension breaking your script.)
I wouldn't say that I did it wrong, I just didn't do it efficiently. And I knew that at the time.
I appreciate the work, but I've said it elsewhere: I'm not a programmer. This was something I spent a couple of afternoons on five years ago and never looked at again after getting the results I wanted.
The thing about doing it wrong was meant as a reply to the comment upthread about for-each etc. being necessary. For something like you have, they're absolutely not. It's fine if that was the easiest way for you to do it though. My whole point was that I've always seen XSLT as much more of an approachable, enabling technology than modern JS approaches.
A static site is a collection of static files. It doesn't need a server, you could just open it locally (in browsers that don't block file:// URI schemes). If you need some special configuration of the server, it is no longer a static site. The server is dynamically selecting which content is served.
Are XML technologies better or safer? Probably. However practice sets the standards. Is it a good thing? It remains to be seen.
Personally I am not satisfied with the "Web" experience. I find it unsafe, privacy disrespecting, slow and non-standards compliant.
Also, it is not complexity if XSLT lives in a third-party library with a well-defined interface.
The problem is control. They gain control in two ways: they will get more involved in the XML code base, and the bad actors run in the JS sandbox.
That is why we have standards though. To relinquish control through interoperability.
But agreed; if your web server is just reflecting the filesystem, add this to the pile of "things that are hard with that kind of web server." But perhaps worth noting: even Apache can select the file to emit based on the Accept header (e.g. via MultiViews), and a small handler on top of Python's http.server could do the same.
Less than the amount of XSL you'd need.
> Those hundreds of lines are the same copy/pasted if statement with 5 different conditions.
With a programming language, you could have used loops.
>you could definitely use a static site generator to create multiple versions of the site data and then configure your web server to select which data is emitted
And this web-server configuration would not exist within the static site. The static site generator could not output it, therefore it is not a part of the static site. It is not contained within the files output by the static site generator. It is additional dynamic content added by the web server.
It breaks the fundamental aspect of a static site, that it can be deployed simply to any service without change to the content. Just upload a zip file, and you are done.
<xsl:template match="series">
<h2><xsl:value-of select="@name"/></h2>
<table>
<tr><th>Skylanders figure</th><th>Note</th></tr>
<xsl:apply-templates select="/skylanders/figure[@series=current()/@id]"><xsl:sort select="name"/></xsl:apply-templates>
</table>
</xsl:template>
<xsl:template match="figure">
<tr class="element-{ @element }">
<td><xsl:value-of select="@name" /></td>
<td><xsl:value-of select="@note" /></td>
</tr>
</xsl:template>
No explicit loops necessary.
I get your meaning; I've just heard "static site" used to refer to a site where the content isn't dynamically computed at runtime, not a site where the server is doing a near-direct mapping from the filesystem to the HTTP output.
> Just upload a zip file, and you are done.
This is actually how I serve my static sites via Dreamhost. The zipfile includes the content negotiation rules in the `.htaccess` file.
(Perhaps worth remembering: even the rule "the HTTP responses are generated by looking up a file matching the path in the URL and echoing that file as the body of the GET response" is still a per-server rule; there's no aspect of the HTTP spec that declares "The filesystem is directly mirrored to web access" is a thing. It's rather a protocol used by many simple web servers, and most of them allow overrides to do something slightly more complicated while being one step away from "this is just the identity function on whatever is in your filesystem, well, not technically the identity function because unless someone did something very naughty, I don't serve anything for http://example.com/../uhoh-now-i-am-in-your-user-directory").
> Shipping the feature in Firefox Nightly caused at least one popular website to break.
and links to https://bugzilla.mozilla.org/show_bug.cgi?id=1443630 which points to a single site as being broken. There's no check as to the size of the impacted user base, but there is a link in the blog post to https://www.w3.org/TR/html-design-principles/#support-existi... which says:
> Existing content often relies upon expected user agent processing and behavior to function as intended. Processing requirements should be specified to ensure that user agents implementing this specification will be able to handle most existing content. In particular, it should be possible to process existing HTML documents as HTML 5 and get results that are compatible with the existing expectations of users and authors, based on the behavior of existing browsers. It should be made possible, though not necessarily required, to do this without mode switching.
> Content relying on existing browser behavior can take many forms. It may rely on elements, attributes or APIs that are part of earlier HTML specifications, but not part of HTML 5, or on features that are entirely proprietary. It may depend on specific error handling rules. In rare cases, it may depend on a feature from earlier HTML specifications not being implemented as specified
Which is the "servant-oriented" mindset I'm talking about here.
> Removing Flash broke tons of sites
Yes, but Flash wasn't part of a standard; it was an ad-hoc thing that each browser _happened_ to support (rough consensus and working code). There was no "build on this and we'll guarantee it will continue to work" agreement between authors and implementers of the web. XSLT 1.0, as painful as it is, is part of that agreement.
Flash doesn’t have an RFC because it was a commercial design by Adobe, not because it wasn’t a defined spec that was supported by browsers.
Meanwhile SSLv2 and v3 and FTP and gopher have RFCs and have been removed.
Making an RFC about a technology is not a commitment of any kind to support it for any length of time.
You’ve conjured a mystique around historical browser ideology that doesn’t exist, and that’s why what you’re seeing today feels at odds with that fantasy.
SSLv2 and v3 all are protocol versions that anyone can still support, and removing support for them breaks certain web properties. This is less of a problem because the implementations of the protocol are themselves time-limited (you can't get an SSL certificate that is valid until the heat death of the universe).
FTP and gopher support wasn't removed from the browser without a redirect (you can install an FTP client or a Gopher client and the browser will still route-out-to-it).
The point isn't "RFC = commitment", the point is that "the culture of the web" has, for a very long time, been "keep things working for the users", and doing something like removing built-in FTP support was something that was a _long_ time in coming. Whereas, as I understand it, there is a perfectly valid way forward for continuing to support this tech as-is in a secure manner (WASM-up-the-existing-lib), and instead of doing that, improving security for everyone and keeping older parts of the web online, the developers of the browsers have decided that the "extra work" of writing that one-time integration and keeping it working in perpetuity is too burdensome for _them_. It feels like what is being said by the browser teams is, "Yes, broken websites are bad for end users, yes, there are more end users than developers, yes, those users are less technical and therefore likely are going to lose access to goods they previously had ... but c'est la vie. Use {Dusk, Temple}OS if you don't want the deal altered any further." And I object to what I perceive as a lack of consideration of those who use the web. Who are the people that we serve.
Is your proposal that we replace those relatively heavyweight standards with something more primitive that we could then build the behavior on top of? I think there's meat on those bones. Quite frankly, the amount of work we do to push intent to fit the constraints of HTML and CSS in web apps is a little absurd relative to the frameworks and languages we have to do that in non-web widget toolkits. I'm not actually convinced that "Tk as an abstraction in the browser that we build HTML and CSS on top of" would be a bad thing (although we probably want to use something better than Tk, with more security guarantees).
... However, if we did that, we would really damage the accessibility story as it currently stands (since accessibility hinting is built on top of the HTML spec) and that's probably a bridge too far. We already have enough site developers who put zero thought into their accessibility; removing even the defaults HTML provides with its structure would be a bad call.