
418 points akagusu | 2 comments
nwellnhof No.45955183
Removing XSLT from browsers was long overdue, and I say that as an ex-maintainer of libxslt who probably triggered (though didn't cause) this removal. What's more interesting is that Chromium plans to switch to a Rust-based XML parser. Currently, they seem to favor xml-rs, which implements only a subset of XML. So apparently Google is willing to drop standards-compliant XML support as well. That is far more concerning.
replies(11): >>45955239 #>>45955425 #>>45955442 #>>45955667 #>>45955747 #>>45955961 #>>45956057 #>>45957011 #>>45957170 #>>45957880 #>>45977574 #
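(Editorial illustration, not from the thread: one area a "subset" XML parser commonly omits is DTD handling, such as internal general entities, which a standards-compliant parser must expand. The sketch below shows the behavior using Python's stdlib ElementTree, whose Expat backend handles the internal DTD subset; the document content is made up for the example.)

```python
# Internal-entity expansion: a feature of full XML 1.0 that subset
# parsers often skip. Expat (via ElementTree) expands it by default.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<!DOCTYPE note [<!ENTITY who "world">]>
<note>Hello, &who;!</note>"""

root = ET.fromstring(doc)
print(root.text)  # -> Hello, world!
```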
zetafunction No.45955667
https://issues.chromium.org/issues/451401343 tracks the work needed in the upstream xml-rs repository, so the team does appear to be addressing the issues that would affect standards compliance.

Disclaimer: I work on Chrome and have occasionally dabbled in libxml2/libxslt in the past, but I'm not directly involved in any of the current work.

replies(2): >>45955710 #>>45956175 #
1. inejge No.45956175
I hope they will also work on speeding it up a bit. I needed to churn through 25-30 MB SAML metadata dumps, and an xml-rs pull parser took 3x as long as the equivalent in Python (which uses libxml2 internally, I think). I rewrote it all with quick-xml and got a 7-8x speedup over Python, i.e., at least 20x over xml-rs.
replies(1): >>45957967 #
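(Editorial sketch, not the commenter's code: the pull-parsing model described above, where the caller pulls events as the parser advances so memory stays bounded on large inputs, can be shown with Python's stdlib `iterparse`. The SAML-metadata-like element and attribute names below are hypothetical.)

```python
# Pull-style parsing with stdlib ElementTree (Expat underneath):
# extract entityID attributes without building the whole tree in memory.
import xml.etree.ElementTree as ET
from io import BytesIO

doc = BytesIO(b"""<EntitiesDescriptor>
  <EntityDescriptor entityID="https://idp.example.org"/>
  <EntityDescriptor entityID="https://sp.example.com"/>
</EntitiesDescriptor>""")

entity_ids = []
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "EntityDescriptor":
        entity_ids.append(elem.get("entityID"))
        elem.clear()  # free the subtree once processed

print(entity_ids)  # -> ['https://idp.example.org', 'https://sp.example.com']
```

This is the same streaming model that xml-rs and quick-xml expose in Rust; the performance gap the commenter measured is in the parser internals, not the API shape.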
2. nwellnhof No.45957967
Python's ElementTree uses Expat; only lxml uses libxml2. Right now I'm working on SIMD acceleration in my not-yet-released, GPL-licensed fork of libxml2. If you have lots of character data or large attribute values, as in SVG, you will see tremendous speed improvements (gigabytes per second). Unfortunately, this is unlikely to make it into web browsers.