
418 points by akagusu | 1 comment
Aurornis ◴[] No.45955140[source]
I have yet to read an article complaining about XSLT deprecation from someone who can explain why they actually used it and why it’s important to them.

> I will keep using XSLT, and in fact will look for new opportunities to rely on it.

This is the closest I’ve seen, but it’s not an explanation of why it was important before the deprecation. It’s a declaration that they’re using it as an act of rebellion.

replies(10): >>45955238 #>>45955283 #>>45955351 #>>45955795 #>>45955805 #>>45955821 #>>45956141 #>>45956722 #>>45956976 #>>45958239 #
James_K ◴[] No.45955821[source]
I use XSLT because I want my website to work for users with JavaScript disabled and I want to present my Atom feed link as an HTML document on a statically hosted site without breaking standards compliance. Hope this helps.
replies(2): >>45955882 #>>45958444 #
matthews3 ◴[] No.45955882[source]
Could you run XSLT as part of your build process, and serve the generated HTML?
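
For context, with the usual libxslt tooling this build step would be a one-liner; file names here are illustrative:

    xsltproc feed-to-html.xsl feed.xml > feed.html

xsltproc applies the stylesheet to the feed and writes the resulting HTML to a static file that can be served alongside the feed.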
replies(4): >>45955943 #>>45955956 #>>45956760 #>>45959294 #
James_K ◴[] No.45955943[source]
No because then it would not be an Atom feed. Atom is a syndication format, the successor to RSS. I must provide users with a link to a valid Atom XML document, and I want them to see a web page when this link is clicked.

This is why so many people find this objectionable. If you want to have a basic blog, you need some HTML documents and an RSS/Atom feed. The technologies required to do this are HTML for the documents and XSLT to format the feed. Google is now removing one of those technologies, which makes it essentially impossible to serve a truly static website.
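For context, the technique in question is the xml-stylesheet processing instruction: the file stays a valid Atom document, but a browser that fetches it directly applies the referenced XSLT and displays the result as HTML. A minimal sketch, with illustrative file names (a fully valid feed also needs id, updated, and author elements):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="feed.xsl"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
      <title>Example blog</title>
      <entry>
        <title>First post</title>
        <link href="https://example.com/posts/first.html"/>
      </entry>
    </feed>

and feed.xsl turns the feed into a plain HTML listing:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:atom="http://www.w3.org/2005/Atom">
      <!-- Render the Atom feed as an HTML page -->
      <xsl:template match="/atom:feed">
        <html>
          <body>
            <h1><xsl:value-of select="atom:title"/></h1>
            <ul>
              <xsl:for-each select="atom:entry">
                <li>
                  <a href="{atom:link/@href}">
                    <xsl:value-of select="atom:title"/>
                  </a>
                </li>
              </xsl:for-each>
            </ul>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>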

replies(2): >>45955974 #>>45956484 #
ErroneousBosh ◴[] No.45955974[source]
> Google is now removing one of those technologies, which makes it essentially impossible to serve a truly static website.

How so? You're just generating static pages. Generate ones that work.

replies(1): >>45956162 #
James_K ◴[] No.45956162{3}[source]
You cannot generate a valid RSS/Atom document which also renders as HTML.
replies(1): >>45956531 #
shadowgovt ◴[] No.45956531{4}[source]
So put them on separate pages because they are separate protocols (HTML for the browser and XML for a feed reader), with a link on the HTML page to be copied and pasted into a feed reader.

In this context, it really feels like the developer has over-constrained the problem to fit browsers as they are right now.
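
For reference, the conventional form of this suggestion is feed autodiscovery: the HTML page advertises the feed with a link rel="alternate" element, plus a visible anchor for copy-and-paste. Paths here are illustrative:

    <head>
      <link rel="alternate" type="application/atom+xml"
            title="Atom feed" href="/feed.xml"/>
    </head>
    <body>
      <a href="/feed.xml">Subscribe via Atom</a>
    </body>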

replies(1): >>45956725 #
kuschku ◴[] No.45956725{5}[source]
> So put them on separate pages because they are separate protocols

Would you also suggest I use separate URLs for HTTP/2 and HTTP/1.1? Maybe for a gzipped response vs a raw response?

It's the same content, just supplied in a different format. It should be the same URL.
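
The analogy here is to HTTP content negotiation, where one URL serves multiple representations chosen by request headers rather than by separate paths. A sketch of the exchange, with an illustrative hostname:

    GET /feed HTTP/1.1
    Host: example.com
    Accept: application/atom+xml

    HTTP/1.1 200 OK
    Content-Type: application/atom+xml
    Vary: Accept

The same URL would return Content-Type: text/html when the client sends Accept: text/html. Note, though, that this kind of negotiation needs a server making per-request decisions, which is exactly what a statically hosted site lacks; the xml-stylesheet approach pushes the "negotiation" into the document itself.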

replies(3): >>45956969 #>>45958694 #>>45959471 #
zzo38computer ◴[] No.45956969{6}[source]
There are separate URLs for "https:" vs "http:", although they usually point to the same content when both are available (I have seen sites where they don't), while compression (and some other things) is decided by headers. Still, it might make sense to optionally include some of these things within the URL (within the authority section and/or the scheme section somehow): compression, version of the internet, version of the protocol, certificate pinning, etc., delimited in such a way that a program which understands the convention can easily ignore them. That might make a mess, though.

I had also defined a "hashed:" scheme for specifying the hash of the file referenced by the URL; it is a scheme that includes another URL. (The "jar:" scheme is another one that also includes another URL, and is used for referencing files within a ZIP archive.)
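
For reference, a jar: URL wraps a complete inner URL and an entry path, separated by "!/"; the archive and path here are illustrative:

    jar:https://example.com/archive.zip!/docs/readme.txt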