    392 points by _kush | 19 comments
    badmintonbaseba
    I have worked for a company that was (and probably still is) heavily invested in XSLT for XML templating. It's not good, and they would probably migrate away from it if they could.

      1. Even though there are newer XSLT standards, XSLT 1.0 is still dominant. It is quite limited and weird compared to the newer standards.
    
      2. Resolving performance problems in XSLT templates is hell. XSLT is a Turing-complete, functional-style language with performance very much abstracted away. There were XSLT templates that worked fine for most documents, but then one document came in with a ~100-row table and it blew up. It turned out the template that processed the table was O(N^2) or worse, with no obvious way to optimize it (it might even have had an XPath on each row that was itself O(N) or worse; see the sketch after this comment). I don't know exactly how it manifested, but as I recall, XSLT spent more than 7 minutes processing that document.
    
    JS might have other problems, but not being able to resolve algorithmic complexity issues is not one of them.
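
    For illustration, a rough TypeScript analogue of that kind of accidentally quadratic template (the actual XSLT isn't shown in the thread; all names here are hypothetical):

        // Hypothetical analogue: XSLT 1.0 templates sometimes derive a row's
        // position by scanning its preceding siblings -- an O(N) XPath
        // evaluated once per row, making the whole table O(N^2).
        type Row = { cells: string[] };

        function renderTable(rows: Row[]): string[] {
          return rows.map(row => {
            // like count(preceding-sibling::row) + 1 -- a linear scan per row
            const position = rows.indexOf(row) + 1;
            return `${position}: ${row.cells.join(" | ")}`;
          });
        }

    Whether such a scan is avoidable depends on the processor and the template; the point is that the cost model is hidden from the author.
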
    nolok
    It's, generally speaking, part of the problem with the entire "XML as a savior" mindset of that earlier era, and a big reason why we left it behind; it doesn't matter if it's XSLT, SOAP, or even XHTML in a way... Those were designed as machine languages, meant for machines talking to machines, and invariably something goes south, and they're not really made for us to intervene in the middle. It can be done, but it's way more work than it should be, especially since they were clearly never designed around the idea that those machines would sometimes speak "wrong", or in a different "dialect".

    It looks great, then you design your stuff and it goes great, then you deploy to the real world, everything catches fire instantly, and every time you put out one fire another one starts.

    1. diggan
    > It's, generally speaking, part of the problem with the entire "XML as a savior" mindset of that earlier era, and a big reason why we left it behind

    Generally speaking, I feel like this is true for a lot of things in programming circles, XML included.

    New technology appears, and some people play around with it. Others come up with ways to use it for something else. Give it some time, and eventually people start putting it everywhere. Soon the "X is not for Y" blog posts appear, and usage finally starts to decrease as people rediscover "use the right tool for the right problem". Wait some more time, a new technology appears, and the same cycle begins again.

    I've seen it with so many things by now that I think we (the software community) will forever be stuck in this cycle, and the only way to win is to explicitly step out of it, watch from afar, pick up the pieces that actually make sense to keep using, and ignore the rest.

    2. colonwqbang
    There have been many such cycles, but the XML hysteria of the '00s is the worst I can think of. It lasted a long time, and the square peg of XML was shoved into so many round holes.
    3. 0x445442
    IDK, the XML hysteria is comparable to the dynamic-language and functional-language hysterias. And it pales in comparison to the microservices, SPA, and current AI hysterias.
    4. colejohnson66
    A controversial opinion, but JSON is that too. Not as bad as XML was (~~there's no "JSLT"~~), but wasting cycles to manifest structured data in an unstructured textual format has massive overhead on the source and destination sides. It only took off because "JavaScript everywhere" was taking off, performance be damned. Protobufs and other binary formats already existed, but JSON was appealing because it's easily inspectable (it's plaintext) and easy to use: `JSON.stringify` and `JSON.parse` were already there.

    We eventually said, "what if we made databases based on JSON?", and then came MongoDB. Worse performance than a relational database, but who cares! It's JSON! People have mostly moved away from document databases, but that's because they realized it was a bad idea for the majority of use cases.
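
    For reference, the zero-setup round trip mentioned above is a one-liner in each direction (a trivial sketch):

        // The serializer and parser ship with every JS runtime -- no schema,
        // no codegen, and the wire format is human-readable plaintext.
        const payload = { user: "ada", scores: [97, 64] };

        const wire = JSON.stringify(payload); // '{"user":"ada","scores":[97,64]}'
        const back = JSON.parse(wire);        // a plain object again

        console.log(back.scores[0]); // 97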

    5. ako
    There is JSLT: https://github.com/schibsted/jslt. It can be useful if you need to transform a JSON document into another JSON structure.
    7. xorcist
    Agreed. Also, Docker.
    8. homebrewer
    IMHO it's pretty comparable; the difference is only in the magnitude of the insanity. After all, the industry did crap out hardware XML accelerators that were supposed to speed up massive amounts of XML transformations. Is that not the GPU/TPU craze of today?

    https://en.wikipedia.org/wiki/XML_appliance

    E.g.

    https://www.serverwatch.com/hardware/power-up-xml-data-proce...

    9. nolok
    The people who made that are either very funny in a sarcastic way, or in severe need of a history lesson about the area they're working in.
    10. diggan
    Yup, agree with everything you said!

    I think the only part left out is how people believe in the currently hyped thing "because this time it's right!", or whatever they claim. Kind of like how TypeScript people always appear when you say that TypeScript is one of those hyped things and will eventually be overshadowed by something else, just like the languages before it; sure enough, soon someone will show up to explain why TypeScript happens to be different.

    11. imtringued
    The fact that you bring up protobufs as the primary replacement for JSON speaks volumes. It's like you're worried about a problem that only exists in your own head.

    >wasting cycles to manifest structured data in an unstructured textual format

    JSON IS a structured textual format, you doofus. What you're complaining about is that the message defines its own schema.

    >has massive overhead on the source and destination sides

    The people who care about the overhead use MessagePack or CBOR instead.
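
    For illustration, a minimal sketch of that swap, assuming the @msgpack/msgpack npm package (the thread names the formats, not any particular library):

        import { encode, decode } from "@msgpack/msgpack";

        const doc = { id: 42, tags: ["xml", "json"], active: true };

        // MessagePack keeps JSON's self-describing data model, but in binary:
        // it round-trips without a schema and is usually smaller on the wire.
        const packed: Uint8Array = encode(doc);
        console.log(packed.byteLength, JSON.stringify(doc).length);
        console.log(decode(packed)); // { id: 42, tags: ["xml", "json"], active: true }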

    I personally hope that I will never have to touch anything based on protobufs in my entire life. Protobuf is a garbage format that fails at the basics. You need the schema one way or another, so why isn't there a way to negotiate the schema at runtime in protobuf? Easily half or more of the questionable design decisions in protobuf would go away if the client could retrieve the schema at runtime.

    The compiler-based workflow in protobuf doesn't buy you a significant amount of performance in the average JS or JVM-based web server, since you're copying from a JS object or POJO into a native protobuf message anyway. It's inviting an absurd amount of pain for essentially zero benefit.

    What I'm seeing here is a motte-and-bailey justification for making the world a worse place. The motte is the argument that text-based formats are computationally wasteful, which is easily defended. The bailey is the implicit argument that hard-coding the schema the way protobuf does is the only way to implement a binary format.

    Note that I'm not particularly arguing in favor of MessagePack here, or even against protobuf as it exists on the wire. If anything, I'm arguing the opposite: you could have the benefits of JSON and protobuf in one, a solution so good that it makes everything else obsolete.

    12. colejohnson66
    I didn't say protobufs were a valid replacement; you only think I did: "Protobufs and other binary formats already existed, [..]". I was only using them as an example of a binary format that most programmers have heard of; more people know of protobufs than of MessagePack or CBOR.

    Please avoid snark.

    13. soulofmischief
    At least arrays of numbers are naturally much closer to the hardware; we've definitely come a long way in that regard.
    14. ako
    What is a better alternative if you just need to transform JSON from one structure to another?
    15. jimbokun
    Both XML and JSON were poor replacements for s-expressions. Combined with Lisp and Lisp macros, a more powerful text format and language for data manipulation has never been created.
    16. vjvjvjvjghv
    Exactly. Compared to microservices, XML is a pretty minor problem.
    17. asa400
    Load it into a full programming language runtime and use the great collections libraries available in almost all languages to transform it, then serialize it into your target format. I want to use maps and vectors and real integers and functions and date libraries and spec libraries. String-to-string processing is hell.
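
    For illustration, a minimal TypeScript sketch of that workflow (the input and output shapes here are hypothetical):

        // Parse -> transform with real data structures -> serialize.
        interface Order { customer: string; total: number }

        function totalsByCustomer(json: string): string {
          const orders: Order[] = JSON.parse(json);

          // Real maps and numbers instead of string-to-string templating.
          const totals = new Map<string, number>();
          for (const { customer, total } of orders) {
            totals.set(customer, (totals.get(customer) ?? 0) + total);
          }

          return JSON.stringify(Object.fromEntries(totals));
        }
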
    18. rorylaitila
    Imperative code. It's easy to mentally parse, comment, log, and splice in other data. Why add another dependency just to go from JSON to JSON? That would need an exceptional justification.
    19. bogeholm
    From your first link:

    > An XML appliance is a special-purpose network device used to secure, manage and mediate XML traffic.

    Holy moly