Edit: "Normal/traditional frontend" here means both vanilla (HTML+JS+CSS) and the most popular frameworks (React, Angular, Vue, Next).
Edit: "Normal/traditional frontend" here means both vanilla (HTML+JS+CSS) and the most popular frameworks (React, Angular, Vue, Next).
When an interactive "widget" is needed, I try to simply embed it in one HTML page, and avoid making the whole app a single page application (SPA).
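A minimal sketch of what I mean (hypothetical counter widget - the rest of the page stays plain server-rendered HTML):

    <!-- one island of interactivity on an otherwise static page -->
    <div>
      <span id="count">0</span>
      <button id="inc">+1</button>
    </div>
    <script>
      var countEl = document.getElementById('count');
      document.getElementById('inc').addEventListener('click', function () {
        countEl.textContent = String(Number(countEl.textContent) + 1);
      });
    </script>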
SPAs are problematic because you need to manage state twice: in the BE and the FE. You may also want a spec'ed API (client library generation from the spec would be AWESOME: GraphQL and OpenAPI v3 have that and it helps a lot).
> so you use normal frontend technologies
This is the problem. "Normal" means React/Vue/Angular these days and they are all shit IMHO. This is partly because JS is a rampification, and TS fixes only what could be fixed by _adding_ to the language (since it's a superset). So TS is not a fix.
I had great success with Elm on the frontend. It's not normal by any norm. Probably less popular than HTMX. But it helps to build really solid web apps and all devs that use it become FP-superstars in a month.
Tools like ReasonML/ReScript and PureScript may also have these benefits.
Is jQuery normal? What about the Google Closure Compiler? ColdFusion? Silverlight? Ruby and CoffeeScript? Angular? SPA React with classes? Elm? SSR React with a server framework? Client-only vanilla DOM manipulation?
Your idea of normal is presumably whatever you’ve been using for the past few years. For someone who starts using Htmx now, it becomes normal. And if there’s enough of those people, their idea of normal becomes commonplace.
OK, this helps explain some of the reasoning.
Unfortunately, that means the tradeoff is that you're optimizing for developer experience instead of user experience - htmx is much easier for the developer, but worse for the user because of higher latency for all actions. I don't see how you can get around this if your paradigm is that you do all of your computation on the server - and if you mix client- and server-side computation, then you're adding back in complexity that you explicitly wanted to get away from by using htmx.
> "Normal" means React/Vue/Angular these days
I didn't mean (just) that. I included vanilla webtech in my definition of "normal" - I guess I should have clarified in my initial comment (I just meant to exclude really exotic, if useful, things like Elm). Does that change how you would respond to it?
For the purposes of my comment and question - I do: vanilla (HTML+CSS+JS) and the most popular frameworks (React, Vue, Angular, Next).
> Your idea of normal is presumably whatever you’ve been using for the past few years.
None of those matter for my comment. Just substitute the value I provided above in to my original comment and you should be able to respond to the substance of the point I was making.
You can whip up a simple HTML form and spruce it up with htmx so it feels "modern" to current users, with little effort and few changes, and importantly without having to learn the insanity of modern front-end stacks. Not only curmudgeons from the 90s like me benefit from this!
> "you want to push as much logic to the client"
React and Next have been moving in the opposite direction with SSR.
As I said, there isn’t a single right answer. Client-only vanilla JS offers one solution, hybrid SSR like Next/React offers another, and Htmx yet another with different tradeoffs on the same spectrum.
> "you want to push as much logic to the client"
> React and Next have been moving in the opposite direction with SSR.
It doesn't matter - they still support client-side rendering, the other frameworks I listed are still client-side-rendering focused, and I explicitly enumerated vanilla webtech where that's the norm.
Please stop nitpicking my language. This is extremely tiring, boring, and not in the spirit of intellectual curiosity.
This implies you value optimization over other concerns, will do SSR & rehydration, etc.
If your implementation is poor
> all of your computation on the server
You doing weather forecasting? Crypto mining? What "computation" is happening on the client? The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.
This is factually incorrect. Latency is limited by the speed of light and the user's internet connection. If you read one of the links that I'd posted, you'd also know that a lot of users have very bad internet connections.
> You doing weather forecasting? Crypto mining? What "computation" is happening on the client?
This is absolutely ridiculous. It's very easy to see that you might want a simple SPA that allows you to browse a somewhat-interactive site (like your bank's site) without having to make a lot of round-trips to the server, and there's also thousands of examples of complex web applications that exist in the real world that serve as trivial examples of computation that might happen on the client.
> The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.
I never mentioned anything about "ads" or "most web sites" - I was asking an engineering question. Ads are entirely irrelevant here. I doubt you even have data to back this claim up.
Please don't leave low-effort and low-quality responses like this - it wastes peoples' time and degrades the quality of HN.
Feel free to disagree with them back if you wish, but characterising their disagreement as language nitpicking isn't disagreeing, it's just wrong.
Latency is something to consider, yes. Besides that, we should not forget that it is easy to make an HTMX mess: HTMX is not a good fit for all use-cases and some approaches are dead ends (the article even talks about this, but you can find more testimonies of this online). With HTMX you also create a lot of endpoints, usually without a spec: this can also become an issue (it might not work for some teams).
> if you mix client- and server-side computation, then you're adding back in complexity that you explicitly wanted to get away from by using htmx.
Exactly! A good reason not to use HTMX, if you need a lot of browser-side computation.
> I didn't mean (just) that. I included vanilla webtech in my definition of "normal"
If you mean "just scripting with JS (w/o any framework)" then I still do not think this is an acceptable alternative to compare HTMX to. IMHO you have to compare it with something that provides a solid basis to develop a larger application on. Otherwise you may say HTMX is great just because the status quo (vanilla JS/React/Vue/Angular) is such a mess.
That's an assertion I don't agree with.
Data still needs to come from or go to the server, whether you do it in a snippet of HTML or with an API call.
In either case, latency is there.
I'd say all these FE frameworks help us build and structure browser applications that potentially improve UX over "just HTML pages". Hence they all try to improve the DX of building such apps compared to vanilla JS or jQuery.
https://htmx.org/essays/a-real-world-react-to-htmx-port/
https://htmx.org/essays/another-real-world-react-to-htmx-por...
This is the most beneficial thing related to perf.
I don't yet render fragments (I only return full pages), so there's a lot of untapped potential for a speed-up...
My main issue with SPAs, and client rendering in general, has always been the attempt to client render state that is persisted elsewhere.
There are certain pieces of state that really do live on the client, and for that client rendering is great. The vast majority of cases involve state that is persisted somewhere on the server though, and in those cases it's needlessly complex to ship both the state and an entire rendering engine to the browser.
Not really: your backend has rich domain logic you can leverage to provide users with as much data as possible, while providing comparable levels of interactivity. Pushing as much logic (i.e., state) to the client as you can results in a pale imitation of that domain logic on the front end, leading to a greatly diminished user experience.
This is a solved problem. It is simple to download content shortly before it is very likely to be needed using plain old HTML. Additionally, all major hypermedia frameworks have mechanisms to download a link on mousedown or when you hover over a link for a specified time.
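For example, with htmx's preload extension (a sketch - the default trigger is mousedown, and you can opt into hover instead; check the extension docs for the exact script include):

    <body hx-ext="preload">
      <!-- fetched on mousedown, before the click even completes -->
      <a href="/reports" preload>Reports</a>
      <!-- fetched after the cursor hovers for a moment -->
      <a href="/contacts" preload="mouseover">Contacts</a>
    </body>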
> If you read one of the links that I'd posted, you'd also know that a lot of users have very bad internet connections.
Your links mortally wound your argument for JS frameworks, because poor latency is strongly linked with poor download speeds. The second link also has screenfuls of text slamming sites that make users download 10-20 MB of data (i.e. the normal JS front ends). Additionally, in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client-side processing vs. SSR.
The part about users being more latency-limited than compute-limited, or wanting to push as much to the browser as possible?
The former is somewhat hard to quantify, but most engineers building interactive applications (or distributed systems, of which interactive client-server webapps are a special case) have far more trouble with latency than compute.
The latter is definitely true.
> Data still needs to come from or go to the server, whether you do it in a snippet of HTML or with an API call. ... In either case, latency is there.
This is definitely incorrect.
Consider the very common case of fetching some data from the server and then filtering it. In many, many cases, the filtering can be done client-side. If you do that with an interactive frontend, then it's nearly instant, and there's no additional fetch to the server. If you shell out to the server, then you pay the latency penalty, and incur a fetch.
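A sketch of that case (hypothetical /api/items endpoint; assumes a render() function that draws the result list):

    // fetch once up front, then filter locally - no further round trips
    const items = await (await fetch('/api/items')).json();
    document.querySelector('#search').addEventListener('input', (e) => {
      const term = e.target.value.toLowerCase();
      render(items.filter((it) => it.name.toLowerCase().includes(term)));
    });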
"In either case, latency is there." is just factually wrong.
No, it is not. There's no way to send the incremental results of typing in a search box, for instance, to the server with HTML alone - you need to use Javascript. And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press enter, and any autocomplete is going to have to make a full round trip with each incremental keystroke.
> Your links mortally wound your argument for JS frameworks, because poor latency is strongly linked with poor download speeds. The second link also has screenfuls of text slamming sites that make users download 10-20 MB of data (i.e. the normal JS front ends).
I never said anything about or implying "download 10-20 MB of data (i.e. the normal JS front ends)" in my question. Bad assumption. So, no, there's no "mortal wound" because you just strawmanned my premises.
> Additionally, in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client-side processing vs. SSR.
As someone building web applications - no, they really aren't. My webapps sip power and compute and are low-latency while still being very poorly optimized.
They absolutely are. "Htmx is “frontend tech”." is a nitpick (and an incorrect one at that). "What defines normal?" is a nitpick - if you ask an ensemble of frontend web developers what normal web frameworks are, they'll converge on the set that I specified without bringing up absolutely ridiculous answers like ColdFusion, and all of their answers have enough commonalities that you can answer the question I posed. "Your idea of normal is presumably whatever you’ve been using for the past few years." isn't "asking for clarity" - it's a dunk with no substance. "Ok, so your list of “normal” frontend doesn’t support your original point... React and Next have been moving in the opposite direction with SSR." is absolutely a nitpick - it's very, very clear from the line that that poster quoted that I was referring to client-side rendering.
Let me reiterate that: pavlov read the line of my comment that answered his hypothetical question about SSR vs client-side rendering, then used it to try to gotcha me by saying "actually, React isn't just client-side".
And claiming that they were just "disagreeing" with me and "characterising their disagreement as language nitpicking isn't disagreeing it's just wrong" is factually incorrect. Aside from ”you want to push as much logic to the client” (which they didn't disagree with), I didn't make any statements to agree or disagree with - I asked a question that they tried to dunk on without actually engaging with.
People do not talk like this in real life. Let's not normalize or defend this, or manipulate language to try to justify it, shall we?
> your backend has rich domain logic you can leverage to provide users with as much data as possible ... Pushing as much logic (i.e., state) to the client as you can results in a pale imitation of that domain logic on the front end
False. There's very little that you can do on the frontend that you can't do on the backend - you can implement almost all of your logic on the frontend and just use the backend for a very few things.
> leading to a greatly diminished user experience.
False. There's just no evidence for this whatsoever, and as counterevidence some of the best tools I've ever used have been extremely rich frontend-logic-heavy apps.
Hypermedia applications use javascript (e.g., htmx - the original subject), so I'm not sure why you're hung up on that.
> And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press enter, and any autocomplete is going to have to make a full round trip with each incremental keystroke.
You just send the request on keydown. It's going to take about ~50-75ms or so for your user's finger to traverse into the up position. Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
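For reference, this is roughly the stock htmx active-search pattern (hypothetical /search endpoint; the canonical example triggers on keyup with a debounce, which you could tighten or swap for keydown):

    <input type="search" name="q"
           hx-get="/search"
           hx-trigger="keyup changed delay:200ms"
           hx-target="#results">
    <div id="results"></div>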
> As someone building web applications - no, they really aren't.
We were originally talking about "normal" (JS) web applications (e.g. React, Angular, etc.); most of these apps have all the traits I mentioned earlier. We all have used these pigs that take forever on first load, cause high CPU utilization, and are often janky.
> My webapps sip power and compute and are low-latency while still being very poorly optimized.
And now you have subtly moved the goalposts to only consider the web apps you're building, in place of "normal" JS webapps you originally compared against htmx. I saw you do the same thing in another thread on this story. I have no further interest in engaging in that sort of discussion.
Because you falsely claimed otherwise:
>> This is a solved problem. It is simple to download content shortly before it is very likely to be needed using plain old HTML.
So, another false statement on your part.
> You just send the request on keydown. It's going to take about ~50-75ms or so for your user's finger to traverse into the up position. Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
No, it's not "plenty of time", because many users have latency in the hundreds of milliseconds (mine on my mobile connection is ~200ms), and some on satellite or in remote areas with poor infrastructure have latency of up to a second - and that's completely ignoring server response latency, bandwidth limitations on data transport, and rehydration time on the frontend.
> Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
Scientifically wrong: "the lower threshold of perception was 85 ms, but that the perceived quality of the button declined significantly for latencies above 100 ms"[1].
> We were originally talking about "normal" (JS) web applications (e.g. React, Angular, etc.).
Factually incorrect. We were talking about normal frontend technologies - including vanilla, which you intentionally left out - so even if you include those heavyweight frameworks:
> most of these apps have all the traits I mentioned earlier. We all have used these pigs that take forever on first load, cause high CPU utilization, and are often janky.
...this is a lie, because we're not talking about normal apps, we're talking about technologies. All you have to do is create a new React or Angular or Vue application, bundle it, and observe that the application size is under 300k and that it responds instantly to user input.
> And now you have subtly moved the goalposts to only consider the web apps you're building, in place of "normal" JS webapps you originally compared against htmx.
Yet another lie, and gaslighting to boot. I never moved the goalposts - my comments have been about the technologies, not what webapps people "normally" build - you were the one who moved the goalposts by changing the discourse from the trade-space decisionmaking that I was talking about to trying to malign modern web frameworks (and intentionally ignoring the fact that I included vanilla webtech) based on how some developers use them. My example was merely to act as a counter-example to prove how insane your statements were.
Given that you also made several factually incorrect statements in another thread[2], we can conclude that in addition to maliciously lying about things that I've said, you're also woefully ignorant about how web development works.
Between these two things, I think we can safely conclude that htmx doesn't really have any redeeming qualities, given that you were unable to describe coherent arguments for it, and resorted to lies and falsehoods instead.
[1] https://www.tactuallabs.com/papers/howMuchFasterIsFastEnough...