Most active commenters
  • throw10920(11)
  • skydhash(5)
  • sgt(4)
  • t-writescode(3)
  • mort96(3)
  • ffsm8(3)
  • axegon_(3)
  • pavlov(3)
  • cies(3)
  • _heimdall(3)

Less Htmx Is More

(unplannedobsolescence.com)
169 points by fanf2 | 119 comments
1. rpgbr ◴[] No.43619744[source]
> Like any new tool, especially a tool that got popular as quickly as htmx, there are differing schools of thought on how best to use it. My approach—which I believe necessary to achieve the results described above—requires you to internalize something that htmx certainly hints at, but doesn’t enforce: use plain HTML wherever possible.

That’s the path for ultimate long term functional web pages!

replies(3): >>43619973 #>>43621691 #>>43630466 #
2. chabska ◴[] No.43619770[source]
> In practice, this is virtually impossible to get right

Somehow every other JS frontend framework manages to hook into the History API just fine?

replies(5): >>43619808 #>>43619818 #>>43619832 #>>43619991 #>>43621102 #
3. t-writescode ◴[] No.43619802[source]
I have used htmx just a little bit, but I have found htmx + solid.js for some small reactive components to be ... very, very pleasant, and much of what this blog post says matches my experience.

I've written a lot more html than I have in the past and it just ... feels good? html has upgraded quite a bit in the decades since I learned it.

4. t-writescode ◴[] No.43619808[source]
The way I read it is that there are frustrating edge cases that (at least the author) has run into, regularly, which imply it's ... more difficult to custom-handle that than to just let the browser do it.
5. swiftcoder ◴[] No.43619818[source]
They do, but mostly they make this work by recreating the entire frontend ecosystem within that framework. Mix-and-match a history-aware JS library from outside the framework's ecosystem, and you may find it's less robust than you expect.
6. DeathArrow ◴[] No.43619823[source]
I thought HTMX was useful mostly for SPA-style apps. If you want a website with individual pages you can mostly use HTML and a bit of vanilla JS for the stuff that needs to be dynamically updated.
replies(2): >>43619854 #>>43619863 #
7. ForHackernews ◴[] No.43619832[source]
What? They all butcher it. On a weekly basis I make the mistake of clicking some link to a site and find I can't hit the back button to escape. Maybe this is technically the site developers' fault, not the library devs', but it's a lousy experience for users.
replies(2): >>43619865 #>>43619918 #
8. uzyn ◴[] No.43619854[source]
It's the other way around. HTMX is not suitable for client-side-scripting-heavy apps like SPAs. It's more for "traditional" AJAX-style web apps.
replies(1): >>43621534 #
9. t-writescode ◴[] No.43619863[source]
htmx gets rid of the "bit of vanilla js" and replaces it with "a couple very, very short tags" and then the backend returns partial html, making the experience seamless.

It's written in Javascript, so yes, some vanilla JS can do the same thing; but this library makes that experience easier for the masses, and likely well-tested to the point where more trust can be given to it to be correct.
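
A minimal sketch of what that looks like (the endpoint and ids are made up):

    <button hx-get="/fragments/latest-news" hx-target="#news" hx-swap="innerHTML">
      Refresh news
    </button>
    <div id="news"><!-- the server responds with a ready-to-insert HTML partial --></div>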

10. philipwhiuk ◴[] No.43619865{3}[source]
Sometimes it's definitely deliberate
replies(1): >>43620011 #
11. user432678 ◴[] No.43619918{3}[source]
My favourite part is when you middle-click a link to open it in a new tab to read later, only to find that it opens a bunch of main pages or nothing at all. That's top-level UX.
12. kookamamie ◴[] No.43619953[source]
> In my opinion, most websites should be using htmx

In my opinion, in five years no one will be using htmx; they'll be on to another shiny toy. Hence, how about not starting to use it at all?

replies(3): >>43619982 #>>43619988 #>>43620006 #
13. Ringz ◴[] No.43619973[source]
Agreed! TLDR:

Use plain HTML wherever possible for long term functional web pages!

replies(1): >>43620365 #
14. mort96 ◴[] No.43619982[source]
This applies to every single web framework/library, so I guess your point is that nobody should start making web things?
replies(1): >>43620068 #
15. ffsm8 ◴[] No.43619988[source]
I don't share the opinion you've quoted there, but neither do I share yours. Htmx isn't actually a shiny new toy, it's just a rebrand of intercoolerjs - and I found out about that one in 2014.

That makes the time from its inception to date longer than the time between the initial React release and the initial intercoolerjs release.

16. mort96 ◴[] No.43619991[source]
The number of times I've clicked a link, hit the back button, nothing happens, hit the back button again, and gone 2 steps back in history...
replies(1): >>43620004 #
17. harvie ◴[] No.43620004{3}[source]
Ever considered the website authors don't want you to go back? :-D
replies(3): >>43620013 #>>43620159 #>>43620292 #
18. alexpetros ◴[] No.43620006[source]
Hi, author here. The full quote is: "In my opinion, most websites should be using htmx for either:", and then I list two cases where I think htmx is appropriate.

In context, it's clear that I'm not saying "everyone should use htmx," but rather "if you are using htmx, here is how I recommend you do it."

As for the shiny object concern, I have a talk (which you can also find on this blog) called "Building the Hundred-Year Web Service" that dives into that question.

19. ffsm8 ◴[] No.43620011{4}[source]
Heh, I actually once added a bug like that too. Angular has "redirect" routes (a path matches and redirects to a new path).

I didn't realize that this makes the back button effectively unusable, because the redirected URL will still be in the history unless you manually redirect with a bare component. Hence whenever someone used back, the redirect pushed them back to the route they had just left. I used it because I came from a backend perspective, where redirects are fine: there is a delay between the request and the redirect, giving the user time to just double-press back. Obviously not so with a JS redirect.

I believe that continues to be an Angular anti-feature to date. (Just checked, they still have it in their docs, without the ability to suppress the redirected history entry as far as I can see.)
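
From memory, the footgun is a route config like this (paths made up):

    const routes = [
      // client-side redirect: the old URL stays in the history,
      // so Back lands on it and bounces you forward again
      { path: 'old-dashboard', redirectTo: '/dashboard', pathMatch: 'full' },
      { path: 'dashboard', component: DashboardComponent },
    ];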

replies(2): >>43620717 #>>43621109 #
20. kolektiv ◴[] No.43620012[source]
It is amazing how quickly a simple, traditional, "collection of pages" type website actually works if you don't do annoying things to slow it down. Most websites would be absolutely fine if a) HTTP was used reasonably well to set things like cache headers and so on (as mentioned in the article), and b) a load of user-irrelevant stuff like tracking and advertising code wasn't thrown in as well. A simple page with standard HTML, passably optimised assets where needed, and only the JS needed for actual functionality, should be almost instant on most modern connections.
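
For fingerprinted static assets, for example, one header does most of the work (illustrative value):

    Cache-Control: public, max-age=31536000, immutable

The browser then never re-fetches the file; content changes just ship under a new hashed filename.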
replies(3): >>43620362 #>>43620363 #>>43620930 #
21. leni536 ◴[] No.43620013{4}[source]
I'm sure this move improves some engagement metric.
22. cientifico ◴[] No.43620037[source]
I like Hotwire/Turbo more than HTMX because of its core philosophy: start by building a fully functional page without any JavaScript, and then layer in enhancements only as needed. That approach has stayed consistent for years and feels really straightforward to work with.
replies(1): >>43620186 #
23. Ringz ◴[] No.43620055[source]
> The idea here is that the website still has a sound URL structure, which is managed by the core browser functionality, while interactivity is carefully layered on top, with targeted updates.

It's been a long time since I had to work on websites. jQuery was the hot stuff back then. But we didn't use it. It was all HTML and a Java backend. This sentence implies that right now basic stuff isn't managed by the browser (but by React, Vue and so on?), which seems to be simply wrong.

replies(1): >>43620338 #
24. kookamamie ◴[] No.43620068{3}[source]
It kind of is - although, with a twist.

I think most things currently done with the frontend "frameworks" could be achieved with standard HTML5+JS, without any "build steps" or other bloat involved.

That said, there is a case for building on a commonly used, mature platform, such as React, to speed up the development cycle and to avoid reinventing the wheel.

replies(1): >>43620268 #
25. intrasight ◴[] No.43620085[source]
While I agree that full page refreshes probably shouldn't use HTMX, I'll also claim that it's pretty rare that you would do a full page refresh. Almost every website I create has context that is present on every single page. Typically the branding at the top and the site map at the bottom don't change, and hence there are no full refreshes.
replies(1): >>43622416 #
26. can3p ◴[] No.43620098[source]
There are two sides to the argument which I think should be treated separately: a) Is it a good idea overall? and b) is htmx implementation good enough?

a) I think so, yes. I've seen many more SPAs with completely broken page navigation. This approach does not fit all use cases, but if you remember that the whole idea of htmx is that you rely on the web server to give you page updates, as opposed to having a thick JS app rendering it all, it makes sense. And yes, JS libraries should be wrapped to function properly in many cases, but you would do the same with non-React components in any React app, for example.

b) I don't think so. htmx boost functionality is an afterthought and it will always be like this. Compare it with turbo [1], where this is a core feature and the approach is to use turbo together with stimulus.js, which gives you automagical component lifecycle management. Turbo still has its pains (my favorite GH issue [2]), but otherwise it works fine.

[1]: https://turbo.hotwired.dev/ [2]: https://github.com/hotwired/turbo/issues/37

replies(3): >>43621533 #>>43622088 #>>43622537 #
27. axegon_ ◴[] No.43620121[source]
> Updates that users would not expect to see on a refresh (or a new page load)

I always hated this idea. As a user, a refresh indicates that something is happening, and it's abundantly clear when something is wrong. People don't always handle errors, and in all fairness they shouldn't: a developer has no way of knowing what custom stuff I have in my browser, whether I'm using any blockers or pi-holes or whatever, and they should not know. Simple navigation, refreshes and server-side rendering worked great; the web was fast and could run on anything with a graphical output. These days a single page eats up 150+ MB while it loads. All that so the page doesn't "refresh".

replies(2): >>43620225 #>>43620587 #
28. seanwilson ◴[] No.43620127[source]
Anyone replace some HTMX usage with the View Transition API that's now in Chrome and Safari? https://developer.chrome.com/docs/web-platform/view-transiti...

This looks appealing where it makes sense (page transitions, table sorting/pagination) and if Firefox gets around to adding it.

replies(2): >>43620372 #>>43620523 #
29. mort96 ◴[] No.43620159{4}[source]
No I mean clicking a link that's part of the site's internal navigation. So like I'll click a link to go to a different part of the single-page application, then click the back button to get back to the previous place in the SPA, and the URL will change back but the page doesn't change.
30. JonoBB ◴[] No.43620186[source]
This is exactly how I use htmx. Is there an expectation that you use htmx differently?
31. echoangle ◴[] No.43620207[source]
(2024)
32. sgt ◴[] No.43620217[source]
How do you get flicker free navigation using htmx without using hx-swap or boost? I see that this article refers to https://unplannedobsolescence.com which has a header that remains the same, but it also doesn't change. Normally a header will do some type of change, e.g. showing which menu item is currently selected. That's when the flickering starts, in my experience.
replies(2): >>43620261 #>>43620349 #
33. sgt ◴[] No.43620225[source]
It's not that smooth looking if a refresh takes 500 milliseconds. Then it flickers.
replies(3): >>43620323 #>>43620644 #>>43620837 #
34. davedx ◴[] No.43620253[source]
Does anyone have examples of non-trivial websites or apps built with Htmx?
replies(1): >>43621479 #
35. anentropic ◴[] No.43620261[source]
https://htmx.org/essays/view-transitions/
36. nevertoolate ◴[] No.43620268{4}[source]
Nobody gets fired for buying IBM, I guess :(
37. foobahify ◴[] No.43620288[source]
Yes, if you click a regular link in a modern browser on a no-JS or low-JS page of reasonable size, it is darn fast.

And the predictability is a boon. Open in new tab, back button etc.

38. sanitycheck ◴[] No.43620292{4}[source]
More likely they didn't care and nobody wrote an automated test for it because that would be hard, no human testers are employed (because who even does that now?), and only two users got all the way through the labyrinthine process to report it as an issue so managers triaged the bug as wontfix.

I think this is industry standard practice in 2025, right?

39. foobahify ◴[] No.43620323{3}[source]
500ms means a slow connection, high latency or significant bloat.

Optimising the page and using a CDN can help a lot.

It is a shame HTML doesn't have rel="swap" in links to "swap" in a new page.

40. CharlieDigital ◴[] No.43620338[source]

    > This sentence implies that right now basic stuff isn’t managed by the browser (but by React, Vue and so on?) which seems to be simply wrong.
That's exactly what's happened. A React SPA .html is just an empty shell. A Next.js app renders HTML using React on the first load and then becomes an SPA on the client.
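
The entire shipped .html of a typical SPA is roughly this (illustrative):

    <!doctype html>
    <html>
      <body>
        <div id="root"></div> <!-- everything is rendered in here by JS -->
        <script src="/assets/index-3f2a1b.js"></script>
      </body>
    </html>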
41. ahmetsait ◴[] No.43620349[source]
Browsers already have a built-in mechanism to prevent flicker while switching pages, unless your page has issues such as a flash of unstyled content or something similar.
replies(2): >>43620860 #>>43621083 #
42. Manfred ◴[] No.43620362[source]
I believe some of these issues are caused by framework abstractions.

New developers learn the framework and never learn how HTTP and HTML work.

Experienced developers have to learn how to punch through the framework to get to these features we get automatically with statically hosted assets.

replies(4): >>43620425 #>>43620499 #>>43620782 #>>43622435 #
43. weinzierl ◴[] No.43620363[source]
> "Like any new tool, especially a tool that got popular as quickly as htmx, there are differing schools of thought on how best to use it."

If so, it's not for lack of a written-down philosophical framework. See Hypermedia Systems by Carson et al.

https://hypermedia.systems/

44. freeamz ◴[] No.43620365{3}[source]
I have made many, many web pages over the last 15 years or so that were only meant to live for a few days or months. What I do is archive them on the Wayback Machine, and if a page works on the Wayback Machine, then that is my stamp of approval for the page. It also tells you the page is self-contained! This works pretty well even with WebGL and audio playlists. There is NO point in making web dev so complicated just because FAANG companies are using the framework. It is designed for their corporate structure. Anyone else should NOT use FAANG-style frameworks!

For the long-term web, I stay away from SPAs with a long stick.

45. titaphraz ◴[] No.43620372[source]
I really hope this won't become the new craze. The examples trip all my irritation triggers. I couldn't use a website like that.
46. throw10920 ◴[] No.43620387[source]
While I get the emotional appeal, I still don't understand the use-case for htmx. If you're making a completely static page, you just use HTML. If you're making a dynamic page, then you want to push as much logic to the client as possible because far more users are latency-limited than compute-limited (compare [1] vs [2]), so you use normal frontend technologies. Mixing htmx and traditional frontend tech seems like it'd result in extra unnecessary complexity. What's the target audience?

Edit: "Normal/traditional frontend" here means both vanilla (HTML+JS+CSS) and the most popular frameworks (React, Angular, Vue, Next).

[1] https://danluu.com/slow-device/

[2] https://danluu.com/web-bloat/

replies(9): >>43620401 #>>43620449 #>>43620467 #>>43620547 #>>43620624 #>>43620674 #>>43621160 #>>43621499 #>>43621641 #
47. pavlov ◴[] No.43620401[source]
Htmx is “frontend tech”.
replies(2): >>43620416 #>>43620424 #
48. codemonkey-zeta ◴[] No.43620410[source]
> “You didn’t solve anything! Doing validation is complex and you just magic wanded it away by designing a perfect interface for it.” Yes. Exactly. That is what interfaces are supposed to do. Better semantics make it possible for the programmer to describe what the element does, and for someone else to take care of the details for them.

Gosh I couldn't agree more, what a wonderfully succinct way to communicate what I spend a ridiculous amount of time trying to explain to my colleagues when designing programs!

[EDIT]: I just realized I had read this on one of the linked articles https://unplannedobsolescence.com/blog/behavior-belongs-in-h...

49. ◴[] No.43620416{3}[source]
50. throw10920 ◴[] No.43620424{3}[source]
I said "normal frontend tech" in my comment. It's also easy to tell from context what I mean. I'd appreciate not trying to be pedantic and instead responding to the substance of my comment :)
replies(1): >>43620502 #
51. kolektiv ◴[] No.43620425{3}[source]
Very likely. I remember reading a while back about developers who thought of rendering things on the server side as novel, which was absolutely wild to someone who was writing web pages before JS was a thing! It's such a shame because HTTP + HTML is actually a very, very simple system to learn with literally decades of hard-won knowledge baked in (particularly HTTP and surrounding standards). People end up inventing incredibly complex solutions to problems that could have been alleviated by reading a few RFCs.
52. cies ◴[] No.43620449[source]
I agree with the author of the article in that it's often best to do as much as possible in plain-old HTML.

When an interactive "widget" is needed, I try to simply embed that in one HTML page, and avoid making the whole app a single-page application (SPA).

SPAs are problematic because you need to manage state twice: in the BE and FE. You may also want a spec'ed API (client library generation would be AWESOME: GraphQL and OpenAPIv3 have that, and it helps a lot).

> so you use normal frontend technologies

This is the problem. "Normal" means React/Vue/Angular these days and they are all shit IMHO. This is partly because JS is a rampification, and TS fixes only what could be fixed by _adding_ to the language (since it's a superset). So TS is not a fix.

I had great success with Elm on the frontend. It's not normal by any norm. Probably less popular than HTMX. But it helps to build really solid web apps and all devs that use it become FP-superstars in a month.

Tools like ReasonML/ReScript and PureScript may also have these benefits.

replies(1): >>43620507 #
53. echoangle ◴[] No.43620467[source]
I use it to get some interactivity, without reloading, that would have to be Ajax anyway. If you have an inline form, you can't do a lot client-side if the server doesn't work, so using htmx is fine.
54. thrance ◴[] No.43620499{3}[source]
I don't believe you can break past "absolute beginner" without learning some HTTP and HTML. Most JS frameworks aren't very good abstractions (which is fine).
replies(1): >>43621319 #
55. pavlov ◴[] No.43620502{4}[source]
What defines normal? It’s a strange idea when the typical stack for web front-end keeps changing. There isn’t even a single answer to the client/server split.

Is jQuery normal? What about the Google Closure compiler? ColdFusion? Silverlight? Ruby and CoffeeScript? Angular? SPA React with classes? Elm? SSR React with a server framework? Client-only vanilla DOM manipulation?

Your idea of normal is presumably whatever you’ve been using for the past few years. For someone who starts using Htmx now, it becomes normal. And if there’s enough of those people, their idea of normal becomes commonplace.

replies(1): >>43620519 #
56. throw10920 ◴[] No.43620507{3}[source]
> SPAs are problematic because you need to manage state twice: in the BE and FE. You also may want a spec'ed API (with client library generation would be AWESOME: GraphQL and OpenAPIv3 have that and it helps a lot).

OK, this helps explain some of the reasoning.

Unfortunately, that means that the tradeoff is that you're optimizing for developer experience instead of user experience - htmx is much easier for the developer, but worse for the user because of higher latency for all actions. I don't see how you can get around this if your paradigm is that you do all of your computation on the server - and if you mix client- and server-side computation, then you're adding back in complexity that you explicitly wanted to get away from by using htmx.

> "Normal" means React/Vue/Angular these days

I didn't mean (just) that. I included vanilla webtech in my definition of "normal" - I guess I should have clarified in my initial comment (I just meant to exclude really exotic, if useful, things like Elm). Does that change how you would respond to it?

replies(4): >>43620807 #>>43621105 #>>43621124 #>>43621954 #
57. throw10920 ◴[] No.43620519{5}[source]
> What defines normal?

For the purposes of my comment and question - I do: vanilla (HTML+CSS+JS) and the most popular frameworks (React, Vue, Angular, Next).

> Your idea of normal is presumably whatever you’ve been using for the past few years.

None of those matter for my comment. Just substitute the value I provided above in to my original comment and you should be able to respond to the substance of the point I was making.

replies(1): >>43620567 #
58. _heimdall ◴[] No.43620523[source]
I really want to like the transitions API but I've been frustrated by it multiple times.

It feels like they wanted to build page animations similar to what Windows Phone had 15 years ago. That would be great; Windows Phone transitions were surprisingly nice.

It just doesn't work on the web, though; it's an afterthought. Animations on Windows Phone only worked because they were designed into the UI library and rendering engine from the beginning.

replies(1): >>43621811 #
59. loloquwowndueo ◴[] No.43620547[source]
You should read the htmx “book” (https://hypermedia.systems/ ), where the use case is clearly explained. It advocates using htmx to enhance a page with more interactivity by extending HTML semantics and behaviours a bit (thus requiring minimal effort and a small learning curve), and moving to more heavyweight client-side front-end stuff (React and friends) if more interactivity or complex behaviours are needed.

You can whip up a simple HTML form and spruce it up with htmx so it feels “modern” to current users, with little effort, few changes, and, importantly, without having to learn the insanity that is modern front-end stacks. It's not only curmudgeons from the 90s like me who benefit from this!
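
Something like this (a sketch; the endpoint is made up). Without JS it still works as a plain form POST; with htmx it swaps the response in place:

    <form action="/subscribe" method="post"
          hx-post="/subscribe" hx-target="#result">
      <input type="email" name="email" required>
      <button>Subscribe</button>
    </form>
    <div id="result"><!-- server returns a small HTML confirmation partial --></div>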

60. pavlov ◴[] No.43620567{6}[source]
Ok, so your list of “normal” frontend doesn’t support your original point:

”you want to push as much logic to the client”

React and Next have been moving in the opposite direction with SSR.

As I said, there isn’t a single right answer. Client-only vanilla JS offers one solution, hybrid SSR like Next/React offers another, and Htmx yet another with different tradeoffs on the same spectrum.

replies(1): >>43620593 #
61. friendzis ◴[] No.43620587[source]
> a developer has no way of knowing what custom stuff I have on my browser, whether I'm using any blockers or pi-holes or whatever

That's a feature.

> and they should not know.

Yes, a fetch failing and the DOM being updated is standard web behavior; interns handle that.

replies(1): >>43620650 #
62. throw10920 ◴[] No.43620593{7}[source]
> Ok, so your list of “normal” frontend doesn’t support your original point:

> ”you want to push as much logic to the client”

> React and Next have been moving in the opposite direction with SSR.

It doesn't matter - they still support client-side rendering, the other frameworks I listed are still client-side-rendering focused, and I explicitly enumerated vanilla webtech where that's the norm.

Please stop nitpicking my language. This is extremely tiring, boring, and not in the spirit of intellectual curiosity.

replies(1): >>43620963 #
63. fulafel ◴[] No.43620624[source]
> If you're making a dynamic page, then you want to push as much logic to the client as possible because far more users are latency-limited than compute-limited

This implies you value optimization over other concerns, will do ssr & rehydration, etc.

replies(1): >>43621176 #
64. axegon_ ◴[] No.43620644{3}[source]
Oh yeah, a 15-second spinning circle and 250 MB of memory is sooooo much better than a 500ms refresh </sarcasm>
replies(2): >>43620849 #>>43623738 #
65. axegon_ ◴[] No.43620650{3}[source]
> interns handle that

Reality has entered the chat.

66. tannhaeuser ◴[] No.43620674[source]
Yeah, I don't get the irrationality either, especially the dogmatic "hypertext" angle. I mean, you can see me pontificating about SGML as the original complement to the HTML vocabulary, bringing text macros and other authoring affordances, but that is strictly for documents and their authors. If you want to target web apps and require JS anyway, I don't see the necessity for markup and template languages; you already have a much more powerful programming language in the mix. Any ad-hoc and inessential combination of markup templating and JS is going to be redesigned by the next generation of web devs anyway, because of the domain's cyclic nature, i.e. the desire to carve out know-how niches, low retention rates among webdev staff, many from-scratch relaunches, and so on.
67. chuckadams ◴[] No.43620717{5}[source]
It sounds like something you'd use for a POST route that serves up HTML instead of JSON (weird for an app to do these days, but it still happens). Redirect-after-POST is as old as the web (or at least the POST verb), and it actually _enhances_ the utility of the back button by removing the annoying prompt about re-posting.
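A sketch of the pattern in Express (the route and helper are hypothetical):

    app.post('/comments', (req, res) => {
      saveComment(req.body);           // hypothetical persistence helper
      res.redirect(303, '/comments');  // 303 See Other: Back/refresh re-GETs instead of re-POSTing
    });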
replies(1): >>43621919 #
68. codegeek ◴[] No.43620782{3}[source]
Pretty much. I can't tell you how many times I interview these "framework developers" and they can't tell me how a regular HTML form submission works. Boggles my mind.
69. intrasight ◴[] No.43620807{4}[source]
>higher latency for all actions

If your implementation is poor

> all of your computation on the server

You doing weather forecasting? Crypto mining? What "computation" is happening on the client? The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.

replies(1): >>43620902 #
70. mattgreenrocks ◴[] No.43620837{3}[source]
If the layout hasn’t changed, then there is no flicker that I can tell on Chrome/FF at 144hz.
71. sgt ◴[] No.43620849{4}[source]
I don't disagree with you :)
72. sgt ◴[] No.43620860{3}[source]
Yeah you are right. I can't reproduce it at the moment. Maybe I am thinking of a couple of years ago when the browser did in fact flash for the same.
73. throw10920 ◴[] No.43620902{5}[source]
> If your implementation is poor

This is factually incorrect. Latency is limited by the speed of light and the user's internet connection. If you read one of the links that I'd posted, you'd also know that a lot of users have very bad internet connection.

> You doing weather forecasting? Crypto mining? What "computation" is happening on the client?

This is absolutely ridiculous. It's very easy to see that you might want a simple SPA that allows you to browse a somewhat-interactive site (like your bank's site) without having to make a lot of round-trips to the server, and there are also thousands of examples of complex web applications that exist in the real world and serve as trivial examples of computation that might happen on the client.

> The only real computation in most web sites is the algorithmic ad presentation - and that's not done on your servers.

I never mentioned anything about "ads" or "most web sites" - I was asking an engineering question. Ads are entirely irrelevant here. I doubt you even have data to back this claim up.

Please don't leave low-effort and low-quality responses like this - it wastes peoples' time and degrades the quality of HN.

replies(2): >>43621490 #>>43622357 #
74. vagrantJin ◴[] No.43620930[source]
Simplicity is a non-starter these days.
75. swores ◴[] No.43620963{8}[source]
They aren't nitpicking your language: they asked for clarity on something where they weren't sure they understood what you meant, then when you clarified they responded disagreeing with you, not about your language.

Feel free to disagree with them back if you wish, but characterising their disagreement as language nitpicking isn't disagreeing; it's just wrong.

replies(1): >>43627776 #
76. zigzag312 ◴[] No.43621083{3}[source]
I think scroll position is the one that is not preserved.
77. papichulo2023 ◴[] No.43621102[source]
Reddit history in the mobile website is terrible
replies(1): >>43621199 #
78. QuadmasterXLII ◴[] No.43621105{4}[source]
React and htmx both trade off DX and UX, and React shits on the user way more aggressively.
replies(1): >>43621175 #
79. swores ◴[] No.43621109{5}[source]
> "[...] the redirected url will still be in the history unless you manually redirect with a bare component. Hence whenever someone used back, the redirect pushed them back to the route they just left."

I know this isn't technically a browser bug, but ever since this first became a problem that's somewhat common to find on various websites, a decade or so ago, I've wondered if browsers couldn't provide the solution nonetheless...

At the very least, detect the user behaviour that screams this problem is happening (multiple clicks of the back button, which each time lead to the user being redirected to the URL they were on when they clicked it - but maybe it would be OK to detect that on the first attempt too?), and then either automatically solve it by taking them back by two URLs rather than just to the previous one (while looking out for possibility of stacked redirects that might need to skip more than one redirecting URL), or provide an alert that this potential problem has been detected and giving the user a single click solution to use or ignore.

I feel like the automatic solution should work well, but I've not put much thought into this beyond what I've just written, so maybe somebody will point out why doing it automatically would be a problem - either because of difficulty in the browser being accurate in recognising when this problem needs a bypass solution, or because even when it has correctly spotted it there are scenarios I haven't thought of where the user wouldn't want the back button's default behaviour to change?

I know very little about web browser (or even general software) development, any chance somebody reading this could chime in on whether this feature would be hard to code, whether it might have any / too much of an impact on performance, and generally whether or not you think it's a good idea?

(And happy of course to hear answers to the "good idea?" question from anyone who's had this problem in a web browser, not just developers.)

80. cies ◴[] No.43621124{4}[source]
> htmx is much easier for the developer, but worse for the user because of higher latency for all actions

Latency is something to consider, yes. Besides that, we should not forget it is easy to make an HTMX mess: HTMX is not a good fit for all use cases, and some approaches are dead ends (the article even talks about this, but you can find more testimonies online). With HTMX you also create a lot of endpoints, usually without a spec; this can also become an issue (it might not work for some teams).

> if you mix client- and server-side computation, then you're adding back in complexity that you explicitly wanted to get away from by using htmx.

Exactly! A good reason not to use HTMX if you need a lot of browser-side computation.

> I didn't mean (just) that. I included vanilla webtech in my definition of "normal"

If you mean "just scripting with JS (w/o any framework)", then I still do not think this is an acceptable alternative to compare HTMX to. IMHO you have to compare with something that provides a solid basis to develop a larger application on. Otherwise you may just be saying HTMX is great because the status quo (vanilla JS/React/Vue/Angular) is such a mess.

81. stevoski ◴[] No.43621160[source]
> "If you're making a dynamic page, then you want to push as much logic to the client as possible because far more users are latency-limited than compute-limited"

That's an assertion I don't agree with.

Data still needs to come from or go to the server, whether you do it in a snippet of HTML or with an API call.

In either case, latency is there.

replies(1): >>43627657 #
82. cies ◴[] No.43621175{5}[source]
What do you mean? You can build any level of UX (user experience) in React that you can achieve without it, right?

I'd say all these FE frameworks help us build and structure browser applications that potentially increase UX over "just HTML pages". Hence they all try to improve the DX of building such apps compared to vanilla JS or jQuery.

83. mamcx ◴[] No.43621176{3}[source]
I work with users in far-off locations, with bad internet signal and terrible low-end Androids.

Htmx has been the most performant of everything I've tried.

HTML is fast. (Also, I use SVG everywhere I can.)

replies(1): >>43621435 #
84. cube00 ◴[] No.43621199{3}[source]
I'm surprised you can tolerate the mobile website at all with the non stop full screen "better in the app" modals.
85. skydhash ◴[] No.43621319{4}[source]
They know the keywords, but not how everything fits together, even in the basic sense.
86. skydhash ◴[] No.43621435{4}[source]
Also, you can fit a whole book in half a MB of HTML. So loading 2+ MB of JS, then a good amount of JSON, especially over a high-latency connection, is not better than just loading the HTML with all the data baked in.
replies(1): >>43621613 #
87. recursivedoubts ◴[] No.43621479[source]
two I know of:

https://zorro.management/

https://www.commspace.co.za/

88. ◴[] No.43621489[source]
89. skydhash ◴[] No.43621490{6}[source]
If you want to help people with latency issues, the best way is to send everything with as few requests as possible, especially if you don't have a lot of images. And updating with HTML (swapping part of the DOM) is efficient compared to updating with JSON.
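Roughly the difference (a sketch; the endpoints are made up and `list` is some container element):

    // JSON: fetch data, then rebuild the markup client-side
    const items = await (await fetch('/api/items')).json();
    list.innerHTML = items.map(i => `<li>${i.name}</li>`).join('');

    // HTML: the server already rendered the fragment
    list.innerHTML = await (await fetch('/fragments/items')).text();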
90. recursivedoubts ◴[] No.43621499[source]
It has worked well for some people:

https://htmx.org/essays/a-real-world-react-to-htmx-port/

https://htmx.org/essays/another-real-world-react-to-htmx-por...

91. recursivedoubts ◴[] No.43621533[source]
hx-boost is an afterthought and we haven't pushed the idea further because we expect view transitions via normal navigation to continue to fill in that area

htmx focuses on generalizing hypermedia controls:

https://dl.acm.org/doi/pdf/10.1145/3648188.3675127

we also have a minimalist version of the idea, fixi:

https://github.com/bigskysoftware/fixi

replies(1): >>43644028 #
92. skydhash ◴[] No.43621534{3}[source]
And if you’re not building a very interactive desktop app, you have no need for an SPA.
replies(1): >>43622075 #
93. mamcx ◴[] No.43621613{5}[source]
Correct. Also, because I output all the formatting and do the processing/filtering/joining on the server, a lot of data is removed before it gets to the client.

This is the most beneficial thing related to perf.

I don't yet render fragments (I only return full pages), so there's a lot of untapped potential for a speedup...

94. _heimdall ◴[] No.43621641[source]
I reach for HTMX when (a) its a project where I have the power to make that decision and (b) I need to render state that lives on the server.

My main issue with SPAs, and client rendering in general, has always been the attempt to client render state that is persisted elsewhere.

There are certain pieces of state that really do live on the client, and for that client rendering is great. A vast majority of cases involve state that is persisted somewhere on the server though, and in those cases its needlessly complex to ship both the state and an entire rendering engine to the browser.

95. mycall ◴[] No.43621691[source]
In the pursuit of ultimate long term functional web pages, how does that affect maintainability, e.g. changing requirements? In my mind, the fewer characters used, the less effort it takes to change, but this might be obsolete thinking wrt AI-assisted (or fully AI-written) sites.
replies(1): >>43623448 #
96. panstromek ◴[] No.43621807[source]
Yea, in my opinion, hx-swap or turbolinks is a bit of an antipattern. Seems to me you get the worst of both worlds:

- something that kinda looks like full reload

- but with all problems of client side routing

- and without preserving the DOM state like SPA would

- and you don't get the immediate response that is the main reason to build the SPA in the first place

In fact, it's often even slower than native navigation, because native navigation can start processing the stream during download, and there's overhead from pre- and post-processing of the response in the boosted-link version. Try profiling GitHub links to see what I'm talking about: opening a link in a new tab can be 2x faster than clicking in the same tab.

97. seanwilson ◴[] No.43621811{3}[source]
Can you go into more detail about what's frustrating? You can customize the animations, right?
replies(1): >>43624634 #
98. ffsm8 ◴[] No.43621919{6}[source]
There are countless use cases for redirects.

A few examples off the top of my head:

when URLs have changed,

when a resource/entity has been deleted,

when you wish to provide a unified entrypoint that sends users to another URL, so that you can easily change the redirection target in the future (i.e. redirecting / to /entities, so you're not blocking the / path if you want to add a homepage/landing page later)

I don't think it'd help with POSTs though. They do indeed usually send back a redirect, but the POST is likely still in the history unless the website sends it as AJAX/without a history entry (basically via JS fetch() instead of form action=).

99. infamia ◴[] No.43621954{4}[source]
> Unfortunately, that means that the tradeoff is that you're optimizing for user experience instead of developer experience

Not really. Your backend has rich domain logic you can leverage to provide users with as much data as possible, while providing comparable levels of interactivity. Pushing as much logic (i.e., state) as possible to the front end results in a pale imitation of that domain logic, leading to a greatly diminished user experience.

replies(1): >>43627787 #
100. DeathArrow ◴[] No.43622075{4}[source]
How would something like Facebook, Instagram or Netflix work and look if they weren't SPAs?
replies(1): >>43623407 #
101. evantbyrne ◴[] No.43622088[source]
Would like to second the turbo rec. I've had good results with it for non-trivial use cases. Would like to hear from people if they have different experiences. Also, praying that everything gets cached on first load and hand-waving that view transitions will eventually work is not a position I want to hear from an engineer in a commercial context. Really happy to see the author bring more attention to how good vanilla web technologies have gotten, though.
102. ksec ◴[] No.43622095[source]
>Triptych—the HTML proposals that Carson and I are working on—would render htmx obsolete for the type of website I describe here.

Which we discussed here [1]; it is still nowhere near even getting looked at. Browser makers are not interested in anything HTMX. The vast majority of browser developers still want JS-driven web apps, served with AVIF images.

[1] https://news.ycombinator.com/item?id=42615646

103. infamia ◴[] No.43622357{6}[source]
> This is factually incorrect. Latency is limited by the speed of light and the user's internet connection.

This is a solved problem. Using plain old HTML, it is simple to download content shortly before it is very likely to be needed. Additionally, all major hypermedia frameworks have mechanisms to download a link on mousedown or when you hover over a link for a specified time.
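
htmx, for example, ships a preload extension for exactly this (sketch):

    <div hx-ext="preload">
      <!-- the fetch starts on mousedown (the default), well before the click completes -->
      <a href="/next-page" preload="mousedown">Next page</a>
    </div>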

> If you read one of the links that I'd posted, you'd also know that a lot of users have very bad internet connection.

Your links mortally wound your argument for JS frameworks, because poor latency is strongly linked with poor download speeds. The second link also has screenfuls of text slamming sites that make users download 10-20 MB of data (i.e. the normal JS front ends). Additionally, in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client-side processing vs. SSR.

replies(1): >>43627717 #
104. JodieBenitez ◴[] No.43622416[source]
hence why unpoly targets the <main> element by default: https://unpoly.com/up.fragment.config
105. xiphias2 ◴[] No.43622435{3}[source]
As you can see, even the basic documentation is compromised at this point:

https://unplannedobsolescence.com/blog/behavior-belongs-in-h...

    <button onclick="alert('I was clicked!')">Click me</button>

"You can find HTML attribute equivalents for many of the event handler properties; however, you shouldn't use these — they are considered bad practice. It might seem easy to use an event handler attribute if you are doing something really quick, but they quickly become unmanageable and inefficient."

"You should never use the HTML event handler attributes — those are outdated, and using them is bad practice."

And then it proceeds to suggest this:

    <button>Click me</button>

    <script>
      const btn = document.querySelector("button")

      btn.addEventListener("click", () => { alert('I was clicked!') })
    </script>

After this, the next step would be putting the JavaScript in a separate file, and then, on a slow mobile connection in a random airport with thousands of people on the same wifi, simple working code gets timeouts.

20 years ago all I needed to know was this, and it worked great:

    <button onclick="alert('I was clicked!')">Click me</button>

106. yawaramin ◴[] No.43622537[source]
htmx boost functionality is an afterthought in the main use case it is marketed for (turning a traditional MPA into something that feels like a SPA), but it's actually super useful for the normal htmx use case of fetching partial updates and swapping them into a page.

If you do something like <a href=/foo hx-get=/foo hx-target="#foo">XYZ</a> the intention is that it should work with or without JavaScript or htmx available. But the problem is that if you do Ctrl-click or Cmd-click, htmx swallows the Ctrl/Cmd key and opens the link in the same tab instead of in a new tab!

But if you do <a href=/foo hx-boost=true hx-target="#foo">XYZ</a> everything works as expected–left-click does the swap in the current tab as expected, Ctrl/Cmd-click opens in a new tab, etc. etc.

Also, another point: you are comparing htmx's boost, one feature out of many, to the entirety of Turbo? That seems like apples and oranges.

107. skydhash ◴[] No.43623407{5}[source]
There's nothing inherent to those sites that warrants SPAs. Things that do warrant an SPA are full-blown apps like Gmail, Figma, Google Maps... or desktop-like dashboards like a Synology NAS's.
108. rpgbr ◴[] No.43623448{3}[source]
I agree! As a journalist who codes his own site/blog, I've never taken the time to learn anything more complex than good old HTML and CSS to structure a page, and in the short incursions I made into complex systems (node, SCSS, frameworks), the returns for the kind of sites I develop were so small I couldn't be bothered to climb the learning curve.

HTML + CSS + sprinkles of vanilla JS is the perfect recipe for readable, fast, and highly resilient web pages.

109. earthnail ◴[] No.43623738{4}[source]
A 2s spinning circle may indeed be better than a 500ms flashing reload. Try it on your mom (“waaah why did it flash??”).

I also dislike SPAs but there is business value in this slow spinner that you shouldn’t discount.

Luckily, turbo or htmx solve this just as well. And maybe even more importantly, I can’t think of a modern browser that still flickers.

110. _heimdall ◴[] No.43624634{4}[source]
Yeah you can customize them a bit.

What I've always wanted to be able to do though is restyle the actual DOM element as it animates from one part of the screen to another. Unless this changed pretty recently, my understanding is that the browser is basically grabbing a screenshot of the DOM element before it navigates and animates the snapshot itself across the screen.

I've also run into random issues related to having to tag the element targets. I don't remember all the details now so I can't give a great example, but especially when using libraries like HTMX I was having issues trying to make sure the right elements were tagged to animate correctly.

Conditional animations can be done, though it's a little odd to wrangle, and you can end up with code mixed between HTML, JS, and CSS to make it work.

111. throw10920 ◴[] No.43627657{3}[source]
> That's an assertion I don't agree with.

The part about users being more latency-limited than compute-limited, or wanting to push as much to the browser as possible?

The former is somewhat hard to quantify, but most engineers building interactive applications (or distributed systems, of which interactive client-server webapps are a special case) have far more trouble with latency than compute.

The latter is definitely true.

> Data still needs to come from or go to the server, whether you do it in a snippet of HTML or with an API call. ... In either case, latency is there.

This is definitely incorrect.

Consider the very common case of fetching some data from the server and then filtering it. In many, many cases, the filtering can be done client-side. If you do that with an interactive frontend, then it's nearly instant, and there's no additional fetch to the server. If you shell out to the server, then you pay the latency penalty, and incur a fetch.

"In either case, latency is there." is just factually wrong.

112. throw10920 ◴[] No.43627717{7}[source]
> This is a solved problem. It is simple to download content shortly before it is very likely to be needed using plain old HTML.

No, it is not. There's no way to send the incremental results of typing in a search box, for instance, to the server with HTML alone - you need to use Javascript. And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press enter, and any autocomplete is going to have to make a full round trip with each incremental keystroke.

> Your links mortally wounds your argument for js frameworks, because poor latency is linked strongly with poor download speeds. The second link also has screen fulls of text slamming sites who make users download 10-20MBs of data (i.e. the normal js front ends).

I never said anything about or implying "download 10-20MBs of data (i.e. the normal js front ends)" in my question. Bad assumption. So, no, there's no "mortal wound" because you just strawmanned my premises.

> Additionally, devices in the parts of the world the article mentions, devices are going to be slower and access to power less consistent, all of which are huge marks against client side processing vs. SSR.

As someone building web applications - no, they really aren't. My webapps sip power and compute and are low-latency while still being very poorly optimized.

replies(1): >>43636284 #
113. throw10920 ◴[] No.43627776{9}[source]
> They aren't nitpicking your language

They absolutely are. "Htmx is “frontend tech”." is a nitpick (and an incorrect one at that). "What defines normal?" is a nitpick - if you ask an ensemble of frontend web developers what normal web frameworks are, they'll converge on the set that I specified without bringing up absolutely ridiculous answers like ColdFusion, and all of their answers have enough commonalities that you can answer the question I posed. "Your idea of normal is presumably whatever you’ve been using for the past few years." isn't "asking for clarity" - it's a dunk with no substance. "Ok, so your list of “normal” frontend doesn’t support your original point... React and Next have been moving in the opposite direction with SSR." is absolutely a nitpick - it's very, very clear from the line that that poster quoted that I was referring to client-side rendering.

Let me reiterate that: pavlov read the line of my comment that answered his hypothetical question about SSR vs client-side rendering, then used it to try to gotcha me by saying "actually, React isn't just client-side".

And claiming that they were just "disagreeing" with me and "characterising their disagreement as language nitpicking isn't disagreeing it's just wrong" is factually incorrect. Aside from ”you want to push as much logic to the client” (which they didn't disagree with), I didn't make any statements to agree or disagree with - I asked a question that they tried to dunk on without actually engaging with.

People do not talk like this in real life. Let's not normalize or defend this, or manipulate language to try to justify it, shall we?

114. throw10920 ◴[] No.43627787{5}[source]
More incorrect statements.

> your backend has rich domain logic you can leverage to provide users with as much data as possible ... Pushing as much logic (i.e., state) while you're developing results in a pale imitation of that domain logic on the front end

False. There's very little that you can do on the backend that you can't do on the frontend - you can implement almost all of your logic on the frontend and just use the backend for a very few things.

> leading to a greatly diminished user experience.

False. There's just no evidence for this whatsoever, and as counterevidence some of the best tools I've ever used have been extremely rich frontend-logic-heavy apps.

115. taneq ◴[] No.43630466[source]
Having been out of the web dev game (or at least ‘front end’ as you cool kids are calling it? ;) for a while, I was a little dismayed when I asked an LLM for a web page that did a specific thing. It required Bootstrap and Angular and a bunch of other stuff. I asked it nicely to redo it in plain HTML and JS, and the resulting code was simpler and no longer than the original, with no external dependencies.
116. infamia ◴[] No.43636284{8}[source]
> No, it is not. There's no way to send the incremental results of typing in a search box, for instance, to the server with HTML alone - you need to use Javascript.

Hypermedia applications use javascript (e.g., htmx - the original subject), so I'm not sure why you're hung up on that.

> And then you're still paying the latency penalty, because you don't know what the user's full search term is going to be until they press enter, and any autocomplete is going to have to make a full round trip with each incremental keystroke.

You just send the request on keydown. It's going to take about ~50-75ms or so for your user's finger to traverse into the up position. Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.
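
The stock htmx active-search pattern is a one-liner to the same effect (a sketch; the keydown variant would just change the hx-trigger):

    <input type="search" name="q"
           hx-get="/search" hx-trigger="keyup changed delay:200ms"
           hx-target="#results">
    <div id="results"></div>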

> As someone building web applications - no, they really aren't.

We were originally talking about "normal" (JS) web applications (e.g. React, Angular, etc.). Most of these apps have all the traits I mentioned earlier. We've all used these pigs that take forever on first load, cause high CPU utilization, and are often janky.

> My webapps sip power and compute and are low-latency while still being very poorly optimized.

And now you have subtly moved the goalposts to only consider the web apps you're building, in place of the "normal" JS webapps you originally compared against htmx. I saw you do the same thing in another thread on this story. I have no further interest in engaging in that sort of discussion.

replies(1): >>43640348 #
117. throw10920 ◴[] No.43640348{9}[source]
> Hypermedia applications use javascript (e.g., htmx - the original subject), so I'm not sure why you're hung up on that.

Because you falsely claimed otherwise:

>> This is a solved problem. It is simple to download content shortly before it is very likely to be needed using plain old HTML.

So, another false statement on your part.

> You just send the request on keydown. It's going to take about ~50-75ms or so for your user's finger to traverse into the up position. Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.

No, it's not "plenty of time" because many users have latency in the 100's of ms (mine on my mobile connection is ~200ms), and some on satellite/in remote areas with poor infra have latency of up to a second - and that's completely ignoring server response latency, bandwidth limitations on data transport, and rehydration time on the frontend.

> Considering anything under ~100-150ms feels instantaneous, that's plenty of time to return a response.

Scientifically wrong: "the lower threshold of perception was 85 ms, but that the perceived quality of the button declined significantly for latencies above 100 ms"[1].

> We were originally talking about "normal" (js) web applications (e.g. react, angular, etc.).

Factually incorrect. We were talking about normal frontend technologies - including vanilla, which you intentionally left out - so even if you include those heavyweight frameworks:

> most of these apps have all the traits I mentioned earlier. We all have used these pigs that take forever on first load, cause high cpu utilization, and are often janky.

...this is a lie, because we're not talking about normal apps, we're talking about technologies. All you have to do is create a new React or Angular or Vue application, bundle it, and observe that the application size is under 300k, and responds instantly to user input.

> And now you have subtlely moved the goals posts to only consider the web apps you're building, in place of "normal" js webapps you originally compared against htmx.

Yet another lie, and gaslighting to boot. I never moved the goalposts - my comments have been about the technologies, not what webapps people "normally" build - you were the one who moved the goalposts by changing the discourse from the trade-space decisionmaking that I was talking about to trying to malign modern web frameworks (and intentionally ignoring the fact that I included vanilla webtech) based on how some developers use them. My example was merely to act as a counter-example to prove how insane your statements were.

Given that you also made several factually incorrect statements in another thread[2], we can conclude that in addition to maliciously lying about things that I've said, you're also woefully ignorant about how web development works.

Between these two things, I think we can safely conclude that htmx doesn't really have any redeeming qualities, given that you were unable to describe coherent arguments for it, and resorted to lies and falsehoods instead.

[1] https://www.tactuallabs.com/papers/howMuchFasterIsFastEnough...

[2] https://news.ycombinator.com/item?id=43621954

118. can3p ◴[] No.43644028{3}[source]
Thanks for the correction!