
752 points dceddia | 12 comments
yomlica8 ◴[] No.36447314[source]
It blows my mind how unresponsive modern tech is, and it frustrates me constantly. What makes it even worse is that the lags are unpredictable, so you can't even train yourself around them.

I was watching Halt and Catch Fire, and in the first season the engineering team makes a great effort to meet something called the "Doherty Threshold" (keeping system response time under about 400 ms) so the user doesn't get frustrated and lose interest. I guess that is lost to time!

replies(18): >>36447344 #>>36447520 #>>36447558 #>>36447932 #>>36447949 #>>36449090 #>>36449889 #>>36450472 #>>36450591 #>>36451868 #>>36452042 #>>36453741 #>>36454246 #>>36454271 #>>36454404 #>>36454473 #>>36462340 #>>36469396 #
sidewndr46 ◴[] No.36447344[source]
Even worse is the new trend of web pages optimizing for page load time. You wind up with a page that loads "instantly" but has almost none of the data you need displayed. Instead there are 2 or 3 AJAX requests to load the data & populate the DOM. Each one results in a repaint, wasting CPU and causing the page content to move around.
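
A minimal sketch of that pattern, with made-up endpoint and element names; each response lands at a different time, so each DOM write triggers another repaint and shifts the content around:

    // The page is "loaded" already, but the useful data trickles in afterwards.
    async function populatePage(): Promise<void> {
      const header = await fetch('/api/header').then(r => r.json());
      document.querySelector('#header')!.textContent = header.title;  // repaint 1

      const items = await fetch('/api/items').then(r => r.json());
      document.querySelector('#items')!.innerHTML = items
        .map((i: { name: string }) => `<li>${i.name}</li>`)
        .join('');                                                    // repaint 2, content jumps

      const footer = await fetch('/api/footer').then(r => r.json());
      document.querySelector('#footer')!.textContent = footer.note;   // repaint 3
    }
    populatePage();
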
replies(13): >>36447430 #>>36448035 #>>36448135 #>>36448336 #>>36448834 #>>36449278 #>>36449850 #>>36450266 #>>36454683 #>>36455856 #>>36456553 #>>36457699 #>>36458429 #
danieldk ◴[] No.36448336[source]
This drives me crazy, especially because it breaks finding within a page. E.g., if you're ordering food and you already know what you want.

Old days: Cmd + f, type what you want.

New days: first scroll to the end of the page so that all the content is actually loaded. Then Cmd + f, type what you want.

It's just a list of dishes, some with small thumbnails, some without any images at all. If you can't load a page with 30 dishes fast enough, you have a serious problem (you could always lazily load the thumbnails if you want to cheat).
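
A sketch of that cheat, assuming the dish names are rendered up front (so Cmd + f works immediately) and only the thumbnails are deferred via the browser's native loading="lazy":

    // Put all 30 dish names in the DOM right away; defer only the images.
    function renderDishes(dishes: { name: string; thumb?: string }[]): string {
      return dishes.map(d =>
        `<li>${d.name}${d.thumb
          ? ` <img src="${d.thumb}" loading="lazy" width="64" height="64" alt="">`
          : ''}</li>`
      ).join('');
    }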

replies(6): >>36448673 #>>36448968 #>>36449626 #>>36449636 #>>36449814 #>>36454049 #
sazz ◴[] No.36449636[source]
Well, the tech stack is insane: a virtual machine running a web-browser process, which runs a virtual machine for an HTML renderer, which consumes a document markup language, which embeds a scripting language to work around the document model's limitations, all in the service of building interactive programs.

It's actually much worse than what Microsoft once did with its COM model, ActiveX built on the MFC class library, C++ templates, and so on.

And to build those interactive programs, somebody is trained to use React, Vue, etc., each with its own ecosystem of tools. That in turn is operated by a stack of build tools, a stack of distribution tools, Kubernetes for hosting, and AWS to manage the whole damn thing.

Oh - and don't even get me started on dependency management, monitoring, microservices, authorization, and so on...

But I really wonder - which would be more complex?

Building interactive programs based on HTML, or on Logo (if anybody still remembers it)?

replies(7): >>36449860 #>>36449954 #>>36450498 #>>36451505 #>>36452358 #>>36453795 #>>36454122 #
1. Dylan16807 ◴[] No.36450498[source]
Virtual machines don't add much overhead.
replies(3): >>36450607 #>>36454298 #>>36459630 #
2. vladvasiliu ◴[] No.36450607[source]
That may be so, although the same program measurably runs slower on an AWS VM than on my Xeon from 2013 or so.

But, more generally, the problem is that tech stacks are an awful amalgamation of uncountable pieces which, taken by themselves, "don't add much overhead". But when you add them all together, you end up with a terribly laggy affair; see the other commenter's description of a web page with 30 dishes taking forever to load.

replies(2): >>36450758 #>>36451281 #
3. Dylan16807 ◴[] No.36450758[source]
My experience has been that HTML, and making changes to HTML, has a huge amount of overhead all by itself.

I don't dispute that very bad JavaScript can cause problems, but I don't think the virtualization layers or the specific language are responsible for more than a sliver of that in the vast majority of use cases.

And the pile of build and distribution tools shouldn't hurt the user at all.

replies(1): >>36451031 #
4. Brian_K_White ◴[] No.36451031{3}[source]
It's exactly the pile of layers, not "bad javascript" or any other single thing. And yes, VMs do add ridiculous overhead.
replies(2): >>36451174 #>>36451634 #
5. Dylan16807 ◴[] No.36451174{4}[source]
When half the layers add up to 10% of the problem, and the other half of the layers add up to 90% of the problem, I don't blame layers in general. If you remove the parts that legitimately are just bad by themselves, you solve most of the problems.

If you have a dozen layers and they each add 5% slowdown, okay that makes a CPU half as fast. That amount of slowdown is nothing for UI responsiveness. A modern core clocked at 300MHz would blow the pants off the 600MHz core that's responding instantly in the video, and then it would clock 10x higher when you turn off the artificial limiter. Those slight slowdowns of layered abstractions are not the real problem.
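
A quick check of that arithmetic, treating each layer as an independent 5% slowdown:

    // Twelve stacked layers, each leaving 95% of the previous layer's speed.
    const effectiveSpeed = Math.pow(0.95, 12);
    console.log(effectiveSpeed.toFixed(2)); // 0.54, i.e. roughly half the throughput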

(Edit: And note that's not for a dozen layers total but for a dozen additional layers on top of what the NT code had.)

replies(1): >>36453757 #
6. sazz ◴[] No.36451281[source]
"But when you add them all together, you end up with a terribly laggy affair"

I think this is the key.

People fall in love with simple systems that solve a problem. While still in the honeymoon phase, they want to use those systems for additional use cases. So they build stuff around them, on top of them, and abstract the original system to allow it to grow into more complexity.

Then the system becomes too complex, so people come up with a new idea: they create a new simple system that fixes just one part of the whole stack and allows migration away from the old one. And the party starts all over again.

But it's like hard-drive fragmentation - at some point there are so many layers that it's basically impossible to recombine them. Everybody fears the complexity that has already been built.

7. jrott ◴[] No.36451634{4}[source]
At this point there are so many layers that it would be hard to figure out the common problems without doing some serious profiling work across a whole bunch of applications.
8. TeMPOraL ◴[] No.36453757{5}[source]
Unfortunately, at this point some of the slow layers are in hardware, or immediately adjacent to it. For example, AFAIR[0], the time between your keyboard registering a press and the corresponding event reaching the application is already counted in milliseconds and can become perceptible. Even assuming the app processes it instantly, that's just half of the I/O loop. The other half, changing something on the display, involves digging through GUI abstraction layers, the compositor, possibly waiting on the GPU a bit, and then... these days, displays themselves tend to introduce single-digit-millisecond lags due to the time it takes to flip a pixel, plus the buffering added to mask it.

These are things we're unlikely to get back (and by themselves they already make typical PC stacks unable to deliver a smooth handwriting experience - Microsoft Research had a good demo some time ago showing that the round trip between touch event and display update needs to get down to single-digit milliseconds for it to feel like manipulating a physical thing, rather than something attached to your finger by a rubber band). Win2K on hardware from that era is going to remain snappier than modern computers for that reason alone. But that only underscores the need to make the userspace software part leaner.
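
You can get a rough lower bound on the software half of that loop from inside a browser; this sketch only sees keydown-to-next-frame, not the keyboard's scan delay or the display's pixel-flip time:

    // event.timeStamp and performance.now() share the same high-resolution clock;
    // requestAnimationFrame runs just before the next frame is rendered.
    document.addEventListener('keydown', (e: KeyboardEvent) => {
      requestAnimationFrame(() => {
        const ms = performance.now() - e.timeStamp;
        console.log(`keydown -> next frame: ${ms.toFixed(1)} ms`);
      });
    });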

--

[0] - Source: I'll need to find that blog that's regularly on HN, whose author did measurements on this.

replies(1): >>36456133 #
9. ungamedplayer ◴[] No.36454298[source]
Except for when they do.
10. jbboehr ◴[] No.36456133{6}[source]
Perhaps it's this one?

http://danluu.com/input-lag/

replies(1): >>36458714 #
11. TeMPOraL ◴[] No.36458714{7}[source]
Yes, this one exactly, thank you!
12. immibis ◴[] No.36459630[source]
That depends on how much overhead is in the VM. WASM is designed to be thin; Java is not.