752 points dceddia | 33 comments

yomlica8 ◴[] No.36447314[source]
It blows my mind how unresponsive modern tech is, and it frustrates me constantly. What makes it even worse is how unpredictable the lags are, so you can't even train yourself around them.

I was watching Halt and Catch Fire and in the first season the engineering team makes a great effort to meet something called the "Doherty Threshold" to keep the responsiveness of the machine so the user doesn't get frustrated and lose interest. I guess that is lost to time!

replies(18): >>36447344 #>>36447520 #>>36447558 #>>36447932 #>>36447949 #>>36449090 #>>36449889 #>>36450472 #>>36450591 #>>36451868 #>>36452042 #>>36453741 #>>36454246 #>>36454271 #>>36454404 #>>36454473 #>>36462340 #>>36469396 #
sidewndr46 ◴[] No.36447344[source]
Even worse is the new trend of web pages optimizing for page load time. You wind up with a page that loads "instantly" but has almost none of the data you need displayed. Instead there are 2 or 3 AJAX requests to load the data & populate the DOM. Each one results in a repaint, wasting CPU and causing the page content to move around.
replies(13): >>36447430 #>>36448035 #>>36448135 #>>36448336 #>>36448834 #>>36449278 #>>36449850 #>>36450266 #>>36454683 #>>36455856 #>>36456553 #>>36457699 #>>36458429 #
danieldk ◴[] No.36448336[source]
This drives me crazy, especially because it breaks finding within a page. E.g. if you're ordering food and already know what you want.

Old days: Cmd + f, type what you want.

New days: first scroll to the end of the page so that all the contents are actually loaded. Cmd + f, type what you want.

It's just a list of dishes, some with small thumbnails, some without any images at all. If you can't load a page with 30 dishes fast enough, you have a serious problem (you could always lazily load the thumbnails if you want to cheat).

replies(6): >>36448673 #>>36448968 #>>36449626 #>>36449636 #>>36449814 #>>36454049 #
1. sazz ◴[] No.36449636[source]
Well, the tech stack is insane: some virtual machine running a web browser process, which runs a virtual machine for an HTML renderer, which consumes a document declaration language incorporating a scripting language to overcome the document's limitations, all in order to build interactive programs.

Actually much worse than what Microsoft once did with their COM model: ActiveX based on the MFC foundation classes with C++ templates, etc.

And to build those interactive programs, somebody is trained to use React, Vue, etc., each with its own ecosystem of tools. All of this is operated by a stack of build tools, a stack of distribution tools, Kubernetes for hosting, and AWS for managing the whole damn thing.

Oh - and don't even get me started on Dependency Management, Monitoring, Microservices, Authorization and so on...

But I really wonder - what would be more complex?

Building interactive programs based on HTML, or on Logo (if anybody still remembers it)?

replies(7): >>36449860 #>>36449954 #>>36450498 #>>36451505 #>>36452358 #>>36453795 #>>36454122 #
2. myth2018 ◴[] No.36449860[source]
And don't forget the UX designers armed with their Figmas and the like. The tech stack is only one among a number of organizational and cultural issues crippling the field.
replies(1): >>36492525 #
3. atchoo ◴[] No.36449954[source]
Ironically, these instant-starting NT applications were often using COM.

As much as I hated developing with COM, the application interoperability and OLE automation are a form of 90s tech utopianism that I miss.

replies(4): >>36453279 #>>36454710 #>>36456031 #>>36459614 #
4. Dylan16807 ◴[] No.36450498[source]
Virtual machines don't add much overhead.
replies(3): >>36450607 #>>36454298 #>>36459630 #
5. vladvasiliu ◴[] No.36450607[source]
That may be so, although a program in a VM on AWS is measurably slower than the same program running on my Xeon from 2013 or so.

But, more generally, the problem is that tech stacks are an awful amalgamation of uncountable pieces which, taken by themselves, "don't add much overhead". But when you add them all together, you end up with a terribly laggy affair; see the other commenter's description of a web page with 30 dishes taking forever to load.

replies(2): >>36450758 #>>36451281 #
6. Dylan16807 ◴[] No.36450758{3}[source]
My experience has been that HTML and making changes to HTML has huge amounts of overhead all by itself.

I don't dispute that very bad javascript can cause problems, but I don't think it's the virtualization layers or the specific language that are responsible for more than a sliver of that in the vast majority of use cases.

And the pile of build and distribution tools shouldn't hurt the user at all.

replies(1): >>36451031 #
7. Brian_K_White ◴[] No.36451031{4}[source]
It's exactly the pile of layers, not "bad javascript" or any other single thing. Yes, VMs do add ridiculous overhead.
replies(2): >>36451174 #>>36451634 #
8. Dylan16807 ◴[] No.36451174{5}[source]
When half the layers add up to 10% of the problem, and the other half of the layers add up to 90% of the problem, I don't blame layers in general. If you remove the parts that legitimately are just bad by themselves, you solve most of the problems.

If you have a dozen layers and they each add 5% slowdown, okay that makes a CPU half as fast. That amount of slowdown is nothing for UI responsiveness. A modern core clocked at 300MHz would blow the pants off the 600MHz core that's responding instantly in the video, and then it would clock 10x higher when you turn off the artificial limiter. Those slight slowdowns of layered abstractions are not the real problem.

(Edit: And note that's not for a dozen layers total but for a dozen additional layers on top of what the NT code had.)
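(Checking that arithmetic: a dozen layers at 5% each compound to 1.05^12 ≈ 1.8, i.e. roughly 0.56x the original speed, so "half as fast" holds up.)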

replies(1): >>36453757 #
9. sazz ◴[] No.36451281{3}[source]
"But when you add them all together, you end up with a terribly laggy affair"

I think this is the key.

People fall in love with simple systems which solve a problem. Still in the honeymoon phase, they want to use those systems for additional use cases. So they build stuff around them, on top of them, abstracting the original system to allow growth into more complexity.

Then the system becomes too complex, so someone comes up with a new idea: a new simple system that fixes just one part of the whole stack and allows migration from the old one. The party starts all over again.

But it's like hard drive fragmentation - at some point there are so many layers that it's basically impossible to recombine them. Everybody fears the complexity already built.

10. pjmlp ◴[] No.36451505[source]
And then came WASM...
replies(1): >>36459679 #
11. jrott ◴[] No.36451634{5}[source]
At this point there are so many layers that it would be hard to figure out the common problems without doing some serious work profiling a whole bunch of applications
12. syntheweave ◴[] No.36452358[source]
Well, the way we got here was by adding more features to the "obvious" answer: all any given site ultimately has to do, to achieve the same presentation as today, is place pixels on the screen, change them when the user clicks or types, and persist data somewhere as the user acts.

Except...there's no model of precisely how text is handled or how to re-encode it for e.g. screen reading...the development model lacks the abstraction to do layout...and so on. So we added a longer pipeline with more things to configure, over and over.

But - the computing environment is also different now. We can say, "aha, but OCR exists, GPT exists" and pursue a completely different way of presenting many of those features, where you leverage a higher grade of computing power to make the architectural line between the presentation layer and the database extremely short and weighted towards "user in control of their data and presentation". That still takes engineering and design, but the order of magnitude goes down, allowing complexity to bottleneck elsewhere.

That's the kind of conceptual leap computing has made a few times over history - at first the idea of having the computer itself compile your program from a textual form to machine instructions ("auto-coding") was novel and complex. Nowadays we expect our compilers to make coffee for us.

13. TeMPOraL ◴[] No.36453279[source]
Indeed.

On one project, I actually shifted quite recently from working on old-school, pre-Windows XP, DCOM-based protocols, to interfacing with REST APIs. Let me tell you this: compared to OpenAPI tooling, DCOM is a paradise.

I have no first clue how anyone does anything with OpenAPI. Just about every tool I tried to turn OpenAPI specs into C or C++ code, or documentation, is horribly broken. And this isn't me "holding it wrong" - in each case, I found GitHub issues about those same failures, submitted months or years ago, and still open, on supposedly widely-used and actively maintained projects...

replies(2): >>36453846 #>>36454428 #
14. TeMPOraL ◴[] No.36453757{6}[source]
Unfortunately, at this point some of the slow layers are in hardware, or immediately adjacent to it. For example, AFAIR[0], the time between your keyboard registering a press, and the corresponding event reaching the application, is already counted in milliseconds and can become perceptible. Even assuming the app processes it instantly, that's just half of the I/O loop. The other half, that is changing something on display, involves digging through GUI abstraction layers, compositor, possibly waiting on GPU a bit, and then... these days, displays themselves tend to introduce single-digit millisecond lags due to the time it takes to flip a pixel, and buffering added to mask it.

These are things we're unlikely to get back (and which by themselves already make typical PC stacks unable to deliver a smooth handwriting experience - Microsoft Research had a good demo some time ago showing that you need single-digit milliseconds on the round-trip between touch event and display update for it to feel like manipulating a physical thing rather than something attached to your finger by a rubber band). Win2K on hardware from that era is going to remain snappier than modern computers for that reason alone. But that only underscores the need to make the userspace software part leaner.

--

[0] - Source: I'll need to find that blog that's regularly on HN, whose author did measurements on this.
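A back-of-the-envelope sketch of where such a keypress-to-pixel budget might go; the per-stage numbers are illustrative assumptions, not measurements, loosely in the ranges that blog discusses:

    // Illustrative only: rough per-stage latency guesses for one key press
    // on a typical modern stack; none of these numbers are measured.
    #include <cstdio>

    int main() {
        struct Stage { const char* name; double ms; };
        const Stage stages[] = {
            {"keyboard scan + USB polling",     8.0},
            {"OS input stack / event queue",    2.0},
            {"application processing",          2.0},
            {"GUI toolkit + compositor",       12.0},
            {"display buffering + pixel flip", 12.0},
        };
        double total = 0.0;
        for (const Stage& s : stages) {
            std::printf("%-32s %5.1f ms\n", s.name, s.ms);
            total += s.ms;
        }
        std::printf("%-32s %5.1f ms\n", "end-to-end", total);
    }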

replies(1): >>36456133 #
15. meese712 ◴[] No.36453795[source]
Here's a fun fact: most fonts contain a font program, written in a font-specific instruction set, that requires a virtual machine to run. There is no escaping the VMs!
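To make "a VM for fonts" concrete, here's a toy of the idea - not the actual TrueType instruction set, just its general shape: glyph programs are bytecode executed by a small stack machine inside the rasterizer:

    // A toy stack machine standing in for a font's hinting VM. Real
    // TrueType programs push point coordinates and snap them to the
    // pixel grid; the opcodes here are made up.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    enum Op : std::uint8_t { PUSH, ADD, DUP };

    // Assumes a well-formed program that leaves one result on the stack.
    int RunFontProgram(const std::vector<std::uint8_t>& code) {
        std::vector<int> stack;
        for (std::size_t i = 0; i < code.size(); ++i) {
            switch (code[i]) {
                case PUSH: stack.push_back(code[++i]); break;  // next byte = operand
                case ADD:  { int b = stack.back(); stack.pop_back();
                             stack.back() += b; } break;
                case DUP:  stack.push_back(stack.back()); break;
            }
        }
        return stack.back();
    }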
replies(1): >>36453995 #
16. nycdotnet ◴[] No.36453846{3}[source]
same with C#
17. DaiPlusPlus ◴[] No.36453995[source]
A VM is not a VM. Just because a program's semantics are defined in terms of "a" virtual machine (Java, .NET, etc.) doesn't mean it has anything to do with virtualisation.
replies(2): >>36469595 #>>36627695 #
18. civilitty ◴[] No.36454122[source]
> But I really wonder - what would be more complex?

> Building interactive programs based on HTML or Logo (if anybody does remember)?

Hold my beer: my Github Actions CI scripts use Logo to generate the bash build scripts as images that are then OCRed and executed by a special terminal that exploits the Turing complete nature of Typescript's type system.

Turtles all the way down!

19. ungamedplayer ◴[] No.36454298[source]
Except for when they do.
20. kbenson ◴[] No.36454428{3}[source]
I too was once very excited that the OpenAPI specs I had access to would save me untold hours in implementing an API for a service, since I could pass them through a generator - only to find, once I tried, that everything seemed somewhat broken or the important time-saving bits just weren't quite ready yet.

That was about five years ago. :/

21. cmgbhm ◴[] No.36454710[source]
I think of the OLE demos every time I shove a Google Sheet into a Google Doc and realize it's only a one-way sync.
22. desi_ninja ◴[] No.36456031[source]
WinRT is COM under the covers
23. jbboehr ◴[] No.36456133{7}[source]
Perhaps it's this one?

http://danluu.com/input-lag/

replies(1): >>36458714 #
24. TeMPOraL ◴[] No.36458714{8}[source]
Yes, this one exactly, thank you!
25. immibis ◴[] No.36459614[source]
In some ways COM is pretty optimized. An intra-thread COM call is just a virtual function call - no extra overhead. Otherwise it's a virtual function call to a proxy function that knows exactly how to serialize the parameters for IPC.
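A minimal sketch of that point (ICalc is a stand-in interface, not a real COM declaration - no IUnknown, no HRESULTs, no proxy/stub):

    // An in-process, same-apartment COM call is one vtable dispatch.
    struct ICalc {
        virtual int Add(int a, int b) = 0;
    };

    struct Calc : ICalc {
        int Add(int a, int b) override { return a + b; }
    };

    int Use(ICalc* c) {
        // Compiles to: load vtable pointer, load slot, indirect call --
        // the same cost as any other C++ virtual call.
        return c->Add(2, 3);
    }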
replies(1): >>36534462 #
26. immibis ◴[] No.36459630[source]
That depends on how much overhead is in the VM. WASM is designed to be thin; Java is not.
27. immibis ◴[] No.36459679[source]
WASM is designed to cut through all the bullshit and leave only a minimal amount of bullshit, even though it turns out there's still a lot of bullshit in the other parts of the system that WASM doesn't address.

I like WebSockets for the same reason. Each message has a two-byte overhead compared to TCP. Two bytes. Unfortunately, messages sent by the client carry a whopping four additional bytes to protect buggy middleboxes.
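For reference, a sketch of where those numbers come from in RFC 6455 framing - a server-to-client text frame with a payload of up to 125 bytes needs exactly two header bytes, while client-to-server frames set the MASK bit and append a 4-byte masking key:

    #include <array>
    #include <cstdint>

    // Minimal WebSocket header for a server->client text frame,
    // payload_len <= 125 (longer payloads extend the length field).
    std::array<std::uint8_t, 2> ServerTextFrameHeader(std::uint8_t payload_len) {
        return {
            std::uint8_t{0x81},  // FIN=1, RSV=000, opcode=0x1 (text)
            payload_len          // MASK=0, 7-bit payload length
        };
    }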

28. froggit ◴[] No.36469595{3}[source]
It always kind of cracks me up when I hear someone having to explain the difference between these two breeds of VM.

At one point back in school a friend said to me: "Hey, I can't figure out how to install and boot the JVM on VirtualBox. I need to use it for homework in another class. Help me?"

I wish I had been able to explain it as succinctly as you. Instead I sat there laughing in the guy's face for a good minute, eventually realizing from his expression that he was being serious, which only made me laugh even harder.

replies(1): >>36627461 #
29. wellanyway ◴[] No.36492525[source]
How does Figma contribute to the laggy UI of the end product?
replies(1): >>36507960 #
30. myth2018 ◴[] No.36507960{3}[source]
Notice that in this sub-topic we're talking more generally about causes for low-quality software -- laggy UIs being only one of the symptoms.

Figma contributes by enabling UI designers to easily author interfaces which allegedly look beautiful but are complex to build, test and maintain.

And the resources burned on building such aesthetically pleasing piles of barely usable software could be better spent on making it simpler, faster and more focused on users' actual functional and non-functional requirements (much of which play out on the server side), instead of sugaring their eyes by throwing tons of code at their clients.

31. benibela ◴[] No.36534462{3}[source]
>An intra-thread COM call is just a virtual function call - no extra overhead.

There was a time when a virtual function call was a lot of overhead.

Even having a VMT is overhead.

Sometimes the COM interface is implemented as an actual interface, where the implementing class derives from another class plus the interface (in C++ the interface is just another class used through multiple inheritance, but other languages have interfaces as a designed feature). Then the class needs two VMTs.

Multiple VMTs add even more overhead, and with them it is not just a method call anymore. Method bodies always expect this to point at the start of the object, where the first VMT lives; but when a method is called through a later VMT, the pointer points at that subobject instead. So the compiler generates a wrapper, a "non-virtual thunk", that adjusts this and calls the actual function.
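A sketch of the two-VMT case in C++ - Impl carries two vtable pointers, and calls through the interface pointer go via the thunk:

    struct Base {
        virtual void Work() {}
        int state = 0;
    };

    struct IRunnable {        // "interface": in C++ just another base class
        virtual void Run() = 0;
    };

    struct Impl : Base, IRunnable {       // two base subobjects -> two VMTs
        void Run() override { ++state; }  // body expects this == Impl*
    };

    void Call(IRunnable* r) {
        // r points at the IRunnable subobject inside Impl; its vtable slot
        // for Run holds a non-virtual thunk that adjusts this and then
        // jumps to Impl::Run.
        r->Run();
    }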
