752 points dceddia | 60 comments
yomlica8 ◴[] No.36447314[source]
It blows my mind how unresponsive modern tech is, and it frustrates me constantly. What makes it even worse is how unpredictable the lags are so you can't even train yourself around it.

I was watching Halt and Catch Fire and in the first season the engineering team makes a great effort to meet something called the "Doherty Threshold" to keep the responsiveness of the machine so the user doesn't get frustrated and lose interest. I guess that is lost to time!

replies(18): >>36447344 #>>36447520 #>>36447558 #>>36447932 #>>36447949 #>>36449090 #>>36449889 #>>36450472 #>>36450591 #>>36451868 #>>36452042 #>>36453741 #>>36454246 #>>36454271 #>>36454404 #>>36454473 #>>36462340 #>>36469396 #
sidewndr46 ◴[] No.36447344[source]
Even worse is the new trend of web pages optimizing for page load time. You wind up with a page that loads "instantly" but has almost none of the data you need displayed. Instead there are 2 or 3 AJAX requests to load the data & populate the DOM. Each one results in a repaint, wasting CPU and causing the page content to move around.
replies(13): >>36447430 #>>36448035 #>>36448135 #>>36448336 #>>36448834 #>>36449278 #>>36449850 #>>36450266 #>>36454683 #>>36455856 #>>36456553 #>>36457699 #>>36458429 #
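
A minimal sketch of one way around the repaint churn sidewndr46 describes: batch the requests and commit a single DOM update. The endpoint paths and the render helper are invented for illustration.

    // Instead of N sequential fetches that each trigger a repaint,
    // resolve them together and write to the DOM once.
    async function loadPage(): Promise<void> {
      const [user, items, promos] = await Promise.all([
        fetch("/api/user").then(r => r.json()),
        fetch("/api/items").then(r => r.json()),
        fetch("/api/promotions").then(r => r.json()),
      ]);
      // One DOM write -> one layout and repaint instead of three.
      document.querySelector("#app")!.innerHTML = render(user, items, promos);
    }

    // Hypothetical renderer; a real page would build proper markup.
    function render(user: unknown, items: unknown, promos: unknown): string {
      return `<pre>${JSON.stringify({ user, items, promos })}</pre>`;
    }
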
1. danieldk ◴[] No.36448336[source]
This drives me crazy, especially because it breaks finding within a page. E.g. if you order food and you already know what you want.

Old days: Cmd + f, type what you want.

New days: first scroll to the end of the page so that all the contents are actually loaded. Cmd + f, type what you want.

It's just a list of dishes, some with small thumbnails, some without any images at all. If you can't load a page with 30 dishes fast enough, you have a serious problem (you could always lazily load the thumbnails if you want to cheat; a sketch of that follows below).

replies(6): >>36448673 #>>36448968 #>>36449626 #>>36449636 #>>36449814 #>>36454049 #
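
A minimal sketch of that "cheat": render all 30 dish names up front so Cmd+F works, and let the browser defer only the thumbnails. The Dish shape is invented for illustration.

    interface Dish { name: string; thumb?: string; }

    function renderMenu(dishes: Dish[], root: HTMLElement): void {
      const list = document.createElement("ul");
      for (const dish of dishes) {
        const li = document.createElement("li");
        li.textContent = dish.name;   // full text in the DOM, so Cmd+F finds it
        if (dish.thumb) {
          const img = document.createElement("img");
          img.loading = "lazy";       // browser fetches only near the viewport
          img.src = dish.thumb;
          li.appendChild(img);
        }
        list.appendChild(li);
      }
      root.appendChild(list);
    }
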
2. mikecoles ◴[] No.36448673[source]
I miss typing / to search on pages. Ctrl-W still messes me up at times when using a terminal in a browser.
replies(4): >>36448807 #>>36448949 #>>36449154 #>>36449651 #
3. TechieKid ◴[] No.36448807[source]
You can search with / on Firefox.
replies(2): >>36449643 #>>36450495 #
4. abdusco ◴[] No.36448949[source]
Firefox lets you use / for search.
5. TeMPOraL ◴[] No.36448968[source]
> New days: first scroll to the end of the page so that all the contents are actually loaded. Cmd + f, type what you want.

Only to discover that 2/3 of the matches are invisible text that's put there for $deity knows what reason, and the rest only gives you the subset of what you want, as the UI truncates the list of ingredients/toppings and you need to click or hover over it to see it in full.

6. dbtc ◴[] No.36449154[source]
vimium / vimium-ff
7. vhcr ◴[] No.36449626[source]
Or even worse, Cmd + F doesn't work because the page is lazily rendered.
replies(2): >>36450298 #>>36453959 #
8. sazz ◴[] No.36449636[source]
Well, the tech stack is insane: some virtual machine running a web browser process running a virtual machine for an HTML renderer, which consumes a document markup language incorporating a scripting language to overcome the document limitations, all trying to build interactive programs.

Actually much worse than what Microsoft once did with their COM model: ActiveX based on the MFC foundation classes with C++ templates, etc.

And to build those interactive programs somebody is trained to use React, Vue, etc., each with its own ecosystem of tools. This is operated by a stack of build tools, a stack of distribution tools, Kubernetes for hosting and AWS for managing that whole damn thing.

Oh - and do not talk even about Dependency Management, Monitoring, Microservices, Authorization and so on...

But I really wonder - what would be more complex?

Building interactive programs based on HTML or Logo (if anybody does remember)?

replies(7): >>36449860 #>>36449954 #>>36450498 #>>36451505 #>>36452358 #>>36453795 #>>36454122 #
9. 5e92cb50239222b ◴[] No.36449643{3}[source]
' will search only within links, skipping plain text.
10. _a_a_a_ ◴[] No.36449651[source]
Firefox allows you to use / to start a conventional search.

As a bonus, in Firefox if you hit the ' key (apostrophe) you get a search that looks only within hyperlinks and ignores all the un-clickable plain text. Give it a try; sometimes it can be very useful.

11. nzach ◴[] No.36449814[source]
>If you can't load a page with 30 dishes fast enough, you have a serious problem

That depends on your scale. If your product is "large enough" it is relatively easy to get into the range of several seconds of response time.

Here are some of the steps you may want to execute before responding to a request from your user:

- Get all the dishes that have the filters the user selected

- Remove all dishes from restaurants that don't deliver to the user's location

- Remove all dishes from restaurants that aren't open right now

- Get all discount campaigns for the user and apply their effects to every dish

- Reorder the dish list based on the history of the user interactions

Now imagine that for every step in this list there is at least one team of developers. Add some legacy requirements and a little bit of tech debt... That's it: now you have the perfect stage for a request that takes 5-10 seconds.

replies(3): >>36450058 #>>36450270 #>>36455639 #
12. myth2018 ◴[] No.36449860[source]
And don't forget the UX designers armed with their Figmas and the like. The tech stack is only one among a number of organizational and cultural issues crippling the field.
replies(1): >>36492525 #
13. atchoo ◴[] No.36449954[source]
Ironically, those instant-starting NT applications were often using COM.

As much as I hated developing with COM, the application interoperability and OLE automation are a form of 90s tech utopianism that I miss.

replies(4): >>36453279 #>>36454710 #>>36456031 #>>36459614 #
14. bamfly ◴[] No.36450058[source]
Dafuq kind of scale could even a service for lots of restaurants have, that that wouldn't be a single query taking milliseconds to execute? I'd maybe split the last bit (user history re-ordering) into another operation, but the rest, nah, not seeing it, one quick query, probably behind a view.

I mean maybe your DB is a single node running on a potato and your load's very high but you're also somehow never hitting cache, but otherwise... no, there's no good reason for that to be slow.

[EDIT] Your last paragraph is the reason, though: it's made extremely poorly. That'll do it.

replies(1): >>36459713 #
15. jerf ◴[] No.36450270[source]
None of the things you mentioned should be hard. We did complicated things like that and more in the 1990s.

But it was different...

Yeah. It was. That's exactly my point.

A major problem is the number of places in our code stacks where developers think it's perfectly normal for things to take 50ms or 500ms that aren't. I am not a performance maniac but I'm always keeping a mental budget in my head for how long things should take, and if something that should be 50us takes 50ms I generally at some point dig in and figure out why. If you don't even realize that something should be snappy you'll never dig into why your accidentally quadratic code is as slow as it is.

Another one I think is ever-increasingly to blame is the much celebrated PHP-esque "fully isolated page", where a given request is generated and then everything is thrown away. It was always a performance disaster, but when you go from 1 request to dozens for the simplest page render it becomes extra catastrophic. A lot of my web sites are a lot faster than my fellow developers expect simply because I reject that as a model for page generation. Things are a lot faster if you're only serving what was actually requested and not starting everything up from scratch.

Relatedly, developers really underestimate precomputation, which is very relevant to your point. Your hypothetical page is slow because you waited until the user actually clicked "menu" to start generating all that. Why did you do that? You should have computed it all at login time and had it stored right at your fingertips, because for the sort of page you're talking about it is a reasonable assumption that if the user logged in, they are there to make an order, not to look at their credit card settings. Even if it is expensive for reasons out of your control (a location API, for instance), if you already did the work you can serve the user instantly.

Having precomputed all this data, you might as well shove it all down to the client and let them manipulate it there with zero further network requests. A menu is a trivial amount of information.

It isn't even like precomputation is hard. It's the same code, just running at a different time.

"But what about when that doesn't work?" Well, you do something else. You've got a huge list of options. I haven't even scratched the surface. This isn't a treatise on how to speed up every conceivable website, this is a cri de coeur to stop making excuses for not even trying, and just try a little.

And it is SO MUCH FUN. Those of you who don't try have no idea what you are missing out on. It is completely normal, on a code base no one has ever profiled before, to find a 50ms process and improve it to 50us with just a handful of lines tweaked. It is completely normal to examine a DB query taking 3 seconds and find that a single ALTER TABLE ADD INDEX cuts it down to 2us. This is the most fun I have at work. Give it a try. It's addictive!

replies(2): >>36450558 #>>36450979 #
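
A minimal sketch of the precomputation jerf describes, with invented names: the expensive menu payload is built at login, so the menu request becomes a cache read.

    const menuCache = new Map<string, string>(); // userId -> pre-rendered JSON

    async function onLogin(userId: string): Promise<void> {
      // Same code that would have run per-request, just run earlier.
      const menu = await buildMenuFor(userId); // filters, discounts, ordering
      menuCache.set(userId, JSON.stringify(menu));
    }

    function serveMenu(userId: string): string {
      // Hot path: a map lookup instead of queries and recomputation.
      return menuCache.get(userId) ?? "{}"; // fall back to on-demand in real code
    }

    // Stand-in for the expensive work (DB queries, discount rules, sorting).
    async function buildMenuFor(userId: string): Promise<object> {
      return { userId, dishes: [] };
    }
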
16. pohuing ◴[] No.36450298[source]
Twitter does that and I don't understand why. Do they think their js blob can figure out which divs to render faster than the browser engine itself?
replies(1): >>36452979 #
17. GrinningFool ◴[] No.36450495{3}[source]
Unless you're on github.com, because it rather inconsiderately takes over `/`.
18. Dylan16807 ◴[] No.36450498[source]
Virtual machines don't add much overhead.
replies(3): >>36450607 #>>36454298 #>>36459630 #
19. ericd ◴[] No.36450558{3}[source]
Yeah, your last point is totally spot on. It is so gratifying to make something feel obviously much faster, in a way that few other things in programming are, and there's usually a lot of low-hanging fruit.

Also, if you work on a website, the Google crawler seems to allocate a certain amount of wall time (not just CPU time) to crawling your page. If you can get your pages to respond extremely quickly, more of your pages will be indexed, and you're going to match for more keywords. So if for some reason people aren't convinced that speed is an important feature for users wanting to use your site, maybe SEO benefits will help make the case.

20. vladvasiliu ◴[] No.36450607{3}[source]
That may be so, although a VM on AWS is measurably slower than the same program running on my Xeon from 2013 or so.

But, more generally, the problem is that tech stacks are an awful amalgamation of uncountable pieces which, taken by themselves, "don't add much overhead". But when you add them all together, you end up with a terribly laggy affair; see the other commenter's description of a web page with 30 dishes taking forever to load.

replies(2): >>36450758 #>>36451281 #
21. Dylan16807 ◴[] No.36450758{4}[source]
My experience has been that HTML, and making changes to HTML, has huge amounts of overhead all by itself.

I don't dispute that very bad javascript can cause problems, but I don't think it's the virtualization layers or the specific language that are responsible for more than a sliver of that in the vast majority of use cases.

And the pile of build and distribution tools shouldn't hurt the user at all.

replies(1): >>36451031 #
22. Sohcahtoa82 ◴[] No.36450979{3}[source]
ALTER TABLE ADD INDEX, in some cases, can yield speedups of multiple orders of magnitude.

I was using a SAST program backed by MS SQL Server to generate reports, and often found the reports took HOURS to generate, even when the report was only ~50 pages. One specific project took over a DAY. That seemed ludicrous, so I logged onto the SQL server to investigate and found that one query was taking 99% of the time. It was searching a table with tens of millions of rows that wasn't indexed on the specific columns it was filtering against, and many variations of the query were being used to generate the report. I added the index (it only took about an hour, IIRC), and what took hours now took a couple of minutes.

I was always surprised the software didn't create that index to begin with.

replies(1): >>36455993 #
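
A sketch of that kind of fix. All names are invented, and the original case was MS SQL Server; the idea is simply to index the columns the report query filters on.

    type Db = { query(sql: string): Promise<void> };

    async function addReportIndex(db: Db): Promise<void> {
      // Turns a scan over tens of millions of rows into an index seek
      // for every variation of the report query.
      await db.query(
        "CREATE INDEX ix_findings_project_rule " +
        "ON findings (project_id, rule_id)"
      );
    }
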
23. Brian_K_White ◴[] No.36451031{5}[source]
It's exactly the pile of layers, not "bad javascript" or any other single thing. Yes, VMs do add ridiculous overhead.
replies(2): >>36451174 #>>36451634 #
24. Dylan16807 ◴[] No.36451174{6}[source]
When half the layers add up to 10% of the problem, and the other half of the layers add up to 90% of the problem, I don't blame layers in general. If you remove the parts that legitimately are just bad by themselves, you solve most of the problems.

If you have a dozen layers and they each add 5% slowdown, okay that makes a CPU half as fast. That amount of slowdown is nothing for UI responsiveness. A modern core clocked at 300MHz would blow the pants off the 600MHz core that's responding instantly in the video, and then it would clock 10x higher when you turn off the artificial limiter. Those slight slowdowns of layered abstractions are not the real problem.

(Edit: And note that's not for a dozen layers total but for a dozen additional layers on top of what the NT code had.)

replies(1): >>36453757 #
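
The arithmetic behind "a dozen layers each adding 5% makes a CPU half as fast":

    const perLayerCost = 1.05;        // each layer multiplies time by 1.05
    const layers = 12;
    const slowdown = perLayerCost ** layers;
    console.log(slowdown.toFixed(2)); // ~1.80x slower, i.e. roughly half the speed
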
25. sazz ◴[] No.36451281{4}[source]
"But when you add them all together, you end up with a terribly laggy affair"

I think this is the key.

People fall in love with simple systems which solve a problem. Totally in the honeymoon phase, they want to use those systems for additional use cases. So they build stuff around them, atop them, abstracting the original system to allow growth into more complexity.

Then the system becomes too complex, so people might come up with a new idea. So they create a new simple system just to fix a part of the whole stack and allow migration from the old one. The party starts all over again.

But it's like hard drive fragmentation - at some point there are so many layers that it's basically impossible to recombine layers again. Everybody fears complexity already built.

26. pjmlp ◴[] No.36451505[source]
And then came WASM...
replies(1): >>36459679 #
27. jrott ◴[] No.36451634{6}[source]
At this point there are so many layers that it would be hard to figure out the common problems without doing some serious work profiling a whole bunch of applications.
28. syntheweave ◴[] No.36452358[source]
Well, the way in which we got there is in adding more features to the "obvious" answer: all any given site necessarily has to do, to have the same presentation as today, is place pixels on the screen, change the pixels when the user clicks or types things, and persist data somewhere as the user does things.

Except...there's no model of precisely how text is handled or how to re-encode it for e.g. screen reading...the development model lacks the abstraction to do layout...and so on. So we added a longer pipeline with more things to configure, over and over.

But - the computing environment is also different now. We can say, "aha, but OCR exists, GPT exists" and pursue a completely different way of presenting many of those features, where you leverage a higher grade of computing power to make the architectural line between the presentation layer and the database extremely short and weighted towards "user in control of their data and presentation". That still takes engineering and design, but the order of magnitude goes down, allowing complexity to bottleneck elsewhere.

That's the kind of conceptual leap computing has made a few times over history - at first the idea of having the computer itself compile your program from a textual form to machine instructions ("auto-coding") was novel and complex. Nowadays we expect our compilers to make coffee for us.

29. RichardCA ◴[] No.36452979{3}[source]
I am in no way a front-end person, but my understanding is that it has to do with React and the way it maintains a virtual copy of the DOM.

Having Ctrl-F not work correctly is maddening and I hate it.

replies(1): >>36454004 #
30. TeMPOraL ◴[] No.36453279{3}[source]
Indeed.

On one project, I actually shifted quite recently from working on old-school, pre-Windows XP, DCOM-based protocols, to interfacing with REST APIs. Let me tell you this: compared to OpenAPI tooling, DCOM is a paradise.

I have no clue how anyone does anything with OpenAPI. Just about every tool I tried for turning OpenAPI specs into C or C++ code, or documentation, is horribly broken. And this isn't me "holding it wrong" - in each case, I found GitHub issues about those same failures, submitted months or years ago and still open, on supposedly widely-used and actively maintained projects...

replies(2): >>36453846 #>>36454428 #
31. TeMPOraL ◴[] No.36453757{7}[source]
Unfortunately, at this point some of the slow layers are in hardware, or immediately adjacent to it. For example, AFAIR[0], the time between your keyboard registering a press, and the corresponding event reaching the application, is already counted in milliseconds and can become perceptible. Even assuming the app processes it instantly, that's just half of the I/O loop. The other half, that is changing something on display, involves digging through GUI abstraction layers, compositor, possibly waiting on GPU a bit, and then... these days, displays themselves tend to introduce single-digit millisecond lags due to the time it takes to flip a pixel, and buffering added to mask it.

These are things we're unlikely to get back (and by themselves they already make typical PC stacks unable to deliver a smooth hand-writing experience - Microsoft Research had a good demo some time ago showing that you need to get the round-trip between touch event and display update down to single-digit milliseconds for it to feel like manipulating a physical thing, vs. having it attached to your finger by a rubber band). Win2K on hardware from that era is going to remain snappier than modern computers for that reason alone. But that only underscores the need to make the userspace software part leaner.

--

[0] - Source: I'll need to find that blog that's regularly on HN, whose author did measurements on this.

replies(1): >>36456133 #
32. meese712 ◴[] No.36453795[source]
Here's a fun fact: most fonts contain a font program, written in a font-specific instruction set, that requires a virtual machine to run. There is no escaping the VMs!
replies(1): >>36453995 #
33. nycdotnet ◴[] No.36453846{4}[source]
same with C#
34. TheBrokenRail ◴[] No.36453959[source]
Like GitHub's new code viewer! I have to load the raw text just so I can use Ctrl-F!
replies(1): >>36464894 #
35. DaiPlusPlus ◴[] No.36453995{3}[source]
A VM is not a VM. A program whose semantics are defined in terms of "a" virtual machine (Java, .NET, etc.) is otherwise entirely unrelated to virtualisation.
replies(2): >>36469595 #>>36627695 #
36. korm ◴[] No.36454004{4}[source]
Nothing to do with React; it's a common optimization to improve performance with long lists: you only render the DOM elements in the viewport, plus some buffer. A common technique to achieve that is called "virtualization" or "windowing".

It's common enough that there were a couple of browser proposals to deal with it that would address the Ctrl+F issue. I believe this has been merged into the CSS Containment spec, but at the moment it doesn't make windowing obsolete in every situation.

replies(2): >>36455791 #>>36457351 #
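
A minimal sketch of that CSS Containment approach: rows stay in the DOM (so Ctrl+F keeps working), but the engine skips layout and paint for off-screen ones. The class name and placeholder height are made up.

    const style = document.createElement("style");
    style.textContent = `
      .menu-row {
        content-visibility: auto;      /* skip rendering while off-screen */
        contain-intrinsic-size: 48px;  /* reserve height so scrolling stays stable */
      }
    `;
    document.head.appendChild(style);
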
37. brazzledazzle ◴[] No.36454049[source]
If you think that’s enraging just wait until you get a page that unloads the previous content as you scroll. You can only search the text that’s visible. Maddening.
38. civilitty ◴[] No.36454122[source]
> But I really wonder - what would be more complex?

> Building interactive programs based on HTML or Logo (if anybody does remember)?

Hold my beer: my Github Actions CI scripts use Logo to generate the bash build scripts as images that are then OCRed and executed by a special terminal that exploits the Turing complete nature of Typescript's type system.

Turtles all the way down!

39. ungamedplayer ◴[] No.36454298{3}[source]
Except for when they do.
40. kbenson ◴[] No.36454428{4}[source]
I too was once very excited that the OpenAPI specs I had access to would save me untold hours in implementing an API for a service, since I could pass them through a generator. Once I tried, everything seemed somewhat broken, or the important time-saving bits just weren't quite ready yet.

That was about five years ago. :/

41. cmgbhm ◴[] No.36454710{3}[source]
I think of the OLE demos every time I shove a google sheet into a google doc and realize it’s only a one way sync.
42. jaxrtech ◴[] No.36455639[source]
In a well-constructed system, you could probably get this down to one PostGIS query with some joins and spatial indexes that should run in <100ms.
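
A hedged sketch of what that single query could look like. The schema, column names, and ranking are invented, and node-postgres is assumed for the client:

    import { Client } from "pg";

    const sql = `
      SELECT d.id, d.name, d.price - COALESCE(c.discount, 0) AS price
      FROM dishes d
      JOIN restaurants r ON r.id = d.restaurant_id
      LEFT JOIN campaigns c ON c.dish_id = d.id AND c.user_id = $3
      WHERE d.tags @> $4                     -- the user's selected filters
        AND r.open_now                       -- opening hours
        AND ST_DWithin(                      -- delivery radius; this is what
          r.location,                        -- the spatial (GiST) index serves
          ST_MakePoint($1, $2)::geography,
          r.delivery_radius_m)
      ORDER BY d.popularity DESC
      LIMIT 30`;

    async function menuFor(lng: number, lat: number, userId: number, tags: string[]) {
      const client = new Client();  // connection settings come from env vars
      await client.connect();
      try {
        return (await client.query(sql, [lng, lat, userId, tags])).rows;
      } finally {
        await client.end();
      }
    }
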
43. ziml77 ◴[] No.36455791{5}[source]
It's not even a technique limited to browsers. When I did Android development years ago, it was used in native list controls to reduce their overhead. In that case, though, there's no search over what's been rendered like a web browser has. And of course if you did implement search, you could have it look at the underlying data, so it wouldn't matter what had been rendered.
44. simooooo ◴[] No.36455993{4}[source]
Because it can hog a lot of disk space and slow down inserts.
replies(1): >>36494555 #
45. desi_ninja ◴[] No.36456031{3}[source]
WinRT is COM under the covers
46. jbboehr ◴[] No.36456133{8}[source]
Perhaps it's this one?

http://danluu.com/input-lag/

replies(1): >>36458714 #
47. pohuing ◴[] No.36457351{5}[source]
But web browsers already don't render off-screen content, right? I'm pretty sure I remember opening hundreds of megs of data in Firefox at one point without an issue. Old reddit with RES has infinite scroll, and you can go dozens of pages deep without a hitch, all while not lazily rendering the other pages.
48. TeMPOraL ◴[] No.36458714{9}[source]
Yes, this one exactly, thank you!
49. immibis ◴[] No.36459614{3}[source]
In some ways COM is pretty optimized. An intra-thread COM call is just a virtual function call - no extra overhead. Otherwise it's a virtual function call to a proxy function that knows exactly how to serialize the parameters for IPC.
replies(1): >>36534462 #
50. immibis ◴[] No.36459630{3}[source]
That depends how much overhead is in the VM. WASM is designed to be thin. Java is not.
51. immibis ◴[] No.36459679{3}[source]
WASM is designed to cut through all the bullshit and leave only a minimal amount of bullshit, even though it turns out there's still a lot of bullshit in the other parts of the system that WASM doesn't address.

I like websockets for the same reason. Each message has a two byte overhead compared to TCP. Two bytes. Unfortunately messages sent by the client have a whopping four additional bytes to help protect buggy middleboxes.
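
The frame arithmetic behind those numbers, per RFC 6455 (a small illustrative helper, not part of any library):

    // For payloads up to 125 bytes, a server->client frame adds just 2
    // header bytes; client->server frames must add a 4-byte masking key.
    function wsHeaderBytes(payloadLen: number, fromClient: boolean): number {
      let header = 2;                         // FIN/opcode + mask/length byte
      if (payloadLen > 65535) header += 8;    // 64-bit extended length
      else if (payloadLen > 125) header += 2; // 16-bit extended length
      if (fromClient) header += 4;            // mandatory masking key
      return header;
    }

    console.log(wsHeaderBytes(100, false)); // 2 -- the "two bytes"
    console.log(wsHeaderBytes(100, true));  // 6 -- the extra four from clients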

52. immibis ◴[] No.36459713{3}[source]
Recently I stumbled across the online catalog for Segor Electronics (segor.de I think? Google it. Only in German. They're not paying me to post this)

It's extremely fast. Super duper fast. And a quick look at the network debugging tab shows why: it loads the shop's entire catalog data (about 3 megs) upfront, and the entire application runs locally with not a single request until you buy something. Now that's efficiency.

Really. Go to their website, click on KATALOG and click some random buttons, pick a product at random, add it to your cart, remove it from your cart.

The product images are the only things that aren't pre-loaded.
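
A sketch of that approach with invented names: pull the whole catalog once, then every interaction is a local array scan, with no further requests until checkout.

    interface Product { id: string; name: string; price: number; }

    let catalog: Product[] = [];

    async function init(): Promise<void> {
      // One up-front download (~3 megs in Segor's case), then purely local.
      catalog = await (await fetch("/catalog.json")).json();
    }

    function search(term: string): Product[] {
      const t = term.toLowerCase();
      return catalog.filter(p => p.name.toLowerCase().includes(t));
    }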

53. sidewndr46 ◴[] No.36464894{3}[source]
Someone should invent a way for a web server to return a representation of the text, complete with styling and formatting that the browser can use to render it.
54. froggit ◴[] No.36469595{4}[source]
It always kind of cracks me up when I hear someone having to explain the difference between these two breeds of VM.

At one point back in school a friend said to me "hey, I can't figure out how to install and boot JVM on Virtual Box. I need to use it for homework in another class. Help me?"

I wish I had been able to explain it as succinctly as you. Instead I sat there laughing in the guy's face for a good minute, eventually realizing from his expression that he was being serious, which only made me laugh even harder.

replies(1): >>36627461 #
55. wellanyway ◴[] No.36492525{3}[source]
How does Figma contribute to laggy UI in the end product?
replies(1): >>36507960 #
56. Sohcahtoa82 ◴[] No.36494555{5}[source]
Somehow missed this reply for 3 days...

In my use case, I noticed no impact on inserts. I did notice higher disk space usage, but it was absolutely worth it: spending $200 on a larger disk saved literally days on report generation.

57. myth2018 ◴[] No.36507960{4}[source]
Notice that in this sub-topic we're talking more generally about causes for low-quality software -- laggy UIs being only one of the symptoms.

Figma contributes by enabling UI designers to easily author interfaces which allegedly look beautiful but are complex to build, test and maintain.

And the resources burned on building such esthetically pleasant piles of barely usable software could find better use in making it simpler, faster and more focused on users' actual functional and non-functional requirements (much of which play out on the server side), instead of sugaring their eyes by throwing tons of code at their clients.

58. benibela ◴[] No.36534462{4}[source]
>An intra-thread COM call is just a virtual function call - no extra overhead.

There was a time when a virtual function call was a lot of overhead. Even having a VMT is overhead.

Sometimes the COM interface is implemented as an actual interface, where the implementing class derives from both another class and the interface (in C++ the interface is just another class used via multiple inheritance, though other languages have dedicated interface constructs). Then the class even needs two VMTs.

Multiple VMTs have even more overhead, and with them it is not just a method call anymore. Inside the methods, "this" always points at the first VMT, but when a method is called through a later VMT, the pointer points at that VMT instead. So the compiler creates a wrapper function (a non-virtual thunk) that adjusts "this" and calls the actual function.
