752 points dceddia | 179 comments
1. yomlica8 ◴[] No.36447314[source]
It blows my mind how unresponsive modern tech is, and it frustrates me constantly. What makes it even worse is how unpredictable the lags are, so you can't even train yourself around them.

I was watching Halt and Catch Fire and in the first season the engineering team makes a great effort to meet something called the "Doherty Threshold" to keep the responsiveness of the machine so the user doesn't get frustrated and lose interest. I guess that is lost to time!

replies(18): >>36447344 #>>36447520 #>>36447558 #>>36447932 #>>36447949 #>>36449090 #>>36449889 #>>36450472 #>>36450591 #>>36451868 #>>36452042 #>>36453741 #>>36454246 #>>36454271 #>>36454404 #>>36454473 #>>36462340 #>>36469396 #
2. sidewndr46 ◴[] No.36447344[source]
Even worse is the new trend of web pages optimizing for page load time. You wind up with a page that loads "instantly" but has almost none of the data you need displayed. Instead there are 2 or 3 AJAX requests to load the data & populate the DOM. Each one results in a repaint, wasting CPU and causing the page content to move around.
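
A minimal sketch of the pattern being described, with hypothetical endpoints and element IDs: the HTML shell paints "instantly", then each late fetch mutates the DOM and triggers another layout:

    // Shell arrives empty; the real content trickles in afterwards.
    // Every innerHTML assignment below forces another layout/repaint
    // and shoves content that's already on screen around.
    function hydratePage(): void {
      fetch("/api/user")
        .then(r => r.json())
        .then(u => {
          document.getElementById("header")!.textContent = `Hi, ${u.name}`;
        });

      fetch("/api/items")
        .then(r => r.json())
        .then((items: { title: string }[]) => {
          document.getElementById("items")!.innerHTML =
            items.map(i => `<li>${i.title}</li>`).join("");
        });

      fetch("/api/recommendations")
        .then(r => r.json())
        .then((recs: { title: string }[]) => {
          // Usually lands last and pushes everything below it down.
          document.getElementById("recs")!.innerHTML =
            recs.map(x => `<div>${x.title}</div>`).join("");
        });
    }

Rendering the same data into the initial HTML on the server, or at least collapsing the three calls into one, removes both the extra round trips and the intermediate repaints.
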
replies(13): >>36447430 #>>36448035 #>>36448135 #>>36448336 #>>36448834 #>>36449278 #>>36449850 #>>36450266 #>>36454683 #>>36455856 #>>36456553 #>>36457699 #>>36458429 #
3. leidenfrost ◴[] No.36447430[source]
There was a small accordion in some Google search results that opened about a second after the results page was loaded, and I think it was the most infuriating thing ever. And we are talking about Google here.
replies(5): >>36447524 #>>36447546 #>>36448356 #>>36449240 #>>36450437 #
4. Retr0id ◴[] No.36447520[source]
Unfortunately for most users of most software, "losing interest" isn't really an option - they need it to do some job or other.
replies(1): >>36447716 #
5. amaccuish ◴[] No.36447524{3}[source]
The one where you would go to click on the first result and it would expand seemingly perfectly timed in between and you’d end up somewhere else?
replies(3): >>36447648 #>>36449394 #>>36453231 #
6. polyvisual ◴[] No.36447546{3}[source]
My goodness, that accordion needs to be removed from their code base immediately.

A horrific piece of UI/UX/engineering/whatever.

7. dm319 ◴[] No.36447558[source]
I was just reading this week about someone trying to get their UHK keyboard to launch an application on Windows by producing a sequence of keys starting with the Windows key. They needed to put in a delay to get this to work, and it reminded me of my frustrations launching programs in Windows as the start menu takes its sweet and variable time. Not least because I know the technology has been focused on getting adverts into the start menu.
replies(4): >>36447636 #>>36449361 #>>36450231 #>>36452132 #
8. danuker ◴[] No.36447636[source]
Isn't there still the Run dialog, Win+R in newer versions of Windows?
replies(4): >>36448559 #>>36450153 #>>36458273 #>>36458485 #
9. yomlica8 ◴[] No.36447648{4}[source]
I want to say some webpages actually do this to make you accidentally click on ads but I have no proof.
replies(1): >>36447734 #
10. jprete ◴[] No.36447716[source]
I get your point, but "losing interest" can also mean losing flow, because the user got interrupted for five seconds instead of instantly taking the next action in their mental plan.

When apps have these kinds of interruptions all over the place, that's even worse than just having them at startup.

11. michaelt ◴[] No.36447734{5}[source]
We A/B tested it, and the 750ms accordion produces maximum revenue. Why do you hate evidence-based decision making? /s
replies(3): >>36448307 #>>36454391 #>>36456059 #
12. bee_rider ◴[] No.36447932[source]
OTOH I recall alt-tabbing full screen games (Warcraft 3 on a single core machine is a specific memory) and then sitting back for a while…

Office suites have never been good, but office suites in like 2005 seemed to stretch systems to the breaking point.

Lots of consumer software has always sucked out of the box. I guess if you are here, you were possibly a technically savvy kid at some point; is it possible that you were just more selective about the types of programs you ran when you were using the computer for fun?

replies(4): >>36448746 #>>36448850 #>>36453057 #>>36459784 #
13. kitsunesoba ◴[] No.36447949[source]
This is what happens when you have a leaning tower of abstractions, with each layer being developed with a philosophy of, "it's good enough". Some performance loss is unavoidable when you're adding layers, but that aforementioned attitude of indifference has a multiplicative effect which dramatically increases losses. By the time you get to the endpoint, the losses snowball into something rather ridiculous.
replies(3): >>36448999 #>>36449608 #>>36451720 #
14. JohnFen ◴[] No.36448035[source]
> You wind up with a page that loads "instantly" but has almost none of the data you need displayed.

Which, in my mind, means it didn't load instantly. The page isn't loaded until all of the data is displayed.

15. timthorn ◴[] No.36448135[source]
> Even worse is the new trend of web pages optimizing for page load time

I don't disagree with your example, but optimising for page load time is as old as the graphical Web.

replies(2): >>36448780 #>>36448912 #
16. TeMPOraL ◴[] No.36448307{6}[source]
You jest, but that's exactly how you get plausibly deniable dark patterns. It's a numbers game.
replies(1): >>36451520 #
17. danieldk ◴[] No.36448336[source]
This drives me crazy, especially because it breaks finding within a page. E.g. if you order food and you already know what you want.

Old days: Cmd + f, type what you want.

New days: first scroll to the end of the page so that all the contents are actually loaded. Cmd + f, type what you want.

It's just a list of dishes, some with small thumbnails, some without any images at all. If you can't load a page with 30 dishes fast enough, you have a serious problem (you could always lazily load the thumbnails if you want to cheat).

replies(6): >>36448673 #>>36448968 #>>36449626 #>>36449636 #>>36449814 #>>36454049 #
18. rkagerer ◴[] No.36448356{3}[source]
That one ALWAYS foils me and I hate it and whoever created it.
19. abwizz ◴[] No.36448559{3}[source]
this is the way.

but putting delay between events (not keystrokes) is nevertheless a good practice

20. mikecoles ◴[] No.36448673{3}[source]
I miss typing / to search on pages. Ctrl-W still messes me up at times when using a terminal in a browser.
replies(4): >>36448807 #>>36448949 #>>36449154 #>>36449651 #
21. treeman79 ◴[] No.36448746[source]
Old text word processors on my 286/386 back in the late 80s / early 90s ran just fine. Instant everything. Only thing that was truly slow was the scanner.
replies(1): >>36450409 #
22. thesuitonym ◴[] No.36448780{3}[source]
Yes, but back in the 90s the dominant idea was to load the text as quickly as possible; all the other junk could come later.
23. TechieKid ◴[] No.36448807{4}[source]
You can search with / on Firefox.
replies(2): >>36449643 #>>36450495 #
24. porphyra ◴[] No.36448834[source]
And when you're about to click on something, something new loads and everything jumps unpredictably and you end up accidentally clicking on the wrong thing lol
replies(2): >>36449167 #>>36449545 #
25. anaisbetts ◴[] No.36448850[source]
Most games don't need to change screen resolutions anymore, which is the expensive bit: not only do you have to wait for the hardware to settle, you also have to throw out basically everything in GPU memory and reset it all.
replies(2): >>36450166 #>>36477975 #
26. omoikane ◴[] No.36448912{3}[source]
I thought the thing to optimize for is "largest contentful paint", which is one of the weighted factors used by https://pagespeed.web.dev/
27. abdusco ◴[] No.36448949{4}[source]
Firefox lets you use / for search.
28. TeMPOraL ◴[] No.36448968{3}[source]
> New days: first scroll to the end of the page so that all the contents are actually loaded. Cmd + f, type what you want.

Only to discover that 2/3 of the matches are invisible text that's put there for $deity knows what reason, and the rest only gives you the subset of what you want, as the UI truncates the list of ingredients/toppings and you need to click or hover over it to see it in full.

29. jimt1234 ◴[] No.36449090[source]
Recently I re-discovered my old collection of mp3s. I copied them to my laptop and started listening (using VLC). It blew me away how pleasant the experience was, particularly the overall responsiveness, navigating from track-to-track, skipping around in a track, stuff like that. I never really noticed the delays from streaming services until they were gone.
replies(1): >>36453512 #
30. dbtc ◴[] No.36449154{4}[source]
vimium / vimium-ff
31. YoukaiCountry ◴[] No.36449167{3}[source]
This is honestly the worst part of it to me. It happens quite a lot!
32. Fatnino ◴[] No.36449240{3}[source]
Their new AI shit on the top of the search results does this. It's slow AF and I'll sometimes have scrolled partway down the page before it farts a huge blob of text up top and pushes stuff I've already scrolled past back down past my finger.
33. asah ◴[] No.36449278[source]
jfyi this is not new - we confronted this in 1996 when the web was very young.
34. Ekaros ◴[] No.36449361[source]
Or this new version will just break... I had some nice weeks when the search just didn't work at all... While I was too lazy to restart the computer to fix it...

Then again, I guess any other OS might break in the same way. Like my Debian VM just kinda stops responding to part of the screen sometimes if programs are maximised...

35. ballenf ◴[] No.36449394{4}[source]
I really could see Apple adopting an approach from FPS games, where the phone applies a click to what was under your finger 1/4 second-ish instead of what's there when the click is recognized. Time-travel clicking.

But the real solution is better web page design.
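
A rough, page-level sketch of that idea (not anything Apple actually ships): sample what's under the pointer as it moves, and when a click lands, resolve it against whatever was there roughly a quarter second earlier.

    // Hypothetical "time-travel click" shim. If the layout shifted just
    // before the click, redirect the click to the element the user aimed at.
    const LOOKBACK_MS = 250;
    const samples: { t: number; el: Element | null }[] = [];

    document.addEventListener("pointermove", (e) => {
      samples.push({ t: performance.now(), el: document.elementFromPoint(e.clientX, e.clientY) });
      // Keep only the last quarter second of history.
      while (samples.length > 1 && samples[0].t < performance.now() - LOOKBACK_MS) {
        samples.shift();
      }
    });

    document.addEventListener("click", (e) => {
      const past = samples[0]?.el;          // oldest retained sample, ~250ms ago
      if (past && past !== e.target && !past.contains(e.target as Node)) {
        e.preventDefault();
        e.stopPropagation();
        (past as HTMLElement).click();      // deliver the click where the user aimed
      }
    }, true);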

36. dghughes ◴[] No.36449545{3}[source]
Yes, in Windows 50% of my day is fighting with something obscuring my view or taking focus. Something pops up just as I was clicking or pressing Enter. I have to sign into multiple systems in the morning and sometimes it takes four attempts to enter creds for one login because I keep getting interrupted. It's infuriating.
replies(2): >>36452207 #>>36455637 #
37. LanceH ◴[] No.36449608[source]
Along those lines, I have numerous clients who just want plain Ruby on Rails -- no react front end. They are all business to business, or at least professional users on the end. They just want their data loaded and to work with it.

Ruby on Rails may not be the poster child for speediness as things get big or complex, but if you aren't fighting the ORM, it's consistently quick from click to data.

Also, RoR is definitely not dead.

replies(1): >>36450631 #
38. vhcr ◴[] No.36449626{3}[source]
Or even worse, Cmd + F doesn't work because the page is lazily rendered.
replies(2): >>36450298 #>>36453959 #
39. sazz ◴[] No.36449636{3}[source]
Well, the tech stack is insane: Some virtual machine running a web browser process running a virtual machine for a html renderer which consumes a document declaration language incorporating a scripting language to overcome the document limitations trying to build interactive programs.

Actually much worse than what Microsoft once did with their COM model, ActiveX based on MFC foundation classes with C++ templates, etc.

And to build those interactive programs somebody is trained to use React, Vue, etc. with their own ecosystems of tools. This is operated by a stack of build tools, a stack of distribution tools, Kubernetes for hosting and AWS for managing that whole damn thing.

Oh - and do not talk even about Dependency Management, Monitoring, Microservices, Authorization and so on...

But I really wonder - what would be more complex?

Building interactive programs based on HTML or Logo (if anybody does remember)?

replies(7): >>36449860 #>>36449954 #>>36450498 #>>36451505 #>>36452358 #>>36453795 #>>36454122 #
40. 5e92cb50239222b ◴[] No.36449643{5}[source]
' will search in URLs, skipping plain text.
41. _a_a_a_ ◴[] No.36449651{4}[source]
Firefox allows you to use / to start a conventional search.

As a bonus, in Firefox if you hit the ' key (apostrophe) you get a search that looks only within hyperlinks and ignores all the un-clickable plain text. Give it a try, sometimes it can be very useful

42. nzach ◴[] No.36449814{3}[source]
>If you can't load a page with 30 dishes fast enough, you have a serious problem

That depends on your scale. If your product is "large enough" it is relatively easy to get into the range of several seconds of response time.

Here are some of the steps you may want to execute before responding to a request from your user:

- Get all the dishes that match the filters the user selected

- Remove all dishes from restaurants that don't deliver to the user's location

- Remove all dishes from restaurants that aren't open right now

- Get all discount campaigns for the user and apply their effects to every dish

- Reorder the dish list based on the user's interaction history

Now imagine that for every step in this list you have, at least, a single team of developers. Add some legacy requirements and a little bit of tech debt... That's it, now you have the perfect stage for a request that takes 5-10 seconds.

replies(3): >>36450058 #>>36450270 #>>36455639 #
43. tracker1 ◴[] No.36449850[source]
Even more annoying is when ads pop in a second or so later under the cursor position just before you click, taking you to the ad not what you wanted to click.
replies(1): >>36451987 #
44. the_overseer ◴[] No.36449854{3}[source]
Is this missing an /s?
replies(1): >>36450020 #
45. myth2018 ◴[] No.36449860{4}[source]
And don't forget the UX designers armed with their Figmas and the like. The tech stack is only one among a number of organizational and cultural issues crippling the field.
replies(1): >>36492525 #
46. erwan577 ◴[] No.36449889[source]
The first step to solve a problem is to measure it. Do you know of a Windows program that can measure the UI latency of other Windows apps?

What really drives me mad is the latency of some file selection dialogs for example which can take like 10 seconds.

replies(3): >>36450371 #>>36452165 #>>36453701 #
47. atchoo ◴[] No.36449954{4}[source]
Ironically these instant starting NT applications were often using COM.

As much as I hated developing with COM, the application interoperability and OLE automation are a form of 90s tech utopianism that I miss.

replies(4): >>36453279 #>>36454710 #>>36456031 #>>36459614 #
48. zackees ◴[] No.36450020{4}[source]
Yeah, with 10 browser tabs open in Brave my Ubuntu grinds to a halt.
replies(1): >>36456039 #
50. bamfly ◴[] No.36450058{4}[source]
Dafuq kind of scale could even a service for lots of restaurants have, where that wouldn't be a single query taking milliseconds to execute? I'd maybe split the last bit (user history re-ordering) into another operation, but the rest, nah, not seeing it, one quick query, probably behind a view.

I mean maybe your DB is a single node running on a potato and your load's very high but you're also somehow never hitting cache, but otherwise... no, there's no good reason for that to be slow.

[EDIT] Your last paragraph is the reason, though: it's made extremely poorly. That'll do it.

replies(1): >>36459713 #
51. Anthony-G ◴[] No.36450153{3}[source]
Thanks. I had forgotten about that because the Windows key by itself was fast enough in Windows 7 that I stopped using Win+R.
52. atq2119 ◴[] No.36450166{3}[source]
Also, having to throw out basically everything in GPU memory is largely a thing of the past in the first place.

I still have this instinctual reluctance to change screen resolution in a game's setting screen, even though 99% of the time it's an instantaneous thing these days.

replies(1): >>36477994 #
53. mike_hearn ◴[] No.36450231[source]
This is partly an OS design issue. There's no deep reason the OS should ever throw away keypresses, but contemporary GUIs have a very weak and flaky notion of focus. Contrast this with mainframe apps where users could learn to go incredibly fast, because keystrokes were buffered per connection and the mainframe would process them serially, so even if you typed faster than the machine could process them it wouldn't matter, no keys were lost.
replies(2): >>36455244 #>>36457094 #
54. jrumbut ◴[] No.36450266[source]
I could kind of understand things moving around back in the 2000s when we were all getting used to AJAX, but all these years later, can we (at least) have the main navigation links stay still?
55. jerf ◴[] No.36450270{4}[source]
None of the things you said mentioned should be hard. We did complicated things like that and more in the 1990s.

But it was different...

Yeah. It was. That's exactly my point.

A major problem is the number of places in our code stacks where developers think it's perfectly normal for things to take 50ms or 500ms that aren't. I am not a performance maniac but I'm always keeping a mental budget in my head for how long things should take, and if something that should be 50us takes 50ms I generally at some point dig in and figure out why. If you don't even realize that something should be snappy you'll never dig into why your accidentally quadratic code is as slow as it is.

Another one I think is ever-increasingly to blame is the much celebrated PHP-esque "fully isolated page", where a given request is generated and then everything is thrown away. It was always a performance disaster, but when you go from 1 request to dozens for the simplest page render it becomes extra catastrophic. A lot of my web sites are a lot faster than my fellow developers expect simply because I reject that as a model for page generation. Things are a lot faster if you're only serving what was actually requested and not starting everything up from scratch.

Relatedly, developers really underestimate precomputation, which is very relevant to your point. Your hypothetical page layout is slow because you waited until the user actually clicked "menu" to start generating all that. Why did you do that? You should have computed that all at login time and have it stored right at your fingertips, because it is a reasonable assumption given the sort of page you're talking about that if the user logged in, they are there to make an order, not to look at their credit card settings. Even if it is expensive for reasons out of your control (location API, for instance), if you already did the work you can serve the user instantly.

Having precomputed all this data, you might as well shove it all down to the client and let them manipulate it there with zero further network requests. A menu is a trivial amount of information.

It isn't even like precomputation is hard. It's the same code, just running at a different time.
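
A minimal sketch of that, with hypothetical names: run the same menu-building pipeline at login time, so the later request is just a cache lookup.

    // Assumed to exist elsewhere: the slow multi-step menu pipeline.
    declare function buildMenuForUser(userId: string): Promise<unknown>;

    const menuCache = new Map<string, { builtAt: number; payload: unknown }>();
    const MAX_AGE_MS = 5 * 60_000;

    // Same code, just run earlier: precompute at login.
    async function onLogin(userId: string): Promise<void> {
      menuCache.set(userId, { builtAt: Date.now(), payload: await buildMenuForUser(userId) });
    }

    async function handleMenuRequest(userId: string): Promise<unknown> {
      const hit = menuCache.get(userId);
      if (hit && Date.now() - hit.builtAt < MAX_AGE_MS) {
        return hit.payload;                                  // instant path
      }
      const payload = await buildMenuForUser(userId);        // stale or missing: fall back
      menuCache.set(userId, { builtAt: Date.now(), payload });
      return payload;
    }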

"But what about when that doesn't work?" Well, you do something else. You've got a huge list of options. I haven't even scratched the surface. This isn't a treatise on how to speed up every conceivable website, this is a cri de coeur to stop making excuses for not even trying, and just try a little.

And it is SO MUCH FUN. Those of you who don't try you have no idea what you are missing out on. It is completely normal on a code base no one has ever profiled before to find a 50ms process and improve it to 50us with just a handful of lines tweaked. It is completely normal to examine a DB query taking 3 seconds and find that with a single ALTER TABLE ADD INDEX cut that down to 2us. This is the most fun I have at work. Give it a try. It's addictive!

replies(2): >>36450558 #>>36450979 #
56. pohuing ◴[] No.36450298{4}[source]
Twitter does that and I don't understand why. Do they think their js blob can figure out which divs to render faster than the browser engine itself?
replies(1): >>36452979 #
57. hulitu ◴[] No.36450371[source]
> What really drives me mad is the latency of some file selection dialogs for example which can take like 10 seconds.

That's why they obfuscated it and now it takes 30 seconds (Win 10). Unless you want to save to OneDrive.

58. kjellsbells ◴[] No.36450409{3}[source]
WordPerfect 5.1 for DOS has entered the chat.

I have fond memories of that, but basically the editor was a UI into a linked list with a blue screen. So not comparable to what people are being asked to do with Word and 365 today.

My personal beef with Word is that it struggles so much with long documents. Trying to read, say, a 300 page spec from 3GPP is miserable.

59. azemetre ◴[] No.36450437{3}[source]
I'm sure the person who wrote that accordion is quite good at solving leetcode tho! (only half joking)
60. Dylan16807 ◴[] No.36450472[source]
Trying to meet a specific threshold is part of the problem. The better the development hardware, the earlier that threshold is met in testing, which tends to mean optimization effort falls off a cliff.

Also that threshold is an entire 400ms. We should expect significantly better than that these days.

61. GrinningFool ◴[] No.36450495{5}[source]
Unless you're on github.com, because it rather inconsiderately takes over `/`.
62. Dylan16807 ◴[] No.36450498{4}[source]
Virtual machines don't add much overhead.
replies(3): >>36450607 #>>36454298 #>>36459630 #
63. ericd ◴[] No.36450558{5}[source]
Yeah, your last point is totally spot on, it is so gratifying to make something feel obviously much faster, in a way that few other things in programming are, and there's usually a lot of low hanging fruit.

Also, if you work on a website, the Google crawler seems to allocate a certain amount of wall time (not just CPU time) to crawling your page. If you can get your pages to respond extremely quickly, more of your pages will be indexed, and you're going to match for more keywords. So if for some reason people aren't convinced that speed is an important feature for users wanting to use your site, maybe SEO benefits will help make the case.

64. HumblyTossed ◴[] No.36450591[source]
About 20 years ago I was evaluating solutions for reporting to see if we could save money switching from Crystal Reports. One reason we stayed with Crystal was they did something the other report engines did not; they made available for display every page as it was rendered. So for a 400 page report, you could start working with it immediately. It took each engine (with a couple exceptions) about the same time to generate the entire report, but for the other engines, you had to wait until they were done.

There is speed, then there is the perception of speed. Crystal got this right.

65. vladvasiliu ◴[] No.36450607{5}[source]
That may be so, although a VM on AWS is measurably slower than the same program running on my Xeon from 2013 or so.

But, more generally, the problem is that tech stacks are an awful amalgamation of uncountable pieces which, taken by themselves, "don't add much overhead". But when you add them all together, you end up with a terribly laggy affair; see the other commenter's description of a web page with 30 dishes taking forever to load.

replies(2): >>36450758 #>>36451281 #
66. ecshafer ◴[] No.36450631{3}[source]
Ruby on Rails is plenty fast, especially with Turbo. The biggest RoR speed drop is n+1 queries imo.
replies(1): >>36452030 #
67. Dylan16807 ◴[] No.36450758{6}[source]
My experience has been that HTML and making changes to HTML has huge amounts of overhead all by itself.

I don't dispute that very bad javascript can cause problems, but I don't think it's the virtualization layers or the specific language that are responsible for more than a sliver of that in the vast majority of use cases.

And the pile of build and distribution tools shouldn't hurt the user at all.

replies(1): >>36451031 #
68. Sohcahtoa82 ◴[] No.36450979{5}[source]
ALTER TABLE ADD INDEX can, in some cases, produce speed increases of multiple orders of magnitude.

I was using a SAST program that used MS SQL Server and was generating reports, and often finding the reports took HOURS to generate, even when the report was only ~50 pages. A report on one specific project took over a DAY to generate. I thought it was ludicrous, so I logged onto the SQL server to investigate and found that one query was taking 99% of the time. This query was searching through a table with tens of millions of rows, but not indexed on the specific columns it was filtering on, and many variations of the query were being used to generate the report. I added the index (only took about an hour, IIRC), and what took hours now took a couple minutes.

I was always surprised the software didn't create that index to begin with.
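
For illustration, a sketch with hypothetical table and column names (the original case was SQL Server; the snippet below uses node-postgres, so the syntax is CREATE INDEX rather than MySQL's ALTER TABLE ... ADD INDEX):

    import { Client } from "pg";

    // The report query filters a huge findings table on columns that only
    // had a primary-key index, so every report variation was a full scan:
    //   SELECT ... FROM findings WHERE project_id = $1 AND rule_id = $2;
    async function addReportIndex(client: Client): Promise<void> {
      // A composite index on the filtered columns lets the planner seek
      // straight to the matching rows instead of scanning tens of millions.
      await client.query(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS findings_project_rule_idx " +
          "ON findings (project_id, rule_id)"
      );
    }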

replies(1): >>36455993 #
69. Brian_K_White ◴[] No.36451031{7}[source]
It's exactly the pile of layers, not "bad javascript" or any other single thing. Yes, VMs do add ridiculous overhead.
replies(2): >>36451174 #>>36451634 #
70. Dylan16807 ◴[] No.36451174{8}[source]
When half the layers add up to 10% of the problem, and the other half of the layers add up to 90% of the problem, I don't blame layers in general. If you remove the parts that legitimately are just bad by themselves, you solve most of the problems.

If you have a dozen layers and they each add 5% slowdown, okay that makes a CPU half as fast. That amount of slowdown is nothing for UI responsiveness. A modern core clocked at 300MHz would blow the pants off the 600MHz core that's responding instantly in the video, and then it would clock 10x higher when you turn off the artificial limiter. Those slight slowdowns of layered abstractions are not the real problem.

(Edit: And note that's not for a dozen layers total but for a dozen additional layers on top of what the NT code had.)

replies(1): >>36453757 #
71. snovv_crash ◴[] No.36451210{3}[source]
If you disable the virus scanner and write a simple SDL app on Windows it will open instantly as well.
72. sazz ◴[] No.36451281{6}[source]
"But when you add them all together, you end up with a terribly laggy affair"

I think this is the key.

People fall in love with a simple system which solves a problem. Totally in the honeymoon phase, they want to use that system for additional use cases. So they build stuff around it, atop of it, and abstract the original system to allow growth for more complexity.

Then the system becomes too complex, so people might come up with a new idea. So they create a new simple system just to fix a part of the whole stack and allow migration from the old one. The party starts all over again.

But it's like hard drive fragmentation - at some point there are so many layers that it's basically impossible to recombine layers again. Everybody fears complexity already built.

73. pjmlp ◴[] No.36451505{4}[source]
And then came WASM...
replies(1): >>36459679 #
74. p_l ◴[] No.36451520{7}[source]
Worse, sometimes the people who do it are completely unaware they are making a dark pattern, because they see the result of an A/B test and convince themselves it's superior to what they think.
replies(1): >>36452692 #
75. jrott ◴[] No.36451634{8}[source]
At this point there are so many layers that it would be hard to figure out the common problems without doing some serious work profiling a whole bunch of applications
76. Arwill ◴[] No.36451720[source]
Each level of abstraction has its own caching and buffering routines because the underlying layers are slow, and without the ability to make them better, you can only put your own cache on top. This helps initially, but in the end time gets wasted managing all those caches and buffers at every layer.
77. lenkite ◴[] No.36451868[source]
People got too used to the Web - slowly loading stuff - and so began to tolerate native apps also loading slow.
replies(1): >>36452015 #
78. palata ◴[] No.36451987{3}[source]
Ads? I have a solution for ads... ever heard of uBlock Origin? :)
replies(1): >>36524099 #
79. palata ◴[] No.36452015[source]
And they got too used to receiving updates over the internet (as opposed to buying a CD-ROM), and so began to tolerate software full of bugs.
replies(1): >>36459865 #
80. viraptor ◴[] No.36452030{4}[source]
Which is not a RoR issue. You get those accidentally if you write plain SQL as well. (Actually passing the AR query fragments makes the n+1 easier to avoid in complex situations)
81. Gigachad ◴[] No.36452042[source]
Because the user would rather wait a minute to load Photoshop, than have MS Paint that loads instantly.
replies(1): >>36453064 #
82. viraptor ◴[] No.36452132[source]
That sounds terrible. Why interact with the start menu at all if you can just start the application itself through the path? That's the kind of abuse that delays things in the first place.
83. viraptor ◴[] No.36452165[source]
You can get lots of good information from ETW https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

There's also Dtrace https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

84. smegger001 ◴[] No.36452207{4}[source]
you would think they could have figured out that programs not directly opened by the user shouldn't be able to steal focus from the program the user is currently interacting with. Seems like a basic UX/UI failure
replies(2): >>36453697 #>>36477728 #
85. syntheweave ◴[] No.36452358{4}[source]
Well, the way in which we got there is in adding more features to the "obvious" answer: all any given site necessarily has to do, to have the same presentation as today, is place pixels on the screen, change the pixels when the user clicks or types things, and persist data somewhere as the user does things.

Except...there's no model of precisely how text is handled or how to re-encode it for e.g. screen reading...the development model lacks the abstraction to do layout...and so on. So we added a longer pipeline with more things to configure, over and over.

But - the computing environment is also different now. We can say, "aha, but OCR exists, GPT exists" and pursue a completely different way of presenting many of those features, where you leverage a higher grade of computing power to make the architectural line between the presentation layer and the database extremely short and weighted towards "user in control of their data and presentation". That still takes engineering and design, but the order of magnitude goes down, allowing complexity to bottleneck elsewhere.

That's the kind of conceptual leap computing has made a few times over history - at first the idea of having the computer itself compile your program from a textual form to machine instructions ("auto-coding") was novel and complex. Nowadays we expect our compilers to make coffee for us.

86. TeMPOraL ◴[] No.36452692{8}[source]
The ultimate version of this was done by Optimizely some years ago, where - let's assume here unintentional - bad UI design encouraged people to terminate their A/B tests early when the metrics favored the new version, leading to people without good understanding of statistics implementing dark patterns (or just stupid patterns), blissfully unaware that they've biased their own A/B tests so strongly that they could just as well be replaced by a piece of paper with words "NEW THING WORKS BETTER" written on it.
87. RichardCA ◴[] No.36452979{5}[source]
I am in no way a front-end person, but my understanding is that it has to do with React and the way it maintains a virtual copy of the DOM.

Having Ctrl-F not work correctly is maddening and I hate it.

replies(1): >>36454004 #
88. Aerroon ◴[] No.36453057[source]
Back then you ran games in proper fullscreen mode, whereas nowadays you run them in windowed mode (even when it's called fullscreen windowed).

If you get bad performance in a game nowadays it's a good idea to try proper fullscreen. Alt tabbing might be slow, but the game will run better.

89. ObscureScience ◴[] No.36453064[source]
Note that modern versions of MS Paint (with few improvements over the original) take seconds to load on a quad core 3+GHz machine, loaded from an SSD.
replies(1): >>36455106 #
90. phist_mcgee ◴[] No.36453231{4}[source]
My guess for this is that the average user doesn't parse text on pages as fast as more tech-savvy users. That delay is perfectly timed for the average user to grab their attention.

For me, I use google so often that I can rapidly parse information without really needing to read much of the text, the link just sort of 'looks' right. I've observed my wife reading google results and she is much slower and more methodical, probably because she doesn't google things 20+ times a day every day like I do.

That's how I end up misclicking, because I'm not working at the speed of a normal googler.

It is really annoying though, there are some css tweaks you can make using browser extensions to make that disappear if you're so inclined.

91. TeMPOraL ◴[] No.36453279{5}[source]
Indeed.

On one project, I actually shifted quite recently from working on old-school, pre-Windows XP, DCOM-based protocols, to interfacing with REST APIs. Let me tell you this: compared to OpenAPI tooling, DCOM is a paradise.

I don't have the first clue how anyone does anything with OpenAPI. Just about every tool I tried to turn OpenAPI specs into C or C++ code, or documentation, is horribly broken. And this isn't me "holding it wrong" - in each case, I found GitHub issues about those same failures, submitted months or years ago, and still open, on supposedly widely-used and actively maintained projects...

replies(2): >>36453846 #>>36454428 #
92. batiudrami ◴[] No.36453512[source]
Spotify’s trick when it first launched is that search and playback was so fast it almost felt like you had all the music in the world on your hard drive.

Unfortunately I cannot think of a single thing that has gotten better about Spotify since I started using it, and a lot which has gotten worse.

replies(1): >>36458320 #
93. causality0 ◴[] No.36453697{5}[source]
You're talking about people who haven't figured out you should be able to edit the options on the right click menu from a single settings menu and that the task bar should expand and contract dynamically with the number of open windows. Microsoft stopped improving the Windows UI around 2006.
replies(2): >>36458357 #>>36473884 #
94. Aerroon ◴[] No.36453701[source]
Some file selection dialog (and explorer) issues come down to the anti-virus. I've had folders on an SSD that took a minute (60 seconds!) to load. After I added it to an exclusion list in Defender it loaded in a second.

Another one that can be slow with file dialogs is that sometimes (maybe it has been fixed now) it will try to query whether a networked drive is around on another computer. If it isn't then the call to it can be blocking your file UI.

A third problem I've noticed with file selection dialogs and explorer is that the My Computer 'folder' that contains your disks takes a long time to load. Much longer than any sub-folders on any of the drives.

I think the problem is largely with explorer.exe. If I browse those folders in a web browser the experience is snappy.

95. deepspace ◴[] No.36453741[source]
I keep harping on Visual Studio, but I learned programming with Borland Turbo Pascal (and later Turbo C/C++) on a 4.77MHz machine with 512k memory. It was orders of magnitude more responsive than VS on my current 16 core 3.5GHz 128GB machine.

The only exceptions are: 1) the actual build, which is faster on the modern machine, but only for a large number of source files and 2) reading and writing files - a floppy disk cannot beat an nvme drive of course.

replies(1): >>36456079 #
96. TeMPOraL ◴[] No.36453757{9}[source]
Unfortunately, at this point some of the slow layers are in hardware, or immediately adjacent to it. For example, AFAIR[0], the time between your keyboard registering a press, and the corresponding event reaching the application, is already counted in milliseconds and can become perceptible. Even assuming the app processes it instantly, that's just half of the I/O loop. The other half, that is changing something on display, involves digging through GUI abstraction layers, compositor, possibly waiting on GPU a bit, and then... these days, displays themselves tend to introduce single-digit millisecond lags due to the time it takes to flip a pixel, and buffering added to mask it.

These are things we're unlikely to get back (and by themselves already make typical PC stacks unable to deliver smooth hand-writing experience - Microsoft Research had a good demo some time ago, showing that you need to get to single-digit milliseconds on the round-trip between touch event and display update, for it to feel like manipulating a physical thing vs. having it attached to your finger on a rubber band). Win2K on a hardware from that era is going to remain snappier than modern computers for that reason alone. But that only underscores the need to make the userspace software part leaner.

--

[0] - Source: I'll need to find that blog that's regularly on HN, whose author did measurements on this.

replies(1): >>36456133 #
97. meese712 ◴[] No.36453795{4}[source]
Here's a fun fact, most fonts have a font program written in a font specific instruction set that requires a virtual machine to run. There is no escaping the VMs!
replies(1): >>36453995 #
98. nycdotnet ◴[] No.36453846{6}[source]
same with C#
99. TheBrokenRail ◴[] No.36453959{4}[source]
Like GitHub's new code viewer! I have to load the raw text just so I can use Ctrl-F!
replies(1): >>36464894 #
100. DaiPlusPlus ◴[] No.36453995{5}[source]
A VM is not a VM. A program whose semantics are defined in terms of "a" virtual machine (Java, .NET, etc.) is otherwise entirely unrelated to virtualisation.
replies(2): >>36469595 #>>36627695 #
101. korm ◴[] No.36454004{6}[source]
Nothing to do with React, it's a common optimization to improve performance with long lists. You only render the dom elements in the viewport, with some buffer. A common technique to achieve that is called "virtualization" or "windowing".

It's common enough that there were a couple browser proposals to deal with this and would address the Ctrl+F issue. I believe this has been merged into the CSS Containment spec, but at the moment it doesn't make windowing obsolete in every situation.
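
A bare-bones sketch of windowing, framework-free and with an assumed fixed row height: only rows intersecting the viewport get DOM nodes, and the rest of the list is represented by empty space, which is exactly why Ctrl+F can't see it.

    const ROW_HEIGHT = 24; // assumed fixed row height in px

    // Render only the visible slice of `rows` into a scrollable container.
    function renderWindow(container: HTMLElement, rows: string[]): void {
      const first = Math.floor(container.scrollTop / ROW_HEIGHT);
      const count = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;
      const visible = rows
        .slice(first, first + count)
        .map((text, i) =>
          `<div style="position:absolute; top:${(first + i) * ROW_HEIGHT}px; height:${ROW_HEIGHT}px">${text}</div>`)
        .join("");
      // The spacer div keeps the scrollbar honest; off-screen rows have no DOM nodes.
      container.innerHTML =
        `<div style="position:relative; height:${rows.length * ROW_HEIGHT}px">${visible}</div>`;
    }

    // Re-render as the user scrolls:
    // container.addEventListener("scroll", () => renderWindow(container, rows));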

replies(2): >>36455791 #>>36457351 #
102. brazzledazzle ◴[] No.36454049{3}[source]
If you think that’s enraging just wait until you get a page that unloads the previous content as you scroll. You can only search the text that’s visible. Maddening.
103. civilitty ◴[] No.36454122{4}[source]
> But I really wonder - what would be more complex?

> Building interactive programs based on HTML or Logo (if anybody does remember)?

Hold my beer: my Github Actions CI scripts use Logo to generate the bash build scripts as images that are then OCRed and executed by a special terminal that exploits the Turing complete nature of Typescript's type system.

Turtles all the way down!

104. agumonkey ◴[] No.36454246[source]
Reminds me of my hp48 calc, whose original system was often... sluggish, but with enough input buffering and predictability (stack + RPL) that was actually beneficial, since it forced you to learn things by heart and anticipate more, and the pauses would serve as thinking time for your brain. A rare kind of slow and laggy that turned out to be fun.

Also impressive how sensitive our brain is to all of this... no matter how impressive a current OS / web browser / jitted js is, I dearly miss the past eras' "behavior".

105. AlecSchueler ◴[] No.36454271[source]
We're so glued to our screens now that our threshold for losing interest is far higher. We'll check our phone while waiting for a desktop app to open rather than walking away and finding another activity.
106. ungamedplayer ◴[] No.36454298{5}[source]
Except for when they do.
107. esafak ◴[] No.36454391{6}[source]
It maximizes their revenue until I decide to stop using Google. Joke's on them for not measuring long-term effects.
108. zadler ◴[] No.36454404[source]
The one I can’t stand is keyboard input lag when a paragraph gets large. Happens in seemingly every app except vim.
109. kbenson ◴[] No.36454428{6}[source]
I too was once very excited that OpenAPI specs I had access to would save me untold hours in implementing an API for a service, since I could pass them through a generator, only to find once I tried that everything seemed somewhat broken or the important time-saving bits just weren't quite ready yet.

That was about five years ago. :/

110. barbariangrunge ◴[] No.36454473[source]
I’ve been sharing videos like this for a month or two

I have a 2015 MacBook Air I abandoned recently for being so painfully slow to use that I had barely touched it in months. I have an iPad Air 2 that is basically unusable at this point. Both are 2-3 orders of magnitude faster than those old computers that work instantly.

But windows and web apps are super slow now too

Think of all the landfills and wasted work hours earning the money needed to fill those landfills. The heavy and rare metals

This is proof that if computers were 10x faster, they would run slower than ours today, because in the past we’ve seen that be true over and over. The software companies will just make heavier and heavier programs and operating systems until we have gained nothing but a significant amount of co2 emissions

replies(2): >>36455917 #>>36458887 #
111. wingworks ◴[] No.36454683[source]
This is so frustrating. Though I recently came across a rare breed that I thought stood out on load speed: the BBC, of all places. While following the sub disaster I was pleasantly surprised at how quickly it always loaded. Nearly always it would load near-instantly after clicking my bookmark. (Definitely not something that is that common when you live in NZ)

Not at all what I expected from a news site. They're usually full of crap and dog slow.

This is the specific URL I experienced this with. Though the whole site seems mostly very quick. https://www.bbc.com/news/live/world-us-canada-65967464

And this is on a 2017 MBP, which some sites are really slow. Nothing crazy like the new silicon CPUs here either.

replies(3): >>36456026 #>>36456630 #>>36456638 #
112. cmgbhm ◴[] No.36454710{5}[source]
I think of the OLE demos every time I shove a google sheet into a google doc and realize it’s only a one way sync.
113. winrid ◴[] No.36455106{3}[source]
Is this just the C# runtime being slow or just lots of abstraction?
replies(1): >>36496930 #
114. giantrobot ◴[] No.36455244{3}[source]
It's not necessarily Windows throwing away keypress events but the order of events system wide not necessarily staying in the expected order or the target for an event not being active at the right time.

If you activate the start menu with a keypress it's going to grab focus. Before it grabs focus the previous window in focus will get events. The same applies with panels (drawers? I forget Windows' name for them) in the Start Menu. There's a non-zero time between activation and grabbing focus to receive keypress events.

Everything from animation delays to stupid enumeration bugs can result in a window not grabbing focus to receive keypress events. Scripting a UI always has challenges with timing like this.

A mainframe terminal has a single input context. You can fire off a bunch of events quickly because there's no real opportunity for another process (on that terminal) to grab focus and receive those events.

Note the above doesn't absolve Windows of any stupid performance/UX problems with bad animation timings and general shittiness. Microsoft has been focusing on delivering ads and returning telemetry with Windows instead of fixing UX and performance issues.

replies(1): >>36458340 #
115. eviks ◴[] No.36455637{4}[source]
Stealing focus should be a misdemeanor!
116. jaxrtech ◴[] No.36455639{4}[source]
In a well-constructed system, you could probably get this down to one PostGIS query with some joins and spatial indexes that should run in <100ms.
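
A rough sketch of what that single round trip could look like, with a hypothetical schema (ST_DWithin is the PostGIS proximity check), issued here through node-postgres:

    import { Pool } from "pg";

    // Hypothetical tables: dishes, restaurants (geography column, delivery
    // radius, open flag), per-user discounts, and per-user interaction scores.
    async function dishesFor(pool: Pool, userId: string, lng: number, lat: number) {
      const { rows } = await pool.query(
        `SELECT d.id, d.name, d.price * COALESCE(1 - dc.pct, 1) AS price
           FROM dishes d
           JOIN restaurants r          ON r.id = d.restaurant_id
           LEFT JOIN discounts dc      ON dc.dish_id = d.id AND dc.user_id = $1
           LEFT JOIN user_dish_stats s ON s.dish_id = d.id AND s.user_id = $1
          WHERE r.open_now
            AND ST_DWithin(r.location, ST_MakePoint($2, $3)::geography, r.delivery_radius_m)
          ORDER BY COALESCE(s.score, 0) DESC
          LIMIT 30`,
        [userId, lng, lat]
      );
      return rows;
    }

With indexes on the join keys and a GiST index on the location column, every step from the list upthread happens inside the database in one pass.
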
117. ziml77 ◴[] No.36455791{7}[source]
It's not even a technique limited to browsers. When I did Android development years ago it was a technique used in native list controls to reduce the overhead of them. Though in that case, there's no search based on what's been rendered like a web browser has. And of course if you did implement search, you could have that look at the underlying data, so it doesn't matter if it's been rendered.
118. THENATHE ◴[] No.36455856[source]
Adjacent but similar: I am so over all of the animations and corporate bullshit on pages. I run a business making decent and super-optimized web pages and people love them. Theirs is the only website in their town and field that doesn't take 10 seconds to load and render.
replies(1): >>36479150 #
119. accrual ◴[] No.36455917[source]
Aren't those devices also running non-contemporary OSs? I wonder if installing the factory OS would make them seem fast again, albeit at the cost of 8 years of software updates.
120. simooooo ◴[] No.36455993{6}[source]
Because it can hog a lot of disk space and slow inserts
replies(1): >>36494555 #
121. simooooo ◴[] No.36456026{3}[source]
It’s fast but it still resets the scroll to the top about 700ms after it loads on an iPhone
122. desi_ninja ◴[] No.36456031{5}[source]
WinRT is COM under the covers
123. simooooo ◴[] No.36456039{5}[source]
My Ubuntu uses snap for key applications which take 10+ seconds to start
124. flangola7 ◴[] No.36456059{6}[source]
A/B testing is so gross. In other domains human experimentation of any kind, no matter how low risk, involves getting fully informed consent and ethics board approval before going ahead.

Experimental behavior manipulation, without even telling the subject they are part of a manipulation experiment? You would be chased out of the room and your reputation destroyed! Utterly unacceptable. But in webdev universe this is somehow seen as a totally normal practice.

replies(1): >>36470092 #
125. simooooo ◴[] No.36456079[source]
It was also doing about 3 orders of magnitude less work
replies(1): >>36517723 #
126. jbboehr ◴[] No.36456133{10}[source]
Perhaps it's this one?

http://danluu.com/input-lag/

replies(1): >>36458714 #
127. rasz ◴[] No.36456553[source]
Google specifically reengineered YT last year to do exactly that! They went from loading ready-to-display HTML (with subsequent clicks AJAXing in more HTML) to loading a 1MB .js library and 400KB of JSON on first load, with every subsequent click requesting another ~400KB of JSON to be interpreted.
128. anthk ◴[] No.36456630{3}[source]
If you can install Lagrange on Mac, install it and try gemini://gemi.dev

Then head to the Newswaffle link and input https://bbc.com or just scroll down the page to head to the converted site.

129. anthk ◴[] No.36456638{3}[source]
Sorry I forgot the URL:

https://gmi.skyjake.fi/lagrange/

130. thequux ◴[] No.36457094{3}[source]
Mainframes don't buffer keystrokes at all: rather, they send a screen to the terminal with marked "fields", and then the terminal handles all keystrokes until you press return or one of the attention keys to submit the changes back to the mainframe. Thus, even on a heavily loaded system, typing is instant because the main CPU doesn't get involved at all
replies(1): >>36458326 #
131. pohuing ◴[] No.36457351{7}[source]
But web browsers already don't render off-screen content, right? I'm pretty sure I remember opening hundreds of megs of data in Firefox at one point without an issue. Or old reddit with RES has infinite scroll and you can go dozens of pages deep without a hitch, all while not lazily rendering the other pages.
132. chrisldgk ◴[] No.36457699[source]
This is called cumulative layout shift (CLS for short), and there's a push in web dev to work against and around it. It has recently become part of the performance measurement in Google's Lighthouse scoring, and server-side rendering and static site generation metaframeworks like Next.js and Astro let you send out the entire static page HTML without having to lazy-load any new data.
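
One small mitigation, sketched with a hypothetical element and endpoint: reserve the space a late-loading block will occupy, so whatever arrives afterwards can't push the rest of the page around.

    // Give the lazy-loaded panel its final height up front; when the data
    // lands, the swap happens inside already-reserved space (no layout shift).
    async function fillDealsPanel(): Promise<void> {
      const panel = document.getElementById("deals")!; // hypothetical element
      panel.style.minHeight = "320px";                 // height known from the design

      const deals: string[] = await fetch("/api/deals").then(r => r.json());
      panel.innerHTML = deals.map(d => `<div class="deal">${d}</div>`).join("");
    }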
133. mwcampbell ◴[] No.36458273{3}[source]
There is, but even that's not perfect. I have this stupid habit where I bring up the NVDA screen reader when I want it by typing Windows+R, typing "nvda", and pressing Enter. (I have low vision, and I use a screen reader sometimes, but not all the time.) I know, I really should use a desktop keyboard shortcut instead. Anyway, it's muscle memory, and I do it automatically, without watching for the Run dialog to appear. Only sometimes, the Run dialog doesn't appear fast enough, and maybe the "n" doesn't make it into that dialog's edit box. Or, on one or two embarrassing occasions, the Run dialog didn't grab focus at all, or at least not until way too late, and a message that just said "nvda" made it into a chat window.
134. mwcampbell ◴[] No.36458320{3}[source]
Depending on when you started using Spotify, accessibility might have gotten infinitely better. The original Windows app, with its custom GUI toolkit, was completely inaccessible with a screen reader. Then they re-did the UI using Chromium Embedded Framework.
135. mike_hearn ◴[] No.36458326{4}[source]
Right, but what I mean is, if you press some keys to move from one screen to another, you can start typing before the navigation is complete and those keystrokes will be buffered by the terminal. They won't just be discarded, meaning you can learn to type ahead of where the server is up to.
136. mike_hearn ◴[] No.36458340{4}[source]
Yes, my point is that an alternative OS design could serialize keystrokes such that a keypress has to be handled (including focus changes) before the next keys are delivered, allowing you to type ahead without keypresses going to the wrong place or depending on the vagaries of timing. It might require a different approach to UI API and design, though.
replies(2): >>36459773 #>>36460861 #
137. Aerbil313 ◴[] No.36458357{6}[source]
You’re talking about people who use Macs.
138. ksec ◴[] No.36458429[source]
We also have tech companies that claim sub-300ms request response time is good enough. I wish people would look at StackOverflow and understand how they do everything in under 20ms.
replies(1): >>36459761 #
139. whywhywhywhy ◴[] No.36458485{3}[source]
MacOS manages to provide an advanced search launcher experience as fast or faster than the run dialog that requires the perfect exe name.

What’s Microsoft’s excuse? Tired of their incompetence being handwaved and a several decades old jank non-solution being held up as acceptable. MS needs more accountability in their teams.

140. TeMPOraL ◴[] No.36458714{11}[source]
Yes, this one exactly, thank you!
141. Aerbil313 ◴[] No.36458887[source]
This is only true as long as the software industry keeps piling abstractions on top of abstractions, as Moore's law allows it. But we're reaching the end of Moore's law and already feeling the effects of it, for example JavaScript frameworks priding themselves on performance instead of features.

Give it some time for the industry to finally mature.

142. immibis ◴[] No.36459614{5}[source]
In some ways COM is pretty optimized. An intra-thread COM call is just a virtual function call - no extra overhead. Otherwise it's a virtual function call to a proxy function that knows exactly how to serialize the parameters for IPC.
replies(1): >>36534462 #
143. immibis ◴[] No.36459630{5}[source]
That depends how much overhead is in the VM. WASM is designed to be thin. Java is not.
144. immibis ◴[] No.36459679{5}[source]
WASM is designed to cut through all the bullshit and leave only a minimal amount of bullshit, even though it turns out there's still a lot of bullshit in the other parts of the system that WASM doesn't address.

I like websockets for the same reason. Each message has a two byte overhead compared to TCP. Two bytes. Unfortunately messages sent by the client have a whopping four additional bytes to help protect buggy middleboxes.

145. immibis ◴[] No.36459713{5}[source]
Recently I stumbled across the online catalog for Segor Electronics (segor.de I think? Google it. Only in German. They're not paying me to post this)

It's extremely fast. Super duper fast. And a quick look at the network debugging tab shows why: it loads the shop's entire catalog data (about 3 megs) upfront, and the entire application runs locally with not a single request until you buy something. Now that's efficiency.

Really. Go to their website, click on KATALOG and click some random buttons, pick a product at random, add it to your cart, remove it from your cart.

The product images are the only things that aren't pre-loaded.
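
A sketch of the same approach with a hypothetical URL and fields: one download up front, and every search or filter afterwards is a pure in-memory operation with zero further requests.

    interface Part { id: string; name: string; price: number }

    let catalog: Part[] = [];

    // One ~3MB fetch when the catalog opens...
    async function loadCatalog(): Promise<void> {
      catalog = await fetch("/catalog.json").then(r => r.json());
    }

    // ...after which every interaction is just array work, no network at all.
    function search(term: string): Part[] {
      const q = term.toLowerCase();
      return catalog.filter(p => p.name.toLowerCase().includes(q));
    }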

146. immibis ◴[] No.36459761{3}[source]
As a person from New Zealand I am accustomed to every single request taking a minimum of 300ms RTT and sometimes it's shocking when it doesn't.
147. immibis ◴[] No.36459773{5}[source]
That alternative OS is called... 16-bit Windows.
148. immibis ◴[] No.36459784[source]
Full-screen games were a specific issue because they would have full control of the graphics card (GPU multitasking hadn't been invented yet or something). When the game gets focus it has to set up the GPU from scratch.
149. immibis ◴[] No.36459865{3}[source]
This cuts both ways
150. giantrobot ◴[] No.36460861{5}[source]
That's not really practical with a preemptive multitasking OS. There's no guarantee (without real-time scheduling) any process will have uninterrupted time on the CPU(s).

According to an external wall clock the keypress events happen at seconds 1, 2, and 3. The first press triggers a window to appear (menu panels are a type of window). It takes 0.5s to instantiate and register to receive keypress events from the shell. Wall clock time is 1.5s. Nice.

The second window (menu panel) receives a keypress event at wall clock 2s which opens a third panel. That panel, because it has more complicated drawing and page faulted so it had to fetch a page from disk swap, unfortunately took 1.2s to register for focus. A keypress triggered at a wall clock time of 3s. Our third panel though didn't register focus until wall clock 3.2s. That keypress went to panel 2 because that had focus when the keypress event triggered. All times greatly exaggerated.

The shell needs to add events to processes' event queues but it can't just arbitrarily add them to every process. It also can't know any individual window wants events until the process tells it so. Unlike mouse events a keypress event doesn't have coordinates so a process can't really figure out the intended target of an event.

A model that prevents preemption means you're back to Win16 cooperative multitasking. A process can't be interrupted until it gives up the CPU willingly. That however means background processes can't do work while a foreground process holds the CPU. If you make just your shell and GUI apps cooperative, the responsiveness of the system will end up awful.

replies(1): >>36476835 #
151. cstrahan ◴[] No.36462340[source]
> What makes it even worse is how unpredictable the lags are so you can't even train yourself around it.

What is worse than that: not being able to predict if inputs will be buffered or dropped during unresponsiveness. I kind of look like an idiot when I keep clicking/typing away on someone else’s computer while things are frozen, thinking “oh, it’ll catch up in a bit”, and then 5 seconds later I have to work harder to fix the chaos: some keystrokes at the beginning made it through, then only every other one, then the next couple hundred got dropped, but the next 100 came through fine, and interspersed everywhere there are bizarre runs of duplicated keys, as if I had held the letter aaaaaaaaaaaaaa down continuously.

Everyone’s mix of hardware, OS, text editor, text editor plugins, etc makes this behavior highly variable, and hard to guess if it makes sense to keep typing or just wait out the frequent 1-5 second lockups.

152. sidewndr46 ◴[] No.36464894{5}[source]
Someone should invent a way for a web server to return a representation of the text, complete with styling and formatting that the browser can use to render it.
153. eternityforest ◴[] No.36469396[source]
Android is almost as responsive as things like that. Linux is good, sometimes, windows seems better.

1GB programs are rarely instant but that's usually just the price for very complex functionality if it's too interconnected to load parts of it on demand.

154. froggit ◴[] No.36469595{6}[source]
It always kind of cracks me up when I hear someone having to explain the difference between these 2 breeds of VM.

At one point back in school a friend said to me "hey, I can't figure out how to install and boot JVM on Virtual Box. I need to use it for homework in another class. Help me?"

I wish I had been able to explain it as succinctly as you. Instead I sat there laughing in the guy's face for a good minute, eventually realizing from his expression that he was being serious, which only made me laugh even harder.

replies(1): >>36627461 #
155. froggit ◴[] No.36470092{7}[source]
This is exactly how users felt when Reddit ran A/B testing on their "feature" that forcibly signed out people on mobile browsers and said they needed to use the Reddit app to sign back in. I saw a crazy long thread of straight backlash about how messed up it was and how they aren't cattle to experiment on and how they didn't consent to that (which they prolly did in the T&C but no one reads that and actually understands what they're agreeing to).

Seeing as they were posting the backlash on Reddit, I'm guessing a lot of people downloaded the app to log in and Reddit said "Big Success!" when they checked the stats.

replies(1): >>36477704 #
156. smegger001 ◴[] No.36473884{6}[source]
Desktop user interfaces on all 3 main operating systems peaked by 2010; at that point everything became aimed at clueless cellphone users.
157. mike_hearn ◴[] No.36476835{6}[source]
That's the issue you'd face with today's operating systems and UI designs, yes, but I'm talking about a hypothetical new OS design.

In such an OS the APIs would allow you to atomically transfer focus as part of other operations, for example, starting a new program or opening a new window could simply transfer focus atomically to the pending new program/window such that the OS buffers keystrokes until the recipient is ready to receive them. Also, taking focus would require you to advertise what keys you're willing to receive, allowing a focus cascade such that there's never any situation in which keystrokes get delivered to a UI that isn't able to do anything with them. At the top level of the shell there'd be a kind of global command line or palette that is the default receiver of all other unhandled keystrokes. Because focus transfer is always deterministic under this scheme, people can learn where keystrokes will go without timing playing a part.
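
A rough sketch of what such an API could look like -- everything below is hypothetical, since no current OS exposes this: opening a window atomically marks focus as pending, the shell buffers keys until the window declares which keys it accepts, and anything it doesn't advertise cascades down to a global command palette.

    // Hypothetical shell with atomic focus transfer and keystroke buffering.
    #include <cstdio>
    #include <deque>
    #include <functional>
    #include <set>
    #include <utility>

    struct Shell {
        std::deque<char> buffered;               // keys held while the target is pending
        std::set<char> accepted;                 // keys the focused window advertised
        std::function<void(char)> deliver;       // current focus target
        bool pending = false;

        // Atomically: focus is redirected to the pending window before it even exists.
        void open_window_with_focus() { pending = true; }

        // The new window declares what it handles; buffered keys are replayed in order.
        void window_ready(std::set<char> keys, std::function<void(char)> handler) {
            accepted = std::move(keys);
            deliver = std::move(handler);
            pending = false;
            while (!buffered.empty()) { route(buffered.front()); buffered.pop_front(); }
        }

        void route(char key) {
            if (pending)             { buffered.push_back(key); return; }
            if (accepted.count(key)) { deliver(key); return; }
            std::printf("'%c' falls through to the global command palette\n", key);
        }
    };

    int main() {
        Shell shell;
        shell.open_window_with_focus();          // focus transferred before the window is up
        shell.route('a');                        // buffered, neither lost nor misdelivered
        shell.window_ready({'a', 'b'},
            [](char k) { std::printf("window got '%c'\n", k); });
        shell.route('x');                        // not advertised -> cascades to the palette
    }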

158. account42 ◴[] No.36477704{8}[source]
> which they prolly did in the T&C but no one reads that and actually understands what they're agreeing to

The GDPR's notion of informed consent really needs to be applied pervasively to all kinds of consumer contracts. If it's hidden in walls of text that the average user doesn't read it shouldn't count as consent.

replies(1): >>36595351 #
159. account42 ◴[] No.36477728{5}[source]
This is actually pretty hard to get right. Just yesterday I was confused why opening my text editor under KDE didn't pop up a window. Turns out some update to KDE's focus stealing prevention (or some other involved component) changed things so the new text editor window got pushed behind existing windows.

This isn't an argument for not trying though.

160. account42 ◴[] No.36477975{3}[source]
> you have to throw out basically everything in GPU memory and reset it all

This is not an inherent limitation of GPUs but a part of Windows' exclusive fullscreen concept. Just another thing that was simply accepted as the way things are instead of being improved (until exclusive fullscreen went out of style).

161. account42 ◴[] No.36477994{4}[source]
Modern games typically don't change the screen resolution at all. If there is a resolution setting, it usually controls just the internal render resolution, and the final pass scales that up to the native resolution of the display. Changing the screen resolution only made sense with CRTs, where the display is actually capable of different resolutions, unlike LCDs, which have a single native resolution; anything else has to be resampled (by the display, the GPU or the game).
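
A minimal CPU-side sketch of that idea (no real graphics API involved; the resolutions and scale factor are made up): the scene is drawn into a smaller buffer and the final pass resamples it up to the display's single native resolution.

    // Render at a lower internal resolution, then upscale to the native mode.
    #include <cstdio>
    #include <vector>

    using Image = std::vector<float>;              // grayscale, row-major

    Image render_scene(int w, int h) {             // stand-in for the expensive 3D pass
        Image img(w * h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                img[y * w + x] = (x ^ y) & 1;      // cheap test pattern
        return img;
    }

    Image upscale(const Image& src, int sw, int sh, int dw, int dh) {
        Image dst(dw * dh);
        for (int y = 0; y < dh; ++y)
            for (int x = 0; x < dw; ++x)           // nearest-neighbour resample
                dst[y * dw + x] = src[(y * sh / dh) * sw + (x * sw / dw)];
        return dst;
    }

    int main() {
        const int nativeW = 2560, nativeH = 1440;  // the display's only real mode
        const float renderScale = 0.75f;           // the in-game "resolution" setting
        int rw = int(nativeW * renderScale), rh = int(nativeH * renderScale);
        Image frame = upscale(render_scene(rw, rh), rw, rh, nativeW, nativeH);
        std::printf("rendered %dx%d, presented %dx%d (%zu px)\n",
                    rw, rh, nativeW, nativeH, frame.size());
    }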
162. pdimitar ◴[] No.36479150{3}[source]
Can you link to your website?
replies(1): >>36498782 #
163. wellanyway ◴[] No.36492525{5}[source]
How does Figma contribute to a laggy UI in the end product?
replies(1): >>36507960 #
164. Sohcahtoa82 ◴[] No.36494555{7}[source]
Somehow missed this reply for 3 days...

In my use case, the impact on inserts wasn't noticeable. I did notice higher disk space usage, but spending $200 on a larger disk was absolutely worth it to save literally days on report generation.

165. ObscureScience ◴[] No.36496930{4}[source]
I would not say that the C# runtime is slow, but it is a JIT compiler that is not optimizing for start-up (as far as I know), and it is "doing a lot of work" at runtime to achieve the eventual performance it is capable of. Start-up time is not the most important property for a lot of services, but for user applications it's pretty high up there, so if the underlying runtime is not optimizing for that, it points to a disconnect in the choice of technology stack.

I'm quite impressed by both .NET and OpenJDK on some metrics, but they are often not resource efficient, which is something I do value.

One example of an application that behaves as I would expect others to is MuPDF: it can open 20MB+ PDFs in a tenth of a second on a 10+ year old laptop.

By the way, does anyone know why Debian launches LibreOffice so much quicker than Ubuntu, Fedora or Arch Linux (or any other distro I've tested)? In Debian it's 1-2 seconds; on the others, 5-10 seconds. It could be the included extensions or how they are configured, but I'm honestly interested.

replies(1): >>36500190 #
166. THENATHE ◴[] No.36498782{4}[source]
Unfortunately I don’t have a website for myself and I don’t feel comfortable sharing my client’s websites. I operate purely through word of mouth in my local town, and I don’t have any aspirations to “go big”
replies(1): >>36498837 #
167. pdimitar ◴[] No.36498837{5}[source]
I realize I didn't formulate my comment well -- the classic XY problem.

I am interested in your fast-loading techniques in general. I'm also considering making a bunch of personal / pro websites using a static generator, so I'm just looking for some inspiration and ideas to steal, I suppose.

Since it's a fairly fresh pursuit for me, I am still looking to gather some links and do proper research. I wasn't looking to deanonymize you, my apologies.

replies(1): >>36499261 #
168. THENATHE ◴[] No.36499261{6}[source]
Oh sure, no worries!

As far as inspiration goes, I use Craigslist and Google. I try to get a sleek and simple look like Google's pages, but maintain the "old school" functionality and layout ideas of Craigslist.

As far as actual development is concerned, I use Oracle ARM servers that are grossly overpowered for a web host, Cloudflare nameservers or CDN, and I keep as much of the work as possible server side, with as little JavaScript as I can. An example is a simple blogging system I made: the whole thing is a MariaDB table with "title, date, image url, and content" as the data bits, and everything happens on one of two pages -- a backend using PHP sessions that exposes all of the functions via GET requests, and a front end that puts all of its content on a page with POST requests. There is no JavaScript involved on either page, which means less is transmitted over the internet, less is done on the client's computer, and there are fewer outside calls. This does make it "less responsive", but does the blog really need image zoom on hover and shit like that?

I have found the best way to develop for speed and simplicity is to curb the enthusiasm of the client from “looks as good as possible” to “simple, cheap, fast, and robust, while still looking better than average”

The final suggestion I have is to develop with security AND accessibility in mind first. If you want ARIA on all of your stuff, it is much harder to go back later and 1) determine what each link does, and 2) write the ARIA label for it, than it is to just include it in the first place. Always follow proper form for mitigating risks like SQL injection and XSS, and do as much as possible on the server before you resort to JS.

If you are looking for a couple of sites that I didn’t build, but get the point of what I am trying to do across, check out

Smashingmagazine.com; Hacker News (doesn't look the best, but it follows the logic set forth); and Openai.com (this one surprised me because if you remove a lot of the slightly more interactive elements it is fast as hell).

If you have any specific questions, ask away and I’ll do my best

169. winrid ◴[] No.36500190{5}[source]
Maybe you're using the older Java-based LibreOffice on the other machines? It was mostly rewritten in C++.
replies(1): >>36504994 #
170. ObscureScience ◴[] No.36504994{6}[source]
That's pretty odd to assume. All but Debian are running more or less the latest, and Arch is on the "fresh" track.
replies(1): >>36507039 #
171. winrid ◴[] No.36507039{7}[source]
I didn't assume -- I asked whether the other distros might be on old versions. Not sure why else there would be a huge difference... maybe Flatpak/snap dependencies not being in the fs page cache.
replies(1): >>36511974 #
172. myth2018 ◴[] No.36507960{6}[source]
Notice that in this sub-topic we're talking more generally about causes for low-quality software -- laggy UIs being only one of the symptoms.

Figma contributes by enabling UI designers to easily author interfaces that allegedly look beautiful but are complex to build, test and maintain.

And the resources burned on building such aesthetically pleasant piles of barely usable software could be better spent making it simpler, faster and more focused on users' actual functional and non-functional requirements (much of which live on the server side), instead of sugaring their eyes by throwing tons of code at their clients.

173. ObscureScience ◴[] No.36511974{8}[source]
Ok, sorry for assuming your intent. No, it's nothing like that. They are all the distro-provided stable versions installed as "regular" applications.

And it seems to be the start-up process that differs: putting them all on a ram-disk does not alleviate the issue, and restarting the app cuts the time roughly in half, but equally for each distro.

My guess, as I said at first, is which default libraries are loaded, and possibly how they are configured. I do however find it strange that this has not been mentioned elsewhere, as I've been struck by this difference for years whenever I happen to load a pure Debian install (not what I usually use).

174. anonymoushn ◴[] No.36517723{3}[source]
What is this supposed to mean?
175. tracker1 ◴[] No.36524099{4}[source]
I am using uBlock Origin and Privacy Badger, as well as a Pi-hole, for my personal use.

That's not always an option for a given work environment, however.

176. benibela ◴[] No.36534462{6}[source]
>An intra-thread COM call is just a virtual function call - no extra overhead.

There was a time when a virtual function call was a lot of overhead

Even having a VMT is overhead.

Sometimes the COM interface is implemented as an actual interface, where the implementing class derives from another class and from the interface (in C++ the interface is just another class used via multiple inheritance, though other languages have dedicated interface constructs). Then the class even needs two VMTs.

Multiple VMTs have even more overhead, and with them it is not just a plain method call. Inside the class's own methods, the this pointer always refers to the subobject that holds the first VMT, but a caller invoking a method through one of the later VMTs holds a pointer to that later VMT's subobject. So the compiler generates a small wrapper (the "non-virtual thunk") that adjusts the this pointer and then calls the actual function; every call made through a later VMT goes through such a thunk.
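
A small C++ illustration of the two-VMT case, assuming a typical Itanium-style ABI and using made-up class names rather than real COM interfaces: the upcast to the second base adjusts the pointer, and calls through it are dispatched via a this-adjusting thunk.

    // Two vtable pointers in one object; the cast to the second base moves the pointer.
    #include <cstdio>

    struct IUnknownLike {                  // stand-in for a COM-style interface
        virtual void Release() = 0;
        virtual ~IUnknownLike() = default;
    };

    struct Base {                          // an unrelated implementation base class
        virtual void draw() { std::printf("Base::draw\n"); }
        virtual ~Base() = default;
        int state = 0;
    };

    struct Widget : Base, IUnknownLike {   // multiple inheritance -> two VMTs
        void Release() override { std::printf("Widget::Release, this=%p\n", (void*)this); }
    };

    int main() {
        Widget w;
        IUnknownLike* iface = &w;          // implicit upcast adjusts the pointer
        std::printf("Widget*       = %p\n", (void*)&w);
        std::printf("IUnknownLike* = %p\n", (void*)iface);   // typically a different address
        iface->Release();                  // dispatched through a thunk that restores `this`
    }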

177. cylemons ◴[] No.36595351{9}[source]
I remember reading somewhere that if you actually read the TOS/EULA of every single thing you use, it would take your entire lifetime.
178. ◴[] No.36627461{7}[source]
179. ◴[] No.36627695{6}[source]