
324 points by onnnon | 89 comments
1. irskep ◴[] No.42729983[source]
I agree with most of the other comments here, and it sounds like Shopify made sound tradeoffs for their business. I'm sure the people who use Shopify's apps are able to accomplish the tasks they need to.

But as a user of computers and occasional native mobile app developer, hearing "<500ms screen load times" stated as a win is very disappointing. Having your app burn battery for half a second doing absolutely nothing is bad UX. That kind of latency does have a meaningful effect on productivity for a heavy user.

Besides that, having done a serious evaluation of whether to migrate a pair of native apps supported by multi-person engineering teams to RN, I think this is a very level-headed take on how to make such a migration work in practice. If you're going to take this path, this is the way to do it. I just hope that people choose targets closer to 100ms.

replies(11): >>42730123 #>>42730268 #>>42730440 #>>42730580 #>>42730668 #>>42730720 #>>42732024 #>>42732603 #>>42734492 #>>42735167 #>>42737372 #
2. fxtentacle ◴[] No.42730123[source]
I would read the <500ms screen loads as follows:

When the user clicks a button, we start a server round-trip, fetch the data, and do client-side parsing, layout, formatting, and rendering; less than 500ms later, the user can see the result on their screen.

With a worst-case ping of 200ms for a round-trip, that leaves about 200ms for DB queries and then 100ms for the GUI rendering, which is roughly what you'd expect.
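Read as a budget, those numbers account for the full 500ms. A quick sketch (the three-way split is this commenter's assumption, not a figure from the article):

```python
# Hypothetical breakdown of a <500ms screen load; the split between
# network, DB, and rendering is assumed, not Shopify's published numbers.
budget_ms = {
    "network_round_trip": 200,  # worst-case mobile ping
    "db_queries": 200,          # server-side work, including queries
    "gui_rendering": 100,       # client-side parse/layout/render
}

total = sum(budget_ms.values())
assert total <= 500  # fits the stated target
print(total)  # 500
```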

replies(7): >>42730497 #>>42730551 #>>42730748 #>>42731484 #>>42732820 #>>42733328 #>>42733722 #
3. x0x0 ◴[] No.42730268[source]
> Having your app burn battery for half a second doing absolutely nothing is bad UX.

Why are you assuming the app is either burning much battery or even doing more than waiting on current data from the server? For an app that I would assume isn't much use without up-to-date data from the server?

4. freedomben ◴[] No.42730440[source]
Assuming the 500ms is mostly delay for fetching data over a socket, unless the code is really broken that should not really be burning battery. <500ms for display of non-trivial network-fetched data is great regardless of whether it's rendered by React Native or a fully native app. They would both be primarily I/O-bound on the network, with a small, insignificant compute overhead for RN. If the data needs lots of transformation upon returning (though not compute-intensive transformation like calculating hashes or something), that could make a difference, though again I'd be surprised if CPU for RN vs native was all that different.

As an Elixir dev who aims for and routinely achieves <10ms response times, (and sometimes < 1 ms for frequent endpoints that I can hand optimize into a single efficient SQL query, which Ecto makes easy I might add!) I find the response time to be the more egregious part :-D

5. fidotron ◴[] No.42730497[source]
If you are good, those numbers are an order of magnitude off. In truth it is probably mostly auth or something. If you simply avoid JSON you can radically improve these things, fast.

RTT to nearest major metro DC should be up to 20ms (where I am it is less than half that), your DB calls should not be anything like 200ms (and in the event they are you need to show something else first), and 10-20ms is what you should assume for rendering budget of something very big. 60hz means 16ms per frame after all.
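The 16ms figure falls straight out of the refresh rate; a trivial sketch of the per-frame budget:

```python
# Per-frame render budget at common display refresh rates:
# at 60Hz you get 1000/60 ~= 16.7ms per frame, as the comment says.
for hz in (60, 120):
    budget_ms = 1000 / hz
    print(f"{hz}Hz -> {budget_ms:.1f}ms per frame")
```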

replies(2): >>42730694 #>>42731141 #
6. cellularmitosis ◴[] No.42730551[source]
100ms to render an iOS screen means dropping 6 frames. That would put an applicant in the "no hire" category.
7. epolanski ◴[] No.42730580[source]
Give a single example of an app that takes less than that from tap to open.

I've just tried WhatsApp, Notes, Gallery, Settings and Discord out of curiosity; none did, and I have a very fast phone.

replies(1): >>42732058 #
8. lelandfe ◴[] No.42730668[source]
500ms is the 75th percentile speed, so 75% of users are having load times faster than that. For context, Google's synthetic p75 loads emulate a crappy old Android phone on a bad network.

A linked post[0] says their p75 was 1400ms before 2023, yowza.

[0] https://shopify.engineering/improving-shopify-app-s-performa...
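For anyone unfamiliar with reading a p75: it's the value below which 75% of the samples fall. A minimal sketch (the latency samples are made up):

```python
# Nearest-rank percentile over a set of screen-load times.
# The sample values below are illustrative, not Shopify data.
def percentile(samples, p):
    """Return the smallest sample >= p% of all samples (nearest-rank)."""
    ranked = sorted(samples)
    k = max(0, -(-p * len(ranked) // 100) - 1)  # ceil(p/100 * n) - 1
    return ranked[int(k)]

load_times_ms = [120, 180, 250, 300, 350, 420, 480, 900]
p75 = percentile(load_times_ms, 75)
print(p75)  # 420 -> 75% of these loads finished in 420ms or less
```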

replies(3): >>42730747 #>>42731948 #>>42735324 #
9. x0x0 ◴[] No.42730694{3}[source]
> RTT to nearest major metro DC should be up to 20ms (where I am it is less than half that)

Over a mobile network? My best RTT to Azure or AWS over T-Mobile or Verizon is 113ms vs 13ms over my fiber connection.

replies(2): >>42730975 #>>42731277 #
10. afavour ◴[] No.42730720[source]
> Having your app burn battery for half a second doing absolutely nothing is bad UX. That kind of latency does have a meaningful effect on productivity for a heavy user.

The implication is that React Native is to blame for this and I'm not sure that's true. What would the ms delay be with pure native? I have plenty of native apps that also have delays from time to time.

replies(2): >>42732053 #>>42733959 #
11. ◴[] No.42730747[source]
12. joaohaas ◴[] No.42730748[source]
Since the post is about the benefits of react, I'm sure if requests were involved they would mention it.

Also, even if it was involved, 200ms for round-trip and DB queries is complete bonkers. Most round-trips don't take more than 100ms, and if you're taking 200ms for a DB query on an app with millions of users, you're screwed. Most queries should take max 20-30ms, with some outliers in places where optimization is hard taking up to 80ms.

replies(4): >>42732645 #>>42733310 #>>42734929 #>>42737646 #
13. ◴[] No.42730975{4}[source]
14. gf000 ◴[] No.42731141{3}[source]
What percentile? Topics like these don't talk about the 5G-connected iPhone 16 Pro Max; they have to include low-end phones with old OS versions and bad connectivity (e.g. try the same network in the London metro, where often there is no reception whatsoever).

As you reach for higher percentiles, RTT and such start growing very fast.

Edit: another commenter mentioned that the percentile is p75.

replies(2): >>42732005 #>>42732136 #
15. fidotron ◴[] No.42731277{4}[source]
With times like that you'd be better off with Starlink!

I'm not joking: https://www.pcmag.com/news/is-starlink-good-for-gaming-we-pu...

Are you doing the 113 test from the actual device, or something tethered to it? For example, you don't want a bluetooth stack in the middle.

replies(1): >>42731319 #
16. x0x0 ◴[] No.42731319{5}[source]
straight off my android phone by disabling wifi then moving through my 2 sims
replies(3): >>42731920 #>>42732094 #>>42732334 #
17. bluGill ◴[] No.42731484[source]
People have gotten used to that, but UI research going back to the 1960s has shown clearly that for many of these operations you get tens of milliseconds before people notice and their attention wanes. The web often doesn't allow for response times as fast as humans need, which is a good reason to write real apps, not web apps. That is also why I use tabs - load a bunch in the background so when I'm ready I can just switch tabs and it is there.
18. pinoy420 ◴[] No.42731920{6}[source]
Don’t take the bait. It is a typical hn hyperbole comment
19. pinoy420 ◴[] No.42731948[source]
2 seconds to wait for a webpage to load isn't even that bad. Facebook is horrendously slow for the average user - to someone who knows how fast something can be - but no typical user cares or notices. They just accept it.

Nike’s website is phenomenally quick. But again. Ask anyone if that is what they care about. Nope. It’s the shoes.

replies(2): >>42732065 #>>42732115 #
20. fingerlocks ◴[] No.42732005{4}[source]
Independent of connectivity, UI rendering should be well under the device refresh rate. Consider the overhead of a modern video game that runs 60fps without a hiccup. It’s ludicrous that a CRUD app which usually only populates some text fields and maybe a small image or two can’t do the same
replies(1): >>42735008 #
21. irskep ◴[] No.42732024[source]
Replying to myself for clarification: I did not read their 500ms number as including waiting for a network. It sounded like that's how long it was taking React Native to load local data and draw the screen. If that's not the case, it's a very different story.

From another comment by seemack (https://news.ycombinator.com/item?id=42730348):

> For example, I just recorded myself tapping on a product in the Product list screen and the delay between the pressed state appearing and the first frame of the screen transition animation is more than half a second. The animation itself then takes 300ms which is a generally accepted timeframe for screen animations. But that half second where I'm waiting for the app to respond after I've tapped a given element is painful.

replies(2): >>42734072 #>>42738342 #
22. irskep ◴[] No.42732053[source]
It all depends on whether the number includes network roundtrip or not, which they don't state. I read it as not including a network request, i.e. all CPU and local I/O.
replies(1): >>42734359 #
23. irskep ◴[] No.42732058[source]
It sounds like you're referring to app-launch time, which is different from screen-load time. Very different things!
24. itishappy ◴[] No.42732065{3}[source]
Then there's McMaster Carr, which has great service, but all anyone seems to want to talk about is how snappy their site is!
replies(1): >>42732349 #
25. throw5959 ◴[] No.42732094{6}[source]
If you have a dual SIM phone, try to swap your SIMs.
26. ◴[] No.42732115{3}[source]
27. fidotron ◴[] No.42732136{4}[source]
> What percentile?

There's no argument that starts this way which doesn't end either with "support working offline", or defining when you consider that a user has stepped out of bounds with respect to acceptable parameters, which then raises the question what do you do in that event?

If all you're trying to do is say 75% of users have a good experience, and in your territory 75% means a 150ms and that's too long then the network cannot be in your critical path, and you have to deal with it. If you're on a low end phone any I/O at all is going to kill you, including loading too much code, and needs to be out of the way.

If you can tell the UX is going to be bad, you will need to abort and tell them that, though they really will not like it. It's often better to prevent such users from ever getting your app in the first place.

I come from mobile games, and supported titles with tens of millions of players around the world back in the early 4G era. All I can tell you is not once did mobile ping become a concern - in fact those networks are shockingly good compared to wifi.

28. fidotron ◴[] No.42732334{6}[source]
That is odd then, but it is what it is.

I can only guess the connectivity between your mast and the Internet is awfully congested, and/or you are in the middle of nowhere.

One of the reasons starlink does as well as it does is the ground stations are well connected to the wider world, whereas your nearest cell mast might not be.

replies(1): >>42733076 #
29. lelandfe ◴[] No.42732349{4}[source]
Used to work for a competitor. It’s not just the speed, it’s an amazing site all around; they know their customers and cut out all the fluff.
replies(1): >>42738077 #
30. brokencode ◴[] No.42732603[source]
Subjectively, I find the Shop app to be quite nice and speedy. It works well enough that I’d never have guessed it is using any kind of cross platform framework.

It’s easy to get caught up on numbers, but at the end of the day the user experience is all that matters. And I very much doubt that performance is a concern for their users.

replies(1): >>42735197 #
31. andy_ppp ◴[] No.42732645{3}[source]
I do not understand this thinking at all: a parsed response into whatever rendering engine, even if extremely fast, is going to be a large percentage of this 500ms page load. Dismissing it with magical thinking about pure database queries under load, with no understanding of the complexity of Shopify, is quite frankly ridiculous. Next up you'll be telling everyone to roll their own file sharing with rsync or something…
replies(1): >>42735052 #
32. sgarland ◴[] No.42732820[source]
> 200ms for DB queries

No. Just no. There’s an entire generation of devs at this point who are convinced that a DB is something you throw JSON into, use UUIDs for everything, add indices when things are slower than you expected, and then upsize the DB when that doesn’t fix it.

RAM access on modern hardware has a latency of something like 10 nanoseconds. NVMe reads vary based on queue depth and block size, but sub-msec is easily attainable. Even if your disks are actually a SAN, you should still see 1-2 msec. The rest is up to the DB.

All that to say, a small point query on a well-designed schema should easily execute in sub-msec times if the pages are in the DB’s buffer pool. Even one with a small number of joins shouldn’t take more than 1-2 msec. If this is not the case for you, your schema, query, or DB parameters are sub-optimal, or you’re doing some kind of large aggregation query.

I took a query from 70 to 40 msec today just by rewriting it. Zero additional indexing or tuning, just unrolling several unnecessary nested subqueries, and adding a more selective predicate. I have no doubt that it could get into the single digits if better indexing was applied.

I beg of devs, please take the time to learn SQL, to read EXPLAIN plans, and to measure performance. Don’t accept 200 msec queries as “good enough” because you’re meeting your SLOs. They can be so much faster.
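The thread is about MySQL and Postgres, but the habit of reading plans can be sketched with the stdlib's SQLite (the schema and data here are hypothetical):

```python
import sqlite3

# Toy schema: a point lookup on an indexed column should show up in the
# plan as an index SEARCH, not a full-table SCAN.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 100, i * 1.5) for i in range(1000)])

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"
).fetchall()
detail = plan[0][-1]  # human-readable plan step
print(detail)  # e.g. SEARCH orders USING INDEX idx_orders_customer (customer_id=?)
assert "USING INDEX" in detail  # a SCAN here would mean a full table scan
```

The syntax differs (`EXPLAIN` in MySQL, `EXPLAIN ANALYZE` in Postgres), but the habit is the same: check the plan before accepting a slow query.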

replies(5): >>42733202 #>>42734430 #>>42735780 #>>42737433 #>>42737686 #
33. harrall ◴[] No.42733076{7}[source]
Nah it could also be differing peering agreements with the ISP and the data center you are connecting to.

On T-Mobile, I get 30ms to Cloudflare but 150ms to my local AWS.

But I also get 450 mbps on T-Mobile so I’m not complaining.

34. reissbaker ◴[] No.42733202{3}[source]
I think 500ms P75 is good for an app that hits network in a hot path (mobile networks are no joke), but I agree that 200ms is very very bad for hitting the DB on the backend. I've managed apps with tables in the many, many billions of rows in MySQL and would typically expect single digit millisecond responses. If you use EXPLAIN you can quickly learn to index appropriately and adjust queries when necessary.
replies(1): >>42733370 #
35. xmprt ◴[] No.42733310{3}[source]
> Most queries should take max 20-30ms

Most queries are 20-30ms. But a worst case of 200ms for large payloads, edge cases, or just general degradations isn't crazy. Without knowing whether 500ms is a p50 or a p99 it's kind of a meaningless metric, but assuming it's a p99, I think it's not as bad as the original commenter stated.

replies(2): >>42733340 #>>42739303 #
36. gooosle ◴[] No.42733328[source]
The 500ms number is p75 - not worst case at all.

200ms round trip is like 10x more than what's reasonably possible.

Same with your other numbers.

37. gooosle ◴[] No.42733340{4}[source]
They mention later in the article that the 500ms is p75.

Realistically 50ms p75 should be achievable for the level of complexity in the shopify app.

replies(1): >>42733476 #
38. gooosle ◴[] No.42733370{4}[source]
500ms p75 is not good for the (low) complexity of the shopify app.

Also reporting p75 latency instead of p99+ just screams to me that their p99 is embarrassing and they chose p75 to make it seem reasonable.

39. bushbaba ◴[] No.42733476{5}[source]
P75. I can only imagine the p90 and p99 are upwards of 1 second.
replies(1): >>42736223 #
40. np_tedious ◴[] No.42733722[source]
What does "DB queries" mean here? The on-device sqlite stuff?
41. ycombinatrix ◴[] No.42733959[source]
React Native is just a tool, this is Shopify's fault
42. ◴[] No.42734072[source]
43. lolinder ◴[] No.42734359{3}[source]
The article they link to about how they optimized talks about caching network calls as part of their strategy to get below 500ms, so I would assume network calls are included in the number.
44. charleslmunger ◴[] No.42734430{3}[source]
>RAM access on modern hardware has a latency of something like 10 nanoseconds

What modern hardware are you using that this is true? That's faster than L3 cache on many processors.

replies(1): >>42737193 #
45. SeptiumMMX ◴[] No.42734492[source]
Check out Avalonia [0]

It's a cross-platform spiritual successor of WPF and it kicks ass! You get proper separation of models and views, you can separate what controls there are from how they look (themes/styles), you can build the entire thing into a native compiled application with very reasonable speed and memory use.

[0] https://avaloniaui.net

46. bobnamob ◴[] No.42734929{3}[source]
> 200ms for round-trip and DB queries is complete bonkers

Never lived in Australia I see

replies(1): >>42735231 #
47. gf000 ◴[] No.42735008{5}[source]
That's a page load, not a frame render.

Also, due to layouting, a CRUD app may actually be harder to optimize per frame than the trivially parallelizable many-triangles case seen in games.

replies(1): >>42735286 #
48. flohofwoe ◴[] No.42735052{4}[source]
I know - old man yells at cloud and stuff - but some 8-bit home computers from the 80s completed their entire boot sequence in about half a second. What does a 'UI rendering engine' need to do that takes half a second on a device that's tens of thousands of times faster? Everything on modern computers should be 'instant' (some of that time may include internet latency of course, but I assume that the Shopify devs don't live on the moon).
replies(3): >>42735209 #>>42736003 #>>42736410 #
49. jeswin ◴[] No.42735167[source]
> I just hope that people choose targets closer to 100ms.

Why? If it's about the phone burning battery for 500ms, it probably isn't doing that - it's just waiting for data to arrive. And even when it's rendering, it's probably not burning battery like say Uber (with which you can feel the battery melt in your hands).

But that's not why I am commenting. I am writing because so many commentors are saying that 500ms is bad. Why is 500ms bad, as long as the UI is not freezing or blanking out?

Why not lower expectations, and wait for half a second? Of course, there are apps for which 500ms is unacceptable - but this doesn't seem to be one of them.

50. mirzap ◴[] No.42735197[source]
Exactly. Tech people almost always go to the "performance wormhole" arguing about ms and how it could be improved 10x - myself included. But working at a startup the past couple of years, I came to the conclusion that it does not matter to the end users at all. If an app is "nice" and "speedy" as you say, that’s enough. Shopify made a good decision and tradeoffs; it works for them, and I would argue it would work for 90% of other companies as well. You don't really need a native app for most purposes; React Native and Flutter are good enough.
51. chrisandchris ◴[] No.42735209{5}[source]
Moore's Law v2 (/s) states that while computers get faster, we add more layers, so computers actually get slower.
replies(1): >>42735747 #
52. yxhuvud ◴[] No.42735231{4}[source]
If Shopify app P75 response time is that slow due to that the users are in Australia, then they should get a data center there.
replies(2): >>42735410 #>>42735986 #
53. fingerlocks ◴[] No.42735286{6}[source]
OP gave a render budget of 100ms _after_ the data has loaded. That's unacceptable. If this were a macOS app, that would mean dragging a window corner to resize the content, forcing a new layout and redraw, would yield 10 fps of change. And yet nearly all native apps redraw and layout instantly, even with complex tables of text at various fonts and sizes.

This is also a great litmus test to check if an app was made with electron because they always redraw slowly.

54. yxhuvud ◴[] No.42735324[source]
> so 75% of users are having load times faster

No. It's on a per-request basis, meaning that one in four clicks a user makes takes more than half a second to complete. Slow times at percentiles as low as 75 mean users hit the bad cases very often in practice.

replies(1): >>42737566 #
55. bobnamob ◴[] No.42735410{5}[source]
Should they?

You could do the maths on conversion rate increase if that latency disappeared vs the cost of spinning up a dc & running it (including the mess that is localised dbs)

I’m not sure the economics works out for most businesses (I say this as an Australian)

replies(1): >>42736822 #
56. ezekiel68 ◴[] No.42735747{6}[source]
Back when "WinTel" was a true duopoly, we used to call this "Gates Law".
replies(1): >>42737403 #
57. ezekiel68 ◴[] No.42735780{3}[source]
Beg all you want. They're still going to dump JSON strings (not even jsonb) and UUIDs in them anyway, because, "Move fast and break things."

I lament along with you.

replies(1): >>42737207 #
58. netdevphoenix ◴[] No.42735986{5}[source]
In the real world, you can't just optimise for the sake of it. You need to get a business case for it. Because it all boils down to revenue vs expenses
replies(1): >>42736717 #
59. netdevphoenix ◴[] No.42736003{5}[source]
Not sure why people keep bringing up the old "my machine x years ago was faster" line. Machines nowadays do way more than machines from the 80s. Whether the tasks they do are useful or not is a separate discussion.
replies(1): >>42738041 #
60. akie ◴[] No.42736223{6}[source]
Agreed. The P95 and P99 in particular are likely to be over 1 second, possibly over 2. They chose P75 to be able to post a seemingly impressive number.

I personally wouldn't be very happy with a P75 of 500 ms. It's slow.

61. kristiandupont ◴[] No.42736410{5}[source]
Sure, and the screen in text mode was 80 x 25 chars = 2000 bytes of memory. A new phone has perhaps three million pixels, each taking 4 bytes. There's a significant difference.
replies(1): >>42737390 #
62. philipwhiuk ◴[] No.42736717{6}[source]
If the P75 is bad because of Australia that means 25% of their customer base is Australian.
63. yxhuvud ◴[] No.42736822{6}[source]
Probably not, because the if-statement in my post is likely false. The Australian user base is likely not high enough.
64. sgarland ◴[] No.42737193{4}[source]
Correction: DRAM latency is ~10 - 20 nsec on most DDR4 and DDR5 sticks. The access time as seen by a running program is much more than that.

As an actual example of RAM latency, DDR4-3200 with CL22 would be 22 cycles / 1.6E9 cycles/sec ≈ 13.75 nsec (CAS latency is counted in I/O clock cycles, and the I/O clock runs at half the 3200 MT/s transfer rate).
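Spelled out as code (standard DDR timing arithmetic, nothing vendor-specific):

```python
# CAS latency in wall-clock time for DDR4-3200 CL22.
transfer_rate = 3200e6           # transfers per second (3200 MT/s)
io_clock_hz = transfer_rate / 2  # DDR: two transfers per clock -> 1600 MHz
cas_cycles = 22                  # CL22, counted in I/O clock cycles

latency_ns = cas_cycles / io_clock_hz * 1e9
print(round(latency_ns, 2))  # 13.75
```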

65. sgarland ◴[] No.42737207{4}[source]
“We’re disrupting!”

“Yeah, you’re disrupting my sleep by breaking the DB.”

66. pjc50 ◴[] No.42737372[source]
Indeed. The games industry uses immediate mode GUIs and people get upset if they achieve less than 60fps. Having everything be this slow is just a huge failure of coordination on behalf of the industry.

(next mini question: why is it seemingly impossible to make an Android app smaller than 60mb? I'm sure it is possible, but almost all the ones I have from the app store are that size)

replies(1): >>42740260 #
67. flohofwoe ◴[] No.42737390{6}[source]
And yet the GPU in your phone can run a small program for each pixel taking hundreds or even thousands of clock cycles to complete and still hit a 60Hz frame rate or more. It's not the hardware that's the problem, but the modern software Jenga tower that drives it.
68. flohofwoe ◴[] No.42737403{7}[source]
"What Andy giveth, Bill taketh away."
69. refset ◴[] No.42737433{3}[source]
> just unrolling several unnecessary nested subqueries, and adding a more selective predicate

And state of the art query optimizers can even do all this automatically!

replies(1): >>42738251 #
70. ◴[] No.42737566{3}[source]
71. fxtentacle ◴[] No.42737646{3}[source]
I have a 160ms ping to news.ycombinator.com. Loading your comment took 1.427s of wall clock time. <s>Clearly, HN is so bad, it's complete bonkers ;)</s>

time curl -o tmp.del "https://news.ycombinator.com/item?id=42730748"

real 0m1.427s

"if you're taking 200ms for a DB query on an app with millions of users, you're screwed"

My calculation was 200ms for the DB queries and the time it takes your server-side framework ORM system to parse the results and transform it into JSON. But even in general, I disagree. For high-throughput systems it typically makes sense to make the servers stateless (which adds additional DB queries) in exchange for the ability to just start 20 servers in parallel. And especially for PostgreSql index scans where all the IO is cached in RAM anyway, single-core CPU performance quickly becomes a bottleneck. But a 100+ core EPYC machine can still reach 1000+ TPS for index scans that take 100ms each. And, BTW, the basic Shopify plan only allows 1 visitor per 17 seconds to your shop. That means a single EPYC server could still host 17,000 customers on the basic plan even if each visit causes 100ms of DB queries.
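The capacity claim in the last two sentences is just arithmetic. A sketch, using the commenter's hypothetical core count and per-query cost (not measured Shopify numbers):

```python
# Back-of-envelope: how many basic-plan shops fit on one big box
# if every visit costs 100ms of single-core DB work.
cores = 100           # hypothetical many-core EPYC server
query_cost_ms = 100   # per-visit DB work, per the comment

queries_per_sec = cores * 1000 // query_cost_ms   # each core does 10/sec
visit_interval_s = 17                             # ~1 visitor per 17s per shop
shops_supported = queries_per_sec * visit_interval_s

print(queries_per_sec, shops_supported)  # 1000 17000
```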

replies(2): >>42737841 #>>42742380 #
72. fxtentacle ◴[] No.42737686{3}[source]
"All that to say, a small point query on a well-designed schema should easily execute in sub-msec times if the pages are in the DB’s buffer pool"

Shopify is hosting a large number of webshops with billions of product descriptions, but each store only has a low visitor count. So we are talking about a very large and, hence, uncacheable dataset with sparse access. That means almost every DB query to fetch a product description will hit the disk. I'd even assume a RAID of spinning HDDs for price reasons.

replies(1): >>42738208 #
73. sgarland ◴[] No.42737841{4}[source]
Having indices doesn’t guarantee anything is cached, it just means that fetching tuples is often faster. And unless you have a covering index, you’re still going to have to hit the heap (which itself might also be partially or fully cached). Even then, you still might have to hit the heap to determine tuple visibility, if the pages are being frequently updated.

Also, Postgres has supported parallel scans for quite a long time, so single-core performance isn’t necessarily the dominating factor.
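The covering-index point can be sketched with stdlib SQLite, whose plan output says so explicitly (the schema is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
# The index covers both the predicate column (email) and the selected
# column (name), so the lookup never has to touch the table ("heap") itself.
con.execute("CREATE INDEX idx_users_email_name ON users (email, name)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM users WHERE email = 'a@b.c'"
).fetchall()
detail = plan[0][-1]
print(detail)  # e.g. SEARCH users USING COVERING INDEX idx_users_email_name (email=?)
assert "COVERING INDEX" in detail
```

(Postgres calls the same idea an index-only scan, with the visibility-map caveat described above.)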

74. sgarland ◴[] No.42738041{6}[source]
Casey Muratori has a clip [0] discussing the performance differences between Visual Studio in 2004 vs. today.

Anecdotally, I’ve been playing AoE2: DE a lot recently, and have noticed it briefly stuttering / freezing during battles. My PC isn’t state of the art by any means (Ryzen 7 3700X, 32GB PC4-24000, RX580 8GB), but this is an isometric RTS we’re talking about. In 2004, I was playing AoE2 (the original) on an AMD XP2000+ with maybe 1GB of RAM at most. I do not ever remember it stuttering, freezing, or in any way struggling. Prior to that, I was playing it on a Pentium III 550 MHz, and a Celeron 333 MHz. Same thing.

A great anti-example of this pattern is Factorio. It’s also an isometric top-down game, with RTS elements, but the devs are serious about performance. It’s tracking god knows how many tens or hundreds of thousands of objects (they’re simulating fluid flow in pipes FFS), with a goal of 60 FPS/UPS.

Yes, computers today are doing more than computers from the 80s or 90s, but the hardware is so many orders of magnitude faster that it shouldn’t matter. Software is by and large slower, and it’s a deliberate choice, because it doesn’t have to be that way.

[0]: https://www.youtube.com/watch?v=MR4i3Ho9zZY

replies(2): >>42738352 #>>42748763 #
75. dboreham ◴[] No.42738077{5}[source]
Even more amazing is that it has always been that good, since 20 years ago at least.
76. sgarland ◴[] No.42738208{4}[source]
Shopify runs a heavily sharded MySQL backend. Their Shop app uses Vitess; last I knew the main Shopify backend wasn’t on Vitess (still sharded, just in-house), but I could be wrong.

I would be very surprised if “almost every query” was hitting disk, and I’d be even more surprised to learn that they used spinners.

77. sgarland ◴[] No.42738251{4}[source]
Sometimes, yes. Sometimes not. This was on MySQL 5.7, and I wound up needing to trace the optimizer path to figure out why it was slower than expected.

While I do very much appreciate things like WHERE foo IN —> WHERE EXISTS being automatically done, I also would love it if devs would just write the latter form. Planners are fickle, and if statistics get borked, query plans can flip. It’s much harder to diagnose when all along, the planner has been silently rewriting your query, and only now is actually running it as written.
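A minimal illustration of the two spellings returning the same rows (toy tables; stdlib SQLite stands in for MySQL here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO orders VALUES (10, 1), (11, 1), (12, 3);
""")

q_in = "SELECT name FROM customers WHERE id IN (SELECT customer_id FROM orders)"
q_exists = """SELECT name FROM customers c
              WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id)"""

# Both return the customers who have orders; which plan the optimizer
# picks for each form can differ, which is the point being made above.
r_in = con.execute(q_in).fetchall()
r_exists = con.execute(q_exists).fetchall()
print(sorted(r_in))  # [('a',), ('c',)]
assert sorted(r_in) == sorted(r_exists)
```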

replies(1): >>42743728 #
78. mustafa01ali ◴[] No.42738342[source]
author here - the stated screen load time includes server round-trip, parsing, layout, formatting, and rendering.
replies(1): >>42741609 #
79. netdevphoenix ◴[] No.42738352{7}[source]
> I’ve been playing AoE2

If you buy poor software instead of good software (yes, branding, IP and whatever, but that's just even more reason for companies not to make it good), complaining doesn't help, does it? Commercial software is made to be sold, and if it sells enough, that's all company executives care about. As long as enough people buy it, it will continue to be made.

Company devs trying to get more time/resources to improve performance will be told no unless they can make a realistic business case that explains how the expense of increased focus on performance will be financially worth in terms of revenue. If enough people buy poor software, improving it is not business smart. Companies exist to make money not necessarily to make good products or provide a good service.

I understand your point but you need to understand that business execs don't care about that unless it significantly impacts revenue or costs in the present or very near future.

replies(1): >>42738815 #
80. sgarland ◴[] No.42738815{8}[source]
Nah, it’s not just that. IME, most devs are completely unaware of how this stuff works. They don’t need to, because there are so many abstractions, and because the industry expectation has shifted such that it isn’t a requirement. I’ve also met some who are aware, but don’t care at all, because no one above them cares.

Tech interviews are wildly stupid: they’ll hammer you on being able to optimally code some algorithm under pressure on a time limit, but there’s zero mention of physical attributes like cache line access, let alone a realistic problem involving data structures. Just once, I’d love to see “code a simple B+tree, and then discuss how its use in RDBMS impacts query times depending on the selected key.”

81. spockz ◴[] No.42739303{4}[source]
Ah. I see we are spoiled with <4ms queries on our database. See, it all depends on perspective and use case. :)
82. tarentel ◴[] No.42740260[source]
Can't speak for every app, but of the several I've worked on through the years, a sizeable chunk of each app's size was assets. It's possible to hide a lot of it from the app store size if you really want to, but you'd end up downloading all the assets at some point anyway, so there's really no point in putting in the extra engineering effort just to make your app store number look smaller.

This obviously isn't the case for every app and most of the ones I've worked on had a lot of bloat/crap in them as well.

83. irskep ◴[] No.42741609{3}[source]
In that case, I apologize for misunderstanding, and would edit my original comment if I could.
84. e12e ◴[] No.42742380{4}[source]
That seems really slow for a get request to hn without a session cookie (fetching only cacheable data).

And being not logged in - probably a poor comparison with Shopify app.

85. refset ◴[] No.42743728{5}[source]
Explicit query plan pinning helps a lot, alongside strong profiling and monitoring tools.
86. jgalt212 ◴[] No.42748763{7}[source]
My 16GB box was crashing due to VS Code. When I went to 32 GB, it stopped crashing. And I'm not running any resource hungry plugins. It blows my mind you can be this junky and still have #1 market share.
replies(3): >>42749152 #>>42767653 #>>42809028 #
87. sgarland ◴[] No.42749152{8}[source]
Yeah, I went to Neovim a couple of years ago, and haven’t looked back. There are enough plugins to make it equivalent in useful features, IMO.
88. netdevphoenix ◴[] No.42767653{8}[source]
Devs often think that you need polished engineering to make it into the market. But the reality often is that half-baked products built on lush dinners with prospective clients, dreamy promises, strong sales skills and effective marketing win the game over and over. Of course, if you also have a well-engineered product, even better. But it is clearly not necessary.
89. markus_zhang ◴[] No.42809028{8}[source]
The new idea is to push out shit while it's still baking and incrementally improve as many users test for free.

Or maybe not new. I remember Power BI was barely usable back in 2018, as the editor lacked a lot of things.