irskep (No.42729983):
I agree with most of the other comments here, and it sounds like Shopify made sound tradeoffs for their business. I'm sure the people who use Shopify's apps are able to accomplish the tasks they need to.

But as a user of computers and an occasional native mobile app developer, hearing "<500ms screen load times" stated as a win is very disappointing. Having your app burn battery for half a second doing absolutely nothing is bad UX. That kind of latency does have a meaningful effect on productivity for a heavy user.

Besides that, having done a serious evaluation of whether to migrate a pair of native apps supported by multi-person engineering teams to RN, I think this is a very level-headed take on how to make such a migration work in practice. If you're going to take this path, this is the way to do it. I just hope that people choose targets closer to 100ms.

fxtentacle (No.42730123):
I would read the <500ms screen loads as follows:

When the user clicks a button, we start a server round-trip, fetch the data, and do client-side parsing, layout, formatting, and rendering; less than 500ms later, the user can see the result on his/her screen.

With a worst-case ping of 200ms for a round-trip, that leaves about 200ms for DB queries and then 100ms for the GUI rendering, which is roughly what you'd expect.
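To see where such a budget actually goes, curl can break a request down into its phases. A rough sketch, with a placeholder URL standing in for a real API endpoint:

    # measure each phase of one request (URL is hypothetical)
    curl -o /dev/null -s \
      -w 'dns: %{time_namelookup}s\nconnect: %{time_connect}s\ntls: %{time_appconnect}s\nfirst byte: %{time_starttransfer}s\ntotal: %{time_total}s\n' \
      "https://api.example.com/screen-data"

Everything up to "first byte" is network plus server time; whatever the client spends parsing and rendering afterwards comes out of the remaining budget.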

joaohaas (No.42730748):
Since the post is about the benefits of React, I'm sure that if network requests were involved, they would have mentioned it.

Also, even if requests were involved, 200ms for a round-trip plus DB queries is completely bonkers. Most round-trips don't take more than 100ms, and if a DB query takes 200ms on an app with millions of users, you're screwed. Most queries should take 20-30ms at most, with some outliers taking up to 80ms in places where optimization is hard.
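If you want to check where your own queries fall on that scale, EXPLAIN ANALYZE reports actual execution time. A sketch with made-up database, table, and column names:

    psql mydb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE shop_id = 42 ORDER BY created_at DESC LIMIT 50;"

If that reports more than a few tens of milliseconds for a simple indexed lookup, the index or the plan is worth a closer look.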

fxtentacle (No.42737646):
I have a 160ms ping to news.ycombinator.com. Loading your comment took 1.427s of wall clock time. <s>Clearly, HN is so bad, it's complete bonkers ;)</s>

    time curl -o tmp.del "https://news.ycombinator.com/item?id=42730748"

    real    0m1.427s

"if you're taking 200ms for a DB query on an app with millions of users, you're screwed"

My calculation was 200ms for the DB queries plus the time it takes your server-side framework's ORM to parse the results and transform them into JSON.

But even in general, I disagree. For high-throughput systems, it typically makes sense to make the servers stateless (which adds extra DB queries) in exchange for the ability to just start 20 servers in parallel. And especially for PostgreSQL index scans where all the IO is cached in RAM anyway, single-core CPU performance quickly becomes the bottleneck; still, a 100+ core EPYC machine can reach 1000+ TPS for index scans that take 100ms each.

And BTW, the basic Shopify plan only allows 1 visitor per 17 seconds to your shop. That means a single EPYC server could still host 17,000 customers on the basic plan even if each visit causes 100ms of DB queries.
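That last step is just arithmetic; a back-of-the-envelope sketch in shell, using the numbers above:

    # 100 cores, 100ms per index scan => 10 scans/sec per core
    echo $((100 * 1000 / 100))   # => 1000 scans/sec for the whole machine
    # basic plan: 1 visitor per 17s per shop, ~100ms of DB work per visit
    echo $((1000 * 17))          # => 17000 shops per server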

sgarland (No.42737841):
Having indices doesn’t guarantee anything is cached, it just means that fetching tuples is often faster. And unless you have a covering index, you’re still going to have to hit the heap (which itself might also be partially or fully cached). Even then, you still might have to hit the heap to determine tuple visibility, if the pages are being frequently updated.
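To illustrate the covering-index case: in Postgres you can carry the extra columns in the index itself with INCLUDE, so the data can be served without touching the heap (table and column names here are made up):

    psql mydb -c "CREATE INDEX orders_shop_covering ON orders (shop_id) INCLUDE (status, total);"

A query that reads only shop_id, status, and total can then use an index-only scan, but only when the visibility map marks the pages all-visible, which is exactly the visibility caveat above.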

Also, Postgres has supported parallel scans for quite a long time (since 9.6), so single-core performance isn’t necessarily the dominating factor.
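A quick way to see a parallel scan in action, again with a hypothetical database and table:

    psql mydb -c "SET max_parallel_workers_per_gather = 4; EXPLAIN ANALYZE SELECT count(*) FROM orders;"

If the planner chooses a parallel plan, the output shows a Gather node with worker processes splitting the scan between them.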