
768 points by cyndunlop | 2 comments
1. artee_49 (No.43107494)
I am a bit perplexed, though, as to why they implemented fan-out in a way where each "page" blocks the fetching of further pages; they would not have been affected by the high tail latencies had they not done this:

"In the case of timelines, each “page” of followers is 10,000 users large and each “page” must be fanned out before we fetch the next page. This means that our slowest writes will hold up the fetching and Fanout of the next page."

Basically, they block on each page, process all the items on it, and only then move on to the next page. Why wouldn't you rather decouple fetching the pages from processing them?

A page-fetching activity should be able to keep fetching further sets of followers one after another, without waiting for every item on a page to be updated before continuing.

Something that comes to mind: a fetcher component that fetches pages, stores each page in S3, and publishes the metadata (content) plus the S3 location to a queue (SQS), which timeline publishers consume and scale on independently based on load. You can control the concurrency in this system much better, and with something like Kafka you could also partition on the shards by using them as keys, "slowing down" work per shard without effectively dropping tweets from timelines (timelines are eventually consistent regardless). See the sketch below.
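
Roughly like this, as a sketch. Everything here is a placeholder of my own (the bucket name, queue URL, and fetch_follower_page() stub), not anything from the article:

    import json
    import uuid

    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    # Placeholder names; nothing here comes from the article.
    BUCKET = "timeline-fanout-pages"
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/fanout-pages"

    def fetch_follower_page(author_id, cursor, size):
        """Stub for the follower-graph lookup: returns (follower_ids,
        next_cursor), with next_cursor None on the last page."""
        raise NotImplementedError

    def enqueue_fanout_pages(author_id, post, page_size=10_000):
        """Keep fetching follower pages and hand each one off to a queue,
        so a slow timeline write never blocks fetching the next page."""
        cursor = None
        while True:
            followers, cursor = fetch_follower_page(author_id, cursor, page_size)
            key = f"fanout/{author_id}/{uuid.uuid4()}.json"
            s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(followers))
            sqs.send_message(
                QueueUrl=QUEUE_URL,
                # Timeline publishers consume these messages and scale on
                # queue depth, independently of the fetcher.
                MessageBody=json.dumps({"post": post, "page_location": key}),
            )
            if cursor is None:
                break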

I feel like I'm missing something and there's a valid reason to do it this way.

2. abound (No.43107648)
I interpreted this as a batch write, e.g. "write these 10k entries and then come back". The benefit of that is way less overhead versus 10k concurrent background routines each writing individual rows to the DB. The downside is, as you've noted, that you can't "stream" new writes in as older ones finish.

There's a tradeoff here between batch size and concurrency, but perhaps they've already benchmarked it and "single-threaded" batches of 10k writes performed best.
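
Even keeping the batch writes, you could in principle pipeline the two: prefetch page N+1 while the batch for page N is being written, which hides the fetch latency without giving up the cheap batched writes. A rough sketch, where fetch_follower_page() and write_timeline_batch() are hypothetical stand-ins, not their actual code:

    from concurrent.futures import ThreadPoolExecutor

    def fetch_follower_page(author_id, cursor, size):
        """Stub: returns (follower_ids, next_cursor); next_cursor is None
        on the last page."""
        raise NotImplementedError

    def write_timeline_batch(follower_ids):
        """Stub: batch-insert the new post into these followers' timelines."""
        raise NotImplementedError

    def fan_out_pipelined(author_id, page_size=10_000):
        """Overlap fetching page N+1 with writing the batch for page N."""
        with ThreadPoolExecutor(max_workers=1) as prefetcher:
            page, cursor = fetch_follower_page(author_id, None, page_size)
            while page:
                # Kick off the next fetch before blocking on the slow writes.
                next_page = None
                if cursor is not None:
                    next_page = prefetcher.submit(
                        fetch_follower_page, author_id, cursor, page_size
                    )
                write_timeline_batch(page)  # the batched write described above
                page, cursor = next_page.result() if next_page else (None, None)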