
183 points by crescit_eundo | 1 comment | source
stephenlf ◴[] No.45054376[source]
Remember when Amazon Video moved from serverless back to a monolith because they were using S3 to store video streams for near-real-time processing? This feels the same, except Amazon Video is an actual company trying to build real software.

Amazon Video’s original blog post is gone, but here is a third-party writeup. https://medium.com/@hellomeenu1/why-amazon-prime-video-rever...

replies(2): >>45054623 #>>45054764 #
thrance ◴[] No.45054764[source]
IIRC they were storing individual frames in S3 buckets and hitting their own internal Lambda limits. Funny story tbh.
replies(3): >>45054852 #>>45054914 #>>45054920 #
LeifCarrotson ◴[] No.45054914[source]
You remember correctly:

> The main scaling bottleneck in the architecture was the orchestration management that was implemented using AWS Step Functions. Our service performed multiple state transitions for every second of the stream, so we quickly reached account limits. Besides that, AWS Step Functions charges users per state transition.

> The second cost problem we discovered was about the way we were passing video frames (images) around different components. To reduce computationally expensive video conversion jobs, we built a microservice that splits videos into frames and temporarily uploads images to an Amazon Simple Storage Service (Amazon S3) bucket. Defect detectors (where each of them also runs as a separate microservice) then download images and process them concurrently using AWS Lambda. However, the high number of Tier-1 calls to the S3 bucket was expensive.
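
To make the request math concrete, here is a hypothetical sketch of the frame-passing pattern the quote describes: a splitter uploads each frame to S3 and every detector downloads it again. The bucket name, key layout, frame rate, and detector count are invented for illustration, and the pricing comment uses AWS list prices only as an order of magnitude; this is not Prime Video's actual code.

```python
# Hypothetical sketch of the frame-passing pattern quoted above -- NOT Prime
# Video's actual code. Bucket name, key layout, frame rate, and detector
# count are invented for illustration.
import boto3

s3 = boto3.client("s3")
BUCKET = "defect-detection-frames"  # hypothetical bucket name


def split_and_upload(video_id: str, frames: list[bytes]) -> list[str]:
    """Splitter microservice: one S3 PUT (a Tier-1 request) per frame."""
    keys = []
    for i, frame in enumerate(frames):
        key = f"{video_id}/frame-{i:06d}.jpg"
        s3.put_object(Bucket=BUCKET, Key=key, Body=frame)
        keys.append(key)
    return keys


def detect_defects(key: str) -> bool:
    """Detector microservice (conceptually a Lambda): one S3 GET per frame."""
    frame = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return looks_defective(frame)


def looks_defective(frame: bytes) -> bool:
    """Dummy stand-in for the real audio/video defect model."""
    return len(frame) == 0


# Back-of-envelope, with invented numbers: a 30 fps stream is ~108,000 frames
# per hour. One PUT per frame plus one GET per frame per detector means three
# detectors generate ~432,000 S3 requests per stream-hour -- on top of the
# Step Functions state transitions, which bill at roughly $25 per million for
# standard workflows and also count toward account quotas.
```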

They were drinking the AWS serverless kool-aid really deeply if they thought the right way to stream video was multiple microservices accessing individual frames on S3...
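
For contrast, a minimal sketch of the in-process alternative, assuming (per the third-party writeup) that the consolidated version keeps frames in memory and runs the detectors as plain function calls inside one process. The detector functions here are hypothetical stand-ins, not the real checks.

```python
# Minimal sketch of an in-memory, single-process pipeline: frames never leave
# the process, so the per-frame S3 PUT/GET traffic and the per-frame
# orchestration transitions disappear. Detectors are dummy stand-ins.
from typing import Callable, Iterable

Detector = Callable[[bytes], bool]


def analyze_stream(frames: Iterable[bytes], detectors: list[Detector]) -> list[int]:
    """Run every detector over every frame in-process; return indices of defective frames."""
    defective = []
    for i, frame in enumerate(frames):
        if any(detector(frame) for detector in detectors):
            defective.append(i)
    return defective


# Example usage with dummy detectors standing in for the real audio/video checks.
def black_frame(frame: bytes) -> bool:
    return len(frame) == 0


def corrupt_frame(frame: bytes) -> bool:
    return frame.startswith(b"\x00\x00")


bad = analyze_stream([b"\xff" * 10, b"", b"\x00\x00\x01"], [black_frame, corrupt_frame])
print(bad)  # -> [1, 2]
```

The trade-off is the one the writeup describes: the detectors now scale together as one deployable unit instead of independently, in exchange for dropping the per-frame storage and orchestration overhead.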

replies(4): >>45054988 #>>45055848 #>>45056309 #>>45057092 #