
256 points MattSayar | 2 comments | | HN request time: 0.52s | source
BhavdeepSethi ◴[] No.43543128[source]
15 years ago, the first startup I worked for provided APIs for music streaming in India. One of the founders, who managed all the infra, was in the US, so the servers (bare metal) were in LA. I still find it amusing that it was cheaper (and faster) to fly to India, buy a bunch of portable hard drives, upload the media, fly back to the US, and upload the data to the file server than to upload the media directly from India to the US server. Obviously this only applies when the data is on the order of TBs. Later I saw the same thing with AWS Snowball and Snowmobile.
replies(4): >>43543290 #>>43543489 #>>43543576 #>>43547029 #
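The trade-off above comes down to simple arithmetic: sustained link bandwidth versus a fixed flight time. A back-of-envelope sketch (all figures here are illustrative assumptions, not from the post):

```python
# Sneakernet math: when does flying drives beat uploading?
# Bandwidth and flight figures below are illustrative assumptions.

def upload_hours(data_tb: float, mbps: float) -> float:
    """Hours to push data_tb terabytes over an mbps megabit/s link."""
    bits = data_tb * 1e12 * 8          # decimal terabytes -> bits
    return bits / (mbps * 1e6) / 3600

# Assume ~10 TB of media and a ~50 Mbit/s sustained international link
# (a plausible figure for a ~2010 India-to-US transfer).
net = upload_hours(10, 50)

# Assume a round trip with layovers plus local copying takes ~3 days.
flight = 3 * 24

print(f"upload: {net:.0f} h, fly: {flight} h")  # upload ≈ 444 h vs 72 h
```

At those assumed numbers the flight wins by a factor of six, and the gap only widens as the dataset grows, since the flight time stays roughly constant.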
1. MarceliusK ◴[] No.43543489[source]
It's striking how much global infrastructure has improved… but also how physical logistics can still beat the internet when you're dealing with massive datasets.
replies(1): >>43544556 #
2. dkh ◴[] No.43544556[source]
The technical requirements always seem to increase at the same rate as the technical advances. I've found this especially true in film/TV. Sure, by 2014-ish we were shooting on solid state, had giant RAIDs on set, and storage was cheaper than it had ever been, but we easily negated all of that by shooting on multiple RED cameras in raw at resolutions of 6.5K+. Terabytes of new data each day; even duplicating it before leaving took a lot of time! And then storing it at the office while letting more than one editor work with it at a time meant building a 36-disk ZFS server with 10GbE to each client. Just playing the footage back on a computer required a dedicated PCIe card.
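The "terabytes per day" claim is easy to sanity-check with rough numbers. A sketch under stated assumptions (the per-camera data rate and rolling hours below are guesses for illustration; real REDCODE bitrates depend on resolution, frame rate, and compression ratio):

```python
# Rough estimate of daily on-set data volume and duplication time.
# All rates here are illustrative assumptions, not production figures.

def daily_tb(cameras: int, mb_per_s: float, hours_rolling: float) -> float:
    """Terabytes generated per day across all cameras."""
    return cameras * mb_per_s * hours_rolling * 3600 / 1e6

def copy_hours(data_tb: float, copy_mb_per_s: float) -> float:
    """Hours to duplicate data_tb at a sustained copy rate."""
    return data_tb * 1e6 / copy_mb_per_s / 3600

# Assume 3 cameras writing ~100 MB/s each, rolling ~3 h of takes per day.
shot = daily_tb(3, 100, 3)    # ≈ 3.24 TB/day
# Duplicating that onto a RAID sustaining ~400 MB/s:
dup = copy_hours(shot, 400)   # ≈ 2.25 h
print(f"{shot:.2f} TB/day, {dup:.1f} h to copy")
```

Even these conservative assumed numbers land in the multi-terabyte-per-day range, which is why the duplication pass alone eats hours before wrap.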