256 points MattSayar | 6 comments
1. BhavdeepSethi ◴[] No.43543128[source]
15 years ago, the first startup I worked for provided APIs for music streaming in India. One of the founders, who managed all the infra, was in the US, so the servers (bare metal) were in LA. I still find it amusing that it was cheaper (and faster) to fly to India, buy a bunch of portable hard drives, upload the media, fly back to the US, and upload the data to the file server than to upload the media directly from India to the US server. Obviously this only applies when the data is on the order of TBs. Later I saw the same thing with AWS Snowball and Snowmobile.
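The trade-off above is easy to sanity-check with back-of-the-envelope math. All the numbers below (5 TB of media, a 10 Mbit/s uplink, 24h of travel each way, ~100 MB/s drive copy speeds) are illustrative assumptions, not figures from the comment:

```python
def transfer_hours(size_tb: float, mbit_per_s: float) -> float:
    """Hours to move size_tb terabytes over a link of mbit_per_s megabits/s."""
    bits = size_tb * 1e12 * 8
    return bits / (mbit_per_s * 1e6) / 3600

# Assumed: 5 TB of media over a 10 Mbit/s international uplink (c. 2010).
upload = transfer_hours(5, 10)

# Assumed sneakernet: ~24h travel each way, plus one sequential copy pass
# onto portable drives at ~100 MB/s on each end.
copy = 5e12 / 100e6 / 3600           # hours to copy 5 TB at 100 MB/s
flight = 2 * 24 + 2 * copy

print(f"direct upload:   {upload:,.0f} h (~{upload/24:.0f} days)")
print(f"fly with drives: {flight:,.0f} h (~{flight/24:.0f} days)")
```

Under these assumptions the flight wins by more than an order of magnitude (~3 days vs. ~46 days), which is the same economics behind Snowball and Snowmobile.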
replies(4): >>43543290 #>>43543489 #>>43543576 #>>43547029 #
2. VectorLock ◴[] No.43543290[source]
"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." - Andrew S. Tanenbaum, Computer Networks, 3rd ed., p. 83. (paraphrasing Dr. Warren Jackson, Director, University of Toronto Computing Services (UTCS) circa 1985)
3. MarceliusK ◴[] No.43543489[source]
It's striking how much global infrastructure has improved… but also how physical logistics can still beat the internet when you're dealing with massive datasets.
replies(1): >>43544556 #
4. Foobar8568 ◴[] No.43543576[source]
We had to transfer a few tens of GBs, if not 100+ GB, between Europe and the US about 15 years ago.

Bandwidth was 100KB/sec at most. After prod tried three times over the weekend, I suggested doing the fly-over thing if the systems team didn't want to raise the priority of that transfer. Sadly, they just changed the priority of that flow instead; the initial load still took something like 40h.
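For context, the transfer times at that bandwidth are easy to work out (decimal units assumed; the 10 GB and 100 GB sizes are just the ranges mentioned above):

```python
def hours_at(size_gb: float, kb_per_s: float) -> float:
    """Hours to move size_gb gigabytes at kb_per_s kilobytes/second."""
    return size_gb * 1e9 / (kb_per_s * 1e3) / 3600

print(f"10 GB at 100 KB/s:  {hours_at(10, 100):.0f} h")
print(f"100 GB at 100 KB/s: {hours_at(100, 100):.0f} h")
```

At 100 KB/s, 10 GB alone is ~28 hours and 100 GB is well over 11 days, so a 40h initial load implies the effective size or bandwidth was somewhere in between.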

5. dkh ◴[] No.43544556[source]
The technical requirements always seem to increase at the same rate as the technical advances. I've found this especially true in film/TV. Sure, by 2014ish we were shooting on solid state, had giant RAIDs on set, and storage was cheaper than it had ever been, but we easily negated all of that by shooting on multiple RED cameras in raw at resolutions of 6.5k+. Terabytes of new data each day; even duplicating it before leaving took a lot of time! And then storing it at the office while letting more than one editor work with it at a time meant building a 36-disk ZFS server with 10GbE to each client. Just playing the footage back on a computer required a dedicated PCIe card.
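A rough sketch of why that adds up to terabytes per day. The camera count, hours of recorded footage, per-camera bitrate, and sustained copy speed below are all assumed figures for illustration, not specs from the comment:

```python
CAMERAS = 3          # assumed number of RED cameras rolling
SHOOT_HOURS = 4      # assumed hours of recorded footage per camera per day
MB_PER_S = 150       # assumed raw bitrate per camera, MB/s

daily_tb = CAMERAS * SHOOT_HOURS * 3600 * MB_PER_S / 1e6
print(f"footage per day: ~{daily_tb:.1f} TB")

# One duplication pass before wrap, at an assumed 200 MB/s sustained:
copy_hours = daily_tb * 1e6 / 200 / 3600
print(f"one duplication pass: ~{copy_hours:.1f} h")
```

Even with generous copy speeds, duplicating a day's footage takes most of a working day, and a single 150 MB/s playback stream is ~1.2 Gbit/s per client, which is why 10GbE to each editor was needed.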
6. okdood64 ◴[] No.43547029[source]
Mailing hard drives full of LARGE amounts of data was relatively common as recently as the mid-2010s.