
816 points by tosh | 2 comments
netsec_burn No.41276529
I've used wormhole once to move a 70 GB file. I couldn't possibly have done that before. And yes, I know I used the relay server's bandwidth; I donated to Debian immediately afterwards (they run the relay for the version in the apt package).
replies(3): >>41276736 >>41276769 >>41277271
lotharrr No.41276769
(magic-wormhole author here)

Thanks for making a donation!

I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far: it moves 10-15 TB per month, but that shares a bandwidth pool with other servers I'm renting anyways, so I've only ever had to pay an overage charge once. And TBH if someone made a donation to me, I'd just send it off to Debian anyways.
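
To make the hostname arrangement concrete (a sketch only; the hostnames below are placeholders, not the real ones): as long as the packaged CNAME points at the upstream host, both names resolve to the same address, and repointing the CNAME would move packaged clients elsewhere without touching the upstream default.

    import socket

    def same_target(alias, canonical):
        """True if both hostnames currently resolve to the same addresses,
        i.e. the packaged alias (a CNAME) still points at the canonical host."""
        try:
            _, _, alias_ips = socket.gethostbyname_ex(alias)
            _, _, canonical_ips = socket.gethostbyname_ex(canonical)
        except socket.gaierror:
            return False
        return set(alias_ips) == set(canonical_ips)

    # Placeholder names for illustration only.
    print(same_target("relay.packaged.example", "relay.upstream.example"))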

Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.
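
For the curious, that extension would boil down to an admission check on the relay. A rough sketch with made-up names and limits (nothing here comes from the actual transit-relay code):

    # Hypothetical "declare the size up front" check; the limits are invented.
    MAX_BYTES_PER_TRANSFER = 100 * 10**9   # refuse single transfers over 100 GB
    MAX_BYTES_PER_MONTH = 15 * 10**12      # roughly the 15 TB/month mentioned above

    class RelayAdmission:
        def __init__(self):
            self.admitted_this_month = 0

        def admit(self, declared_bytes):
            """Client states how much it plans to move; the relay says yes or no."""
            if declared_bytes > MAX_BYTES_PER_TRANSFER:
                return False
            if self.admitted_this_month + declared_bytes > MAX_BYTES_PER_MONTH:
                return False
            self.admitted_this_month += declared_bytes
            return True

    relay = RelayAdmission()
    print(relay.admit(70 * 10**9))    # the 70 GB transfer above -> True
    print(relay.admit(500 * 10**9))   # a half-terabyte transfer -> False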

Thanks for using magic wormhole!

replies(4): >>41276923 >>41276954 >>41277403 >>41281702
1. pyrolistical No.41276923
Seems like the only way to ensure wormhole scales is to use the relay server only to set up direct connections.

I know this requires one of the ends to be able to open ports or whatever, but that should be baked into the wormhole setup.
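
Roughly the idea (a sketch only; the function and addresses are illustrative, not magic-wormhole's actual API): the relay exchange only carries connection hints, each side tries the direct routes first, and the relay carries data only when no direct route works.

    import socket

    def connect_with_fallback(direct_hints, relay_hint, timeout=5.0):
        """Try each directly advertised (host, port) first; fall back to the
        relay address only if no direct route works (e.g. both ends NATed)."""
        for host, port in direct_hints:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue
        return socket.create_connection(relay_hint, timeout=timeout)

    # Placeholder addresses; real hints would come from the wormhole exchange.
    # conn = connect_with_fallback([("192.0.2.10", 4001)], ("relay.example", 4001))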

replies(1): >>41277879
2. fullspectrumdev No.41277879
Maybe hole punching or something similar would be worth examining?
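
For anyone unfamiliar, here's a bare-bones UDP hole-punching sketch (not what wormhole does today; peer addresses would come from a rendezvous server, and it only works through NATs that keep per-destination mappings). Both peers send toward each other's public address so each NAT opens a mapping, and once both mappings exist the packets get through.

    import socket

    def udp_hole_punch(local_port, peer_addr, attempts=10, interval=0.5):
        """Both peers call this toward each other's public (ip, port).
        Outbound datagrams create a mapping in the local NAT; once the peer's
        packets match that mapping, they start arriving and the 'hole' is open."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", local_port))
        sock.settimeout(interval)
        for _ in range(attempts):
            sock.sendto(b"punch", peer_addr)
            try:
                _, addr = sock.recvfrom(1024)
                if addr[0] == peer_addr[0]:
                    return sock   # reuse this socket for the real transfer
            except socket.timeout:
                continue
        sock.close()
        return None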