Thanks for making a donation!
I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far: it moves 10-15 TB per month, but it shares a bandwidth pool with other servers I'm renting anyway, so I've only ever had to pay an overage charge once. And TBH if someone made a donation to me, I'd just send it off to Debian anyway.
Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.
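For what it's worth, the server side of that extension could be pretty small: compare a declared size against whatever budget remains. A rough Python sketch, where every name and number is invented and no such message exists in the real protocol yet:

    # illustrative only: the real wormhole protocol has no "declare size"
    # message yet, and these names/limits are made up for the sketch
    MONTHLY_BUDGET_BYTES = 30 * 10**12  # say, 30 TB per month
    used_this_month = 0

    def accept_transfer(declared_bytes: int) -> bool:
        """Server-side check: say yes or no to a client-declared transfer size."""
        global used_this_month
        if used_this_month + declared_bytes > MONTHLY_BUDGET_BYTES:
            return False  # server says no; the client can give up or retry later
        used_this_month += declared_bytes
        return True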
Thanks for using magic wormhole!
I know this requires one of the ends to be able to open ports or whatever, but that should be baked into the wormhole setup.
As I'm sure you're aware: https://www.scaleway.com/en/stardust-instances/ "up to 100Mbps" for $4/month
It relies on a single server, or some small set of donated servers?
NAT <-> NAT traversal is obviously the biggest motivator; without that dual barrier you can just use scp or rsync or sftp.
Is the relay server configurable? It seemed to be implied that it's somewhat hardcoded.
We don't have that yet, but the two sides attempt direct connections first (to all the private addresses they can find, which will include a public address if they aren't behind NAT). They both wait a couple of seconds before trying the relay, and the first successful negotiation wins, so in most cases it will use a direct connection if at all possible.
Then, to send the bulk data, if the two sides can't establish a direct connection, they fall back to the "transit relay helper" server. You only need that one if both sides are behind NAT.
The client has addresses for both servers baked in, so everything works out-of-the-box, but you can override either one with CLI args or environment variables.
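For example, something like this (hostnames are placeholders; `--relay-url` overrides the mailbox server, `--transit-helper` the relay):

    wormhole --relay-url ws://mailbox.example.com:4000/v1 \
             --transit-helper tcp:helperA.example.com:1234 \
             send myfile.txt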
Both sides must use the same mailbox server. But they can use different transit relay helpers, since the helper's address just gets included in the "I want to send you a file" conversation. If I use `--transit-helper tcp:helperA.example.com:1234` and you use `--transit-helper tcp:helperB.example.com:1234`, then we'll both try all of:
* my public IP addresses
* your public IP addresses
* helperA (after a short delay)
* helperB (after a short delay)
and the first one to negotiate successfully will get used.
> since otherwise you just scp or rsync or sftp if you don't have the dual barrier
True, but wormhole also means you don't have to set up pubkey auth ahead of time.
But you can't currently force a particular relay from one side: if you override the relay and the other side doesn't, then you'll both still include their relay hint in the list.
Note that using the relay doesn't affect the security of the transfer: there's nothing the relay can do to violate your confidentiality (learn what you're sending) or integrity (cause you to receive something other than what the sender intended). The worst the relay can do is to prevent your transfer from happening entirely, or make it go slowly.
You can also import the wormhole library directly and use its API to run whatever protocol you want. That mode uses the same kinds of codes as the file-sending tool, but with a different "application ID" so they aren't competing for the same short code numbers. https://github.com/magic-wormhole/magic-wormhole/blob/master... has details.
A technique like this is used to do "invites" in Magic Folder, and also in Tahoe-LAFS. That is, they speak a custom protocol over just the Mailbox server in order to do some secrets-exchanging. They never set up a "bulk transport" link.
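A minimal sketch of that mailbox-only usage, assuming the current Deferred-based Python API (the appid string and the message here are arbitrary examples):

    # exchange one message over the mailbox server only; no bulk-transport
    # connection is ever set up
    import wormhole
    from twisted.internet import reactor
    from twisted.internet.defer import ensureDeferred

    MAILBOX = "ws://relay.magic-wormhole.io:4000/v1"  # the public mailbox server
    APPID = "example.com/my-custom-protocol"  # arbitrary; keeps codes separate

    async def invite():
        w = wormhole.create(APPID, MAILBOX, reactor)
        w.allocate_code()
        code = await w.get_code()
        print("tell your peer:", code)
        w.send_message(b"here is a secret")  # any bytes you like
        reply = await w.get_message()
        print("peer said:", reply)
        await w.close()

    d = ensureDeferred(invite())
    d.addBoth(lambda _: reactor.stop())
    reactor.run()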
There is also a Haskell implementation, if that's of interest.
I'd love to learn about "non-file-transfer" use-cases for Magic Wormhole, so please connect via GitHub (or https://meejah.ca/contact).
Do you also think we should legislate the price of BMWs? You're not forced to buy AWS, there's plenty of alternatives, and the prices that AWS charges are well known. I'm not sure why the government should be involved other than a vague sense of "I want cheap stuff".
If something is overpriced, somebody should jump in and take advantage of a business opportunity. If nobody is jumping in, perhaps the item is not overpriced. Or perhaps there is some systemic issue preventing willing competitors from jumping in. Imagine if somebody tackled the real issue and it unclogged the plumbing for producers of all sorts of medicine besides insulin at the same time.
If a government mandates the sale of an item below the cost of production, they drive out all producers and that product disappears from the market. That is, unless they create some government subsidy or other graft to compensate the government-appointed winners. Any way you slice it, it is a recipe for disaster.
If parties are allowed to compete fairly with each other, somebody will offer a cheaper price. This is already the case with AWS. Consumers may decide that the cheaper product is somehow inferior, but that is not a problem that lawmakers should interfere in.
Given the various factors causing strong lock-in effects, AWS's dominance, and the insanely high pricing of moving data out, I wouldn't be surprised if they got their antitrust moment within a few years.
"Evergreening", a process where the drug manufacturers slightly change the formula or delivery when one patent is running out, to gain a new patent, then stop manufacturing the old formula.
Not saying I want to see AWS bandwidth prices regulated (though I think they could come down and still make a massive profit). But in the case of insulin, the industry has left little choice but government intervention.
Insulin is off patent. Anyone can in theory manufacture it, but the ROI is just not worth it even at current prices. Manufacturing it is not easy, there are humongous amounts of regulation, and you'll probably need to do a couple of clinical trials too... so you end up with an oligopoly of incumbents that nobody wants to challenge, and prices that are all aligned.
Throttling after 32 TB: https://help.contabo.com/en/support/solutions/articles/10300...
Some commentary: https://hostingrevelations.com/contabo-bandwidth-limit/
I wouldn't say that they're super dependable or that the VPSes are very performant, but for the most part they work and are affordable.
Alternatively, there's also Hetzner, as sibling comments mentioned: https://www.hetzner.com/cloud/
They do have additional fees, though:
> You’ll get at least 20 TB of inclusive traffic for cloud servers at EU and US locations and 1 TB in Singapore. For each additional TB, we charge € 1.00 in the EU and US, and € 7.40 in Singapore. (Prices excl. VAT)
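So, as a worked example from that pricing: pushing 30 TB out of an EU location in one month would be the base server price plus 10 TB over the 20 TB allowance, i.e. 10 × €1.00 = €10.00 extra.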
I also used to use Time4VPS, but they have gradually been raising prices, and the traffic I'd get before being throttled was less than what Contabo offers.
All of them require an account on the other machine and aren't really suitable for a quick one-off file transfer from one computer to another that you don't own.
If I have a direct network connection I tend to go with:
python3 -m http.server
or tar ... | nc
Neither is great, but at least you'll find them preinstalled on many machines.
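For the nc route, one possible pairing looks like this (ports and paths are arbitrary, and flag syntax differs between netcat flavors: BSD nc wants `nc -l 9000` without the `-p`):

    # on the sending machine (traditional netcat syntax):
    tar cz somedir | nc -l -p 9000
    # on the receiving machine:
    nc sender.example.com 9000 | tar xz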