p2pcopy https://github.com/psantosl/p2pcopy
pcp https://github.com/dennis-tra/pcp
wormhole-william https://github.com/psanford/wormhole-william
python -m http.server
Typically, a firewall allows outbound connections without needing an explicit rule for the protocol, and in the case of magic wormhole, both sides make outbound connections. So it passes right through.
If you've got security-minded folk managing that sort of thing for you, it's possible that magic wormhole will upset them for this reason. More for policy/compliance reasons than actual security ones.
On every *nix platform I would just install the `syncthing` package and use it quite easily. I've experimented with some wormhole stuff before and looked at this package some, but there would be a lot of extra steps involved because of the packaging choices.
The package was removed from Fedora in 37, with the "replacement" being to use a Snap instead [1]. That doesn't make any sense, because that platform is heavily invested in Flatpak, so a Snap is very "against the grain." There are some other "Wormhole" apps on Flathub that are verified, but none of them are the same as this. Are they compatible protocol-wise, or just similarly named? That's assuming you want to enter the game of "is this app safe or made by the same entity?"
I want to enjoy this project and others like it, but it's very confusing. The goal of these tools is to simplify the transfer of files and to take most of the "pain" out of doing that. Yet to actually use most of these tools in any meaningful way between two computers, you need to invest more time into getting them to run on those systems. My brain tells me that to make this work you need a big button on the homepage for each well-supported platform that just says "Download for Windows", along with one-click solutions for various Linux platforms (one-line command, Flatpak, AppImage, etc.)
[1]: https://magic-wormhole.readthedocs.io/en/latest/welcome.html...
For some more exotic testing, I was able to run my own magic wormhole relay[1], which let me tweak some things for faster/more reliable huge file copies. I still hate how often Google Drive will fall over when you throw a 10s-of-GB file at it.
[1] https://www.jeffgeerling.com/blog/2023/my-own-magic-wormhole...
Thanks for making a donation!
I run the relay server, but the Debian maintainer agreed to bake an alternate hostname into the packaged versions (a CNAME for the same address that the upstream git code uses), so we could change it easily if the cost ever got to be a burden. It hasn't been a problem so far: it moves 10-15 TB per month, but shares a bandwidth pool with other servers I'm renting anyway, so I've only ever had to pay an overage charge once. And TBH if someone made a donation to me, I'd just send it off to Debian anyway.
Every once in a while, somebody moves half a terabyte through it, and then I think I should either move to a slower-but-flat-rate provider, or implement some better rate-limiting code, or finally implement the protocol extension where clients state up front how much data they're going to transfer, and the server can say no. But so far it's never climbed the priority ranking high enough to take action on.
Thanks for using magic wormhole!
The Magic Wormhole project has its own alternate implementation, https://github.com/magic-wormhole/magic-wormhole.rs, which is also used by the delightfully designed GNOME and Android apps: [Warp](https://apps.gnome.org/Warp/) and [Wormhole](https://play.google.com/store/apps/details?id=eu.heili.wormh...), respectively.
The protocol enumerates all the IPv4 addresses on each side, and attempts to connect to all of them, and the first successful handshake wins.
So if your VPN arrangement enables a direct connection, `wormhole send` will use that, which will be faster than going through the relay (and cheaper for the relay operator).
A basic VPN gets you safe connectivity between the two sides, but transferring a file still requires something extra, like a preconfigured ssh/scp account, or a webserver and the recipient running `curl`/`wget`. magic-wormhole is intended to make that last part easy, at least for one-off transfers: just `wormhole send FILENAME` on one side, and `wormhole receive [CODE]` on the other.
"Rust implementation of Magic Wormhole, with new features and enhancements": https://github.com/magic-wormhole/magic-wormhole.rs
I know this requires one of the ends to be able to open ports or whatever but that should be baked into the wormhole setup.
As I'm sure you're aware: https://www.scaleway.com/en/stardust-instances/ "up to 100Mbps" for $4/month
magic-wormhole doesn't need the initial configuration, but only lets you transfer one file (or one directory). So it's better for ad-hoc transfers, or for safely establishing the configuration data you need for a more long-term tool. The analogy might be that magic-wormhole is to syncthing as scp is to rsync.
The snap/flatpak thing is weird, and I share your discomfort with uncertain provenance of software delivered that way.
I wrote the original version in Python, and took advantage of a number of useful dependencies (Twisted, to begin with), but a consequence is that installing it requires a dozen other packages, plus everything that Python itself wants. I've watched multiple people express dismay when they do a `brew install magic-wormhole` and the screen fills with dependencies being downloaded. If I knew Go, or if Rust had existed when I first wrote it, I might have managed to produce a single-file executable, which would be a lot better for deployment/distribution purposes, like wormhole-william provides today.
Setting this up on a PC and Mac to transfer files back and forth.
There’s an XKCD for this, too: https://xkcd.com/949/
I think on Mac, Safari usually doesn't work as well as Chrome, but I've been able to transfer from Windows to iOS, Windows to macOS, and macOS to iOS without installing a thing.
I would have expected the relay server to only be used for the initial handshake to punch through NAT, after which the transfer is P2P. Only in the case of some network restrictions would the data really flow through the relay. How could they afford to run the free relay otherwise?
I've been meaning to find the time to add NAT-hole-punching for years, but haven't managed it yet. We'd use the mailbox server messages to help the two sides learn about the IP addresses to use. That would increase the percentage of transfers that avoid the relay, but the last I read, something like 20% of peer-pairs would still need the relay, because their NATs are too restrictive.
The relay usage hasn't been expensive enough to worry about, but if it gets more popular, that might change.
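For the curious, the simplest-case form of hole punching is just simultaneous sends from both sides. A hypothetical Python sketch (not project code), assuming each peer has already learned the other's public (ip, port) through messages relayed via the mailbox server:

```python
# Hypothetical sketch of simplest-case UDP hole punching, NOT real
# magic-wormhole code: assume both peers already learned each other's
# public (ip, port) via the mailbox server.
import socket

def punch(local_port, peer_addr, tries=10):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", local_port))
    s.settimeout(1.0)
    for _ in range(tries):
        # Our outbound packets open a mapping in our own NAT; the peer
        # does the same, so (for non-symmetric NATs) both paths open up.
        s.sendto(b"punch", peer_addr)
        try:
            data, addr = s.recvfrom(1500)
            if addr == peer_addr:
                return s  # direct path established
        except socket.timeout:
            continue
    return None  # overly-restrictive NAT: fall back to the transit relay
```

The stubborn ~20% is mostly symmetric NATs, which allocate a fresh external port for every destination, so the address learned through the mailbox is useless and the relay is still needed.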
It relies on some singular or small set of donated servers?
NAT <-> NAT traversal is obviously the biggest motivator, since otherwise you just scp or rsync or sftp if you don't have the dual barrier.
Is the relay server configurable? Seemed to be implied it is somewhat hardcoded.
- Generate a short code
- Use the code as the seed to deterministically generate a Syncthing device key + config
Since the Syncthing device key could be generated deterministically, sharing the code with both sides would be enough to complete a dir/file transfer and then discard the keys.
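The derivation half of that idea is easy to sketch. Assuming (hypothetically) that Syncthing could be handed a raw Ed25519 seed, both sides could stretch the short code into identical key material; everything here (salt string, KDF parameters) is made up for illustration:

```python
# Sketch of "deterministic key from a short code"; assumes
# (hypothetically) that Syncthing could accept a raw keypair.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def key_from_code(code: str) -> Ed25519PrivateKey:
    # Stretch the low-entropy code into a 32-byte seed. A real design
    # would want a memory-hard KDF (scrypt/argon2) and a per-app salt.
    seed = hashlib.pbkdf2_hmac("sha256", code.encode(),
                               b"example-syncthing-handoff", 100_000)
    return Ed25519PrivateKey.from_private_bytes(seed)

# Both sides run key_from_code("4-purple-sausages") and derive the same
# key, so sharing the code is enough to introduce the two devices.
```

One caveat versus a PAKE: anyone who ever sees the code can reconstruct the whole keypair, so the code has to stay secret for as long as the keys live, which is fine if they're discarded right after the transfer.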
Yup it's what I do, that 3rd computer having a fixed IP. Conveniently that computer can also keep a copy of the file(s).
Linux/BSDs/OS X (which is kinda a Unix too) all come stock with scp* and I don't really use Windows, so I'm a happy camper.
Once that's established, and assuming that the two machines can reach each other (the server isn't behind a NAT box), then the client can `scp` and `rsync` all they want.
Magic-wormhole doesn't require that coordination phase. The human sending the file runs `wormhole send FILENAME` and the tool prints a code. The human receiving the file runs `wormhole rx CODE`. The two programs handle the rest. You don't need a new account on the receiving machine. The CODE is much much shorter than the two pubkeys that an SSH client/server pair require, short enough that you can yell it across the room, just a number and two words, like "4-purple-sausages". And you only need to send the code in one direction, not both.
Currently, the wormhole programs don't remember anything about the connection they just established: it's one-shot, ephemeral. So if you want to send a second file later, you have to repeat the tell-your-friend-a-code dance (with a new code). We have plans to leverage the first connection into making subsequent ones easier to establish, but no code yet.
Incidentally, `wormhole ssh` is a subcommand to set up the ~/.ssh/authorized_keys file from a wormhole code, which might help get the best of both worlds, at least for repeated transfers.
The entire setup phase of magic wormhole is "copy those 3 words" and boom you're done.
We don't have that yet, but the two sides attempt direct connections first (to all the private addresses they can find, which will include a public address if they aren't behind NAT). They both wait a couple of seconds before trying the relay, and the first successful negotiation wins, so in most cases it will use a direct connection if at all possible.
Magic Wormhole: Get things from one computer to another, safely - https://news.ycombinator.com/item?id=27262193 - May 2021 (178 comments)
The lack of improvement in these tools is pretty devastating. There was a flurry of activity around PAKEs like 6 years ago now, but we're still missing:
* reliable hole punching so you don't need a slow relay server
* multiple simultaneous TCP streams (or a carefully designed UDP protocol) to get large amounts of data through long fat pipes quickly
Last time I tried using a Wormhole to transmit a large amount of data, I was limited to 20 MB/sec thanks to the bandwidth-delay product. I ended up using plain old HTTP; with aria2c and multiple streams I maxed out a 1 Gbps line.
IMO there's no reason why PAKE tools shouldn't have completely displaced over-complicated stuff like Globus (proprietary) for long distance transfer of huge data, but here we are stuck in the past.
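When the far side can serve plain HTTP, the multi-stream workaround is easy to reproduce by hand. A rough sketch of what aria2c's `-x` option does under the hood (URL and stream count here are placeholders):

```python
# Rough sketch of a multi-stream HTTP download (what `aria2c -x8` does),
# assuming the server supports Range requests. URL is a placeholder.
import concurrent.futures
import urllib.request

URL = "https://example.com/big.bin"
STREAMS = 8

def fetch(start: int, end: int) -> tuple[int, bytes]:
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

head = urllib.request.Request(URL, method="HEAD")
size = int(urllib.request.urlopen(head).headers["Content-Length"])
step = size // STREAMS
spans = [(i * step, (size - 1) if i == STREAMS - 1 else (i + 1) * step - 1)
         for i in range(STREAMS)]

with concurrent.futures.ThreadPoolExecutor(STREAMS) as pool, \
        open("big.bin", "wb") as out:
    for start, data in pool.map(lambda s: fetch(*s), spans):
        out.seek(start)   # each stream's TCP window ramps independently,
        out.write(data)   # so aggregate throughput beats a single stream
```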
Then, to send the bulk data, if the two sides can't establish a direct connection, they fall back to the "transit relay helper" server. You only need that one if both sides are behind NAT.
The client has addresses for both servers baked in, so everything works out-of-the-box, but you can override either one with CLI args or environment variables.
Both sides must use the same mailbox server. But they can use different transit relay helpers, since the helper's address just gets included in the "I want to send you a file" conversation. If I use `--transit-helper tcp:helperA.example.com:1234` and you use `--transit-helper tcp:helperB.example.com:1234`, then we'll both try all of:
* my public IP addresses
* your public IP addresses
* helperA (after a short delay)
* helperB (after a short delay)
and the first one to negotiate successfully will get used.
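A toy model of that race in Python (assuming we already hold decoded lists of (host, port) hints; the real transit protocol also does a handshake before declaring a winner):

```python
# Toy model of the candidate race, not the real transit code: dial all
# hints at once, give relay hints a head-start handicap, first success
# wins and the rest are abandoned.
import asyncio

async def dial(host, port, delay=0.0):
    await asyncio.sleep(delay)  # relays wait a couple of seconds
    reader, writer = await asyncio.open_connection(host, port)
    return writer

async def first_successful(direct_hints, relay_hints):
    tasks = [asyncio.ensure_future(dial(h, p)) for h, p in direct_hints]
    tasks += [asyncio.ensure_future(dial(h, p, delay=2.0)) for h, p in relay_hints]
    for fut in asyncio.as_completed(tasks):
        try:
            winner = await fut
        except OSError:
            continue            # that candidate failed; keep racing
        for t in tasks:
            t.cancel()          # first success wins; abandon the rest
        return winner
    raise ConnectionError("every candidate failed: no path to peer")
```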
> since otherwise you just scp or rsync or sftp if you don't have the dual barrier
True, but wormhole also means you don't have to set up pubkey ahead of time.
The encrypted connection is used to exchange IP addresses... maybe you're thinking of the module that e.g. can modify FTP messages to replace the IP addresses with NAT-translated ones? Our encryption layer would prevent that, but we'd probably get more benefit from implementing WebRTC or a more general hole-punching scheme than by having the kernel be able to fiddle with the addresses.
Magic-wormhole can't use that approach, because our security model rules out reliance on servers for confidentiality or integrity. We could safely store ciphertext without violating the model, but you need an interactive protocol with the sender to get the decryption key (otherwise the wormhole code would be a lot larger), so it wouldn't improve the experience very much, and would cost a lot more to operate. The wormhole servers have trivial storage requirements, so the only real costs are bandwidth for the transit relay helper, for when the two sides can't make a direct connection.
How about Warpinator [1]?
It's the application that I use simply because it came by default with my choice of Linux distro, and it works fine. Main use case for me is sending recently taken photos from my phone to the computer.
When you choose the files you want to transfer, it gives you a 6-digit code or a QR code. Once you enter that, the files are transferred! It's available for almost all major platforms, but isn't open source. [3]
I haven't read their privacy policy. Frankly, I'd rather not know...
[1] https://send-anywhere.com/
[2] https://support.send-anywhere.com/hc/en-us/articles/11500385...
[3] https://support.send-anywhere.com/hc/en-us/articles/11500388...
https://zynk.it is a new project I've been working on together with a small team, aimed at delivering a truly easy, fast, efficient, unlimited, privacy-respecting and pain-free file-sharing experience. It's peer-to-peer, E2EE and avoids centralized storage, aligning with the ethos of control and transparency we often discuss here. It allows users to send and receive any file(s) or folder(s), without any limits whatsoever, between any device/OS and any device/OS. Send and forget; Zynk takes care of all the heavy lifting.
What I hope sets Zynk apart is that it is built to literally be used by anyone, be it a power user, or my mom.
One of my main goals with this project is to remove any pains associated with data transfer once and for all, for any use case.
I'm curious if this resonates with you—would you use it? What would make it indispensable for your workflows?
I'd be happy to discuss it more if anyone is interested. Feel free to sign up for early access on the site.
If I tried to actually come up with the actual commands for this, I'm sure I'd burn a whole afternoon on fiddling with it.
You should not need HTTP, FTP, etc. You should be able to use something which can work on any computer, such as just TCP/IP. Unfortunately, some systems (especially some Windows systems) will make that difficult. Using something more complicated, such as Magic Wormhole and other programs, means you will need two computers that support such a thing. I did once try to transfer a file from Windows to Linux, and had to install ncat to do so, but Windows deletes it by default; I was able to make it not do that, though.
If you have to install software anyway why not install wormhole directly?
Netcat works in some circumstances and it’s fine to use it in those. But wormhole covers different scenarios and your netcat proposal doesn’t cover or have advantages in some of them.
The best article I've found about NAT traversal is this article from Tailscale: https://tailscale.com/blog/how-nat-traversal-works
Form doesn't work on FF.
Form only works on Chrome, but just gives me a signup: https://i.imgur.com/ePcBVBE.png
Page has zero meat to the info...
No screenshots, all cartoons. No talk of price/model/package/install/ etc...
No confirmation email with info...
Nice blurb - but needs work.
But you can't currently force that from one side: if you do that, but the other side doesn't override it too, then you'll both include their relay hint in the list.
Note that using the relay doesn't affect the security of the transfer: there's nothing the relay can do to violate your confidentiality (learn what you're sending) or integrity (cause you to receive something other than what the sender intended). The worst the relay can do is to prevent your transfer from happening entirely, or make it go slowly.
You can also import the wormhole library directly and use its API to run whatever protocol you want. That mode uses the same kinds of codes as the file-sending tool, but with a different "application ID" so they aren't competing for the same short code numbers. https://github.com/magic-wormhole/magic-wormhole/blob/master... has details.
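A minimal sending-side sketch of that API, based on the documented Deferred/async style (the appid here is made up; real applications pick their own unique string, and the mailbox URL is the public default):

```python
# Minimal sketch of the magic-wormhole library API (sending side).
# The APPID is a made-up example: real apps choose their own.
import wormhole
from twisted.internet.defer import ensureDeferred
from twisted.internet.task import react

APPID = "example.com/my-custom-protocol"
MAILBOX = "ws://relay.magic-wormhole.io:4000/v1"

async def main(reactor):
    w = wormhole.create(APPID, MAILBOX, reactor)
    w.allocate_code()                 # or w.set_code("4-purple-sausages")
    code = await w.get_code()
    print("tell your peer:", code)
    w.send_message(b"hello from my protocol")
    reply = await w.get_message()     # waits for the peer's first message
    print("peer said:", reply)
    await w.close()

react(lambda r: ensureDeferred(main(r)))
```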
The File System Access API is a great way to write chunks of a file at a time, but for now Firefox doesn't support it.
https://wicg.github.io/file-system-access/
https://mozilla.github.io/standards-positions/#native-file-s...
But wormhole has turned out to be more usable in some cases. I've had days where I'm sshed into a bastion host, then sshed from there into a server, then cd'd into a deep directory with lots of spaces and quotes and shell metacharacters in the path, and then found a file that I wanted to copy out. To do that with ssh, I have to first configure ProxyJump to let me reach the internal machine with a single ssh command, and then figure out how to escape the pathname correctly (which somehow never works for me). With `wormhole send` I get to skip all of that, at the cost of having to do it once per file.
However, it is true that NAT needs to be considered. Then you will need to set up an intermediary (which further affects compatibility) if you cannot connect the computers directly (which also ought to be possible with a null modem cable, but that is often not available).
It also does not handle multiple files at once; how that should be handled will differ between computers anyway, since the files are different on different computers.
Due to such things, other programs such as Magic Wormhole might help, although even then it will not necessarily work with all computers, because you will need a computer that is compatible with Magic Wormhole.
Another alternative, which might sometimes be suitable, would be LAN connections. This is not always suitable, but when it is, you can use the LAN addressing directly, too.
For GPG to add security, you also have to make sure the GPG key is transferred safely, which adds work to the transfer process. Either you're GPG-encrypting to a public key (which you must have copied from the receiving side to the sending side at some point), or you're using a symmetric-key passphrase (which you must generate randomly, to be secure, and then copy it from one side to the other).
I should note that magic-wormhole's encryption scheme is not post-quantum-secure. So if you've managed to get a GPG symmetric key transferred to both sides via PQ-secure pathways (I see that current OpenSSH 9.8 includes the kex algorithm "sntrup761x25519-sha512@openssh.com", where NTRU is PQ-secure), then your extra GPG encryption will indeed provide you with security against a sufficiently large quantum computer, whereas magic-wormhole alone would be vulnerable.
The `wormhole send` tool is a good demonstration of what you can do with that API, and a convenient tool in its own right, but wasn't designed to be the end-all-be-all of the file transfer universe, nor to be a building block for other tools layered on top.
The application you describe would be pretty cool (the UI might look more like dropping a file into a Slack DM chat window). But I'd recommend against using automated calls to `wormhole send` to accomplish it: you'd be cutting against the grain, and adding load to the mailbox server that everyone else uses. Instead, build a separate app or daemon, which can use the magic-wormhole API to perform just the introduction step. You'd push the "invite a peer" button on your app, it would display a wormhole code, you speak that to your pal, they push the "accept invitation" button on their app, type in the code, and then the two apps exchange keys/addresses. All subsequent transfers use those established keys, and don't need to use the wormhole code again. You should never need to perform a wormhole dance more than once per peer.
A technique like this is used to do "invites" in Magic Folder, and also in Tahoe-LAFS. That is, they speak a custom protocol over just the Mailbox server in order to do some secrets-exchanging. They never set up a "bulk transport" link.
There is also a Haskell implementation, if that's of interest.
I love to learn about "non-file-transfer" use-cases for Magic Wormhole, so please connect via GitHub (or https://meejah.ca/contact)
The great thing about magic wormhole is that the protocol is open, and anyone can implement it for themselves.
For example, there is the reference implementation in Python; then there are implementations in Go, Rust, and Haskell, plus Flutter bindings so you can use it in Flutter. There are multiple GUI implementations for all operating systems, even mobile and the web (via WASM). It has also been integrated into other open source projects like tmux and termshark. https://magic-wormhole.readthedocs.io/en/latest/ecosystem.ht...
Also other comments in this thread mention many already existing alternatives e.g. https://news.ycombinator.com/item?id=41276443
Basically what I'm saying is, I'm locked to the applications you and your team have built. I couldn't "hack" something quickly together to integrate it into other things, I couldn't extend your clients by modifying the source code and I also couldn't verify that your code really does what it says (E2EE, privacy-respecting).
> What I hope sets Zynk apart is that it is built to literally be used by anyone, be it a power user, or my mom.
I'm sure that a more friendly UI/UX for non-power-users would be great, but IMO it would be even better if it used an open protocol like magic wormhole; that way the receiver does not also need to install a Zynk client, but can use whatever they are already using. For example, https://winden.app/about already exists, seems to be a very user-friendly UI, is open source, and works without installing anything.
Maybe I'm just too much of a "power user" (I use Linux on my computers/servers and a custom ROM on my phone) to understand what zynk could provide to me.
But I think (which means I don't have sources to back this up) the audience which does not care about e2ee/privacy already uses the solutions implemented into their OS (like AirDrop/Quick Share, share via iCloud/Google Drive/OneDrive/...) and from my experience the audience that cares about privacy/e2ee has a large overlap with the Open Source community which is more likely to use solutions like magic wormhole or croc.
Then, trying to use e.g. TCP Prague (or, I guess, its congestion control with UDP-native QUIC) as a scalable congestion controller, to take care of the throughput restrictions caused by a high bandwidth-delay product.
Since it uses the URL fragment (the part after the #), the passphrase doesn't even get sent to the server hosting the website, so it all happens client-side.
Do you also think we should legislate the price of BMWs? You're not forced to buy AWS, there's plenty of alternatives, and the prices that AWS charges is well known. I'm not sure why the government should be involved other than a vague sense of "I want cheap stuff".
If something is overpriced, somebody should jump in and take advantage of a business opportunity. If nobody is jumping in, perhaps the item is not overpriced. Or perhaps there is some systemic issue preventing willing competitors from jumping in. Imagine if somebody tackled the real issue and it unclogged the plumbing for producers of all sorts of medicine beside insulin at the same time.
If a government mandates the sale of an item below the cost of production, they drive out all producers and that product disappears from the market. That is, unless they create some government subsidy or other graft to compensate the government-appointed winners. Any way you slice it, it is a recipe for disaster.
If parties are allowed to compete fairly with each other, somebody will offer a cheaper price. This is already the case with AWS. Consumers may decide that the cheaper product is somehow inferior, but that is not a problem that lawmakers should interfere in.
On Windows and Linux, there’s RiftShare which has a gui: https://riftshare.app/
The various factors causing strong lock-in effects, their dominance, and the insanely high pricing of moving data out of AWS - I wouldn't be surprised if they got their antitrust moment within a few years.
I'm working on a branch that considerably improves the current code, and hole punching in it works like a Swiss watch. If you're interested, you should check out some of the features that work well already.
"Evergreening", a process where the drug manufacturers slightly change the formula or delivery when one patent is running out, to gain a new patent, then stop manufacturing the old formula.
Not saying I want to see AWS bandwidth prices regulated (though I think they could come down and still make a massive profit). But in the case of insulin, the industry has left little choice but government intervention.
How does this work from a security perspective? Given the lack of apparent entropy, can't a malicious actor conceivably enter the correct phrase before the good actor?
“An important property is that an eavesdropper or man-in-the-middle cannot obtain enough information to be able to brute-force guess a password without further interactions with the parties for each (few) guesses. This means that strong security can be obtained using weak passwords.”
To be fair, our offshore team was so bad with security ("doesn't work? Turn it off!") that it is unfortunately necessary. If there were a slightly different app, "magick wormhole", they'd be likely to use it if it had a pretty GUI.
Like if we didn’t have strict security policies in place how do you manage 500+ “developers” who have no repercussions? Part of it is getting the cheapest labor possible, part of it is security is hard to do right and part of it is english as a second language issue.
It is much easier to put everyone in an incredibly locked-down environment than it is to have them decide what's secure or not. If I were to fork this, use our own DNS internally, and put a GUI wrapper on it, and there were a flaw in the implementation of magic wormhole, I'd be in much more trouble than if I'd used CrowdStrike, which no one will get fired for using, for example.
Insulin is off patent. Anyone can in theory manufacture it, but the ROI is just not worth it even at current prices. Manufacturing it is not easy, there are humongous amounts of regulations, and you will probably need to do a couple of clinical trials too... so you end up with an oligopoly of incumbents that nobody wants to challenge, and prices that are all aligned.
This issue is one of those that, when people are screaming "why are folks using Chrome, why haven't we all switched to Firefox," I point to and say: because I want a good web, I want a fast web, I want a featureful web, and Mozilla definitely does not share my priorities.
The first thing I install on any new computer.
https://github.com/magic-wormhole/magic-wormhole/blob/master... has a larger writeup.
Just to offer a different perspective, though: I don't consider a lot of the things that Chrome does (bittorrent functionality in this case) to be part of "a good web", or really part of the web at all. I don't need my browser to be an operating system. I can use other apps to do other things.
It's much more important to me to avoid another Internet Explorer-like monoculture, and to have a browser that's relatively respectful of privacy.
We are not planning to open source it, but who knows what the future might bring.
I too love and appreciate open protocols and tools, heck, I also tend to gravitate towards that by default as well, but when something better comes up that's not open source and I can use it better/easier, I do.
We'll release a CLI for Windows, macOS and Linux which will be easy to use and flexible/scriptable, so you could use that to hack together anything you need.
You definitely are a power user. While I don't disagree on the P2P/privacy overlap with open source, I do think the world has yet to have the final say about data transfer. Yes, literally countless tools and methods of moving data exist out there, but they aren't universal (AirDrop/Quick Share don't work between all platforms), they pretty much always have random limits and limitations, and in most cases they aren't really efficient or pain free -- we're trying to do better, hope we make it! :)
Stay tuned and if you or anyone else would like to give it a try in the mean time drop me a line, m <at> zynk.it
That said, I can see how autocompleting from the first three letters of each word for "beaver-grass-hypochondriac-shelf" might be easier for a human than typing "beagrahypshe".
Throttling after 32 TB: https://help.contabo.com/en/support/solutions/articles/10300...
Some commentary: https://hostingrevelations.com/contabo-bandwidth-limit/
I wouldn't say that they're super dependable or that the VPSes are very performant, but for the most part they work and are affordable.
Alternatively, there's also Hetzner, as sibling comments mentioned: https://www.hetzner.com/cloud/
They do have additional fees, though:
> You’ll get at least 20 TB of inclusive traffic for cloud servers at EU and US locations and 1 TB in Singapore. For each additional TB, we charge € 1.00 in the EU and US, and € 7.40 in Singapore. (Prices excl. VAT)
I also used to use Time4VPS; however, they have gradually been raising prices, and the traffic I'd get before being throttled would be less than that of Contabo.
It absolutely isn't. See my rant: https://news.ycombinator.com/item?id=24519895
Just think of each word as being one character from a large-ish alphabet.
Code-words are used so that the one time secret can be easily remembered or shared over a voice channel.
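Concretely, the default code is a channel number plus two words, each drawn from a 256-entry list, so the secret part carries 16 bits:

```python
# Entropy of the default "NNN-word-word" code: each word comes from a
# 256-entry list, so two words = 16 bits = 65536 possibilities.
import math

wordlist_size = 256
words = 2
guesses = wordlist_size ** words          # 65536
bits = words * math.log2(wordlist_size)   # 16.0
print(guesses, bits)
```

Sixteen bits would be hopeless against offline brute force, but the PAKE limits an attacker to one online guess per wormhole, which is where the 1-in-65536 hijack figure cited elsewhere in the thread comes from.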
All of them require an account on the other machine and aren't really suitable for a quick one-off file transfer to a computer that you don't own.
If I have a direct network connection I tend to go with:
python3 -m http.server
or tar ...| nc
Neither of which is great, but at least you'll find them on many machines already preinstalled.

Why aren't people who know about this and hold important positions doing something about the ecosystem? What can people with no experience but care do to ensure the longevity of open source tools like this?
That is as opposed to sending a public key or key fingerprint. In that case there would be little value to the attacker in seeing the transfer. They would have to MITM the transfer of the key itself. If you wanted to prevent the attacker from sending bogus files you would also have to transfer some sort of signing key.
So a short, time limited, secret vs a longer public value.
The secret can be any string you like, the protocol doesn't care, instead of "4-purple-sausages" it could be "4-65535" or "4-qtx", and have the same resistance to attack. The CLI encodes the secret as two words from the PGP word list, which was designed to be spoken and transcribed accurately even over a noisy voice channel (sort of like the Alpha/Bravo/Charlie/.. "military phonetic alphabet", except it's two alternating lists of 256 words each). In practice that pair of words is much easier to speak and listen and hold in your head for a minute or two than a random number, or the first two letters of each word divorced from the words themselves.
There are some provisions in the protocol (not yet implemented) to allow alternate word lists, so if the sender uses e.g. a French wordlist instead of the default English one, the receiving CLI learns about it early enough so that "wormhole rx" can auto-complete against the correct list. The server/attacker could learn which wordlist is in use, but still faces the same level of entropy about the PAKE secret itself.
We've sketched out some approaches to working in a disconnected environment like that, using local multicast and mDNS/ZeroConf/Bonjour to act as an alternate mailbox server (https://github.com/magic-wormhole/magic-wormhole/issues/48). There's still design work needed, though, and I fear it would degrade the experience for fully-connected nodes (extra timeouts), so it might want to be opt-in with a `--offline` flag on both sides.
I was a little surprised to learn that they'll use Swift for future development. It's not among the languages I usually think of for cross-platform work. On the other hand, maybe Ladybird using it will help drive improvement in that area.
https://nitter.privacydev.net/awesomekling/status/1822236888...
read the doc.
SSH requires previous arrangement (you need to transfer the SSH key to your friend); magic wormhole is a way to arrange such a meeting without physical proximity.
or requires typing a password into a phone.
A bigger reason you want multiple streams is because most network providers use a stream identifier like the 5-tuple hash to spread traffic, and support single-stream bandwidth much lower than whatever aggregate they may advertise.
I use it with my wife's phone to transfer files between her drawing tablet and the Linux system she uses for Blender every day.
* Is there an app for it, where I can share the password via QR code, for when the data is too big for QR codes?
* What do you plan on doing regarding quantum computation? Switching to some PQ-safe cryptography, also to be safe against save-now-decrypt-later attacks?
* Is it possible to extend your protocol over more generic proxies like TURN servers?
Better than rsync (over SSH):
- You do not need an existing trust relationship (or trust on first use)
- Easier to punch holes through NAT/firewalls
- Easier for non-technical users
Worse than rsync (over SSH):
- Multi-file support is poor (basically it zips up everything before even starting to transfer)
- Zero support for incremental transfers
- Cannot reuse existing trust relationships (and thus cannot be used non-interactively)
- More easily DoSed
- 1/65536 chance of connection being hijacked (by default)
- Higher CPU usage
I put it on npm primarily so I could send things to other JS developers with an absolute minimum of fuss: one command, total, instead of installing a tool and then running the command.
Yeah, that's the issue. I didn't have root permissions on either side. Moreover, a transfer tool should just work without requiring its users to have expert knowledge like this.
In this case, I checked the round-trip ping time, divided the buffer size by it, and it agreed with the speeds I was seeing within ~5%, so it was not an issue with throttling. Actually, if I were a network provider interested in doing this, I would throttle on the 2-tuple as well.
I used the term "1 Gbps line" just because it's a well known quantity - the limitation of Gigabit Ethernet. The point wasn't that multiplexing TCP can get you 6x better speeds, it's that it improved the speed so much that the TCP bandwidth-delay product was no longer the limiting factor in the transfer.
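The back-of-the-envelope arithmetic, with assumed round numbers that happen to match the ~20 MB/s figure:

```python
# Bandwidth-delay product sanity check: a single TCP stream can't move
# more than (window / RTT). Buffer size and RTT here are assumptions
# picked to match the ~20 MB/s figure above.
window = 2 * 1024 * 1024   # assumed 2 MiB socket buffer
rtt = 0.105                # assumed 105 ms round trip
print(window / rtt / 1e6)  # ~20 MB/s, regardless of link capacity
```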
What about using OnionShare to solve the NAT'ing, or at least Tor for handshaking?
Tor is basically a distributed set of proxy servers, so using onion servers (aka Hidden Services) is a viable, albeit somewhat slow, way to manage even the strict NAT boxes.
If you have Tor installed, then `wormhole send --tor` will automatically use an onion service to do exactly that.
Might be a totally dumb question but how does this work? Wouldn’t you already have to have communication to set a time?
The server sends an application/octet-stream response and a few other headers and it works.
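For reference, a stdlib sketch of that approach (payload and filename are placeholders):

```python
# Minimal sketch of the application/octet-stream approach with Python's
# stdlib; contents and filename are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Download(BaseHTTPRequestHandler):
    def do_GET(self):
        data = b"example payload\n"
        self.send_response(200)
        # octet-stream + attachment makes the browser save, not render
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Disposition", 'attachment; filename="file.bin"')
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

HTTPServer(("", 8000), Download).serve_forever()
```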
See also https://github.com/magic-wormhole/magic-wormhole-protocols/
The Python implementation has the most features. More about which implementations support what features is here: https://magic-wormhole.readthedocs.io/en/latest/ecosystem.ht...