I gave up and just set up WireGuard directly instead. I don't trust Tailscale either if that's their attitude towards privacy; it's permanently marred my view of their product.
There also exists an open-source implementation of the Tailscale control server [1] that you could self-host.
People sometimes ask me to describe the differences between Nebula and Tailscale. One of the most important relates to performance and scale. Nebula can handle the amount of internal network traffic and scalability of nodes (100k+ nodes, constant churn) required on a large network like Slack's, but Tailscale cannot. Tailscale's performance is fine for many situations, but not suitable for infrastructure. It is just a fundamentally different set of goals.
Nebula was created and open sourced before Tailscale was offering their product, but their architecture is similar to older offerings in the market, and is something we purposely avoided when creating Nebula.
Fwiw, I even recommend Tailscale to friends who want to do things like connect to their Plex server or Synology or [other thing] at home remotely. It simplifies this kind of thing greatly and doesn't require you to set up any infrastructure you control directly, which can be a headache for folks who just want to reach a handful of computers/devices.
Anywho, the more important bit is my point about performance. Nebula is significantly faster than userspace WireGuard, and plain userspace WireGuard is (last I checked) a bit faster than Tailscale, due to the additional code needed for things like your ACLs. At gigabit scale it is probably fine and not noticeable, but at Slack we needed to scale to 10G+ on links while ensuring we didn't take a significant hit on CPU resources.
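To put rough numbers on why per-packet overhead matters so much at that speed, here's a quick back-of-envelope sketch in Go. The link speed and payload size are assumptions I picked for illustration, not measurements of Nebula, Tailscale, or WireGuard:

    package main

    import "fmt"

    // Back-of-envelope: per-packet CPU budget at line rate.
    // Link speed and payload size are assumed for illustration.
    func main() {
        const (
            linkBits = 10e9   // 10 Gbit/s link
            mtuBytes = 1400.0 // typical tunnel payload size
        )
        pps := linkBits / (mtuBytes * 8)
        nsPerPkt := 1e9 / pps
        fmt.Printf("~%.0f packets/sec, ~%.0f ns of budget per packet per core\n",
            pps, nsPerPkt)
        // At ~900k packets/sec you get roughly 1.1 microseconds per
        // packet per core, so every extra userspace copy or per-packet
        // ACL lookup shows up directly in throughput or CPU usage.
    }

The same arithmetic at 1 Gbit/s gives you around 11 microseconds per packet, which is why the differences are hard to notice at that scale.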
Again, I think Tailscale is very good for its target use case as a VPN replacement, and congrats on raising these funds!
That's only true if you can actually articulate a reason why it won't scale to some magnitude that some user might actually need today or at some point in the future.
For example, Go may be "not as scalable as C" (or vice versa! Or both!), but what matters is the scale to which it is actually desired to be deployed.
I don't have 100k hosts on a large network to test deploying Tailscale, but if I did, I'd be benchmarking the CPU/network/storage overhead of telling 99,999 hosts about a new one that comes online, every time that happens, or every time its pubkey changes. You can optimize this away _if_ your "fan-out" is not as large, but there are plenty of cases where every host on your network needs to talk to a particular host, so all of them need to know about its keys as soon as possible.
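As a sketch of how that fan-out adds up, here's some illustrative arithmetic in Go. The churn rate and update size are numbers I made up for the sketch, not measurements of any product:

    package main

    import "fmt"

    // Rough model of control-plane fan-out when every host must learn
    // about key/endpoint changes. All inputs are assumed for illustration.
    func main() {
        const (
            hosts          = 100_000
            churnPerMinute = 50    // hosts joining, leaving, or rotating keys (assumed)
            updateBytes    = 300.0 // size of one key/endpoint update (assumed)
        )
        // Each event has to reach every other host as soon as possible.
        msgs := float64(churnPerMinute) * float64(hosts-1)
        fmt.Printf("~%.1fM updates/min, ~%.0f MB/min of control-plane traffic\n",
            msgs/1e6, msgs*updateBytes/1e6)
    }

Even modest churn turns into millions of control-plane messages per minute at that host count, which is exactly the overhead I'd want to benchmark.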
Again these aren't unsolvable problems, to a point, but we didn't want to solve a problem when we could avoid it entirely, so that's the path we chose. It removes complexity and is a good part of the reason the system we built has been resilient.
A complaint some people express about Tailscale is the battery life on mobile (or at least iOS). This exists because there is coordination overhead on even idle Tailscale nodes. Back when we ported Nebula to iOS, we sweated details like "how often it wakes the radios" and did a lot of profiling. I never turn Nebula "off" on my iPhone, and it just sits there in the background not using any resources most of the time.
We worked hard to optimize this out of our architecture, so that Nebula avoids generating traffic that is unrelated to the actual communication between hosts or lookups to lighthouses. An idle Nebula tunnel can truly be idle indefinitely, and that also matters as the set of hosts becomes larger.
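As a toy illustration of the two designs (this is not either project's actual code, just a sketch of the architectural difference):

    package main

    import (
        "fmt"
        "time"
    )

    // Toy contrast: a keepalive-driven tunnel wakes the radio on a
    // schedule even when idle; an event-driven tunnel generates no
    // traffic until there is real payload. Purely illustrative.
    func main() {
        keepaliveWire := make(chan struct{}, 64)
        idleWire := make(chan struct{}, 64)
        done := make(chan struct{})

        // Keepalive design. Interval sped up for the demo; real NAT
        // keepalives are more like every 25 seconds.
        go func() {
            t := time.NewTicker(100 * time.Millisecond)
            defer t.Stop()
            for {
                select {
                case <-t.C:
                    keepaliveWire <- struct{}{}
                case <-done:
                    return
                }
            }
        }()

        // Event-driven design: only touches the network when handed
        // payload, and nothing ever arrives in this demo.
        payload := make(chan struct{})
        go func() {
            for {
                select {
                case <-payload:
                    idleWire <- struct{}{}
                case <-done:
                    return
                }
            }
        }()

        time.Sleep(350 * time.Millisecond) // both tunnels sit "idle"
        close(done)
        fmt.Printf("keepalive tunnel woke the radio %d times; idle tunnel sent %d packets\n",
            len(keepaliveWire), len(idleWire))
    }

On a phone, every one of those scheduled wakeups is a radio power-up, which is where the battery drain comes from.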
I do not think the Nebula project and Tailscale are direct replacements for each other in any fashion, and afaik neither is trying to be. I'm just pointing out that different design goals led to unique advantages and disadvantages to each architecture.