Tailscale raises $100M

(tailscale.com)
854 points by gmemstr | 3 comments
arsome ◴[] No.31261100[source]
I was going to try Tailscale, but it seemed the only option to do so as an individual was to log in with a 3rd-party cloud provider, which I in no way want tied into my networks.

I gave up and just set up WireGuard directly instead. I don't trust Tailscale either if that's their attitude towards privacy; it's permanently marred my view of their product.
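
For anyone going the same route: a minimal point-to-point WireGuard config (placeholder keys and overlay addresses here, so adjust for your own network) looks roughly like this:

    # /etc/wireguard/wg0.conf on the "server" side
    [Interface]
    PrivateKey = <server-private-key>   # generate with: wg genkey
    Address = 10.0.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <client-public-key>
    AllowedIPs = 10.0.0.2/32

The mirror-image file (plus an Endpoint = <server-public-ip>:51820 line under [Peer]) goes on the client, and wg-quick up wg0 brings the tunnel up on each side.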

replies(10): >>31261128 #>>31261230 #>>31261250 #>>31261558 #>>31261667 #>>31261807 #>>31261815 #>>31261981 #>>31262022 #>>31262899 #
JeremyNT ◴[] No.31261250[source]
Indeed, this is why I won't use it either. I settled on Slack's Nebula [0] instead of plain WireGuard because it handles direct p2p communication between nodes automatically.

There is also an open-source implementation of the Tailscale control server [1] that you can self-host.

[0] https://github.com/slackhq/nebula

[1] https://github.com/juanfont/headscale

replies(2): >>31261607 #>>31261688 #
rhuber ◴[] No.31261607[source]
(Nebula coauthor here)

People sometimes ask me to describe the differences between Nebula and Tailscale. One of the most important relates to performance and scale. Nebula can handle the volume of internal network traffic and the number of nodes (100k+, with constant churn) required on a large network like Slack's; Tailscale cannot. Tailscale's performance is fine for many situations, but it isn't suitable for infrastructure. It's just a fundamentally different set of goals.

Nebula was created and open-sourced before Tailscale was offering their product, but Tailscale's architecture is similar to older offerings in the market, and it's something we purposely avoided when creating Nebula.

Fwiw, I even recommend Tailscale to friends who want to do things like connect to their Plex server or Synology or [other thing] at home remotely. It simplifies this kind of thing greatly and doesn't require you to set up any infrastructure you control directly, which can be a headache for folks who just want to reach a handful of computers/devices.

replies(6): >>31261776 #>>31261960 #>>31262150 #>>31262492 #>>31263218 #>>31264233 #
crawshaw ◴[] No.31262150[source]
Tailscalar here. Tailscale can handle 100k+ nodes with lots of churn just fine.
replies(1): >>31262350 #
1. rhuber ◴[] No.31262350[source]
Fair enough. I'm sure the key distribution is fast and all that, but not needing peer key distribution at all was a goal, and any overhead from doing it scales worse than simply not doing it. Regardless, very cool that you can handle that many nodes; that's a hard problem. I assume you do just-in-time key distribution or something, because (n-1) distribution of peer keys would be ... less than ideal.

Anywho, the more important bit is my point about performance. Nebula is significantly faster than userspace WireGuard, and plain userspace WireGuard is (last I checked) a bit faster than Tailscale, due to the additional code needed for things like your ACLs. At gigabit scale that's probably fine and not noticeable, but at Slack we needed to scale to 10G+ on links while making sure we didn't take a significant hit on CPU resources.
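
If anyone wants to run this comparison on their own links, the usual approach is an iperf3 pair across each tunnel's overlay addresses while watching CPU on both ends (the 10.42.x.x address here is hypothetical):

    # on one node
    iperf3 -s
    # on the other, pointed at the first node's overlay IP
    iperf3 -c 10.42.0.1 -P 4 -t 30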

Again, I think Tailscale is very good for its target use case as a VPN replacement, and congrats on raising these funds!

replies(1): >>31264794 #
2. lupire ◴[] No.31264794[source]
> the overhead associated is less scalable than just not doing it at all

That's only true if you can actually articulate a reason why it won't scale to some magnitude that some user might actually need today or at some point in the future.

For example, Go may be "not as scalable as C" (or vice versa! Or both!), but what matters is the scale at which anyone actually wants to deploy it.

replies(1): >>31265394 #
3. rhuber ◴[] No.31265394[source]
I mean... the title of the Tailscale blog post is "Tailscale raises $100M… to fix the Internet", and that's pretty massive scale. /s

I don't have 100k hosts on a large network to test a Tailscale deployment, but if I did, I'd benchmark the CPU/network/storage overhead of telling 99,999 hosts about a new one that comes online, every time that happens, or every time its pubkey changes. You can optimize this away _if_ your "fan out" is not that large, but there are plenty of cases where every host on the network needs to talk to a particular host, so all of them need to know about its keys as soon as possible. A rough sketch of the arithmetic is below.
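
Back-of-the-envelope only, with made-up constants (the churn rate and message size are assumptions, not measurements of either product), just to show why the full-mesh term is the one that hurts:

    package main

    import "fmt"

    func main() {
        const (
            hosts       = 100_000 // network size
            churnPerMin = 50.0    // key changes per minute across the fleet (assumed)
            updateBytes = 200.0   // rough size of one signed key-update message (assumed)
        )

        // Full-mesh distribution: each key change must reach the other n-1 hosts.
        fanout := float64(hosts - 1)
        msgsPerMin := churnPerMin * fanout
        fmt.Printf("full-mesh: %.0f updates/min (~%.1f MB/min of control traffic)\n",
            msgsPerMin, msgsPerMin*updateBytes/1e6)

        // On-demand distribution: a host only learns a peer's key when it first
        // needs to talk to that peer, so idle pairs cost the control plane nothing.
        fmt.Println("on-demand: cost tracks active peer pairs, not (n-1) per change")
    }

The constants are invented; the point is just that the full-mesh term grows with n times churn, while skipping peer key distribution removes that term entirely.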

Again, these aren't unsolvable problems, up to a point, but we didn't want to solve a problem we could avoid entirely, so that's the path we chose. It removes complexity, and it's a good part of the reason the system we built has been resilient.

A complaint some people have about Tailscale is battery life on mobile (or at least on iOS). This exists because there is coordination overhead on even idle Tailscale nodes. Back when we ported Nebula to iOS, we sweated details like how often it wakes the radios and did a lot of profiling. I never turn Nebula "off" on my iPhone; it just sits there in the background, not using any resources most of the time.

We worked hard to optimize this out of our architecture, so Nebula avoids generating traffic that is unrelated to actual communication between hosts or lookups to lighthouses. An idle Nebula tunnel can truly stay idle indefinitely, and that also matters as the set of hosts grows.

I don't think Nebula and Tailscale are direct replacements for each other in any fashion, and afaik neither is trying to be. I'm just pointing out that different design goals led to distinct advantages and disadvantages in each architecture.