
160 points by Metalnem | 8 comments
gslin ◴[] No.44495857[source]
> You Should Run a Certificate Transparency Log

And:

> Bandwidth: 2 – 3 Gbps outbound.

I am not sure if this is correct. Is 2-3 Gbps really required for CT?

replies(3): >>44497422 #>>44497536 #>>44501107 #
1. remus ◴[] No.44497422[source]
It seems like Filippo has been working quite closely with people running existing CT logs to try to reduce the requirements for running a log, so I'd assume he has a fairly realistic handle on the requirements.

Do you have a reason to think his number is off?

replies(2): >>44497620 #>>44498502 #
2. ApeWithCompiler ◴[] No.44497620[source]
> or an engineer looking to justify an overprovisioned homelab

In Germany, 2 – 3 Gbps outbound is a milestone, even for enterprises. As an individual I am privileged to have 250 Mbps down / 50 Mbps up.

So it's at least far beyond what any individual in this country could imagine.

replies(2): >>44497934 #>>44503843 #
3. jeroenhd ◴[] No.44497934[source]
You can rent 10 Gbps service from various VPS providers if you can't get the bandwidth at home. Your home ISP will probably have something to say about a continuous 2 Gbps upstream anyway, whether it's through data caps or fair use policy.

Still, even in Germany, with its particularly lacking internet infrastructure for the wealth the country possesses, M-net is slowly rolling out 5 Gbps internet.

4. gslin ◴[] No.44498502[source]
Let's Encrypt issues 9M certs per day (https://letsencrypt.org/stats/), and its market share is 50%+ (https://w3techs.com/technologies/overview/ssl_certificate), so I assume there are <20M certs issued per day.

If all certs are sent to just one CT log server, and each cert generates ~10 KB of outbound traffic, that's ~200 GB/day, or ~20 Mbps (assuming traffic is spread evenly), which is not in the same ballpark as 2-3 Gbps.

So I guess there is something I don't understand?

replies(1): >>44498707 #
5. bo0tzz ◴[] No.44498707[source]
I've been trying to get an understanding of this number myself as well. I'm not quite there yet, but I believe it's talking about read traffic, i.e. serving clients that are looking at the log, not handling new certificates coming in.
replies(1): >>44499058 #
6. FiloSottile ◴[] No.44499058{3}[source]
I added a footnote about it. It’s indeed read traffic, so it’s (certificate volume x number of monitors x compression ratio) on average. But then you have to let new monitors catch up, so you need burst.

It’s unfortunately an estimate, because right now we see 300 Mbps peaks, but as Tuscolo moves to Usable and more monitors implement Static CT, 5-10x is plausible.

It might turn out that 1 Gbps is enough and the P95 is 500 Mbps. Hard to tell right now, so I didn’t want to get people in trouble down the line.

Happy to discuss this further with anyone interested in running a log via email or Slack!

replies(1): >>44499533 #
7. bo0tzz ◴[] No.44499533{4}[source]
Thanks, that clarifies a lot!
8. nucleardog ◴[] No.44503843[source]
Yeah, the requirements aren't too steep here. I could easily host this in my "homelab" if I gave a friend a key to my utility room for when I'm away or unavailable.

But 2-3 Gbps of bandwidth makes this pretty inaccessible unless you're just offloading the bulk of it onto CloudFront/Cloudflare, at which point... it seems to me we don't really have more people running logs in any meaningful sense, just somebody paying Amazon a _lot_ of money. If I'm doing my math right, this is something like 960 TB/mo, which is something like a $7.2m/yr CloudFront bill. Even with some lesser-known CDN providers we're still talking something like $60k/yr.

Seems to me the bandwidth requirement means this is only going to work if you already have some unmetered connections lying around.

If anyone wants to pay the build out costs to put an unmetered 10Gbps line out to my house I'll happily donate some massively overprovisioned hardware, redundant power, etc!