160 points Metalnem | 14 comments
    1. gslin ◴[] No.44495857[source]
    > You Should Run a Certificate Transparency Log

    And:

    > Bandwidth: 2 – 3 Gbps outbound.

    I am not sure this is correct. Is 2–3 Gbps really required for CT?

    replies(3): >>44497422 #>>44497536 #>>44501107 #
    2. remus ◴[] No.44497422[source]
    It seems like Filippo has been working quite closely with people running existing CT logs to try to reduce the requirements for running a log, so I'd assume he has a fairly realistic handle on the requirements.

    Do you have a reason to think his number is off?

    replies(2): >>44497620 #>>44498502 #
    3. xiconfjs ◴[] No.44497536[source]
    So we are talking about 650 TB+ of traffic per month, or ~$700 per month just for bandwidth… so surely not a one-man project
    replies(2): >>44501616 #>>44502327 #
    4. ApeWithCompiler ◴[] No.44497620[source]
    > or an engineer looking to justify an overprovisioned homelab

    In Germany, 2–3 Gbps outbound is a milestone, even for enterprises. As an individual I am privileged to have 250 Mbps down / 50 Mbps up.

    So it's at least beyond what any individual in this country could imagine.

    replies(2): >>44497934 #>>44503843 #
    5. jeroenhd ◴[] No.44497934{3}[source]
    You can rent 10 Gbps service from various VPS providers if you can't get the bandwidth at home. Your home ISP will probably have something to say about a continuous 2 Gbps upstream anyway, whether through data caps or a fair-use policy.

    Still, even in Germany, with its particularly lacking internet infrastructure given the wealth the country possesses, M-net is slowly rolling out 5 Gbps internet.

    6. gslin ◴[] No.44498502[source]
    Let's Encrypt issues 9M certs per day (https://letsencrypt.org/stats/), and its market share is 50%+ (https://w3techs.com/technologies/overview/ssl_certificate), so I assume there are <20M certs issued per day.

    If all certs are sent to just one CT log server, and each cert generates ~10 KB of outbound traffic, that's ~200 GB/day, or ~20 Mbps of steady, even traffic, not in the same ballpark as 2–3 Gbps.

    So I guess there is something I don't understand?
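
    The back-of-envelope arithmetic above can be checked in a few lines of Python. The ~20M certs/day and ~10 KB/cert figures are this commenter's assumptions, not measured values:

    ```python
    # Write-side bandwidth estimate, using the commenter's assumed inputs.
    CERTS_PER_DAY = 20_000_000   # assumed industry-wide issuance
    BYTES_PER_CERT = 10_000      # assumed ~10 KB outbound per certificate
    SECONDS_PER_DAY = 86_400

    bytes_per_day = CERTS_PER_DAY * BYTES_PER_CERT     # 2e11 B = 200 GB/day
    mbps = bytes_per_day * 8 / SECONDS_PER_DAY / 1e6   # average sustained rate

    print(f"{bytes_per_day / 1e9:.0f} GB/day ≈ {mbps:.1f} Mbps sustained")
    # Roughly 20 Mbps, two orders of magnitude below 2-3 Gbps, which is
    # why the discrepancy puzzled the commenter.
    ```

    The resolution, per the replies below in the thread, is that the 2–3 Gbps figure covers read traffic, not ingestion.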

    replies(1): >>44498707 #
    7. bo0tzz ◴[] No.44498707{3}[source]
    I've been trying to get an understanding of this number myself as well. I'm not quite there yet, but I believe it's talking about read traffic, i.e. serving clients that are looking at the log, not handling new certificates coming in.
    replies(1): >>44499058 #
    8. FiloSottile ◴[] No.44499058{4}[source]
    I added a footnote about it. It’s indeed read traffic, so it’s (certificate volume x number of monitors x compression ratio) on average. But then you have to let new monitors catch up, so you need burst.

    It’s unfortunately an estimate, because right now we see 300 Mbps peaks, but as Tuscolo moves to Usable and more monitors implement Static CT, 5-10x is plausible.

    It might turn out that 1 Gbps is enough and the P95 is 500 Mbps. Hard to tell right now, so I didn’t want to get people in trouble down the line.
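
    As a sketch of the formula above (average read traffic ≈ certificate volume × number of monitors × compression ratio), with purely illustrative numbers that are assumptions rather than measured values from the post:

    ```python
    # Rough model of the average read-side traffic formula.
    # All three inputs below are illustrative assumptions.
    ingest_mbps = 20      # hypothetical write-side (certificate volume) rate
    monitors = 20         # hypothetical number of monitors tailing the log
    compression = 0.5     # hypothetical on-the-wire compression ratio

    avg_read_mbps = ingest_mbps * monitors * compression
    print(f"average read traffic ≈ {avg_read_mbps:.0f} Mbps")

    # On top of the average, a new monitor backfilling the entire log can
    # briefly consume far more, which is why the recommendation is
    # provisioned well above the steady-state mean.
    ```

    The point of the model is that read traffic scales with the monitor count, so it can grow 5–10x as more monitors adopt Static CT even if issuance volume stays flat.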

    Happy to discuss this further with anyone interested in running a log via email or Slack!

    replies(1): >>44499533 #
    9. bo0tzz ◴[] No.44499533{5}[source]
    Thanks, that clarifies a lot!
    10. nomaxx117 ◴[] No.44501107[source]
    I wonder how much putting a CDN in front of this would reduce the bandwidth requirement.

    According to the readme, it seems like the bulk of the traffic is highly cacheable, so presumably you could put a CDN in front and substantially reduce the bandwidth requirements.

    replies(1): >>44501739 #
    11. dilyevsky ◴[] No.44501616[source]
    If you’re paying metered rates you’re off by an order of magnitude: much more expensive. Even bandwidth-based transit will be more expensive than that at most colos.
    12. mcpherrinm ◴[] No.44501739[source]
    Yes, the static-ct api is designed to be highly cacheable by a CDN.

    That is one of the primary motivations of its design over the previous CT API, which had some relatively flexible requests that made caching less effective.

    13. dpifke ◴[] No.44502327[source]
    I pay roughly $800/mo each for two 10 Gbps transit connections (including cross-connect fees), plus $150/mo for another 10 Gbps peering connection to my local IX. 2-3 Gbps works out to less than $200/mo. (This is at a colo in Denver for my one-man LLC.)
    14. nucleardog ◴[] No.44503843{3}[source]
    Yeah, the requirements aren't too steep here. I could easily host this in my "homelab" if I gave a friend a key to my utility room for whenever I'm away or unavailable.

    But 2–3 Gbps of bandwidth makes this pretty inaccessible unless you're just offloading the bulk of it onto CloudFront/Cloudflare, at which point... it seems to me we don't really have more people running logs in any meaningful sense, just somebody paying Amazon a _lot_ of money. If I'm doing my math right this is something like 960TB/mo, which is like a $7.2m/yr CloudFront bill. Even with some lesser-known CDN providers we're still talking something like $60k/yr.

    Seems to me the bandwidth requirement means this is only going to work if you already have some unmetered connections lying around.
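
    The Gbps-to-TB/month conversion behind these volume figures can be sanity-checked (a 30-day month is assumed; CDN pricing is not modeled here, only raw transfer volume):

    ```python
    # Sustained throughput (Gbps) -> monthly transfer volume (TB).
    SECONDS_PER_MONTH = 30 * 86_400  # assumed 30-day month

    def tb_per_month(gbps: float) -> float:
        # gigabits/s -> bytes/s -> bytes/month -> terabytes/month
        return gbps * 1e9 / 8 * SECONDS_PER_MONTH / 1e12

    print(f"2 Gbps ≈ {tb_per_month(2):.0f} TB/mo")  # ≈ 648 TB/mo
    print(f"3 Gbps ≈ {tb_per_month(3):.0f} TB/mo")  # ≈ 972 TB/mo
    ```

    So the 650 TB+ figure earlier in the thread corresponds to sustained 2 Gbps, and the ~960 TB figure here to sustained 3 Gbps.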

    If anyone wants to pay the build out costs to put an unmetered 10Gbps line out to my house I'll happily donate some massively overprovisioned hardware, redundant power, etc!