
From S3 to R2: An economic opportunity

(dansdatathoughts.substack.com)
274 points dangoldin | 2 comments
simonsarris ◴[] No.38118991[source]
Cloudflare has been attacking the S3 egress problem by creating Sippy: https://developers.cloudflare.com/r2/data-migration/sippy/

It allows you to incrementally migrate off of providers like S3 and onto the egress-free Cloudflare R2. Very clever idea.

He calls R2 an undiscovered gem and IMO this is the gem's undiscovered gem. (Understandable since Sippy is very new and still in beta)
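To make that concrete, here is a rough sketch of the read-through pattern Sippy automates, written against R2's S3-compatible API with boto3. This is not Sippy's implementation, just the idea as documented: serve from R2 when the object is already there, otherwise fetch it once from S3 and copy it over so later reads incur no S3 egress. The bucket names, account ID, and credentials below are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # source bucket on AWS
r2 = boto3.client(
    "s3",  # R2 speaks the S3 API, just with a different endpoint
    endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    aws_access_key_id="<R2_ACCESS_KEY_ID>",
    aws_secret_access_key="<R2_SECRET_ACCESS_KEY>",
)

def get_with_migration(key: str, r2_bucket: str, s3_bucket: str) -> bytes:
    """Return the object, migrating it from S3 to R2 on the first miss."""
    try:
        return r2.get_object(Bucket=r2_bucket, Key=key)["Body"].read()
    except ClientError as err:
        if err.response["Error"]["Code"] not in ("NoSuchKey", "404"):
            raise
    # Miss: pay S3 egress exactly once, then persist the copy in R2.
    body = s3.get_object(Bucket=s3_bucket, Key=key)["Body"].read()
    r2.put_object(Bucket=r2_bucket, Key=key, Body=body)
    return body
```

Each object is only ever fetched from S3 once, which is why the migration cost tapers off as your working set lands in R2.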

replies(4): >>38119194 #>>38120069 #>>38120641 #>>38122400 #
ravetcofx ◴[] No.38119194[source]
What are the economics behind Amazon and other providers charging egress fees while R2 doesn't? Is it acting as a loss leader, or does this model still make money for Cloudflare?
replies(9): >>38119285 #>>38119489 #>>38119521 #>>38119701 #>>38119768 #>>38119769 #>>38120649 #>>38121416 #>>38125131 #
johnjohnnotjohn ◴[] No.38119769[source]
I’m inherently suspicious of services that are free (like Cloudflare egress). Maybe I’ve been burned too many times over the years, but I almost expect some kind of hostility or u-turn in the long run (I do really like Cloudflare’s products right now!).

I almost wish they had some kind of sustainable usage-based charge that was much lower than AWS.

Feel free to tell me why I’m wrong! I’d love to jump onboard - it just seems too good to be true in the long-term.

replies(1): >>38126808 #
dgacmu ◴[] No.38126808[source]
Because they're a CDN. You pay for storage already, so an object that isn't downloaded much is paid for. An object that gets downloaded a lot uses bandwidth, but the more popular it is, the more effective the CDN caching is.

There probably needs to be an abuse-prevention rate limit (and there probably is one), but it's not quite as crazy as it sounds to rely on their CDN bandwidth-sharing policies instead of charging.
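A quick back-of-envelope shows why egress dominates the comparison for download-heavy workloads. The figures below are approximate US list prices at the time of writing (they change, and they ignore per-request charges, free tiers, and volume discounts), so treat them as illustrative only.

```python
# Approximate list prices; adjust to the current price pages.
S3_STORAGE_PER_GB = 0.023   # USD / GB-month, S3 Standard
S3_EGRESS_PER_GB = 0.09     # USD / GB, first pricing tier
R2_STORAGE_PER_GB = 0.015   # USD / GB-month
R2_EGRESS_PER_GB = 0.0      # R2 does not bill egress

def monthly_cost(storage_gb: float, egress_gb: float) -> tuple[float, float]:
    s3 = storage_gb * S3_STORAGE_PER_GB + egress_gb * S3_EGRESS_PER_GB
    r2 = storage_gb * R2_STORAGE_PER_GB + egress_gb * R2_EGRESS_PER_GB
    return s3, r2

# 1 TB stored, downloaded 20x over in a month (a "popular object" workload):
s3, r2 = monthly_cost(storage_gb=1_000, egress_gb=20_000)
print(f"S3: ${s3:,.2f}/mo   R2: ${r2:,.2f}/mo")
# S3: $1,823.00/mo   R2: $15.00/mo
```

The storage line items are within a factor of two of each other; it's the egress term that blows up with popularity, which is exactly the term R2 drops.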

replies(1): >>38128896 #
johnjohnnotjohn ◴[] No.38128896[source]
What happens if I host an incredibly popular file and start eating up everyone else's share of the bandwidth? I.e., I become a popular Linux distro package mirror?

I do think there are “soft limits” in place like you say - it’s just my personal preference to have documented limits (or pay fairly for what you use). IMO it helps stop abuse, and prevents billing surprises for legitimate heavy use-cases.

replies(1): >>38135758 #
dgacmu ◴[] No.38135758[source]
They undoubtedly limit the % of bandwidth you can use when the link is full. The problem with that is that it's very hard to quantify, because whether or not they have spare bandwidth for you depends a lot on location, timing, and what else is happening on the network.

But that's really no different from the guarantee you get from most CDN services. If you're using Cloudflare in front of S3, for example, you'll end up with the same behavior.
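For illustration, here is a toy version of the kind of policy being speculated about: tenants get whatever they ask for while the link has headroom, and only when aggregate demand exceeds capacity does the heaviest user get squeezed toward an equal (max-min fair) share. This is not Cloudflare's actual mechanism, just the standard textbook behavior the comment describes, with made-up tenant names and numbers.

```python
def max_min_fair(demands_gbps: dict[str, float], capacity_gbps: float) -> dict[str, float]:
    """Allocate link capacity so small demands are fully met and the
    leftover is split evenly among the tenants that still want more."""
    remaining = capacity_gbps
    alloc = {t: 0.0 for t in demands_gbps}
    unsatisfied = dict(demands_gbps)
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)
        for tenant, want in list(unsatisfied.items()):
            give = min(want, share)
            alloc[tenant] += give
            remaining -= give
            if give >= want - 1e-9:
                del unsatisfied[tenant]
            else:
                unsatisfied[tenant] = want - give
    return alloc

# A popular Linux-mirror-style tenant asking for far more than its neighbors:
alloc = max_min_fair({"mirror": 80, "blog": 2, "app": 6}, capacity_gbps=40)
print({t: round(v, 2) for t, v in alloc.items()})
# {'mirror': 32.0, 'blog': 2.0, 'app': 6.0}
```

Note that if capacity were 100 Gbps, everyone (including the mirror) would get their full demand, which is the "it only bites when the link is full" property being discussed.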

replies(1): >>38141074 #
johnjohnnotjohn ◴[] No.38141074[source]
> But that's really no different from the guarantee you get from most CDN services. If you're using Cloudflare in front of S3, for example, you'll end up with the same behavior.

But in my mind it's also comforting that something like CloudFront has a long-term sustainable model (I should also add, with fewer strings attached around things like hosting video).

I do think the prices at AWS are too high, but they discourage bad actors from filling up the shared pipes. ISPs are a classic example of what happens when a link is oversubscribed.

Cloudflare’s “soft limits” are also somewhat of a dark pattern if you ask me. I like to know exactly how much something will cost, and it’s really hard to figure out with Cloudflare if you’re a high-traffic source. Do I hit the “soft limits,” or not? It’s really hard to say with their current model.

FWIW, I think Cloudflare is a great product right now - I am just skeptical they can keep it up forever.