2. R2 doesn't support file versioning like S3. As I understand it, Wasabi supports it.
3. R2's storage pricing is designed for frequently accessed files. They charge a flat $0.015 per GB-month stored. This is a lot cheaper than S3 Standard pricing ($0.023 per GB-month), but more expensive than Glacier and marginally more expensive than S3 Standard-Infrequent Access. Wasabi is even cheaper at $0.0068 per GB-month, but with a 1 TB billing minimum. (There's a rough cost sketch after this list.)
4. If you want public access to the files in your S3 bucket using your own domain name, you can create a CNAME record with whatever DNS provider you use. With R2 you cannot use a custom domain unless the domain's DNS is hosted on Cloudflare. I had to register a new domain name for this purpose, since I couldn't switch DNS providers for something like this.
5. If you care about the geographical region your data is stored in, AWS has way more options. At a previous job I needed to control the specific US state my data was in, which is easy to do in AWS if there's an AWS Region there. In contrast, R2 and Wasabi both have few options. R2 has a "Jurisdictional Restriction" feature in beta right now to restrict data to a specific legal jurisdiction, but they only support the EU right now. Not helpful if you need your data to be stored in Brazil or something.
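To put item 3 in rough numbers, here's a minimal sketch of the monthly storage cost for 1 TB at the list prices quoted above. The S3 Standard-IA figure is my assumption of the published list price, and egress, request fees, and minimum-storage rules (like Wasabi's 1 TB minimum) are ignored, so treat it as ballpark only:

    // Rough monthly storage cost for 1 TB at the list prices quoted above.
    // Egress, request fees, and minimum-storage rules are deliberately ignored.
    const PRICE_PER_GB_MONTH: Record<string, number> = {
      "Cloudflare R2": 0.015,
      "S3 Standard": 0.023,
      "S3 Standard-IA": 0.0125, // assumed list price
      "Wasabi": 0.0068,         // 1 TB billing minimum applies
    };

    const TB_IN_GB = 1000;

    for (const [provider, price] of Object.entries(PRICE_PER_GB_MONTH)) {
      console.log(`${provider}: $${(price * TB_IN_GB).toFixed(2)} per month for 1 TB`);
    }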
If you already use AWS for lots of other things, yes.
Every cloud provider has outages sometimes, but CF has been horrendous.
We were actually planning on migrating some other parts to R2, but instead we're ditching CF altogether and paying a bit more on AWS for reliability.
So if R2 has been impacted even a third as much as CF Images has, that would definitely be an important consideration.
And they won't increase it unless you become an enterprise customer, in which case they'll generously double it.
If you don't mind having your bits reside elsewhere, Backblaze B2 and Bunny.net single location storage are both cheaper than Cloudflare.
Another factor (one that probably contributes directly to the write latency issues) is region selection and replication. S3 just offers a ton more control here. I have a bunch of S3 buckets replicating async across regions around the world to enable fast writes everywhere (my use case can tolerate eventual consistency here). R2 still seems very light on region selection and replication options. Kinda disappointed, since they're supposed to be _the_ edge company.
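For reference, this is roughly what the S3 side of that looks like with the AWS SDK for JavaScript v3. The bucket names, region, and role ARN are placeholders, both buckets need versioning enabled, and the IAM role needs the usual replication permissions, so treat it as a sketch rather than a drop-in config:

    import { S3Client, PutBucketReplicationCommand } from "@aws-sdk/client-s3";

    // Placeholders throughout: source/destination buckets, region, and role ARN.
    const s3 = new S3Client({ region: "us-east-1" });

    async function enableReplication(): Promise<void> {
      await s3.send(
        new PutBucketReplicationCommand({
          Bucket: "my-source-bucket",
          ReplicationConfiguration: {
            Role: "arn:aws:iam::123456789012:role/my-replication-role",
            Rules: [
              {
                ID: "replicate-everything",
                Priority: 1,
                Status: "Enabled",
                Filter: { Prefix: "" }, // empty prefix = replicate the whole bucket
                DeleteMarkerReplication: { Status: "Disabled" },
                Destination: { Bucket: "arn:aws:s3:::my-destination-bucket" },
              },
            ],
          },
        }),
      );
    }

    enableReplication().catch(console.error);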
I found https://isdown.app/integrations/cloudflare/cloudflare-sites-...
What could you have a petabyte of that you're pretty sure you'll never need again? What kind of datasets are you storing?
That said, we don't use any queues, KV, etc. Just pure JS isolates, so that probably contributes to the robustness.
We do use the Cache API though, and have run into weirdness there. We also needed to implement our own Stale-While-Revalidate (SWR) because CF still refuses to implement this properly.
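For anyone curious what rolling your own SWR on top of the Workers Cache API can look like, here's a rough sketch (not our exact code; the freshness window, the x-cached-at header, and the Cache-Control value are just illustrative, and it assumes @cloudflare/workers-types for caches.default and ExecutionContext):

    // Hand-rolled stale-while-revalidate on the Workers Cache API.
    const MAX_AGE_SECONDS = 60; // illustrative freshness window

    async function fetchAndCache(request: Request, cache: Cache): Promise<Response> {
      const upstream = await fetch(request);
      // Re-wrap the response so headers are mutable, then stamp it so we can
      // judge staleness ourselves later.
      const stamped = new Response(upstream.body, upstream);
      stamped.headers.set("x-cached-at", Date.now().toString());
      stamped.headers.set("Cache-Control", "public, max-age=86400"); // keep it around to serve stale
      await cache.put(request, stamped.clone());
      return stamped;
    }

    export default {
      async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
        const cache = caches.default;
        const cached = await cache.match(request);

        if (cached) {
          const cachedAt = Number(cached.headers.get("x-cached-at") ?? 0);
          const ageSeconds = (Date.now() - cachedAt) / 1000;
          if (ageSeconds > MAX_AGE_SECONDS) {
            // Stale: serve it immediately, refresh in the background.
            ctx.waitUntil(fetchAndCache(request, cache));
          }
          return cached;
        }

        // Cache miss: fetch synchronously and populate the cache.
        return fetchAndCache(request, cache);
      },
    };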
Overall, CF is a provider that I would say we begrudgingly acknowledge as good. Stuff like the SWR thing can be really frustrating, but overall reliability and performance have been much better since moving to CF.
It doesn't have to be nearly that stark.
If we factor out egress, since it's the same either way, the bulk retrieval cost for Glacier Deep Archive is only $2.50/TB.
That means that a full year of storage ($12) plus four retrievals ($10) is roughly the same price as a single month of normal S3 storage ($23).
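Back-of-the-envelope version of that, per TB and ignoring egress (the Deep Archive storage price is the roughly $0.00099/GB-month list price I'm assuming; double-check current AWS pricing):

    // Per-TB, per-year comparison; egress excluded, prices assumed from the
    // list prices quoted above.
    const TB_IN_GB = 1000;

    const deepArchiveStoragePerYear = 0.00099 * TB_IN_GB * 12; // ~$11.88
    const fourBulkRetrievals = 2.5 * 4;                        // $10.00 at $2.50/TB
    const deepArchiveYearTotal = deepArchiveStoragePerYear + fourBulkRetrievals; // ~$21.88

    const s3StandardSingleMonth = 0.023 * TB_IN_GB;            // $23.00

    console.log({ deepArchiveYearTotal, s3StandardSingleMonth });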
Otherwise, I've been using R2 in production for wakatime.com for almost a month now, with Sippy enabled. The latency and error rates are the same as S3, with DigitalOcean having slightly higher latency and error rates.
I don't understand. You say that you used a very small subset of their offering in a very specific and limited way, and from that you conclude that their offering is "good"? Shouldn't you make that conclusion only after reviewing at least 50% of their offering?